Why is DoubleVerify burying its big news with a December 23rd press release?

PR experts use a trick when they need to release news they really don’t want covered broadly, peer reviewed or scrutinized.  The trick: drop the announcement when everyone is focused on other things.  The Friday afternoon before a long weekend and the last business day before a major national holiday are prime dump days.  The Bush White House used this tactic to announce Koran abuse at Gitmo and the indictment of Scooter Libby.  Celebrities routinely use it to announce divorces or rehab stints.

And on December 23rd, just as the media world shut down for Christmas, DoubleVerify (DV) used it to announce its new “BrandShield” solution.  Of particular note in DV’s release is that it seems to imply (the wording is quite cagey) that DV can perform page-level quality filtering on “nearly 100% of impressions”, even when ads are served within iframes, by effectively “seeing through” the iframes to determine “which…page the ad is actually delivered on”.

Taken at face value, this sounds like a huge advance in page-level quality filtering technology, which obviously requires page-level visibility to work.  However, regular readers of this page will remember our recent post on the problems posed by iframes for 3rd party page-level filtering.  Specifically, that “seeing through” iframes is impossible for an ad buy whose composition – like that of the vast majority of ad network buys – is not known in advance.
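The reason “seeing through” iframes is so hard is the browser’s same-origin policy.  As a purely illustrative sketch (this is a model of the browser rule, not anyone’s actual product code, and the domains are invented), the visibility available to a measurement tag can be described like this: a tag can read the URL of the frame it runs in, but it can read the embedding page’s URL only when the iframe and the page share an origin.

```python
from urllib.parse import urlparse

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff scheme, host, and port all match."""
    a, b = urlparse(url_a), urlparse(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

def url_visible_to_tag(frame_url: str, hosting_page_url: str) -> str:
    """Model what a tag served inside `frame_url` can learn.

    If the iframe is same-origin with the hosting page, the tag can read
    parent.location and recover the true page URL.  If it is cross-origin,
    the browser blocks that read and the tag sees only its own frame URL.
    """
    if same_origin(frame_url, hosting_page_url):
        return hosting_page_url   # parent.location is readable
    return frame_url              # cross-origin: the parent page is opaque

# A tag in a publisher-hosted (same-origin) iframe sees the real page...
print(url_visible_to_tag("http://pub.example/ad_frame.html",
                         "http://pub.example/article/123"))
# ...but a tag in a third-party ad-server iframe on that page does not.
print(url_visible_to_tag("http://adserver.example/frame.html",
                         "http://pub.example/article/123"))
```

Under this model, any claim of page-level visibility for ads in third-party iframes needs to explain which mechanism, outside the normal browser rules, recovers the parent URL.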

So why would a (to date) publicity-hungry startup like DV announce seemingly ground-breaking technology in a way that recalls the indictment of a senior White House staffer?  The only reason I can think of is that this announcement amounts to either a) an admission that DV is using the methods of hackers to exploit holes in browser security and enable collection of data that all commercial browsers prevent for important privacy reasons or b) a clumsy and misleading attempt to confuse the market about what is technically possible.

The former would raise extremely troubling privacy concerns, particularly against the backdrop of increased scrutiny on collection of user data for BT.  The latter is obviously not particularly comforting either, but at least it doesn’t open unsuspecting agencies and brands up to PR backlash, consumer lawsuits and/or government sanctions.  Either way, prospective DV clients considering this solution should ask tough, direct questions about how this apparent iframe miracle is performed before touching it with the proverbial ten foot pole.  Specifically, buyers’ technical staffs should seek to understand clearly and precisely how each page in an ad buy would be conclusively identified and filtered, including each page where the ad is displayed within an iframe.  As I mentioned above, be sure to consider the case where the composition of the buy is not known in advance, like most ad network buys.

Rest assured that we will be working with our agency partners to fully explore these claims and will share whatever facts we uncover on this page.  Please feel free also to share with me anything you know or find out.  As we set about that work (or at least until DV is good enough to clarify their release), I would renew my call for a New Year’s resolution:  let’s elevate the dialog from misleading marketing claims to honest discussion and execution of the cutting edge solutions that sophisticated clients demand and deserve.

Is BT Just a Sales Tool? (Redux)

This post is a continuation of my article in last Monday’s AdExchanger about some serious challenges with BT for Brand marketers.  Interested readers should start there and then continue reading below, as I make some of my points here in the context of the example presented in that original article.

As I mentioned, BT does not outperform other approaches in driving offline sales.  Specifically, Brand.net’s studies with Nielsen have proven that our campaigns deliver impressive offline sales impact.  These results were achieved without BT;  instead Brand.net uses high-quality media with contextual, demographic and geographic targeting managed to high composition, with controlled frequency and cost.

The average ROI of 141% on these Brand.net campaigns is roughly comparable to the average ROI generated by Nielsen’s largest offline measurement partners over hundreds of studies using the purchase-based / look-alike targeting approach I described in my original article, refined over nearly a decade.  The Nielsen-powered BT those others use is state of the art; BT doesn’t get any better for branding.  If it fails to deliver substantial ROI upside over other approaches in driving offline sales, we as brand marketers really need to question the utility of BT in general.

In addition to this fundamental problem, BT poses a variety of other important problems that brand marketers should consider carefully.

First, there are no standard definitions within the industry for behavioral categories, so there’s a huge degree of subjectivity in defining which users are a close-enough match to the core users to qualify as “look-alikes.”  This is a big deal because, as I outlined, 99.9% of the users in a typical BT campaign are based on look-alike modeling.  In the context of the specific example I used, how similar does a user need to be to an actual CPB Baker to qualify for inclusion in the behavioral category?  What’s to keep the network doing the modeling from stretching that definition to create more inventory, particularly if there’s no direct measurement on the campaign?
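To make the inventory-stretching concern concrete, here is a hypothetical sketch (the similarity scores and thresholds are invented for illustration; real look-alike models are far more elaborate) showing how quietly loosening a vendor-chosen cutoff inflates the pool of “qualified” users without any outward change to the category name:

```python
def lookalike_audience(user_scores, threshold):
    """Return the user ids whose modeled similarity to the seed segment
    (e.g., actual 'CPB Bakers') meets the vendor-chosen threshold."""
    return [uid for uid, score in user_scores.items() if score >= threshold]

# Hypothetical similarity scores from a look-alike model (0.0 - 1.0).
scores = {"u1": 0.97, "u2": 0.91, "u3": 0.74, "u4": 0.55, "u5": 0.41}

strict = lookalike_audience(scores, threshold=0.90)  # only close matches
loose = lookalike_audience(scores, threshold=0.40)   # nearly anyone

print(len(strict), len(loose))  # prints: 2 5
```

Both audiences would be sold under the same label; without direct measurement on the campaign, the buyer has no way to tell which threshold was in effect.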

Another related issue is lack of portability.  Since there’s no consistent definition for any behavioral target, if an advertiser does find something that works with a particular vendor, the advertiser is stuck with that vendor.  They can’t say, “CPB Bakers work great.  Let’s figure out the best way to buy them,” because the CPB Bakers from one source could be completely different from the CPB Bakers from another source due to different look-alike definitions.  Furthermore, if the vendor whose CPB Bakers “worked” changes look-alike definitions, loses access to data or goes out of business, the advertiser must start from scratch.  BT can’t be used as a basis for a scalable, repeatable, progressively improved strategy driven by the advertiser/agency unless the advertiser is the one building the profiles from scratch – something that is far beyond what most advertisers today are willing to do.

Due to cookie churn and simple inventory volatility, impression delivery is extremely hard to predict for any reasonably focused BT target (and forget about reach or pricing).  This makes forward delivery guarantees almost impossible – another barrier for scalable use by large brands that typically plan a significant portion of their spend in advance.

BT can also be used by networks or publishers as a way to mask inventory quality issues.  Would an advertiser/agency want the media included in a BT buy if they actually knew what they were purchasing?  Would they be willing to pay the same rate?  I doubt it, but the glossy BT story effectively launders this sketchy inventory into a desirable commodity.

Finally, there are obviously high-profile privacy issues swirling around BT, and it’s anyone’s guess where those will settle out.  I would hate to have a platform or media strategy built around BT if (when?) our friends in Washington decide that “opt-in” will become the law of the land.

Marketers considering significant or sustained investments in BT would be well advised to think carefully about all of these issues and ask tough questions of their partners before proceeding.

CBS’ Decision

Some quick comments on this morning’s Ad Age article on CBS stepping away from networks. The “publishers vs. networks” issue has ebbed and flowed pretty consistently since I started in this business at Yahoo! in 2002.  It seems to ebb when revenue is scarce and flow when demand picks back up, with clear evidence of both trend and seasonality.  There is obviously some rationality to this pattern, but I have always thought that the “turn ‘em on, turn ‘em off” approach is a blunt instrument that doesn’t serve publishers, particularly in the long term.

For example, in this article, CBS draws a distinction between the third party networks they are turning off and agency-owned networks (e.g., Vivaki) with whom they will continue to do business. As Michael Zimbalist of NY Times points out in another recent article, from a publisher perspective these agency-owned entities have a lot in common with third party networks.  So it’s unclear how leaving them “on” makes sense if the best solution for third party networks is “off”.

Apart from this inconsistency, two other big issues with the on/off approach are lack of resolution and poor responsiveness to dynamic market conditions.  While networks overall may monetize at a lower rate than direct sales efforts, certain networks will be more or less competitive for certain inventory (resolution) and at different times (dynamics).  RTB was designed to address these two “hard coding” issues (amongst others), but neither AdX 2.0 nor Right Media is close to ready to be relied upon as a sole indirect demand channel.  Internal agency network efforts are still nascent as well.  The bottom line is that vastly more demand still flows through third party networks than through any of these channels.

So rather than bowing out of a significant majority of the quickly evolving ad ecosystem, I think the right publisher solution is a framework that coordinates direct and indirect sales efforts to create the competition for inventory that drives maximum revenue for the publisher. Based on my long experience at Yahoo!, I laid out the broad strokes of such a framework in an article for MediaPost earlier this year. Publishers that learn fastest and best how to apply such a framework in their particular circumstances will achieve levels of monetization that increasingly distinguish them from their more isolationist peers.

None of us is as smart as all of us; the key to staying on the cutting edge of monetization is coordinating the best efforts of both direct and indirect channels on a dynamic basis.  Today and for the foreseeable future, third party ad networks are an important part of that picture.

IAB’s Rothenberg down under

Some interesting thoughts in this conversation between IAB CEO Randall Rothenberg and Ben Shepherd of Australia’s Business Spectator.  While the whole discussion is interesting, I’d like to call out in particular Rothenberg’s assessment of the top 3 challenges facing IAB and the industry at large.

I think he has them right.

The swirling privacy issues don’t impact Brand.net (we don’t do BT for a variety of reasons – more about that on this page soon), but as BT becomes ubiquitous, privacy issues represent a significant overhang to many other players and the industry overall.

The other two issues he mentions, though – measurement standards and branding – are near and dear to us at Brand.net.  It may not be immediately obvious, but these two issues are intimately related.  Online DR is easier and bigger than branding online today.  This is partially because investment in technology has disproportionately focused on DR, but measurement standards are a major factor as well.

The standard for DR is easy: CPA.  Attribution models are a topic of constant discussion (especially given some of Atlas Institute’s work), but for DR at least the goal metric is very clear.  For brand advertisers, who may not have near-term direct sales objectives and/or who are generating 95+% of their revenue from offline sales, it’s not so simple.  These advertisers need a variety of measurement approaches to understand the impact of their online campaigns on attitudes, online activities and offline sales.

Brand.net offers a complete portfolio of brand measurement capabilities and our platform is designed to deliver media that drives results, however they are measured.

Echoes of Exchange 3.0

Just a quick note pointing to a short, but interesting post today that echoes my recent article in Ad Age.  Clearly Pete Kim and whoever he was talking to understand that it’s not all about DR.  Kudos to them.

Again, today’s re-energized battle for display is just warming up.  The long-term winner will be the one that provides brand-focused capabilities on top of the evolving supply platforms to help brand budgets follow audiences online.

Brand.net’s breakthrough

Interesting article by Joe Mandese on MediaPost this AM.  “Unsavory adjacencies” (which would be a great band name by the way) are indeed a huge concern for the largest brand advertisers as they ramp up their online investments.  That’s why Brand.net pioneered preventative page-level content filtering with the launch of SafeScreen almost a year ago.  Abbey Klaassen at Ad Age and Laurie Sullivan at MediaPost both covered the launch back in February.

Since then, while others have been in development, we’ve been busy protecting our customers.  In the past year, SafeScreen has provided 8 of the top 10 CPGs, dozens of other Ad Age 100 spenders and each of the top agency holding companies with the cleanest inventory available on the web, preventing millions of “unsavory adjacencies” each week.

While we’re on the topic, I will reiterate the point I made in my iMedia article a couple months back – that quality is a page-level issue, not a site-level issue.  The reason I bring this up is that in order to do any sort of page-level quality filtering, it’s necessary to know exactly which pages are requesting ads – i.e., which pages need filtering.  This is a very difficult challenge due to publishers’ common use of iframes.  This recent blog post provides a great background on iframes for the uninitiated.

SafeScreen works because Brand.net does both the buying and the filtering.  So if we want to buy from a publisher that uses iframes, we can take steps in advance to make sure we have accurate page-level visibility so SafeScreen can work.  The marketing claims of the recently announced quality assurance products seem to suggest that they can be dropped in front of an arbitrary ad buy and ensure safety.  This simply isn’t technically possible due to the prevalence of iframes.
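To see why a drop-in filter degrades on an arbitrary buy, consider this hypothetical sketch (the domains and mix are invented; the proportion of iframed impressions varies widely by buy): for impressions served into cross-origin iframes, the filter’s page-level decision can only be made against the iframe’s URL, not the page the user actually sees.

```python
def filter_coverage(impressions):
    """Fraction of impressions where a stand-alone tag can actually
    identify the page it is supposed to be filtering.

    Each impression is (hosting_page_domain, serving_frame_domain);
    identical domains model a same-origin iframe (or no iframe)."""
    visible = sum(1 for page, frame in impressions if page == frame)
    return visible / len(impressions)

# A hypothetical network buy: most ads arrive via third-party
# ad-server iframes, so the true page is invisible to the tag.
buy = [
    ("pub1.example", "pub1.example"),      # served directly on the page
    ("pub2.example", "adserver.example"),  # cross-origin iframe
    ("pub3.example", "adserver.example"),  # cross-origin iframe
    ("pub4.example", "adserver.example"),  # cross-origin iframe
]

print(f"page-level visibility: {filter_coverage(buy):.0%}")  # prints: page-level visibility: 25%
```

Knowing the buy’s composition in advance – as a buyer doing its own filtering can – is what lets those blind spots be closed before the campaign runs rather than discovered after.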

Buyers considering these “stand-alone” solutions should ask hard questions.  If they do they will find they aren’t going to be nearly as safe as the marketing suggests.

Great minds think alike

Nice short piece this AM from Peter Kafka of allthingsd re: Microsoft’s plans to enter the Exchange 2.0 landscape with a re-tooled AdECN.  Very much in line with my post earlier this week in Ad Age.  As I wrote, the next 12-36 months will be interesting indeed…

In Search of Exchange 3.0

I thought readers of this blog may also be interested in my guest post for Ad Age, where I give a brief history of the evolution of the display advertising exchange ecosystem and suggest what I believe is the next step.  This post for Ad Age follows up on my previous post here.

As always, let me know what you think!

A very smart publisher (redux)

Another tremendously insightful article yesterday from Michael Zimbalist of NYT.  This guy is sharp.  His analysis of the situation is dead on and I completely agree with the rough bucketing of potential outcomes and associated implications for the various ecosystem players.

However, I want to make it clear that the key to Zimbalist’s positive outcome scenario (scenario 3) is the emergence of capabilities that aren’t widely available today.  As Dan Ballister wrote in his comment to the article, “If buyers are going after audience in real-time auctions, will they make peace with having to forfeit control over ad environment and delivery predictability?  What good is it to reach your audience when they don’t want to be found, or to only run 15% of your back-to-school campaign on time because you kept getting outbid?”

Well put.

In order for Brand marketers to fully leverage the emerging exchange ecosystem they will need sophisticated technology for page-level quality filtering, pricing & delivery prediction, R/F & composition management, delivery smoothing, offline impact measurement, etc.   In case it’s not obvious, that’s a very different toolset than the fine targeting and CPA-driven optimization engines of which the market has produced scores of copies thus far – on both the demand side and the supply side.

Stay tuned for some more in depth thoughts on this topic shortly.

Are you actually buying what you think you’re buying?

Great article by Mike Shields in MediaWeek yesterday. According to the article, Tremor Media was running ads for major brands that were a) in-banner video as opposed to pre-roll, b) below the fold and c) adjacent to questionable content.

I want to highlight two separate issues in the context of this article: quality control and credibility.

Quality control has been a top priority for Brand.net since inception, because it was the number one concern of our branding-focused clients. I have written extensively on this topic, and Brand.net’s SafeScreen platform is the premier quality assurance platform on the web, providing page-level filtering to ensure quality for every impression that runs through the Brand.net network. The monitoring capabilities Adam Kasper mentions in the article can be useful, but it’s far better to prevent quality incidents in the first place. He’s dead on when he says that quality control is the ad network’s responsibility. Unfortunately, too often that responsibility is not fulfilled.

On my second point, ad networks at large have a reputation for not always being completely honest with clients. This is another issue I’ve written on in the past. In response to this particular incident, Shane Steele, Tremor’s VP marketing, was quoted saying, “It’s a very nuanced space, which makes it complicated”. Actual pre-roll video is indeed more complicated than display, but the ads at issue here were display ads. They just happened to be running video creative. So answers like this don’t help build credibility for networks and they certainly don’t help build the trust that’s so essential for major brands to fully leverage the medium.

Tod Sacerdoti summed it up well when he said, “There is a gap between what an advertiser thinks they are buying and what they are [actually] buying.” This gap occurs far too often today and it needs to be closed before the medium can fully mature.