A fishing fleet without nets

comScore released another solid piece of work yesterday.

As readers of this page will remember, comScore has been outspoken on the failings of the ubiquitous click as a metric. There's some of that in this report, but much more as well. From my perspective, the most interesting thread in the report ties together a couple of their numbered points.

First, as comScore correctly points out, cookie deletion creates real problems for cookie-based targeting and measurement approaches. comScore data shows that 30% of all US internet users delete their cookies monthly or more often. Furthermore, many computers see routine use by multiple users. These factors create “noise” in targeting that often results in a much lower true composition against the target than is claimed. Consumers’ ever-growing concern about privacy will only make this worse. Probably much worse. More evidence (if any were needed) that measuring campaign impact against meaningful metrics is critical – especially when a targeting approach sounds like magic.
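
To make the “noise” point concrete, here’s a back-of-envelope sketch in Python. The 30% deletion figure is comScore’s; the other numbers are illustrative assumptions of mine, not measured values:

```python
# Back-of-envelope: how cookie churn and shared machines dilute a segment's
# claimed on-target composition. The 30% figure is comScore's; the rest are
# illustrative assumptions, not measured values.

claimed_composition = 0.80    # seller's claimed on-target share of the segment
monthly_deletion_rate = 0.30  # comScore: 30% of US users delete cookies monthly+
shared_machine_rate = 0.25    # assumed share of cookies on multi-user machines
baseline_composition = 0.20   # on-target rate of an untargeted impression

# Treat deleted-and-reissued cookies and wrong-user impressions as effectively
# random rather than targeted.
noise = monthly_deletion_rate + shared_machine_rate * (1 - monthly_deletion_rate)
true_composition = (1 - noise) * claimed_composition + noise * baseline_composition

print(f"noise share: {noise:.0%}")  # ~48%
print(f"claimed: {claimed_composition:.0%} vs. true: {true_composition:.0%}")  # 80% vs ~52%
```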

Secondly, comScore highlights the tradeoff between targeting and scale. This tradeoff is intuitively obvious, but often overlooked. Equally often, credulous buyers willingly suspend disbelief in favor of a nice-sounding pitch.

Consider the example of one of our clients, with a large online footprint of some 25 million accounts. Of these 25M, the client has actionable cookies on fewer than 5M, with data of varying depth and value (and all of these cookies, of course, are subject to the churn challenge described above). So this client can (and does) employ the most sophisticated targeting and re-targeting approaches for these 5M customers. But what about the other 20M customers they can’t talk to this way? What about the 100M adults who aren’t customers yet? Thirty-second TV spots?

For online advertising to grow to its full potential (and to the size required as “offline” media erodes), we must more fully develop broad-reach approaches to complement our myriad fine-targeting approaches.

Sometimes it is best to fish with a hook, other times with a net. As an industry we need a good supply of both.

Look for more on this topic in subsequent posts, but I wanted to make sure to call out comScore’s work while it was fresh. Worth a read.

One Platform to Rule Them All…

Just a quick post to thank Adam Cahill for his shout out on ClickZ yesterday.  It has been great to see the market get behind the futures model as a necessary complement to spot.

Adam also raises an interesting point about one vs. many DSPs. Today it’s clearly necessary to use at least two to get full-funnel capability: futures (Brand.net) and spot (others). But I think we’ll see this quickly follow a path toward increased efficiency, i.e. toward a single unified platform that enables agencies and clients to manage spend against any campaign and any objective through a common interface.

It will no doubt be an interesting road to get there, but it’s just a matter of time.

Happy Birthday, SafeScreen!

For any of you who may have missed our press release yesterday, SafeScreen, the industry’s first preventative page-level brand safety solution, turned 2 years old earlier this month.  As we proudly celebrate this milestone, I wanted to take a moment to reflect on market developments since Brand.net introduced the digital media market to the notion of page-level, preventative quality filtering for brand safety (or “ad verification” as it has come to be known).

Last year, several of the ad verification technologies that followed SafeScreen to market in 2009 added preventative “blocking” capability to their original retrospective “reporting” offerings. We congratulate them on their progress, but while 2011 promises to be another action-packed year for digital media, we believe it will also bring new challenges for third-party verification providers. These challenges will stem from false positives and billing discrepancies, which add another layer of cost – in both cash and cycle time – to third-party verification, above and beyond the well-documented problems with page-level visibility due to iframes.

False positives cause friction in the context of retrospective reporting, but that friction reaches an entirely new level when ads are preemptively blocked. Look for this friction to generate increasing heat as blocking implementations become more common. Ditto for discrepancies, an issue primarily associated with blocking, since the verification provider must hold up the ad call while deciding whether the page content is safe. This additional hop in the serving chain introduces latency, which is a source of material ad serving discrepancies.

So start with a $0.10 CPM verification fee – already 10% of spend on a $1 CPM buy – then add 5% of spend for discrepancies, 1% for extra manual overhead, and another 0.5% for false positives, and it’s not much of a stretch to see 15% of spend going just to verification.

Stepping back for a moment, would we tolerate this in any other market?  For example, would we accept it if the GIA report for a diamond added 15% to the purchase price (whether we paid this fee to GIA or the jeweler did and passed it along)?  Would we accept a 15% SEC fee on each and every stock trade (whether or not our broker “paid it for us”)?  Apparently not, because current SEC fees on equity transactions are 1/800th of 1%.  At up to 15% of spend, verification fees are currently some 10,000 times higher than SEC fees.
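
For those who want to check the arithmetic, here’s the same math as a quick sketch. The CPM is an assumption of mine (the flat fee’s share of spend grows as CPMs fall); the other components are the ones cited above:

```python
# Rough arithmetic behind the "up to 15% of spend" claim. The CPM here is an
# assumption of mine; the component costs are the ones cited above.

cpm = 1.00                   # assumed media CPM for a low-CPM buy
verification_fee_cpm = 0.10  # third-party verification fee per thousand impressions

fee_share = verification_fee_cpm / cpm   # 10% of spend at a $1 CPM
discrepancy_share = 0.05                 # billing discrepancies
manual_overhead_share = 0.01             # extra manual reconciliation
false_positive_share = 0.005             # good impressions wrongly blocked

total_share = sum([fee_share, discrepancy_share,
                   manual_overhead_share, false_positive_share])
print(f"verification cost: {total_share:.1%} of spend")      # 16.5% here

sec_fee = 0.01 / 800   # SEC fee on equity trades: 1/800th of 1% of value
print(f"vs. SEC fee: {total_share / sec_fee:,.0f}x higher")  # ~13,000x
```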

It doesn’t have to be this way.

For example, SafeScreen is free, and because Brand.net controls both the filtering and the serving, the operational issues of false positives and latency aren’t left to the advertiser and publisher to resolve. This may appear shamelessly partisan, but I re-introduce the alternative architecture here primarily to make a broader point: I have been quite surprised that preventative brand safety technology hasn’t yet been incorporated on the server side by one or more of the major exchange platforms. By doing so, they could not only help market principals avoid latency and billing disputes, but would also be in a position to minimize refURL-related visibility issues.
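
For the architecturally inclined, here’s a hypothetical sketch of what that server-side integration could look like. Every name in it is invented – this is not any real exchange’s API – but it shows why filtering inside the serving path avoids the extra hop entirely:

```python
# Hypothetical sketch of preventative filtering on the server side: the
# exchange checks page safety before returning an ad, so there's no extra
# hop, no held ad call, and no blocking pixel. All names here are invented;
# this is not any real platform's API.

SAFETY_CACHE: dict[str, bool] = {}  # page URL -> pre-classified brand safety

def classify_page(url: str) -> bool:
    """Placeholder for page-level classification (crawl + content analysis)."""
    return True  # assume safe in this sketch

def serve_ad(page_url: str, candidate_ads: list[str]) -> str | None:
    """Serve an ad only on known-safe pages; never hold up the live call."""
    if page_url not in SAFETY_CACHE:
        # A real system would classify asynchronously and serve a house ad
        # in the meantime; we classify inline to keep the sketch short.
        SAFETY_CACHE[page_url] = classify_page(page_url)
    if not SAFETY_CACHE[page_url]:
        return None  # caller falls back to a PSA or house ad
    return candidate_ads[0] if candidate_ads else None
```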

It will be interesting to watch things shake out in 2011 and in particular whether the need for quality and efficiency drives towards consolidation (happy investors) or aggressive disruption of the emerging verification market (unhappy investors).

What do you think?

Love that Ad Exchanger

Great article by John Ebbert himself on Ad Exchanger today.

John draws very insightful parallels between the data ownership issues at the heart of the current trade dispute between American Airlines and several online travel agencies, and a similar data ownership conflict brewing in the online ad space.

I should have referenced this example in my recent post on the latter topic – I was aware of the American Airlines story, but didn’t make the connection.

Nice job, John!

An Inconvenient Truth

An interesting piece yesterday from Adam Cahill of Hill Holliday, with some great thinking on how to address the quality challenges posed by the evolving real-time digital media landscape.

As Adam correctly points out, for most Brand campaigns delivering results is about more than just protecting a Brand from objectionable content. That is very important (and we’re very good at it, by the way), but it’s only the beginning – “necessary, but not sufficient,” as they would say back at MIT. Media quality involves not just the text of a page, but the editorial environment in which it exists. That second bit makes this an even harder technical problem, particularly when you consider that quality is a page-level issue. So we’re currently left with the false choice between audience and content that Adam correctly suggests we reject.

Let’s push a bit further though.

Without taking too much license (Adam, please feel free to chime in), I think I can safely say that “Audiences vs. content” is essentially a compact way of describing the choice between two different operational approaches.  “Audiences” is shorthand for scalable, efficient, automated buying via RTB on exchanges.  “Content” is shorthand for manual, site-by-site buying.  In the rush for operational efficiency, “audience” buying has grown very quickly over the last 2 years while “content” buying has stagnated, resulting in well-documented challenges for many high-quality publishers.

Audience buying works great when fast, high-resolution feedback on a financial goal metric is possible.

For example, let’s assume Netflix’s goal for a big chunk of its marketing spend is profitable subscriber acquisition, and that it has conversion value and attribution models it trusts. Then, as long as it has a scalable way of keeping its ads away from damaging content (porn, profanity, etc.), it can pretty much ignore the editorial quality / “shades of goodness” issue Adam focuses on in his piece. The tie between editorial quality and performance will show up in the CPA numbers and cause money to move appropriately. So, for this block of DR money, Netflix can optimize based on its conversion metrics and be done.
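
Here’s a minimal sketch of that loop – invented numbers, and certainly not Netflix’s actual system – just to show how mechanical DR optimization can be once a trusted conversion metric exists:

```python
# Minimal sketch of the DR loop described above (illustrative, not Netflix's
# actual system): given trusted conversion data, shift budget toward
# placements with better cost-per-acquisition, after a hard safety screen.

placements = {
    # name: (spend $, conversions, brand_safe)
    "site_a": (10_000, 400, True),
    "site_b": (10_000, 250, True),
    "site_c": (10_000, 500, False),  # cheap conversions, but damaging content
}

eligible = {k: v for k, v in placements.items() if v[2]}  # safety first
cpa = {k: spend / conv for k, (spend, conv, _) in eligible.items()}

# Reallocate next period's budget in proportion to conversion efficiency (1/CPA).
budget = 30_000
weights = {k: 1 / v for k, v in cpa.items()}
total_w = sum(weights.values())
allocation = {k: budget * w / total_w for k, w in weights.items()}

for k in sorted(allocation, key=allocation.get, reverse=True):
    print(f"{k}: CPA ${cpa[k]:.2f} -> next budget ${allocation[k]:,.0f}")
```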

For a brand campaign, the situation is different.

Brand metrics (e.g., awareness, consideration, intent) take longer to measure, take longer to translate into financial value, and that financial value is most often (95% of the time) realized in an offline transaction. This means there is no fast, high-resolution feedback on a financial goal metric for Branding, yet the push for the enhanced efficiency of audience buying is no less acute. What to do?

Unfortunately, today’s “solution” most often involves substituting, for the meaningful data that is lacking, some mix of a) meaningless but conveniently accessible metrics like CTR, and b) nice-sounding audience descriptions (like “peanut butter bakers”). Once these substitutions are made, Brand campaigns can run smoothly through the DR-tuned “audience” infrastructure. The problem is that these simplifying substitutions require a huge leap of faith at best, and are very often detrimental to performance against the metrics that really matter.

The right way to leverage the new real-time online ad infrastructure for Branding is first to carefully test and measure the impact of different scalable, repeatable targeting criteria on *meaningful* metrics (like purchase intent or offline sales).

This process is conceptually similar to the Netflix example I detailed above; i.e., test, measure, optimize.  However, because Brand measurements involve longer time lags and lower resolution, there will need to be some manual effort applied to the process itself before intelligent instructions can be fed into the real-time execution machine.  The machine can’t do all the work itself.
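
As a sketch of what that process might look like – the criteria and numbers are invented, and the framing (matched exposed/control lift on purchase intent, weighted by reach) is a simplification of mine:

```python
# Sketch of the test-measure-optimize loop for Brand campaigns: each targeting
# criterion gets a matched exposed/control split, measured on a *meaningful*
# metric (here, surveyed purchase intent) rather than CTR. Invented numbers.

criteria = {
    # criterion: (exposed_intent_rate, control_intent_rate, reachable_impressions)
    "quality_news_sites": (0.24, 0.18, 50_000_000),
    "lookalike_segment":  (0.21, 0.19, 20_000_000),
    "high_ctr_inventory": (0.19, 0.19, 80_000_000),  # clicks, but no brand lift
}

for name, (exposed, control, reach) in criteria.items():
    lift = exposed - control
    # Scale matters too (the hook-vs-net point): weight lift by available reach.
    score = lift * reach
    print(f"{name}: lift {lift:+.1%}, reach {reach:,} -> score {score:,.0f}")

# Feed the winners into the real-time execution machine; re-test periodically,
# since these measurements lag and drift. The machine can't close this loop alone.
```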

It’s an inconvenient truth, but it’s the truth nonetheless.

Unfortunately, these “meaningful, but harder to get” metrics are too often not even gathered today, so the convenient lie persists.

Reading Adam’s article in this context, the richer standards for quality that he’s calling for essentially represent another set of scalable, repeatable targeting criteria added to the mix, one that he expects to have high correlation with results for brand marketers.  I wholeheartedly agree there would be a lot of value there.  We’ve certainly seen the impact of media quality in our own results.

But I also think it’s important to underscore the higher-level point raised here.  In order for the real-time digital ad infrastructure to be complete, it needs native support for branding that is sadly lacking today.

Privacy, Data Ownership and the Digital Media Value Chain

Regular readers of this page know that I have written multiple posts on the general topic of privacy concerns with online ad targeting.  More recently, I have highlighted a lower-profile, but equally important facet of the privacy discussion:  data ownership.

2010 was a turning point in the data ownership/privacy discussion.  So as 2011 kicks off, I thought it would be worthwhile spending a moment to tie these threads together in the context of the digital advertising value chain.

The value chain begins with users, who move from publisher to publisher and page to page consuming content that interests them. In most cases publishers provide this content free of charge in exchange for the opportunity to present ads to the users who consume it. Publishers then sell the ad inventory so created either directly to media agencies (who buy on behalf of advertisers) or through some mix of intermediaries, including SSPs, exchanges, DSPs and ad networks.

Increasingly, agencies are choosing to buy (and thus publishers – sometimes reluctantly – are choosing to sell) through intermediaries.  Therefore, the value chain for a typical advertising transaction is as follows: user, publisher, ad network or DSP, agency, advertiser.

Sitting in the middle of this value chain are ad networks and DSPs.  As has been discussed, it’s often difficult to assign a given company cleanly to one bucket or the other, but this link in the chain generally aggregates publisher ad inventory and agency demand, providing agencies with targeting and optimization capabilities and increasing operational efficiency for both publishers and agencies.

Here’s a typical example:

A network or DSP runs a campaign for an eyeliner product from a large CPG advertiser on a group of women’s content sites. The network/DSP collects data on which users it encountered on which sites or site sections (e.g., beauty tips, product reviews), who clicked on and/or engaged with the eyeliner ad, and on which publisher pages/sites they did so. Depending on how the campaign is configured and measured, the network/DSP may even collect some activity data from the advertiser’s site. The network/DSP then turns around and sells media based on that data – say, by a) retargeting those users on other sites, b) offering those users or look-alike users to other advertisers, or c) some combination of the two.

The activities this example illustrates are commonplace, but the legal permissions appropriate for this type of data use are almost never explicitly granted today. In fact, in many cases some or all of these activities are expressly prohibited. Just as users are becoming increasingly concerned about the extent to which data about them and their behavior is bought and sold without their knowledge, many advertisers and publishers would be surprised (shocked, even) at how their data is being used.
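
To illustrate the permissions layer that’s missing, here’s a toy sketch with a schema I made up for the purpose; the point is simply that the routine reuses described above would fail an explicit license check:

```python
# Toy sketch of the missing permissions layer: before a network/DSP reuses
# campaign data, check the use against licenses the data owners actually
# granted. The schema and uses here are hypothetical, for illustration only.

GRANTED_LICENSES = {
    # (data_owner, data_type): set of permitted uses
    ("advertiser_x", "campaign_performance"): {"optimize_own_campaigns"},
    ("publisher_y", "site_section_context"):  {"serve_ads_on_own_site"},
}

def use_permitted(owner: str, data_type: str, proposed_use: str) -> bool:
    return proposed_use in GRANTED_LICENSES.get((owner, data_type), set())

# The routine reuses described above would fail this check:
print(use_permitted("advertiser_x", "campaign_performance",
                    "optimize_competitor_campaigns"))  # False
print(use_permitted("publisher_y", "site_section_context",
                    "retarget_users_elsewhere"))       # False
```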

Which brings me back to the value chain.

Of all the entities in this value chain – user, publisher, network or DSP, agency, advertiser – intuitively, which entities have the strongest claims to ownership of the valuable data generated by an online ad campaign?  I would argue that the ends of the value chain – the user and the advertiser – have the strongest claim to ownership of this data, with other parties’ claims weakening dramatically from the ends to the middle.  Who has more rights in a user’s behavioral data than that user?  Who has more rights in an advertiser’s performance data than the advertiser who paid for the campaign?  It’s patently obvious.

Of course, these data owners may choose to license some of their inherent rights to others in exchange for something of value.  For example, a user may be OK with a publisher recording and using his browsing habits to deliver more targeted content or sell ads to subsidize free content.  Or an advertiser might be OK with their agency recording and using ad performance data to improve the return of their campaigns over time.

However, in full knowledge and understanding, would the average user really be OK with an ad network or DSP, with whom the user has no relationship, constructing a comprehensive view of her life (anonymous or not) and selling those details to the highest bidder?

The industry generally defends this practice by extolling the user value of relevant advertising.  This argument has been proven valid in Search advertising, but is a tenuous proposition at best in Display.  Regardless, each user should make the decision on the value of ad relevance vs. privacy, not the industry on behalf of all users.

Similarly, would the average advertiser be OK with an ad network or DSP using data about how its campaigns perform to improve performance of direct competitors’ campaigns?  I’m not sure what the industry’s “pro-data-owner” argument would even be in this case. Yet, again, this type of activity is routine in today’s digital ad market.

So I would argue that the privacy debate that rages today is fundamentally a reflection of the simple property rights issues these activities raise.  Users and advertisers at the ends of the value chain own the data, but that data is being used and monetized primarily by the players in the middle of the value chain.  The vast majority of this data use and monetization is unlicensed, representing a free ride on the gravy train for about half of the companies on LUMA Partners’ ubiquitous landscape chart.

The government appears to be leaning toward addressing this set of issues on behalf of users with a “do not track” list, but even without it – and many are skeptical of how quickly government can act – the private sector is innovating rapidly. New versions of browsers from Microsoft and Mozilla will ship with privacy protections built in. For those who don’t want to upgrade, browser extensions also provide private, user-controlled do-not-track capability. Another new technology, from Bynamite, takes a different approach, giving the user a way to control – and profit from – distribution of their data.

In defense of corporate data owners, companies like Krux Digital are providing tools to help publishers keep from getting their virtual pockets picked.  I am not aware of any company providing similar data security audit solutions for advertisers, but this is an essential technology representing a huge opportunity.  I am sure a solution is on the way.

The landscape is evolving quickly and it’s still unclear how it will all end up, but one thing is certain: the long-term solutions to the “privacy” issue will give data owners at each end of the value chain dramatically increased visibility into, control over, and stake in how their data is used by the players in the middle.

And as these capabilities allow the data gravy train to begin charging for tickets, you’re going to see fewer riders.

“Focus on branding helps display…close the gap”

Just a quick post to call attention to today’s article in eMarketer.

Display will grow significantly faster than Search over the next several years, with Display well on pace to be the largest online ad segment by the end of the decade. Maybe there’s a reason Google’s so focused on Display these days.

Of particular interest is the source of the strong growth:  brand budgets moving online.  Certainly not a surprise to us and (as they say in poker) there’s plenty more behind.

This is going to be a big pot.  I love it when a plan comes together.

Holy Grail Found!

My team forwarded me an article from yesterday’s Merc’ on the trend of large cap tech companies loading up on economics talent from academia.  It’s an interesting trend, if not exactly a new one.  Hal Varian’s high-profile hire by Google occurred in May 2002.  Yahoo! began building a team of economists around the same time.

The thing that really caught our eyes though, was this sentence:

For instance, Yahoo’s economists have been searching for a holy grail of advertising — tangible evidence that online ads actually make people buy stuff in a real-world store.

Not only does this evidence exist, but it is abundant.  Exhibit A is our fantastic SalesLink results.  For every $1 spent on Brand.net media in these campaigns, our clients drove >$4 in incremental sales (primarily in “real-world” offline stores).

Our most recent result – a campaign for a major national dog food brand – was even more impressive, driving >$1.5M in incremental offline sales on a $200K media investment.  That’s an additional 2.2 million pounds of dog food sold due to this campaign.  In case you’re wondering, 2.2 million pounds of dog food is enough to fill about 100 standard 20’ intermodal shipping containers.  If these containers were loaded on semi-trailers bound for a real-world destination, it would create a bumper-to-bumper caravan that would stretch more than half a mile.
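
For anyone checking the freight math, here’s the back-of-envelope, with payload and rig-length assumptions of my own rather than sourced figures:

```python
# Freight math behind the dog-food caravan, with payload and rig-length
# assumptions of my own (not sourced figures).

pounds_sold = 2_200_000
payload_per_container_lb = 22_000  # assumed load per 20' container
containers = pounds_sold / payload_per_container_lb
print(f"containers: {containers:.0f}")        # ~100

rig_length_ft = 70                 # assumed tractor + trailer, bumper to bumper
caravan_miles = containers * rig_length_ft / 5280
print(f"caravan: {caravan_miles:.1f} miles")  # ~1.3 -> comfortably over half a mile
```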

I am no economist, but this would seem to qualify as “tangible evidence”.

Ironically, the very same Nielsen measurement capability that proves this offline impact was originally created in partnership with Yahoo! and launched jointly…in 2003.

If the Merc’s reporting is correct, Yahoo! may want to add a few archaeologists to the mix.

A Housewarming for Procurement

There was a really interesting article in Ad Age last week that underscored the increasingly important role of corporate procurement, or strategic sourcing, groups in the media buying process.

This is not a new development by any means.  We first referenced this trend on this page last year and it began years before that.  However, the appointment of a formal ANA task force designed to improve the relationship between procurement groups and their partners – in this case marketing teams and agencies – shows how far this trend has gone.

The boxes are unpacked and the renovations have started.  The new neighbors are here to stay.

One might ask why such a task force is necessary.  One study, for example, presents a pretty stark picture of the need.  There’s clearly a significant perception gap between procurement groups, marketing groups and agencies; each has a very different perspective on the role and value add of the others.

Procurement teams see their role as constructive.  Their involvement helps improve the return on a critical corporate investment with a focus on increasing value rather than reducing cost, a view clearly expressed in this Q&A with several prominent procurement executives.

But according to the study results, agencies and marketing teams apparently do not unanimously agree.  Marketing teams and agencies clearly feel that procurement can err on the side of the numbers, ignoring important qualitative, creative or relationship factors.  There are also perceived skill gaps; only 14% of agency executives, for instance, said procurement “is knowledgeable in advertising/marketing”.

The ANA task force appears dedicated to ironing out these differences in perception to improve the efficiency and tranquility of the “neighborhood”, if you will.  Part of their remit is to help procurement teams get up the learning curve quickly on what for some is a new domain – marketing.  There will also no doubt be attention paid to focusing procurement efforts on areas where they can add the most value the fastest.  Getting points on the board quickly is a key ingredient to successful change management.

To that end, I think there are some helpful suggestions in another Ad Age article.  The author casts procurement as a tool to help marketers and agencies build working budgets by improving efficiency, accountability and control.  That’s a state I think all constituents would agree represents success.

Renegotiating agency compensation is one thing on which the three constituencies could reasonably have tension.  But there are many issues on which agreement should be fairly straightforward.  Would anyone argue against a lean, streamlined briefing process, or for travel when Webex would suffice?  Does one account really need a sprawl of different agencies?  These seem like relatively obvious areas where experienced procurement practitioners can leverage experience from other domains to deliver significant savings that could be channeled back into working media budgets.

Even more strategic would be leveraging procurement’s experience in sourcing other direct and indirect materials to drive improvements to the processes for planning and sourcing media – particularly in digital.  As the author mentions, this is an area of great potential due to its rapidly growing share of budget and the extreme complexity in today’s digital process/ecosystem.

I couldn’t agree more, but as I mentioned in my own Ad Age article, procurement teams need new technology to help them add this value. Accurate forecasts, meaningful delivery commitments, guaranteed quality – these are all indispensable tools that help procurement teams do what they do best in other domains. These capabilities are just as critical in digital media, but the solutions have been sorely lacking.

One author goes so far as to suggest we replace the entire process (procurement, agencies, marketing – apparently the whole kit and caboodle) with what would have to be the world’s gnarliest optimization model. When you find yourself comparing the complexity of your model to the models that (attempt to) predict the weather, you know you’re off to a bad start. But even if all the neat stuff described in this futuristic piece were possible today, you’d still need accurate forecasts of capacity and price, and the ability to deliver reliably against those forecasts, to get real value out of the magic box.

My personal advice is for the incumbents in the media value chain to welcome procurement.  In my experience, procurement teams are very much aligned with the objective of helping marketers and agencies build working budgets by improving efficiency, accountability and control.  These new partners are smart, focused, disciplined allies that understand that advertising combines art and science.  They are here to help.

We’re proud to offer MFP On Demand to all of our customers, particularly the client procurement teams whose needs have largely been ignored by the Silicon Valley (and ‘Alley) technology communities thus far.

Old habits die hard

There was another installment yesterday in the seemingly endless series of articles on the counter-productivity of CTR optimization.  This “billion-dollar mistake” has been covered on this page in depth and repeatedly.  With the ever-growing abundance of research on the topic you’d think that more buyers would stop focusing on CTR “optimization”.

It’s surprising how hard old habits die.

I thought this particular article was worth calling out because it suggests that the ultimate answer to the evil of optimizing for CTR is optimizing for conversions instead.  That’s right of course, if it’s possible (and assuming your attribution models are correct – not a trivial assumption).
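
A toy illustration, with invented numbers, of how CTR optimization and conversion optimization pick opposite winners on the same inventory:

```python
# Tiny illustration (invented numbers) of why CTR and conversion optimization
# pick different winners on the same inventory.

placements = {
    # name: (impressions, clicks, online_conversions)
    "flashy_gaming_site": (1_000_000, 9_000, 18),
    "recipe_site":        (1_000_000, 2_000, 40),
}

for name, (imps, clicks, convs) in placements.items():
    ctr = clicks / imps
    conv_rate = convs / imps
    print(f"{name}: CTR {ctr:.2%}, conversions/imp {conv_rate:.4%}")

# CTR optimization buys the gaming site; conversion optimization buys the
# recipe site. Same inventory, opposite conclusions.
```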

But what about the vast majority of valuable commercial activity that can’t be cleanly tracked with a pixel?  Like, for example, the 95% of retail sales that occur offline?  As I discussed in a recent post on the trillion-dollar O2O opportunity, this is a huge gap in the thinking behind and the capabilities of most online advertising solutions today.

Brand.net’s ground-breaking Media Futures Platform was designed specifically to attack this opportunity, driving profitable offline sales measurably and scalably.

These are the results that really click for our customers.