Consolidation curve?

An insightful post from investo-blogger Jerry Neumann yesterday on Ad Exchanger.  I like what he’s thinking about in the post and agree with much of it, but there’s an important meta-point that he didn’t mention.

Jerry’s first point was that there is a huge shortage of experienced talent in the online ad industry and what does exist is primarily clustered within the myriad tech vendors in the ecosystem.  Agree.  His second point was that even as the exchange ecosystem (which at its core promises increased efficiency through a common set of pipes) grows, we see continued fragmentation of supply / demand relationships.  Agree.

But I would also argue that these two observations are causally related.  The reason things continue to fragment is largely that there are too many tech companies making too many pitches to too many media buyers and sellers who are still coming up the learning curve.  Tech company convinces still-learning buyer or seller to participate in a “private market” promising some advantage in terms of functionality or monetization.  Careful A/B testing is hard to do without committing even more of their limited time/resources (hence it’s rarely done at all).  Whatever advantage was expected may or (more likely) may not actually be delivered, but such decisions are infrequently revisited.  As a practical matter, once the sale is made the arrangement has tremendous inertia, regardless of relative value add.

So Jerry’s “thin exchange standards” may well become necessary, but I think that would have much more to do with folks not thoughtfully using the tools that already exist rather than a “real” need.

“Private markets” are rarely the most efficient alternative.  The more participants in the market the better, assuming careful thought is given to structure and business rules.  I saw frequent examples of the private market dynamic in my time at Yahoo!.  Some enterprising salesperson would convince a content group GM to dedicate a placement to a particular advertiser.  Such arrangements almost always under-monetized relative to an open, competitive market for the same placement.  There was just an article last week in the ‘Journal offering up some more evidence from Goldman’s experiment with private markets.    Or coming at it from another angle, have you ever tried to sell anything locally on craigslist, failed, then posted on eBay?  eBay’s national market with huge liquidity almost always closes the deal at a fair price.

The faster we collectively get up the learning curve, the faster things will consolidate so we can actually realize some of the efficiency gains we’ve all been chasing.

Adnetik adds nets

A couple of interesting articles by/about Adnetik earlier this week.

The first, written by Adnetik CEO Ed Montes (formerly Managing Director for Havas Digital, North America), argues against the over-reliance on last click / last view attribution – what Montes terms a “false positive”.   Microsoft has published extensive research on this topic as well, but last click / last view is still all too standard in the world of online advertising.

As Montes lays out, the bulk of online ad infrastructure is designed and tuned around last click / last view, leading the industry to “throw smarter money away”.  This itself is bad enough, but it’s even worse when you consider that in many cases the target of all this “optimization” is an online metric that has very little relationship to the ultimate objective of offline sales.

So for example, let’s say you’re a big CPG company and you invest the time and brainpower necessary to move beyond last click / last view attribution.  Montes’ point is that this change in attribution may drive some fundamental shifts in your media mix, which will make you much more efficient in driving online “conversions”.  That’s great if these conversions are meaningful, but if you’re selling toothpaste doesn’t the really meaningful “conversion” happen at an offline point of sale 95% of the time?  Shouldn’t you be spending 95% of your time figuring out better ways to drive those offline conversions?

The second article, an AdWeek editorial piece, presents another interesting angle.  Adnetik is taking the position that last click / last view is distorting publisher economics as well as advertiser economics – essentially undervaluing premium content (I agree).  They hope to address this issue and ever-present privacy concerns using a targeting approach that focuses on quality and context rather than behavioral micro-segmentation.

Adnetik’s approach here is similar to Brand.net’s (although we add demographics and geographics), and in addition to reducing privacy concerns it also enables dramatically increased scalability.  So we applaud them for moving the dialog along.

It’s good to see Adnetik adding some more nets to the fleet!

A fishing fleet without nets

comScore released another solid piece of work yesterday.

As readers of this page will remember, comScore has been outspoken on the failings of the ubiquitous click as a metric. Some of that in this report, but much more as well.  From my perspective, the most interesting thread in the report ties together a couple of their numbered points.

First, as comScore correctly points out, cookie-deletion creates real problems for cookie-based targeting and measurement approaches. comScore data shows that 30% of all US internet users delete their cookies monthly or more often. Furthermore, many computers see routine use by multiple users. These factors create “noise” in targeting that often results in much lower true composition against the target than is claimed or described. Consumers’ ever-growing concern about privacy will only make this worse. Probably much worse. More evidence (if any was needed) that measurement of campaign impacts against meaningful metrics is critical – especially when a targeting approach sounds like magic.
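To make that “noise” concrete, here’s a minimal back-of-envelope sketch – mine, not comScore’s model – of how churn and shared machines drag a claimed composition number down toward random.  Every number except the 30% monthly deletion figure is a labeled assumption for illustration:

```python
# Back-of-envelope sketch of how cookie churn and shared machines erode
# true in-target composition. All rates except the ~30% monthly deletion
# figure (comScore's) are hypothetical assumptions for illustration.

def effective_composition(claimed_rate,
                          monthly_deletion=0.30,   # comScore: ~30% of US users delete monthly+
                          data_age_months=2.0,     # assumed average age of the targeting cookie
                          shared_machine=0.25,     # assumed share of impressions on multi-user PCs
                          baseline_rate=0.10):     # in-target rate of a random, untargeted impression
    # Crudely treat deletion as independent month to month.
    survival = (1 - monthly_deletion) ** data_age_months
    # On a shared machine, assume the ad reaches the profiled person half the time.
    match = survival * (1 - shared_machine * 0.5)
    # A non-matching cookie is effectively a random impression.
    return claimed_rate * match + baseline_rate * (1 - match)

# Under these assumptions, a buy claiming 80% in-target composition
# actually delivers roughly 40%.
print(f"{effective_composition(0.80):.0%}")
```

The exact numbers don’t matter; the shape of the problem does.  Even generous assumptions cut the claimed composition roughly in half.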

Secondly, comScore highlights the tradeoff between targeting and scale. This tradeoff is intuitively obvious, but often overlooked. Equally often, credulous buyers willingly suspend disbelief in favor of a nice-sounding pitch.

Consider the example of one of our clients, with a large online footprint of some 25 million accounts.  Of these 25M, this client has actionable cookies on <5M, with data of varying depths and value (and all of these cookies, of course, are subject to the churn challenge presented above). So this client can (and does) employ the most sophisticated targeting and re-targeting approaches on all of these 5M customers. But what about the other 20M customers they can’t talk to this way?  What about the 100M adults that aren’t customers yet? 30-spots?

For online advertising to grow to its full potential (and necessary size as “offline” media erodes), we must more fully develop a broader approach to complement our myriad fine targeting approaches.

Sometimes it is best to fish with a hook, other times with a net. As an industry we need a good supply of both.

Look for more on this topic in subsequent posts, but I wanted to make sure to call out comScore’s work while it was fresh.  Worth a read.

One Platform to Rule Them All…

Just a quick post to thank Adam Cahill for his shout out on ClickZ yesterday.  It has been great to see the market get behind the futures model as a necessary complement to spot.

Adam also raises an interesting point about one vs. many DSPs.  Today it’s clearly necessary to use at least 2 to get full-funnel, futures (Brand.net) & spot (others) capability.  But I think we’ll see this pretty quickly follow a path towards increased efficiency, i.e. towards a single unified platform that enables agencies and clients to manage spend against any campaign, any objective, using a common interface.

It will no doubt be an interesting road to get there, but it’s just a matter of time.

Happy Birthday, SafeScreen!

For any of you who may have missed our press release yesterday, SafeScreen, the industry’s first preventative page-level brand safety solution, turned 2 years old earlier this month.  As we proudly celebrate this milestone, I wanted to take a moment to reflect on market developments since Brand.net introduced the digital media market to the notion of page-level, preventative quality filtering for brand safety (or “ad verification” as it has come to be known).

Last year, several of the ad verification technologies that followed SafeScreen to market in 2009 added preventative “blocking” capability to their original retrospective “reporting” offerings.   We congratulate them on their progress, but while 2011 promises to be another action-packed year for digital media, we believe it will also bring some new challenges for third-party verification providers.  These new challenges will stem from false positives and billing discrepancies, which add another layer of cost, in terms of both cash and cycle time, to third-party verification (above and beyond the well-documented problems with page-level visibility due to iframes).

False positives cause friction in the context of retrospective reporting, but that friction goes to an entirely new level when ads are preemptively blocked.  Look for this friction to generate increasing heat as blocking implementations become more common.  Ditto for discrepancies, an issue primarily associated with blocking, as the verification provider must actually hold up the ad call while deciding whether or not the page content is safe.  This additional hop in the serving chain introduces latency, which is a source of material ad serving discrepancies.

So add 5% of spend for discrepancies to the $.10 verification fee, 1% for extra manual overhead and another 0.5% for false positives, and it’s not too much of a stretch to see 15% of spend going just to verification.
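For the arithmetic-inclined, here’s the rough math.  It reads the $.10 fee as a CPM fee and posits a hypothetical $1.20 CPM for the underlying inventory; both the reading and the price are my assumptions for illustration:

```python
# Back-of-envelope: verification cost as a share of media spend.
# The $1.20 media CPM is a hypothetical figure for illustration.
media_cpm = 1.20             # assumed price of the underlying inventory, $ per 1,000 imps
verification_fee_cpm = 0.10  # the $.10 fee, read as a CPM fee

fee_share       = verification_fee_cpm / media_cpm  # ~8.3% of spend
discrepancies   = 0.050
manual_overhead = 0.010
false_positives = 0.005

total = fee_share + discrepancies + manual_overhead + false_positives
print(f"{total:.1%}")        # ~14.8% of spend -- call it 15%
```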

Stepping back for a moment, would we tolerate this in any other market?  For example, would we accept it if the GIA report for a diamond added 15% to the purchase price (whether we paid this fee to GIA or the jeweler did and passed it along)?  Would we accept a 15% SEC fee on each and every stock trade (whether or not our broker “paid it for us”)?  Apparently not, because current SEC fees on equity transactions are 1/800th of 1%.  At up to 15% of spend, verification fees are currently some 10,000 times higher than SEC fees.

It doesn’t have to be this way.

For example, SafeScreen is free, and because Brand.net controls both the filtering and the serving, the operational issues of false positives and latency aren’t left to the advertiser and publisher to resolve.  This may appear shamelessly partisan, but I re-introduce the alternative architecture here primarily to make a broader point:  I have been quite surprised that preventative brand safety technology hasn’t yet been incorporated on the server side by one or more of the major exchange platforms.  In doing so they could not only help market principals avoid latency and billing disputes, but would also be in a position to minimize refURL-related visibility issues.

It will be interesting to watch things shake out in 2011 and in particular whether the need for quality and efficiency drives towards consolidation (happy investors) or aggressive disruption of the emerging verification market (unhappy investors).

What do you think?

Yet more on CTR’s failure

Nice piece late last week by MediaMind’s Ariel Geifman.

The article provided a thoughtful, compact summary of a wide variety of different sources and research approaches that all drive to the same conclusion:  CTR is a convenient metric, but an inadequate one at best and most often a misleading one.  Obviously a topic on which we have written on multiple occasions, but I liked Ariel’s treatment and he wove in a few interesting angles I hadn’t seen before.

Worth a read.

Love that Ad Exchanger

Great article by John Ebbert himself on Ad Exchanger today.

John draws very insightful parallels between the data ownership issues that are an important factor in the current trade dispute between American Airlines and several online travel agencies and a similar data ownership conflict brewing in the online ad space.

I should have referenced this example in my recent post on the latter topic.  It’s a great example.  I was aware of the American story, but didn’t make the connection.

Nice job, John!

An Inconvenient Truth

An interesting piece yesterday from Adam Cahill of Hill Holliday, with some great thinking on how to address the quality challenges posed by the evolving real-time digital media landscape.

As Adam correctly points out, for most Brand campaigns delivering results is about more than just protecting a Brand from objectionable content.  That itself is very important (and we’re very good at it, by the way), but it’s only the beginning – “necessary, but not sufficient” as they would say back at MIT.  Media quality involves not just the text of a page, but the editorial environment in which it exists.  That second bit makes this an even harder technical problem, particularly when you consider that quality is a page-level issue.  So we’re currently left with the false choice between audience and content that Adam correctly suggests we reject.

Let’s push a bit further though.

Without taking too much license (Adam, please feel free to chime in), I think I can safely say that “Audiences vs. content” is essentially a compact way of describing the choice between two different operational approaches.  “Audiences” is shorthand for scalable, efficient, automated buying via RTB on exchanges.  “Content” is shorthand for manual, site-by-site buying.  In the rush for operational efficiency, “audience” buying has grown very quickly over the last 2 years while “content” buying has stagnated, resulting in well-documented challenges for many high-quality publishers.

Audience buying works great when fast, high-resolution feedback on a financial goal metric is possible.

For example, let’s assume Netflix’s goal for a big chunk of its marketing spend is profitable subscriber acquisition and they have conversion value and attribution models that they trust.  Then, just as long as they have a scalable way of keeping their ads away from damaging content (porn, profanity, etc), they can pretty much ignore the editorial quality / “shades of goodness” issue Adam focuses on in his piece.  The tie between editorial quality and performance will show up in the CPA numbers and cause money to move appropriately.   So, for this block of DR money, Netflix can optimize based on their conversion metrics and they’re done.
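To sketch what that fast feedback loop enables (hypothetical names and numbers throughout – this is not Netflix’s actual system), a DR optimizer can simply shift budget toward the sources converting most efficiently:

```python
# Minimal sketch of a DR-style optimization loop: reallocate budget
# toward sources with the lowest cost per acquisition (CPA).
# All source names and numbers are hypothetical.

def reallocate(budget, spend_by_source, conversions_by_source):
    """Split next period's budget in proportion to conversions per dollar."""
    efficiency = {
        src: conversions_by_source.get(src, 0) / spend
        for src, spend in spend_by_source.items() if spend > 0
    }
    total = sum(efficiency.values())
    return {src: budget * eff / total for src, eff in efficiency.items()}

spend = {"site_a": 10_000, "site_b": 10_000, "site_c": 10_000}
convs = {"site_a": 500, "site_b": 200, "site_c": 50}  # CPAs: $20, $50, $200

# Money flows toward site_a ($20,000) and away from site_c ($2,000).
# Editorial quality shows up in the CPA numbers automatically; no
# explicit "quality" signal is needed.
print(reallocate(30_000, spend, convs))
```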

For a brand campaign, the situation is different.

Brand metrics (e.g., awareness, consideration, intent) take longer to measure and longer to translate into financial value, and that financial value is most often (95% of the time) realized in an offline transaction.  This means there is no fast, high-resolution feedback on a financial goal metric for Branding, but the push for enhanced efficiency of audience buying is no less acute.  What to do?

Unfortunately, today’s “solution” most often involves substituting some mix of a) meaningless but conveniently accessible metrics like CTR and b) nice-sounding audience descriptions (like “peanut butter bakers”) for the meaningful data that is lacking.   Once these substitutions are made, Brand campaigns can run smoothly through the DR-tuned “audience” infrastructure.  The problem is that these simplifying substitutions require a huge leap of faith at best and are very often detrimental to performance against the metrics that really matter.

The right way to leverage the new real-time online ad infrastructure for Branding is first to carefully test and measure the impact of different scalable, repeatable targeting criteria on *meaningful* metrics (like purchase intent or offline sales).

This process is conceptually similar to the Netflix example I detailed above; i.e., test, measure, optimize.  However, because Brand measurements involve longer time lags and lower resolution, there will need to be some manual effort applied to the process itself before intelligent instructions can be fed into the real-time execution machine.  The machine can’t do all the work itself.

It’s an inconvenient truth, but it’s the truth nonetheless.

Unfortunately, these “meaningful, but harder to get” metrics are too often not even gathered today, so the convenient lie persists.

Reading Adam’s article in this context, the richer standards for quality that he’s calling for essentially represent another set of scalable, repeatable targeting criteria added to the mix, one that he expects to have high correlation with results for brand marketers.  I wholeheartedly agree there would be a lot of value there.  We’ve certainly seen the impact of media quality in our own results.

But I also think it’s important to underscore the higher-level point raised here.  In order for the real-time digital ad infrastructure to be complete, it needs native support for branding that is sadly lacking today.

Privacy, Data Ownership and the Digital Media Value Chain

Regular readers of this page know that I have written multiple posts on the general topic of privacy concerns with online ad targeting.  More recently, I have highlighted a lower-profile, but equally important facet of the privacy discussion:  data ownership.

2010 was a turning point in the data ownership/privacy discussion.  So as 2011 kicks off, I thought it would be worthwhile spending a moment to tie these threads together in the context of the digital advertising value chain.

The value chain begins with users, who move from publisher to publisher and page to page consuming content that interests them.  In most cases publishers provide this content free of charge in exchange for the opportunity to present ads to the users who consume it.  Publishers then sell the ad inventory so created directly to media agencies (who buy on behalf of advertisers) or through some mix of intermediaries including SSPs, exchanges, DSPs and ad networks.

Increasingly, agencies are choosing to buy (and thus publishers – sometimes reluctantly – are choosing to sell) through intermediaries.  Therefore, the value chain for a typical advertising transaction is as follows: user, publisher, ad network or DSP, agency, advertiser.

Sitting in the middle of this value chain are ad networks and DSPs.  As has been discussed, it’s often difficult to assign a given company cleanly to one bucket or the other, but this link in the chain generally aggregates publisher ad inventory and agency demand, providing agencies with targeting and optimization capabilities and increasing operational efficiency for both publishers and agencies.

Here’s a typical example:

A network or DSP runs a campaign for an eyeliner product from a large CPG advertiser on a group of women’s content sites.  The network/DSP collects data on which users it encountered on which sites or site sections (e.g., beauty tips, product reviews), who clicked on and/or engaged with the eyeliner ad and on which publisher pages/sites they did so.  Depending on how the campaign is configured and measured, the network/DSP may even collect some activity data from the advertiser’s site.  The network/DSP then turns around and sells media based on that data – say by a) retargeting those users on other sites, b) offering those users or look-alike users to other advertisers or c) some combination of both.
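To make the data flow concrete, the kind of record a network/DSP might log looks something like this.  The schema is purely hypothetical and only meant to illustrate what gets captured and reused:

```python
# Hypothetical example of the kind of event record a network/DSP
# might log and later reuse for retargeting or look-alike modeling.
event = {
    "user_id": "cookie:abc123",           # the network's own cookie ID
    "publisher": "example-beauty-site.com",
    "section": "beauty-tips",
    "advertiser": "cpg-eyeliner-brand",
    "action": "click",                    # impression / click / engagement
    "timestamp": "2011-01-15T14:32:00Z",
}
# Note that nothing in the record itself encodes who is licensed to
# reuse it -- which is exactly the ownership gap described below.
```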

The activities this example illustrates are commonplace, but the appropriate legal permissions for this type of data use are almost never explicitly granted today.  In fact, in many cases some or all of these activities are expressly prohibited.  Like users who are becoming increasingly concerned about the extent to which data about them and their behavior has been bought and sold without their knowledge, many advertisers and publishers would be surprised (shocked) at how their data is being used.

Which brings me back to the value chain.

Of all the entities in this value chain – user, publisher, network or DSP, agency, advertiser – intuitively, which entities have the strongest claims to ownership of the valuable data generated by an online ad campaign?  I would argue that the ends of the value chain – the user and the advertiser – have the strongest claim to ownership of this data, with other parties’ claims weakening dramatically from the ends to the middle.  Who has more rights in a user’s behavioral data than that user?  Who has more rights in an advertiser’s performance data than the advertiser who paid for the campaign?  It’s patently obvious.

Of course, these data owners may choose to license some of their inherent rights to others in exchange for something of value.  For example, a user may be OK with a publisher recording and using his browsing habits to deliver more targeted content or sell ads to subsidize free content.  Or an advertiser might be OK with their agency recording and using ad performance data to improve the return of their campaigns over time.

However, in full knowledge and understanding, would the average user really be OK with an ad network or DSP, with whom the user has no relationship, constructing a comprehensive view of her life (anonymous or not) and selling those details to the highest bidder?

The industry generally defends this practice by extolling the user value of relevant advertising.  This argument has been proven valid in Search advertising, but is a tenuous proposition at best in Display.  Regardless, each user should make the decision on the value of ad relevance vs. privacy, not the industry on behalf of all users.

Similarly, would the average advertiser be OK with an ad network or DSP using data about how its campaigns perform to improve performance of direct competitors’ campaigns?  I’m not sure what the industry’s “pro-data-owner” argument would even be in this case. Yet, again, this type of activity is routine in today’s digital ad market.

So I would argue that the privacy debate that rages today is fundamentally a reflection of the simple property rights issues these activities raise.  Users and advertisers at the ends of the value chain own the data, but that data is being used and monetized primarily by the players in the middle of the value chain.  The vast majority of this data use and monetization is unlicensed, representing a free ride on the gravy train for about half of the companies on LUMA Partners’ ubiquitous landscape chart.

The government appears to be leaning towards addressing this set of issues on behalf of users with a “do not track” list, but even without do not track – as many are skeptical of the speed of government to act – the private sector is rapidly innovating.  New versions of browsers from Microsoft and Mozilla will ship with privacy protections built-in.  For those who don’t want to upgrade, browser extensions are also providing private, user-controlled do not track capability.  Another new technology, from Bynamite, is taking a different approach by providing the user a way to control – and profit from – distribution of their data.

In defense of corporate data owners, companies like Krux Digital are providing tools to help publishers keep from getting their virtual pockets picked.  I am not aware of any company providing similar data security audit solutions for advertisers, but this is an essential technology representing a huge opportunity.  I am sure a solution is on the way.

The landscape is evolving quickly and it’s still unclear how it will all end up, but one thing is certain.  The long term solutions to the “privacy” issue will give data owners at each end of the value chain dramatically increased visibility of, control over and stake in how their data is used by players in the middle.

And as these capabilities allow the data gravy train to begin charging for tickets, you’re going to see fewer riders.

“Focus on branding helps display…close the gap”

Just a quick post to call attention to today’s article in eMarketer.

Display will grow significantly faster than Search over the next several years, with Display well on pace to be the largest online ad segment by the end of the decade.  Maybe there’s a reason Google’s so focused on display these days.

Of particular interest is the source of the strong growth:  brand budgets moving online.  Certainly not a surprise to us and (as they say in poker) there’s plenty more behind.

This is going to be a big pot.  I love it when a plan comes together.
