Go Neal!

Some very interesting tidbits in Neal Mohan’s post on the Google blog yesterday.

First is that Google sees display inventory per user declining 25% by 2015. This is a pretty interesting prediction given the conventional "wisdom" that online ad inventory is unlimited. That has never really been true (provided one cares about quality of placement), and if Neal's correct, it will become even less true over time as the industry collectively realizes there is too much low-quality inventory out there and it isn't doing anyone in the ecosystem – advertisers, publishers or users – any good.

Combine this prediction with the forecast growth in display spend over the next decade and it’s pretty clear we’re heading for a much more constrained inventory landscape.  As these constraints start to bite, it will be interesting to see what happens to today’s auction-driven RTB infrastructure where delivery is not guaranteed.

Expect some serious turmoil as delivery rates drop and volatility increases. Maybe that's why Google has begun work on a reserved inventory product.

I also want to amplify Neal's point that 35% of campaigns will be measured on metrics other than clicks and conversions by 2015, particularly offline sales. Those campaigns comprise the orange box on this graphic – some $6B in spend last year. So Neal is saying the orange share of online spend will grow from just over 20% in 2010 to 35% in 2015.

That prediction certainly squares with the qualitative discussion in the eMarketer article I cited above, and if you combine the spend estimates there with Neal's 35% share forecast, you end up with an online brand advertising market of ~$15B in 2015. Using Barclays' market sizing estimates for the base, you end up with ~$18B. So the online brand advertising market will more or less triple by 2015.
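
For those who like to check the arithmetic, here's the back-of-envelope version. The implied base totals below are my own derivation from the shares and figures above, not numbers quoted from eMarketer or Barclays:

```python
# Back-of-envelope check on the brand-share math above. The base-year totals here
# are implied from the figures in the text, not quoted from eMarketer or Barclays.
brand_spend_2010 = 6e9   # ~$6B measured on brand metrics last year
share_2010 = 0.22        # "just over 20%" of online spend
share_2015 = 0.35        # Neal's forecast share for 2015

implied_total_2010 = brand_spend_2010 / share_2010   # ~$27B implied 2010 base

# The two 2015 brand-market estimates cited above (~$15B and ~$18B):
for brand_2015 in (15e9, 18e9):
    implied_total_2015 = brand_2015 / share_2015
    growth = brand_2015 / brand_spend_2010
    print(f"${brand_2015 / 1e9:.0f}B brand market implies ~${implied_total_2015 / 1e9:.0f}B "
          f"total online spend and {growth:.1f}x growth vs. 2010")
```

Either way you slice it, the brand piece grows roughly 2.5x to 3x – "more or less triple."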

That’s great news for the ecosystem as a whole and particularly for the relatively few of us delivering targeted solutions for Brand advertisers.

Just because you can, doesn’t mean you should

Today, I wanted to highlight and echo some recent commentary from two very smart online ad veterans, Dave Morgan and Doug Weaver.

Their thesis in a nutshell is that the online advertising ecosystem has pursued an arms race of targeting upon targeting to the point that it has confused brand marketers and backed itself into a DR-only corner.  When it comes to hyper-targeting, as Dave Morgan put it, “Just because you can, doesn’t mean you should.”

I completely agree and would encourage readers to visit the linked articles – there’s a lot more there to think about.

The market’s apparent addiction to overtargeting is especially puzzling given the performance data.  I have obviously written on this topic myself quite extensively over the years and just last week a new piece of research came out of MIT, with yet more evidence for the prosecution.

The MIT study, using data from agency giant Havas, found that highly personalized creative underperformed generic creative except for users who were already well down the funnel. I understand that creative (this study) is different from media targeting (the commentary above), but the two are opposite sides of the same coin and this result is another point on the same line; i.e., overtargeting is just that.

Or, as MIT researcher Catherine Tucker put it, “just because you have the data to personalize, it doesn’t mean you always should”.

More on GDN Reserve

Ad Exchanger today published more commentary on Google's GDN Reserve announcement from last week.

Views from senior execs at VivaKi (Publicis) and Group M (WPP) rounded out the Google commentary that was posted Monday. John also published some additional perspective from Elizabeth and others.

A few things in the various posts caught my eye.  One was simply the difference in perspective between Publicis and WPP – clearly two different strategies at work there.  Another was the refinement in Google’s messaging between the earnings call last week and Monday’s spokesperson (PR) commentary.  Seemed like a careful balance between agency and advertiser in the messaging this time around (what frienemy?).  I also thought VivaKi’s mention of Yahoo! in this context was interesting.  Y! was once the dominant global player in online branding and reserved display marketplaces.  I would have expected more from them sooner, but it’s nice to see the old alma mater at least in the game.  If there’s any road back for Y!, this is it.

I’ll close with a quote from VivaKi’s Curt Hecht:

“While our spending continues to grow in the spot marketplace, clients and publishers still desire the controls and forecasting offered in a guaranteed market around context, price and performance.”

I couldn’t have said it better myself.

This being my 100th sermon from the Brand.net pulpit, it’s nice to see the gospel is spreading.

Hallelujah!

Google launches Display Network Reserve

John Ebbert of AdExchanger and we here at Brand.net noticed the same thing on Google’s earnings call last night – the launch of Google Display Network Reserve, “which gives advertisers the opportunity to buy premium inventory on a guaranteed basis.”

This is interesting for a couple reasons.

First, it’s a clear signal that Google understands where the growth will come from in the display market:  large brands moving traditional media budgets online to follow their customers.  eMarketer laid out the case in December and it’s clear Google understands and agrees; Google launched the guaranteed product because “it’s how brand advertisers are telling us they want to buy inventory.”  We’ve been hearing the same thing loud and clear.

Second, to anyone who still had any doubts about Google's commitment to or progress in Display: Wake Up. Since acquiring DoubleClick four years ago this week, Google has moved in a fast, focused way to lock up all the key pieces of the transactional infrastructure for Display. They haven't been shy about it, especially over the past year, but I still don't think the market fully appreciates how close they are to the endgame: extending the hammerlock they have on Search to all elements of the Display market. Scalable, efficient forward buying is the last piece of the puzzle. It has been Google's soft underbelly, but they are clearly doing sit-ups like crazy.

It’s crunch time.  AOL, Microsoft and Yahoo!:  If you’ve got a second wind in you, now’s the time.  Accenture, Adobe, Akamai, Apple, Cisco, IBM, Oracle and others:  If you’re serious about bringing your expertise in enterprise class infrastructure and service to the huge advertising market, your opportunity is slipping away.  And Agencies:  I agree with John’s emphasis on the particular phrasing of the announcement.  Don’t let frienemy Google steal a march on you.  If they take the Brand business client direct, that’s a big problem.  Microsoft has Windows and Office to fall back on.  You don’t.

It’s amazing how fast this market is moving.

Dogs eating the dog food

For those of you who missed our joint presentation with Del Monte at Ad Tech San Francisco yesterday AM, here are the slides Doug Chavez and I presented.

The top line is that $200K of Brand.net-managed media drove $1.5M of incremental offline sales for Del Monte’s Kibbles’n’Bits brand.   That’s an additional 2.2 million pounds of dog food sold due to this campaign alone.  That much dog food would fill a bumper-to-bumper caravan of semi-trailers stretching from the Empire State Building to the Bryant Park Hotel – more than a half mile.
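
As a rough sanity check on the caravan image, here's the math; the payload and truck-length figures are my own assumptions, not numbers from the presentation:

```python
# Rough check on the caravan claim, using assumed (not sourced) trucking figures.
pounds_sold = 2.2e6      # incremental pounds of dog food from the campaign
payload_lbs = 44_000     # assumed legal payload per dry-van trailer
rig_length_ft = 72       # assumed bumper-to-bumper length per tractor-trailer

trailers = pounds_sold / payload_lbs            # ~50 trailers
caravan_miles = trailers * rig_length_ft / 5280
print(f"~{trailers:.0f} trailers, ~{caravan_miles:.2f} miles bumper to bumper")
```

About 50 trailers and roughly two-thirds of a mile of road – comfortably "more than a half mile."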

Tangible evidence indeed that Brand.net’s Media Futures Platform delivers tremendous results for many of the world’s largest marketers.

Consolidation curve?

An insightful post from investo-blogger Jerry Neumann yesterday on Ad Exchanger.  I like what he’s thinking about in the post and agree with much of it, but there’s an important meta-point that he didn’t mention.

Jerry's first point was that there is a huge shortage of experienced talent in the online ad industry, and what does exist is primarily clustered within the myriad tech vendors in the ecosystem. Agree. His second point was that even as the exchange ecosystem (which at its core promises increased efficiency through a common set of pipes) grows, we see continued fragmentation of supply/demand relationships. Agree.

But I would also argue that these two observations are causally related. The reason things continue to fragment is largely that there are too many tech companies making too many pitches to too many media buyers and sellers who are still coming up the learning curve. Tech company convinces still-learning buyer or seller to participate in a "private market" promising some advantage in terms of functionality or monetization. Careful A/B testing is hard to do without committing even more of their limited time and resources (hence it's rarely done at all). Whatever advantage was expected may or (more likely) may not actually be delivered, but such decisions are infrequently revisited. As a practical matter, once the sale is made the arrangement has tremendous inertia, regardless of relative value add.

So Jerry’s “thin exchange standards” may well become necessary, but I think that would have much more to do with folks not thoughtfully using the tools that already exist rather than a “real” need.

“Private markets” are rarely the most efficient alternative.  The more participants in the market the better, assuming careful thought is given to structure and business rules.  I saw frequent examples of the private market dynamic in my time at Yahoo!.  Some enterprising salesperson would convince a content group GM to dedicate a placement to a particular advertiser.  Such arrangements almost always under-monetized relative to an open, competitive market for the same placement.  There was just an article last week in the ‘Journal offering up some more evidence from Goldman’s experiment with private markets.    Or coming at it from another angle, have you ever tried to sell anything locally on craigslist, failed, then posted on eBay?  eBay’s national market with huge liquidity almost always closes the deal at a fair price.

The faster we collectively get up the learning curve, the faster things will consolidate so we can actually realize some of the efficiency gains we’ve all been chasing.

Adnetik adds nets

A couple interesting articles by/about Adnetik earlier this week.

The first, written by Adnetik CEO Ed Montes (formerly Managing Director for Havas Digital, North America), argues against over-reliance on last click / last view attribution – what Montes terms a "false positive". Microsoft has published extensive research on this topic as well, but last click / last view is still all too standard in the world of online advertising.

As Montes lays out, the bulk of online ad infrastructure is designed and tuned around last click / last view, leading the industry to "throw smarter money away". This is bad enough on its own, but it's even worse when you consider that in many cases the target of all this "optimization" is an online metric that has very little relationship to the ultimate objective of offline sales.

So for example, let’s say you’re a big CPG company and you invest the time and brainpower necessary to move beyond last click / last view attribution.  Montes’ point is that this change in attribution may drive some fundamental shifts in your media mix, which will make you much more efficient in driving online “conversions”.  That’s great if these conversions are meaningful, but if you’re selling toothpaste doesn’t the really meaningful “conversion” happen at an offline point of sale 95% of the time?  Shouldn’t you be spending 95% of your time figuring out better ways to drive those offline conversions?

The second article, an AdWeek editorial piece, presents another interesting angle.  Adnetik is taking the position that last click / last view is distorting publisher economics as well as advertiser economics – essentially undervaluing premium content (I agree).  They hope to address this issue and ever-present privacy concerns using a targeting approach that focuses on quality and context rather than behavioral micro-segmentation.

Adnetik's approach here is similar to Brand.net's (although we add demographic and geographic targeting), and in addition to reducing privacy concerns, it also enables dramatically increased scalability. So we applaud them for moving the dialog along.

It’s good to see Adnetik adding some more nets to the fleet!

A fishing fleet without nets

ComScore released another solid piece of work yesterday.

As readers of this page will remember, comScore has been outspoken on the failings of the ubiquitous click as a metric. There is some of that in this report, but much more as well. From my perspective, the most interesting thread in the report ties together a couple of their numbered points.

First, as comScore correctly points out, cookie-deletion creates real problems for cookie-based targeting and measurement approaches. comScore data shows that 30% of all US internet users delete their cookies monthly or more often. Furthermore, many computers see routine use by multiple users. These factors create “noise” in targeting that often results in much lower true composition against the target than is claimed or described. Consumers’ ever-growing concern about privacy will only make this worse. Probably much worse. More evidence (if any was needed) that measurement of campaign impacts against meaningful metrics is critical – especially when a targeting approach sounds like magic.
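
To illustrate the dilution effect, here is a minimal sketch. This is my own simple model (not comScore's math), and the composition numbers are purely illustrative; the only figure taken from the report is the 30% monthly churn:

```python
# Illustrative dilution model (my assumption, not comScore's math): impressions
# delivered against stale or churned cookies hit the target only at the baseline rate.
claimed_composition = 0.60    # composition the segment is sold at (illustrative)
baseline_composition = 0.25   # target incidence in the general audience (illustrative)
monthly_churn = 0.30          # comScore: ~30% of US users clear cookies monthly or more

effective = (1 - monthly_churn) * claimed_composition + monthly_churn * baseline_composition
print(f"Effective composition: {effective:.0%} vs. the claimed {claimed_composition:.0%}")
```

Even with these forgiving assumptions, a "60% on target" segment lands closer to 50% in practice – and that's before multiple users sharing a machine are factored in.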

Second, comScore highlights the tradeoff between targeting and scale. This tradeoff is intuitively obvious, but often overlooked. Equally often, credulous buyers willingly suspend disbelief in favor of a nice-sounding pitch.

Consider the example of one of our clients, with a large online footprint of some 25 million accounts. Of these 25M, this client has actionable cookies on <5M, with data of varying depths and value (and all of these cookies, of course, are subject to the churn challenge described above). So this client can (and does) employ the most sophisticated targeting and re-targeting approaches on all of these 5M customers. But what about the other 20M customers they can't talk to this way? What about the 100M adults who aren't customers yet? 30-second spots?
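
Putting rough numbers on that, using the round figures above:

```python
# Reach math for the client example above, using the round numbers from the text.
accounts = 25e6              # client's online customer accounts
actionable_cookies = 5e6     # "<5M" accounts with actionable cookies
non_customer_adults = 100e6  # adults who aren't customers yet

share_of_customers = actionable_cookies / accounts                          # 20%
share_of_prospects = actionable_cookies / (accounts + non_customer_adults)  # ~4%
print(f"Cookies cover {share_of_customers:.0%} of existing customers "
      f"and {share_of_prospects:.0%} of the full prospect pool")
```

Cookie-based targeting reaches perhaps a fifth of existing customers and only a few percent of the total audience this brand needs to talk to.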

For online advertising to grow to its full potential (and necessary size as "offline" media erodes), we must more fully develop a broader approach to complement our myriad fine-targeting approaches.

Sometimes it is best to fish with a hook, other times with a net. As an industry we need a good supply of both.

Look for more on this topic in subsequent posts, but I wanted to make sure to call out comScore's work while it was fresh. Worth a read.

One Platform to Rule Them All…

Just a quick post to thank Adam Cahill for his shout out on ClickZ yesterday.  It has been great to see the market get behind the futures model as a necessary complement to spot.

Adam also raises an interesting point about one vs. many DSPs.  Today it’s clearly necessary to use at least 2 to get full-funnel, futures (Brand.net) & spot (others) capability.  But I think we’ll see this pretty quickly follow a path towards increased efficiency, i.e. towards a single unified platform that enables agencies and clients to manage spend against any campaign, any objective, using a common interface.

It will no doubt be an interesting road to get there, but it’s just a matter of time.

Happy Birthday, SafeScreen!

For any of you who may have missed our press release yesterday, SafeScreen, the industry’s first preventative page-level brand safety solution, turned 2 years old earlier this month.  As we proudly celebrate this milestone, I wanted to take a moment to reflect on market developments since Brand.net introduced the digital media market to the notion of page-level, preventative quality filtering for brand safety (or “ad verification” as it has come to be known).

Last year, several of the ad verification technologies that followed SafeScreen to market in 2009 added preventative "blocking" capability to their original retrospective "reporting" offerings. We congratulate them on their progress, but while 2011 promises to be another action-packed year for digital media, we believe it will also bring some new challenges for third-party verification providers. These new challenges will stem from false positives and billing discrepancies, which add another layer of cost, in terms of both cash and cycle time, to third-party verification (above and beyond the well-documented problems with page-level visibility due to iframes).

False positives cause friction in the context of retrospective reporting, but that friction goes to an entirely new level when ads are preemptively blocked. Look for this friction to generate increasing heat as blocking implementations become more common. Ditto for discrepancies, an issue primarily associated with blocking, since the verification provider must actually hold up the ad call while deciding whether or not the page content is safe. This additional hop in the serving chain introduces latency, which is a source of material ad serving discrepancies.

So take the $.10 verification fee, add 5% of spend to account for discrepancies, 1% for extra manual overhead, and another 0.5% for false positives, and it's not too much of a stretch to see 15% of spend going just to verification.
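
For anyone following the math, here is roughly how that stacks up. I'm assuming the $.10 fee is charged per thousand impressions and picking an illustrative (low) media CPM; neither of those is specified above:

```python
# Rough stack-up of verification-related costs as a share of media spend.
# Assumptions (mine): the $0.10 fee is per thousand impressions, and the media
# runs at a fairly low CPM; both numbers are illustrative.
media_cpm = 1.20              # assumed media CPM, in dollars
verification_fee_cpm = 0.10   # the $.10 verification fee

fee_share = verification_fee_cpm / media_cpm   # ~8.3% of spend at a $1.20 CPM
discrepancies = 0.05
manual_overhead = 0.01
false_positives = 0.005

total = fee_share + discrepancies + manual_overhead + false_positives
print(f"Verification-related cost: {total:.1%} of spend")   # ~14.8% at these inputs
```

At these inputs the total lands right around the 15% figure; at lower CPMs it gets even worse.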

Stepping back for a moment, would we tolerate this in any other market?  For example, would we accept it if the GIA report for a diamond added 15% to the purchase price (whether we paid this fee to GIA or the jeweler did and passed it along)?  Would we accept a 15% SEC fee on each and every stock trade (whether or not our broker “paid it for us”)?  Apparently not, because current SEC fees on equity transactions are 1/800th of 1%.  At up to 15% of spend, verification fees are currently some 10,000 times higher than SEC fees.

It doesn’t have to be this way.

For example, SafeScreen is free, and because Brand.net controls both the filtering and the serving, the operational issues of false positives and latency aren't left to the advertiser and publisher to resolve. This may appear shamelessly partisan, but I re-introduce the alternative architecture here primarily to make a broader point: I have been quite surprised that preventative brand safety technology hasn't yet been incorporated on the server side by one or more of the major exchange platforms. In doing so they could not only help market principals avoid latency and billing disputes, but would also be in a position to minimize refURL-related visibility issues.

It will be interesting to watch things shake out in 2011 and in particular whether the need for quality and efficiency drives towards consolidation (happy investors) or aggressive disruption of the emerging verification market (unhappy investors).

What do you think?
