Is BT Just a Sales Tool? (Redux)

This post is a continuation of my article in last Monday’s AdExchanger about some serious challenges with BT for Brand marketers.  Interested readers should start there and then continue reading below, as I make some of my points here in the context of the example presented in that original article.

As I mentioned, BT does not outperform other approaches in driving offline sales. Brand.net's studies with Nielsen have proven that our campaigns deliver impressive offline sales impact, and these results were achieved without BT; instead, Brand.net uses high-quality media with contextual, demographic, and geographic targeting managed to high composition, with controlled frequency and cost.

The average ROI of 141% on these Brand.net campaigns is roughly comparable to the average ROI generated by Nielsen's largest offline measurement partners over hundreds of studies using the purchase-based / look-alike targeting approach I described in my original article, an approach refined over nearly a decade. The Nielsen-powered BT those partners use is state of the art; BT doesn't get any better for branding. If it fails to deliver substantial ROI upside over other approaches in driving offline sales, then we as brand marketers really need to question the utility of BT in general.

In addition to this fundamental problem, BT poses a variety of other important problems that brand marketers should consider carefully.

First, there are no standard definitions within the industry for behavioral categories, so there's a huge degree of subjectivity in defining which users are a close-enough match to the core users to qualify as "look-alikes." This is a big deal because, as I outlined, 99.9% of the users in a typical BT campaign are based on look-alike modeling. In the context of the specific example I used, how similar does a user need to be to an actual CPB Baker to qualify for inclusion in the behavioral category? What's to keep the network doing the modeling from stretching that definition to create more inventory, particularly if there's no direct measurement on the campaign?
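To see how much that subjectivity matters, here's a toy sketch in Python. Everything in it is an illustrative assumption: users are random vectors of behavioral signals, the "CPB Baker" profile is made up, and the similarity function is a simple inverse-distance score. The point it demonstrates is only that the size of a look-alike audience swings wildly depending on where the vendor draws the threshold line.

```python
import random

random.seed(0)

# Hypothetical setup: each user is a vector of behavioral signals in [0, 1];
# a seed profile stands in for observed "CPB Baker" behavior.
def similarity(user, profile):
    # Simple inverse of mean absolute difference between user and profile.
    return 1.0 - sum(abs(u - p) for u, p in zip(user, profile)) / len(profile)

profile = [0.9, 0.8, 0.7]  # made-up "CPB Baker" behavioral profile
users = [[random.random() for _ in range(3)] for _ in range(100_000)]

# Audience size is extremely sensitive to where the cutoff is drawn --
# loosening the threshold manufactures inventory.
for threshold in (0.9, 0.8, 0.7, 0.6):
    audience = sum(1 for u in users if similarity(u, profile) >= threshold)
    print(f"threshold {threshold:.1f}: {audience:,} 'look-alikes'")
```

With no industry-standard definition, nothing prevents a vendor from quietly moving that threshold, and the advertiser has no way to detect it.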

Another related issue is lack of portability. Since there's no consistent definition for any behavioral target, if an advertiser does find something that works with a particular vendor, the advertiser is stuck with that vendor. They can't say, "CPB Bakers work great. Let's figure out the best way to buy them," because the CPB Bakers from one source could be completely different from the CPB Bakers from another source due to different look-alike definitions. Furthermore, if the vendor whose CPB Bakers "worked" changes look-alike definitions, loses access to data, or goes out of business, the advertiser must start from scratch. BT can't be used as a basis for a scalable, repeatable, progressively improved strategy driven by the advertiser/agency unless the advertiser is the one building the profiles from scratch – something that is far beyond what most advertisers today are willing to do.

Due to cookie churn and simple inventory volatility, impression delivery is extremely hard to predict for any reasonably focused BT target (and forget about reach or pricing).  This makes forward delivery guarantees almost impossible – another barrier for scalable use by large brands that typically plan a significant portion of their spend in advance.

BT can also be used by networks or publishers as a way to mask inventory quality issues.  Would an advertiser/agency want the media included in a BT buy if they actually knew what they were purchasing?  Would they be willing to pay the same rate?  I doubt it, but the glossy BT story effectively launders this sketchy inventory into a desirable commodity.

Finally, there are obviously high-profile privacy issues swirling around BT, and it’s anyone’s guess where those will settle out.  I would hate to have a platform or media strategy built around BT if (when?) our friends in Washington decide that “opt-in” will become the law of the land.

Marketers considering significant or sustained investments in BT would be well advised to think carefully about all of these issues and to ask tough questions of their partners before proceeding.

Thoughts on the latest OPA report

The OPA released a blockbuster report late last week, at least if one were to judge by how it lit up the blogosphere (as AdExchanger humorously put it, "Is the OPA the greatest link baiting organization in advertising, or what?"). I reviewed some of the coverage and the report itself over the weekend and I have to say, with all due respect to the OPA and its members, this report doesn't measure up to their previous efforts.

Here’s my take:

1) Most networks are focused on DR metrics and not the upper-funnel branding metrics that are the focus of the OPA study. So even if we stop right there, it's not shocking that the study shows weaker results for networks. This difference in focus is fundamental to Brand.net's business, by the way. Unlike other networks, the Brand.net platform offers a full suite of capabilities designed from the ground up to help brand marketers leverage the web to reach their audience efficiently and effectively drive these upper-funnel metrics.

2) The OPA report didn’t include or consider cost data.  If you believe the >10:1 spread between publishers’ direct and network deals cited in last year’s IAB research, this is a critical omission.  OPA pubs performing 50% better than networks doesn’t look so good in the context of a >10:1 price ratio.  Obviously the devil’s in the details here – the IAB research isn’t perfect either for reasons I have discussed previously on this page – but it’s clearly perilous to draw the sweeping conclusions OPA is going for without considering costs.
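To make the cost point concrete, here's a back-of-the-envelope sketch in Python. The 50% performance edge and the >10:1 price ratio come from the figures cited above; the normalization to a baseline of 1.0 is my own illustrative assumption. What matters for a buyer is lift per dollar, not lift in isolation.

```python
# Normalized baseline: network lift and price both set to 1.0.
network_lift, network_cpm = 1.0, 1.0
# OPA publishers: ~50% better lift, at roughly 10x the price (per the
# IAB figure cited above).
publisher_lift, publisher_cpm = 1.5, 10.0

# Cost-adjusted comparison: branding lift per dollar spent.
network_efficiency = network_lift / network_cpm
publisher_efficiency = publisher_lift / publisher_cpm

print(f"network lift per dollar:   {network_efficiency:.2f}")
print(f"publisher lift per dollar: {publisher_efficiency:.2f}")
print(f"network advantage:         {network_efficiency / publisher_efficiency:.1f}x")
```

On these assumptions the network delivers several times more lift per dollar despite the weaker raw numbers, which is exactly why drawing conclusions from performance data with no cost data is perilous.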

3) I don't wish to cast aspersions on the study or methodology overall, but a couple of the data points just seemed counterintuitive to me. For example, slide 19 of the OPA results deck states that ad networks deliver insignificant improvements in purchase intent for the financial services category. This particular point caught my eye, because I know that well over $1B has moved through ad networks from hundreds of financial services companies over the past 5 years, the vast majority of which has been measured on a CPA – as in actual purchases, not just purchase intent. It's extremely hard for me to believe this money would have continued to flow in such volume over such a long time period if it wasn't actually driving purchases. If you agree, then we're left with only two possible explanations: a) the data referenced to make this point is somehow not representative, or b) purchase intent as measured by DL was not correlated with actual purchases. Neither is particularly comforting.

4) In addition to the metrics OPA focuses on in this report, I would have liked to see an analysis of actual sales lift – i.e., the ultimate result that improvement in the attitudinal metrics discussed in the report is intended to drive over the long term.  This certainly isn’t easy for every client on every campaign, but it’s a powerful capability that proves real business results for many.  For the next study I would be interested in seeing similar data from OPA.

Some of these thoughts have already been expressed by others, including some who commented directly on WSJ’s coverage of the report, but I thought there was enough new here that it was worth joining the discussion.

Let me know what you think.
