The OPA released a blockbuster report late last week, at least if one were to judge by how it lit up the blogosphere (as AdExchanger humorously put it, “Is the OPA the greatest link baiting organization in advertising, or what?”). I reviewed some of the coverage, and the report itself, over the weekend, and I have to say, with all due respect to the OPA and its members, this report doesn’t measure up to their previous efforts.
Here’s my take:
1) Most networks are focused on DR metrics, not the upper-funnel branding metrics that are the focus of the OPA study. So even if we stop right there, it’s not shocking that the study shows weaker results for networks. This difference in focus is fundamental to Brand.net’s business, by the way. Unlike other networks, the Brand.net platform offers a full suite of capabilities designed from the ground up to help brand marketers leverage the web to reach their audience efficiently and drive these upper-funnel metrics effectively.
2) The OPA report didn’t include or consider cost data. If you believe the >10:1 spread between publishers’ direct and network deals cited in last year’s IAB research, this is a critical omission. OPA pubs performing 50% better than networks doesn’t look so good in the context of a >10:1 price ratio. Obviously the devil’s in the details here – the IAB research isn’t perfect either for reasons I have discussed previously on this page – but it’s clearly perilous to draw the sweeping conclusions OPA is going for without considering costs.
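To make the cost argument concrete, here is a quick back-of-the-envelope sketch. All of the numbers are hypothetical, chosen only to illustrate the >10:1 price spread from the IAB research alongside the roughly 50% performance gap the OPA study reports:

```python
# Hypothetical inputs, for illustration only.
direct_cpm = 10.0   # publisher direct-sold CPM ($), assuming the >10:1 spread
network_cpm = 1.0   # ad network CPM ($)

direct_lift = 1.5   # OPA pubs performing ~50% better on branding metrics...
network_lift = 1.0  # ...than networks, per the study

# Branding lift delivered per dollar spent
# (lift per 1,000 impressions divided by the cost of those impressions)
direct_lift_per_dollar = direct_lift / direct_cpm      # 0.15
network_lift_per_dollar = network_lift / network_cpm   # 1.00

print(direct_lift_per_dollar, network_lift_per_dollar)
```

On these (again, purely illustrative) assumptions, the networks deliver more lift per dollar despite the weaker raw performance, which is exactly why a study that omits cost data can’t support sweeping conclusions about value.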
3) I don’t wish to cast aspersions on the study or methodology overall, but a couple of the data points just seemed counterintuitive to me. For example, slide 19 of the OPA results deck states that ad networks deliver insignificant improvements in purchase intent for the financial services category. This particular point caught my eye, because I know that well over $1B has moved through ad networks from hundreds of financial services companies over the past 5 years, the vast majority of which has been measured on a CPA – as in actual purchases, not just purchase intent. It’s extremely hard for me to believe this money would have continued to flow in such volume over such a long time period if it weren’t actually driving purchases. If you agree, then we’re left with only two possible explanations: a) the data referenced to make this point is somehow not representative, or b) purchase intent as measured by DL was not correlated with actual purchases. Neither is particularly comforting.
4) In addition to the metrics OPA focuses on in this report, I would have liked to see an analysis of actual sales lift – i.e., the ultimate result that improvement in the attitudinal metrics discussed in the report is intended to drive over the long term. This certainly isn’t easy for every client on every campaign, but it’s a powerful capability that proves real business results for many. For the next study I would be interested in seeing similar data from OPA.
Some of these thoughts have already been expressed by others, including some who commented directly on WSJ’s coverage of the report, but I thought there was enough new here that it was worth joining the discussion.
Let me know what you think.