Kellogg and Brightroll put vendors to the test and find discrepancies
Soon it may be commonplace for advertisers to make digital buys based on viewable – rather than served – impressions, meaning they’ll only pay for ads that appeared on a real human’s screen. But as a currency, viewability is only as good as the vendors that measure it, especially in video, where viewability technology is still nascent and the market is inexperienced.
According to a new study from The Kellogg Company and Brightroll, advertisers have reason to worry.
Kellogg, which has made a strong push into the programmatic space and spends 20-30% of its ad budget on digital media, wanted to know how much it could trust video viewability measurement. So in Q4 2013, it teamed up with online video platform Brightroll and put four independent measurement providers – anonymized in the study’s results – to the test. It found major disparities in performance, with several of the providers correctly assessing viewability less than half of the time and clashing with one another over viewability rates in live campaigns.
“We were surprised at the disparity between measurement companies,” says Brightroll senior vice-president, marketing operations, Tim Avila. “Once we had time to interpret the results, we realized that it probably makes sense, given that there is no single standard for the technology that’s been broadly adopted yet.”
Who can walk the talk?
Brightroll selected four independent measurement providers – independent meaning they do not buy media for clients – that have accessible, easy-to-use reporting services. Brightroll then tested them in controlled environments without their knowledge. In the first part of the test, it applied each service to a series of fake campaigns, where Brightroll controlled the host domain and the scroll behaviour of the “users.” It tested for a variety of viewability durations (video start, 1 second, and each quarter completed) and varied conditions like the browser being used, whether the video was in an iFrame, and how much of the video frame was on-screen.
The chart below shows the results of the tests. In test 1, the video was viewable for its entire duration, while in test 2, it was non-viewable for the entire duration. In test 3, the video started in a viewable position and, after 2 seconds, the user scrolled it out of view. Test 4 was the same, except after 12 seconds the user scrolled the video back into view. In the last test, the user switched tabs a quarter of the way through the video, so it was no longer on-screen. This last test is important because detecting whether a video’s tab is active has, until recently, been a point of difficulty for measurement.
The percentages indicate how many of the trial conditions each provider got correct for a given scroll behaviour. So for instance, when vendor C was tested on the first scroll condition across all four browsers, with and without iFrames, and for varying durations, it gave the correct result 60-80% of the time. The other 20-40% of the time, it either gave an incorrect answer, or wasn’t able to measure the video.
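To make the arithmetic concrete, here is a minimal sketch (in TypeScript, with hypothetical names and data shapes – not Brightroll’s actual test harness) of how a percent-correct score for one scroll behaviour might be tallied across the browser, iFrame and duration combinations:

    // Hypothetical scoring sketch; names and data shapes are illustrative only.
    type Trial = {
      browser: string;                  // one of the browsers tested
      inIframe: boolean;                // whether the video was served in an iFrame
      duration: string;                 // e.g. "start", "1s", "25%", "50%", "75%", "100%"
      expectedViewable: boolean;        // ground truth, controlled by the test setup
      reportedViewable: boolean | null; // vendor’s answer; null = unable to measure
    };

    // Percent of trial conditions the vendor got right for one scroll behaviour.
    // An unmeasurable trial (null) counts as a miss, matching the note above that
    // a wrong answer and “wasn’t able to measure” are both failures.
    function percentCorrect(trials: Trial[]): number {
      const correct = trials.filter(t => t.reportedViewable === t.expectedViewable).length;
      return trials.length > 0 ? (100 * correct) / trials.length : 0;
    }

Under that kind of tally, a vendor that answered correctly in, say, 29 of 48 trial combinations for a given scroll behaviour would land in the 60% range.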
As you can see, vendor A performed well on most of the tests while the others struggled. Brightroll says the vendors had the biggest difficulty with certain combinations of browsers and iFrames, likely because they’re using browser data to determine whether the video is on-screen, and each browser uses different attributes to report this. Vendor D was especially limited, because it couldn’t measure any video that was served in an iFrame. All of the vendors struggled with determining whether the video was in an active tab on the user’s browser.
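For illustration, a bare-bones geometry check of the sort described above might look like the TypeScript sketch below. It is not any vendor’s actual method: it compares the player’s bounding box with the current frame’s viewport and uses the Page Visibility API for the active-tab question, which is enough to show where iFrames and background tabs cause trouble.

    // Hypothetical geometry-based viewability check – illustrative only.
    // Returns the fraction of the player currently inside the viewport and
    // whether the page’s tab is in the foreground.
    function measureViewability(player: HTMLElement): { onScreenFraction: number; tabActive: boolean } {
      // Position relative to the *current* frame’s viewport; inside a cross-domain
      // iFrame this says nothing about where the frame sits on the host page.
      const rect = player.getBoundingClientRect();
      const vw = window.innerWidth || document.documentElement.clientWidth;
      const vh = window.innerHeight || document.documentElement.clientHeight;

      // Overlap between the player rectangle and the viewport rectangle.
      const visibleWidth = Math.max(0, Math.min(rect.right, vw) - Math.max(rect.left, 0));
      const visibleHeight = Math.max(0, Math.min(rect.bottom, vh) - Math.max(rect.top, 0));
      const area = rect.width * rect.height;
      const onScreenFraction = area > 0 ? (visibleWidth * visibleHeight) / area : 0;

      // Page Visibility API: false when the user has switched to another tab.
      // Older browsers exposed this behind vendor prefixes, or not at all, which is
      // one reason active-tab detection was historically unreliable.
      const tabActive = document.visibilityState === "visible";

      return { onScreenFraction, tabActive };
    }

Because the coordinates are relative to whichever frame the script runs in, a measurement tag stuck inside a cross-domain iFrame cannot tell whether the frame itself has been scrolled off-screen – which helps explain why iFrames were a common stumbling block in the test.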
In the second part of the test, Brightroll and The Kellogg Company applied all four measurement services to the same live pre-roll video campaign, and compared their results. The graph below shows how each provider scored viewability on a particular site where Brightroll expected viewability to be low based on page placement.
Vendor D predictably gave a very low score, since it couldn’t measure anything in iFrames. Vendor C had the bizarre finding that viewability increased drastically as time went on. We wouldn’t expect that more viewers have the video on-screen later in its duration, because viewers tend to exit videos before they end – that’s why 100% completion rates are usually lower than 25% completion rates.
Vendors A and B both had reasonable results, but they were disparate enough to leave Brightroll wondering how viewable the site actually is. “What is your definition of viewability?” asks Avila. “And who do you want to believe?”
Digital buyers beware
The message of the study: as of three months ago, many of the viewability solutions on the market were not reliable enough to support financial decisions. With the move to viewable impressions, a lot is riding on the ability of independent providers to accurately and reliably assess whether impressions can be seen.
The Media Rating Council, a U.S. non-profit that assesses measurement methodology, recently announced that it believes viewability measurement has reached currency quality for display advertising, but it doesn’t expect video viewability to be ready until the end of June.
Marketing asked MRC associate director David Gunzerath whether, given Brightroll’s findings, video viewability might not be ready by June and the advisory might be left in place longer. He said that was unlikely. “We’re advising the marketplace to observe another 90 days, but we really don’t anticipate making another communication on this. So as of June 30th, [the advisory] will just go away.”
Avila at Brightroll says he has seen significant improvements in video viewability offerings even since the test was conducted at the end of last year, and that he too is confident that the market will be ready to trade on viewability by June. But that doesn’t mean that every provider’s solution is currency-quality. One of the most important takeaways from the study, Avila says, is that there are large differences in how well providers do the job.
Vendor A was the clear winner in the study – but that doesn’t help advertisers much, since it was anonymized. So how can advertisers determine which providers are most reliable? Gunzerath says they should look for MRC accreditation, which shows the MRC has audited the provider and determined its technology and methodology are sound. Although some industry voices have been skeptical of the MRC’s ability to assess rapidly advancing digital measurement technology, Avila said that in the Brightroll study, the better-performing vendors tended to be those with MRC accreditation.
One thing advertisers should not do, says Avila, is trade on viewability data from first-party sources, like DSPs and publishers. “There is an inherent conflict… The DSP would have an incentive to measure more aggressively than a third party,” he says. (Brightroll is itself a DSP that provides measurement.) “We believe that independent, and in particular MRC-accredited, measurement is generally speaking more accurate.”