Canadian online video advertisers whose campaigns hinge on metrics that don’t take into account non-human and non-viewable traffic could end up wasting almost a third of their budget on bad ads, according to a new study from TubeMogul and Integral Ad Science.
The study found that when video buyers optimize based on how often their videos register a completion (without taking into account whether a real human is watching), they will make bad optimization decisions 29% of the time. Optimizing based on click-throughs isn’t much better, resulting in bad decisions 17% of the time.
That means that if every dollar of the $266 million that Canadian advertisers spent on online video in 2014 were to be optimized based on completion rate alone, then $77 million would go to sites and networks that would’ve been screened out by a fraud- and viewability-aware algorithm.
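The $77 million figure follows directly from applying the 29% bad-decision rate to the total spend, as a quick back-of-envelope check shows:

```python
# Back-of-envelope check of the article's figure: 29% of the
# $266 million Canadian advertisers spent on online video in 2014.
total_spend_millions = 266
bad_decision_rate = 0.29

wasted_millions = total_spend_millions * bad_decision_rate
print(round(wasted_millions))  # roughly 77 (million dollars)
```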
Of course that’s a very hypothetical estimate — many advertisers are taking viewability and fraud into account, and many more are buying direct from publishers and therefore not optimizing at all.
“What we really wanted to do with this research was say, ‘Look, as a marketer, how should you address these issues of non-human traffic and viewability, and specifically what is it worth to you to be paying attention to this stuff?’” said Taylor Schreiner, vice-president of research at TubeMogul.
To compare the various optimization strategies, the researchers looked at 12 million Canadian impressions that were served through 88 pre-roll ad placements bought with TubeMogul’s platform and measured by Integral Ad Science’s (IAS) third-party reporting tools.
For each placement, IAS measured the average completion rate, suspicious traffic rate and viewability rate, and assigned a quality rank for that placement. Then it looked at how often an optimization algorithm looking for just standard completions or CTRs would choose a lower-ranked placement over a higher-ranked one.
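The comparison described above can be sketched in a few lines. This is an illustrative toy, not the actual TubeMogul/IAS methodology or data: the placement names, ranks, and completion rates below are invented, and the real study's quality ranking factored in suspicious-traffic and viewability rates that are omitted here.

```python
# Hypothetical sketch: each placement has a quality rank assigned by a
# verification vendor (1 = best) and a raw completion rate. For every pair
# of placements, check whether an optimizer sorting on completion rate
# alone would prefer the placement with the worse quality rank.
from itertools import combinations

placements = [
    # (name, quality_rank, completion_rate) -- illustrative numbers only
    ("site_a", 1, 0.62),
    ("site_b", 2, 0.71),  # completes often, but worse quality rank
    ("site_c", 3, 0.55),
    ("site_d", 4, 0.80),  # suspiciously high completion rate
]

bad = total = 0
for p, q in combinations(placements, 2):
    total += 1
    # The completion-only optimizer prefers whichever completes more often.
    preferred = p if p[2] >= q[2] else q
    # Lower rank number means higher measured quality.
    better_quality = p if p[1] < q[1] else q
    if preferred is not better_quality:
        bad += 1

print(f"bad decisions: {bad}/{total} = {bad / total:.0%}")
```

With these made-up numbers the completion-only optimizer picks the lower-quality placement in 4 of 6 pairings; the study's 29% figure came from running the same kind of comparison over 12 million real impressions.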
TubeMogul Canada president Grant Le Riche said the study’s findings illustrate the need to qualify campaigns with more than a single surface-level metric.
“As soon as [a marketer says], ‘this is the exact metric that I’m going to value, and literally place all my budget against,’ just know that there are a ton of people in the background trying to manipulate that metric,” he said.
TubeMogul and IAS also examined how often an algorithm would make bad decisions by looking at either fraud or viewability in isolation. They found optimizing just on IAB-standard viewability would still lead to bad decisions 7% of the time — more evidence that high viewability won’t solve the fraud problem on its own.
As for fraud, Schreiner said the researchers weren’t able to catch enough fraudulent impressions in the Canadian market to define a stable sample, which could be a good sign but probably isn’t. More likely, he said, the study underestimated the impact of fraud because it looked only at campaigns measured by IAS on the TubeMogul platform, meaning there was some selection bias toward clients that were already using a verification provider.
“It just means we couldn’t see it,” he said. “But if you aren’t paying attention… it’s going to have a big impact on your business.”
You can read the full U.S. version of the report here.