Ian Hewetson, vice-president of client services for Canadian demand-side provider and ad server eyeReturn, says it’s not enough to react to fraud on a case-by-case basis. Blacklisting individual sites and publishers who’ve been caught cheating on a campaign, or who’ve been outed in the media, leaves the door open for all the suspicious traffic you don’t know about. To significantly limit exposure, programmatic buyers need to actively ferret out sources of suspicious traffic, all the time.
One of the ways eyeReturn finds fraud – specifically botnets, the most prevalent source of bad traffic – is by running constant “honeytrap” campaigns. Several times a month, eyeReturn buys placements programmatically across a variety of sites, but instead of running normal display ads, it runs “bluff ads”: just the eyeReturn logo, and sometimes nothing but a white box.
They work just like bait cars, sitting out on the street in high-crime neighbourhoods, waiting for thieves to try to steal them. Bluff ads are set up so that even if a real human clicked on one, nothing would happen. But bot algorithms can still trigger a click signal. So if an ad is clicked, eyeReturn knows the user is a bot.
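To make the mechanics concrete, here is a minimal sketch of how a click trap like this could be wired up in the browser. It is an illustration only, not eyeReturn’s actual creative; the reporting endpoint and browser identifier are assumptions.

```typescript
// Minimal bluff-ad sketch (illustrative; not eyeReturn's code).
// The creative is a blank box with no link, so a human click does nothing
// visible -- but any click event that fires anyway is reported as a bot signal.
const trap = document.createElement("div");
trap.style.cssText = "width:300px;height:250px;background:#fff;";
trap.addEventListener("click", () => {
  // "/bot-signal" is an assumed collection endpoint for this sketch.
  navigator.sendBeacon(
    "/bot-signal",
    JSON.stringify({
      browserId: document.cookie, // stand-in for a real browser identifier
      ts: Date.now(),
    }),
  );
});
document.body.appendChild(trap);
```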
The next step is to follow the bot around to see how it behaves and what other sites it visits. By following known bot users, eyeReturn can figure out which domains are getting bad traffic. Bad traffic could be coming from a traffic generator the site owner is paying, or it could be directed to the site by someone else who wants to boost the site’s impression volume, like a supply-side network the publisher is part of.
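In data terms, this amounts to joining a set of known bot browser IDs against ad-server logs and flagging the domains those IDs frequent. A hedged sketch of that aggregation, with an invented threshold, might look like this:

```typescript
// Hypothetical sketch: given browser IDs already caught by bluff ads, count how
// much of each domain's logged traffic comes from those IDs. Domains whose
// traffic is dominated by known bots get flagged for review.
interface LogEntry { browserId: string; domain: string; }

function flagSuspiciousDomains(
  logs: LogEntry[],
  knownBots: Set<string>,
  botShareThreshold = 0.5, // assumed cutoff, not a published figure
): string[] {
  const total = new Map<string, number>();
  const botHits = new Map<string, number>();
  for (const { browserId, domain } of logs) {
    total.set(domain, (total.get(domain) ?? 0) + 1);
    if (knownBots.has(browserId)) {
      botHits.set(domain, (botHits.get(domain) ?? 0) + 1);
    }
  }
  return [...total.entries()]
    .filter(([domain, n]) => (botHits.get(domain) ?? 0) / n >= botShareThreshold)
    .map(([domain]) => domain);
}
```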
At the end of the day, eyeReturn has a list of browser IDs and domains that it knows are no good. Bots that get caught go in the “penalty box”: a list of users that eyeReturn’s ad server refuses to send ads to when they visit sites. Bad domains get blacklisted or, if they belong to a major publisher or supply network, the owners get a call.
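At serving time, the check is conceptually simple. Something like the following sketch, with placeholder entries, captures the idea; the real ad server’s logic is of course not public:

```typescript
// Hypothetical "penalty box" gate at ad-serving time.
const penaltyBox = new Set<string>(["bot-abc123"]);        // browser IDs caught by traps
const blacklist = new Set<string>(["shell-site.example"]); // domains known to be bad

function shouldServeAd(browserId: string, domain: string): boolean {
  // Refuse to serve to known bots, and refuse to serve on blacklisted domains.
  return !penaltyBox.has(browserId) && !blacklist.has(domain);
}
```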
In one campaign held this quarter, the team identified 7,680 suspicious browsers, which led them to 214 suspicious domains. Each suspicious browser generates many times more clicks than a human would, so between them, the bots generated millions of fake clicks in just the 12 hours the campaign was active. That’s several million fake clicks that eyeReturn prevented its clients from paying for, says Hewetson.
The eyeReturn team noticed that a lot of the bots were visiting well-known, legitimate domains, owned by major publishers. The inventory was being sold on big-name exchanges that use filters to detect fraudulent traffic. It turned out the domains were being “spoofed” by the inventory owner – the ads weren’t showing up on the big publisher’s site, but on empty shell sites that used its name on the exchanges. Whatever techniques the supply-side provider and exchange were using to filter out fraud had been fooled, Hewetson says.
Despite the size of the haul that trap campaigns can reel in, they aren’t the real fraud-detection workhorse, he says. eyeReturn uses bluff ads to learn how bots behave – what kinds of ads they respond to, what kind of content they look at, and what times of day they’re active. “We take a very scientific approach,” says Hewetson. “We also do ads that are only clickable in the first half-second the ad shows up on a page, for example, or ads that have just a tiny part of the ad available to click, like a pixel… The different reactions for each, we then put into different result piles, and start to sift through those.”
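The half-second variant Hewetson describes is easy to picture: the click handler is only live for the first 500 milliseconds after the ad renders, a window far too short for a human but wide open to a script that clicks on load. A rough sketch, assuming the same reporting endpoint as above:

```typescript
// Half-second bluff-ad variant (illustrative sketch).
const ad = document.createElement("div");
ad.style.cssText = "width:300px;height:250px;background:#fff;";

const onClick = () => {
  navigator.sendBeacon("/bot-signal", JSON.stringify({ variant: "half-second" }));
};
ad.addEventListener("click", onClick);

// After 500 ms the trap disarms: a human who eventually clicks triggers
// nothing, so only near-instant, scripted clicks are ever recorded.
setTimeout(() => ad.removeEventListener("click", onClick), 500);
document.body.appendChild(ad);
```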
The campaigns recreate the conditions of fraud to sharpen eyeReturn’s algorithms’ ability to distinguish bot-like behaviour from human-like behaviour. The real hammer drops when eyeReturn turns those bot-catching algorithms on the vast network of sites it has access to through its ad server. Bot users show up by the thousands; bad domains show up bright red with bot hits.
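What such scoring might look like in miniature – with features and weights invented purely for illustration, since the real model isn’t public – is a weighted tally of behaviours no human exhibits:

```typescript
// Toy behavioural score (features, weights, and cutoff invented for illustration).
interface BrowserStats {
  clicksPerHour: number;  // humans click ads rarely
  instantClicks: number;  // clicks within the first half-second of render
  pixelClicks: number;    // clicks on near-invisible, pixel-sized targets
}

function botScore(s: BrowserStats): number {
  // Each feature is capped at 1 so the overall score stays in [0, 1].
  return (
    Math.min(s.clicksPerHour / 50, 1) * 0.4 +
    Math.min(s.instantClicks / 5, 1) * 0.3 +
    Math.min(s.pixelClicks, 1) * 0.3
  );
}

const isLikelyBot = (s: BrowserStats) => botScore(s) > 0.6; // assumed cutoff
```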
At least for the time being, bots are fairly easy to spot if you have the right tools, Hewetson says. Although a fraudster could probably program an algorithm to fool eyeReturn’s bluff ads and network analysis, why bother? There’s no need to be sophisticated, because there’s so much easy prey out there.
So if fraudsters are this easy to catch, why are there so many of them? Although DSPs and ad verification providers have become good at spotting bots, it’s not easy to identify the hackers behind them. And even if the perpetrators are exposed, there’s not much anyone can do other than name and shame them. What they’re doing isn’t a crime, and they can’t be fined or sent to jail for it. In fact, many of the people responsible for fraud who do get shut down can simply swap their public identity and start up again. So why stop?
Without some law enforcement body taking an interest, it’s hard to imagine getting a handle on fraud. Estimates of how much fraud is out there, and whether it’s growing or shrinking, vary widely. Even if things are getting better at the moment, there’s always a danger that one or several new botnets, or an inventive fraud scheme, could explode the volume of fake traffic, especially since it only takes a single botnet-infected computer to create thousands of false impressions a day.
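The arithmetic behind that last point is stark. Assuming, purely for illustration, an infected machine that loads an ad-bearing page every ten seconds around the clock:

```typescript
// Back-of-envelope (assumed rate, for illustration only):
// one infected machine, one page load every 10 seconds, 24 hours a day.
const secondsPerDay = 24 * 3600;              // 86,400
const impressionsPerDay = secondsPerDay / 10; // 8,640 false impressions a day
console.log(impressionsPerDay);               // thousands, from a single computer
```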
“[Bots] punch well above their weight,” says Hewetson. “If you have 1% of the computers out there that are infected, if that doubles to 2% or 3%, it’s going to grow the overall rate of fraud by much more than that. It’s one of those things that has the potential to spike in a big way.”