SEARCH for a term like “tennis balls” using Google, Bing or Yahoo, and two types of link appear. The majority form a long list of “organic” results. Companies pay the search engines nothing for these. But those at the very top and on the right-hand side of the screen are paid links, a form of advertising that accounts for most of the revenue of search engines. These search ads appear to solve a puzzle that has preoccupied advertisers since John Wanamaker, the 19th-century founding father of marketing, reportedly declared: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” But new research shows that the simple measures often used to assess the impact of search ads may be exaggerating their effectiveness.

Establishing cause and effect in offline advertising is hard. Ads are difficult to target: space on billboards and in newspapers is seen by lots of shoppers. Some of these eyeballs are worth spending money on; others, either because they belong to existing customers or to people who never will be, are not. And even when big ad campaigns are followed by strong sales, the intuitive conclusion—that rising sales are the result of good ads—can be misleading. Advertising budgets often rise in good times so that spending and sales grow together, even if the advertisements are useless. The ads and the sales have a common cause—strong demand—but may have no causal link.

Internet advertising seems to offer a solution to both these problems. First, internet search ads are targeted: the links that search engines show are based on a combination of the search term a user has typed in and his browsing history. Second, because firms can track whether visitors to their websites come from search-engine links they have paid for, they can work out whether ads convert into sales.

Not so fast. Spurious correlations are also rife in the online world, as a 2011 paper* by Randall Lewis, Justin Rao and David Reiley, a trio of economists then working for Yahoo, shows. Individuals use the web in a lumpy way. On some days lots of sites are visited and many purchases made; on others usage is lighter. This makes comparisons across time unhelpful. On a high-activity day people will tend to perform a lot of searches (and see lots of ads) as well as make many purchases. The relationship between the ads and the purchases looks causal, but may not be.

To test this problem of “activity bias”, the authors recruited volunteers online and split them into two groups. The first group watched a video promoting Yahoo; the other watched a political broadcast. The first group used Yahoo around three times more after seeing the ad, giving the impression it was very influential. But the control group—those subjected to a bout of politics but no Yahoo promotion—also used Yahoo a lot more. Both groups happened to be in an active period of internet use. That burst of activity is why they were recruited in the first place, and why they used Yahoo more than in previous periods. Lumpy internet use created a false sense of advertising impact.

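The mechanism can be shown with a toy simulation (a hypothetical sketch, not the paper's data): a latent daily activity level drives both ad exposure and purchases, the ads have no effect whatsoever, yet the two series are strongly correlated.

```python
import random

random.seed(0)

# Hypothetical simulation of "activity bias": each day has a latent
# activity level that drives both ad views and purchases. The ads
# have zero causal effect on purchases, yet the two series correlate.
n_days = 1000
activity = [random.expovariate(1.0) for _ in range(n_days)]
ad_views = [3 * a + random.random() for a in activity]
purchases = [2 * a + random.random() for a in activity]  # no ad term at all

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"ad-purchase correlation: {corr(ad_views, purchases):.2f}")
```

The correlation comes out close to 1 even though removing every ad would change nothing, which is why comparisons across time mislead and a control group is needed.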
The problem of activity bias means that in order to assess the effect of search ads, a proper control group is needed. A 2013 study by Chris Nosko of the University of Chicago and Steven Tadelis of the University of California, Berkeley, shows how such a test can be designed. Together with Thomas Blake of eBay they examined how important it was for the auction site to buy ads that appear when the term “eBay” is used in a search (“eBay tennis socks”, for example). In March 2012 they switched off eBay’s brand advertisements on Yahoo and Bing, but kept paying for them on Google as a control.

The finding was striking. When the sponsored ad was turned off, search-engine users simply switched to the first “organic” link that mentioned eBay. Overall, the site retained 99.5% of its traffic. Users who type in a brand-specific search are already trying to navigate to eBay’s site. Even if they appear lower down, free search results work just as well as ones that are paid for.

Calling Mr Draper
Firms like eBay don’t just pay for adverts when their brand is mentioned, of course. They place ads in response to millions of other words that indicate the presence of a potential customer. So a second test also investigated ads associated with non-branded keywords (“tennis socks”, for instance). The researchers tracked spending on ads and the number of “attributed sales” (sales made within 24 hours of clicking on a paid Google link) over time. A simple correlation analysis showed a familiar result: ads and sales tend to rise and fall together. A 10% increase in spending seems to raise revenues by 9%. The ads appear to work.

To check these results the authors split America into 210 geographical segments. A third were picked at random, with all Google advertising switched off. From the rest, the researchers selected control regions where patterns of internet activity closely resembled those in the regions where the ads were turned off. This allowed them to isolate sales variations that were caused by ads, rather than by lumpy activity. The measured impact is far smaller: a 10% increase in ad spending raises revenues by just 0.5%. (Results for users who had never previously used eBay were stronger, however, suggesting that firms with lesser-known brands may gain more from ads.)
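The gap between the naive and experimental estimates can be sketched in a few lines of illustrative Python (the numbers are invented, not eBay's): latent regional demand drives both ad budgets and sales, so a naive log-log regression finds a large elasticity, while comparing matched regions with ads on and off recovers the small lift that was actually built in.

```python
import math
import random

random.seed(1)

# Hypothetical sketch of the study's logic (illustrative numbers only).
# Latent demand drives both ad spending and sales, so regressing one on
# the other overstates the ads' effect; a matched-region experiment
# recovers the small true lift we build in.
TRUE_LIFT = 0.005  # assumed: ads add 0.5% to sales where they run

demands = [random.uniform(50, 150) for _ in range(140)]
spend = [d * 0.1 for d in demands]               # budgets track demand
sales = [d * (1 + TRUE_LIFT) for d in demands]   # ads on everywhere

# Naive estimate: OLS slope of log(sales) on log(spend).
ls = [math.log(x) for x in spend]
lr = [math.log(y) for y in sales]
ms, mr = sum(ls) / len(ls), sum(lr) / len(lr)
naive = (sum((a - ms) * (b - mr) for a, b in zip(ls, lr))
         / sum((a - ms) ** 2 for a in ls))

# Experimental estimate: each region with ads switched off is compared
# with a matched control region sharing the same underlying demand.
test_off = demands[:70]                            # ads off: sales = demand
matched_on = [d * (1 + TRUE_LIFT) for d in test_off]
lift = sum(matched_on) / sum(test_off) - 1

print(f"naive elasticity: {naive:.2f}, experimental lift: {lift:.3%}")
```

The naive regression reports an elasticity near 1 (a 10% rise in spending "raises" sales by about 10%), while the matched comparison recovers the 0.5% lift that was actually assumed, echoing the 9%-versus-0.5% contrast in the study.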

Bosses should still take Wanamaker’s fear seriously: a rise in sales after an ad campaign does not automatically mean that the ads worked. But the research also shows how the online world is getting closer to solving the conundrum he posed. Far from being an industry where cause and effect remain murky, online advertising may yet become one area where the dismal science can predict how to get costs down and profits up.

Sources
“Here, there, and everywhere: correlated online behaviors can lead to overestimates of the effects of advertising”, by R.A. Lewis, J.M. Rao and D.H. Reiley, Proceedings of the 20th International Conference on World Wide Web, 2011
“Consumer Heterogeneity and Paid Search Effectiveness: A Large Scale Field Experiment”, by Thomas Blake, Chris Nosko and Steven Tadelis, NBER, 2013