What’s better? Narrow or broad targeting?
Some answers from a recent study in the International Journal of Research in Marketing by Iman Ahmadi, Nadia Abou Nabout, Bernd Skiera, Elham Maleki, and Johannes Fladenhofer. Full paper here.
Targeting decisions are hard!
Targeting is difficult! Why? Because many of the big ad platforms, such as Google or Facebook, offer advertisers a huge range of audience segments for targeting. For example, at the time of our study, Facebook offered more than 600 audience segments for targeting, including location-, demographic-, interest-, and behavior-based segments. Advertisers can even combine these segments to make the pool of users larger, or they can target users at the intersection of multiple segments, making the audience smaller. Ideally, one would test them all in randomized controlled trials (RCTs) and then decide which segments to keep. But given the large number of segments (and their combinations), that is not feasible, as most advertisers have time and budget constraints. What makes this decision even more difficult is that we would want a segment to not only be profitable, but to be more profitable than an untargeted campaign. Otherwise, it’s better for the advertiser to just not target at all.
To solve this problem, we have developed a model that enables advertisers to calculate the break-even performance of an audience segment, i.e., the performance needed to make a targeted campaign at least as profitable as an untargeted one. We suggest using our model to calculate the break-even performance for many segments and then ordering these segments by break-even performance from smallest to largest. The literature on targeting shows that an increase in click-through rate (CTR) larger than 100% is unlikely. Thus, we suggest that advertisers focus on testing segments that require a smaller lift. We thereby help advertisers narrow their options when selecting audience segments for RCTs.
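To get an intuition for this ranking idea, here is a toy sketch in Python. It is not the paper's model: it assumes, purely for illustration, that a targeted campaign must deliver the same expected number of clicks as an untargeted one, so the required CTR lift is simply the inverse of the segment's (accuracy-adjusted) share of the untargeted reach. The segment names and reach shares are made up.

```python
def break_even_ctr_lift(reach_share: float, accuracy: float = 1.0) -> float:
    """Toy break-even CTR multiplier vs. an untargeted baseline.

    reach_share: fraction of the untargeted audience the segment covers (0, 1].
    accuracy:    fraction of segment users who truly match the segment (0, 1].
    Illustrative simplification only, not the formula from the paper.
    """
    if not (0 < reach_share <= 1 and 0 < accuracy <= 1):
        raise ValueError("reach_share and accuracy must be in (0, 1]")
    effective_reach = reach_share * accuracy  # inaccuracy shrinks true reach
    return 1.0 / effective_reach

# Hypothetical segments with made-up reach shares, ranked by required lift
segments = {"broad_music_fans": 0.50, "mid_podcast_fans": 0.10, "narrow_jazz_fans": 0.02}
ranked = sorted(segments, key=lambda s: break_even_ctr_lift(segments[s]))
print(ranked)  # broad segments need the smallest lift, so test them first
```

Under this simplification, a segment covering half the audience only needs CTR to double, while a 2% segment needs a 50x lift, which is why ordering segments by required lift quickly rules out most narrow ones.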
Our model also allows us to derive some interesting insights:
Most audience segments offered on Spotify will require the click-through rate to double compared to an untargeted campaign. Unfortunately, doubling is too high for most ad campaigns (as documented by the literature on targeted advertising; e.g., Aziz & Telang, 2016; Farahat & Bailey, 2012; Rafieian & Yoganarasimhan, 2021).
More specifically, targeting narrow segments will most likely be unprofitable due to the massive increase in performance needed to compensate for the loss in reach (narrow segments mean fewer users to target).
Bad news: When data quality is poor (as segments are not always 100% accurate; e.g., Neumann, Tucker, and Whitfield, 2019), things worsen (think of a male segment containing 40% female users). Inaccurate segments essentially mean that we will overestimate the reach of a segment. Our model allows advertisers to account for such inaccuracies and predicts that they will hurt narrow segments more. Why? Because, for the same level of data inaccuracy, the required break-even performance increases more for narrow (vs. broad) segments.
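Continuing the toy simplification from above (again, not the paper's actual formula), the asymmetry is easy to see numerically: the same accuracy drop raises the required break-even lift far more, in absolute terms, for a narrow segment than for a broad one. The reach shares and the 60% accuracy figure (echoing the male-segment-with-40%-females example) are illustrative assumptions.

```python
def required_lift(reach_share: float, accuracy: float) -> float:
    # Toy rule: required CTR lift = 1 / (reach * accuracy); illustration only
    return 1.0 / (reach_share * accuracy)

for name, reach in [("broad (50% reach)", 0.50), ("narrow (2% reach)", 0.02)]:
    exact = required_lift(reach, 1.0)   # perfectly accurate segment data
    noisy = required_lift(reach, 0.6)   # e.g., a "male" segment that is 40% female
    print(f"{name}: {exact:.1f}x -> {noisy:.1f}x (increase of {noisy - exact:.1f}x)")
```

In this sketch the broad segment's required lift rises from 2.0x to about 3.3x, while the narrow segment's jumps from 50x to about 83x, so the same data-quality problem is much more damaging for narrow targeting.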
Luckily, we were able to empirically test whether decreases in data quality indeed favor broader targeting. When Apple introduced App Tracking Transparency (ATT), which decreased the data quality of audience segments containing iOS users, narrow (vs. broad) segments suffered more, leading to higher CPMs and lower CTRs for narrow segments. This finding also extends to regulations restricting third-party data access, because such regulations essentially decrease data quality. Thus, it's important for advertisers to carefully assess the performance of narrow segments when data inaccuracy is high.
Overall, our paper informs advertisers of the break-even performance necessary to make a targeted ad campaign as profitable as an untargeted one. We hope that advertisers will be able to make better targeting decisions using our model!
If you've gotten this far, you might like reading the paper. Thanks to WU Vienna, the paper is open access, so you can download it for free here.
Get in touch with Nadia ABOU NABOUT to learn more about the project!