MBA Masterclass: Measuring Advertising Effectiveness
with Bradley Shapiro
Listen to Bradley Shapiro as he shares his expert insights on how to think about advertising, marketing strategy, and more.
- November 15, 2022
- MBA Masterclass
Bradley Shapiro: Hey, I'm glad you're all joining us this morning. Thrilled to be here. This Masterclass is on Measuring Advertising Effectiveness, which is my main research expertise. So, we'll talk a little bit about how I think about advertising. If it whets your appetite for more, there's a lot more content where that came from if you take my marketing strategy class here at Booth.
Alright, so let's get going. So a bit about me. Kara already introduced me with my credentials and whatnot. On the practical side, my family is in the wine importing and sales business, and I'm involved in that business. My research interests are in advertising, and I've largely had applications in the health and pharmaceutical industry. So if people ask me what I'm generally interested in, I often say I'm interested in drugs and alcohol.
All right, so, you know, first I wanna start off by trying to dispel some common wisdom about advertising. I think there's a lay belief about advertising that it sort of exists in a vacuum, and it's sort of a means of cleverly tricking customers into buying things they don't really want or need. So I'd like to argue that that's not a good way of thinking about your advertising. It's actually not a good way to think about any part of your marketing. And so I'll talk a little bit about how I think about what the point of marketing is and how advertising fits into that. So advertising is going to be a complement to your full marketing strategy, and it can't by itself be your whole marketing strategy, and just keep this in mind as we go. So just a bit about how I think about marketing strategy broadly, and it's all gonna start with customers. There are these people in the world that have some needs. We didn't give them their needs. People need to get to work. We didn't make them need to get to work. People are cold and need to be warm. Like we didn't make them have that need. They just had these needs. These needs lead to preferences for products, in particular, things about products that help people meet their needs. And when people have these preferences, they have willingness to pay over and above the next best alternative. And the reason they're willing to pay us is because we provided something for them that will meet their needs.
So this way of thinking about marketing makes sort of the incentives of the consumers and the firms aligned in some sense. We make money only if we help them meet their needs. Our job as marketers is twofold. First we wanna recognize the needs and estimate their preferences, and that's what we'll call segmentation. And then second, we want to design and deliver the product offering, which is what we'll call targeting. And advertising is gonna complement both of these parts of the marketing process. In terms of segmentation, what we're mostly gonna talk about today is how to measure how much people respond to advertising. And that's where advertising's gonna go into segmentation. And then in terms of designing and delivering the product offering, we will want to think about advertising as sort of a functional complement to the product. So imagine there's a product that would really, really help you meet your needs, but if you never knew it existed in the first place, you're never gonna buy it. So advertising can serve a function like that of helping customers realize that your product is available. So with that, we're gonna start with an example of one of the most famous ads ever run. And then we're gonna think about, you know, how can we measure whether and to what extent it worked. So this next ad, it won a whole bunch of film festival awards, a bunch of creative acclaim. You'll probably recognize this ad, but let's watch it and then we can have a little discussion afterwards.
Speaker 1: Hello ladies, look at your man. Now back to me. Now back at your man. Now back to me. Sadly he isn't me, but if you stop using lady-scented body wash and switch to Old Spice, he could smell like he's me. Look down, back up. Where are you? You're on a boat with the man your man could smell like. What's in your hand? Back at me. I have it. It's an oyster with two tickets to that thing you love. Look again, the tickets are now diamonds. Anything is possible when your man smells like Old Spice and not a lady. I'm on a horse.
Bradley: So really, really famous ad. This character, Mr. Mustafa, became super iconic. Does anybody recognize what the first thing they said in this ad was? Go ahead and put it in the chat if you remember the first line in this ad. Exactly Dustin, hello ladies. Why do you think he said hello ladies? Why is ladies the first word that he's using in this ad? Why do you suspect? So Stephanie says that's his audience. That's the target audience, targeting females. Well, this is a body wash for men. Ladies influence men, good example. Good, good answers everybody.
So I think there's two reasons why he says ladies to start this ad. One is that women buy most of the household products in heterosexual couples. And so they want to convince ladies that this is gonna smell good and their man should smell like this. But second, I think a big problem at the time this ad came out in 2010 is that a lot of men felt like body wash wasn't very manly. So addressing the ad directly to the ladies sort of hinted to men that women wouldn't find this type of body wash unmanly. So, you know, if you think about Old Spice's brand, it's changed a bit over the years. In fact this brand has been around for a long time. So when the 72-year-old P&G brand was planning their new advertising campaign for a shower gel, it faced this challenge. Its research suggested that women purchase as much as 70% of the shower gel for men in their households. But using body wash struck some men as unmanly. I think another big issue of why they decided to run this ad campaign was this brand is 72 years old and the word old is literally in the name of the brand. So they're trying to convince a new audience to consider using the brand of Old Spice. Some men told marketers they were reluctant to use nylon-webbing sponges called poufs, which raised a lather but were off-putting because of their dainty name and appearance.
In 2009, Old Spice introduced a pouf and a rubber grip and called it a Deck Scrubber. So at the same time we're introducing these new products, we're running this big ad campaign. And we're complementing these new products with things like this Deck Scrubber. Old Spice is running these ads across all different types of media, so it's not just on TV. Here's an ad that shows up in magazines. We've got Mr. Mustafa again, he's got a fire breathing dragon or lizard of some variety wrapped around his neck, which is breathing fire on these fireworks. That dragon seems to be ridden by a beauty queen. There's a tiger playing badminton. There's this guy on a kayak, kayaking down his chest, and apparently this is supposed to give you some idea of what this body wash and deodorant might smell like. Old Spice then had one of the first ever sponsored YouTube channels. So this was a really remarkable idea. They put their ads up online and people would seek them out to watch the ads. This is a remarkable thing, right? Because we usually think of advertising as something that we're forcing in front of people, interrupting their preferred content. But in these cases people are actually searching to come and watch the ad. They also started one of the first ever Facebook fan pages for a brand. So here we've got a different spokesman, it's not Mr. Mustafa anymore, it's this guy whose hair Old Spice apparently makes so dexterous that he can use it to squeeze out his phone number onto this hot dog in mustard. And this young lady apparently finds that somewhat appealing. But again, these Facebook campaigns, this Facebook fan page would serve ads to people who voluntarily signed up to have these ads served to them. So it's kind of a bizarre idea. They did display ads. Here we've got Mustafa in the shower with a suds-built motorcycle and a suds handlebar mustache and mullet. And again there's this focus on a lady finding that very attractive.
So you know, they say, you know, P&G says to themselves, look, we've got people searching for our ads and watching them themselves, and you know, signing up to be delivered ads on Facebook. We can just lay off most of our ad team because it's free to advertise on Facebook. So they say in the digital space with things like Facebook and Google and others, we find that the ROI of the advertising when properly designed, when the big idea is there, can be much more efficient. One example is our Old Spice campaign where we had 1.8 billion free impressions, and there are many other examples I can cite from all over the world. So let's pause there for just one minute and think about those 1.8 billion free impressions. These are the impressions that come from people actively searching on YouTube for the ad or signing up for the Facebook fan page. Do you think those free impressions are more valuable or less valuable than the impressions that you would pay for in like a TV ad? Go ahead and answer in the chat. Several people saying more, some saying less, it's about 50-50. I'm gonna argue less. Here's how I'm gonna argue less. The reason is that these people who are seeking out the ads for themselves, they're selected in the sense that they probably were already very likely to purchase. So a big idea with advertising is what you really wanna do is change someone's behavior from what they were doing before to something new. In the case of these free impressions, you might well not be changing any behavior because those people were already behaving the way you wanted.
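To see why I'd argue less, here's a minimal simulation of that selection story; every number in it is made up for illustration. Seeking out the ad has zero true effect on purchasing, but people who already intend to buy are far more likely to seek the ad out, so a naive comparison still shows a big gap.

```python
import random

random.seed(0)

# Made-up numbers: seeking out the ad has ZERO causal effect on buying,
# but high-intent people are much more likely to seek it out.
def simulate(n=100_000):
    stats = {"seeker": [0, 0], "non_seeker": [0, 0]}  # [buys, count]
    for _ in range(n):
        high_intent = random.random() < 0.2        # 20% are already likely buyers
        p_buy = 0.50 if high_intent else 0.02      # intent drives purchase...
        p_seek = 0.30 if high_intent else 0.01     # ...and drives ad-seeking
        group = "seeker" if random.random() < p_seek else "non_seeker"
        stats[group][0] += random.random() < p_buy  # the ad itself changes nothing
        stats[group][1] += 1
    return {g: buys / count for g, (buys, count) in stats.items()}

rates = simulate()
print(f"Purchase rate among ad-seekers:  {rates['seeker']:.3f}")
print(f"Purchase rate among non-seekers: {rates['non_seeker']:.3f}")
# The entire gap is selection; crediting it to the ad would be a mistake.
```

The ad-seekers buy at several times the rate of everyone else even though the ad did nothing, which is exactly the trap with those free impressions.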
Bradley: So we're gonna think carefully about this kind of dynamic as we go through the measurement process. So was this campaign successful? By the numbers, the first thing that the ad agency says is, well, there were nearly 105 million YouTube views of the campaign, 1.2 billion earned media impressions, including features on broadcast networks. 2,700% increase in Twitter followers, 800% increase in Facebook fan interaction, 300% increase in traffic to OldSpice.com. Old Spice has become the number one most viewed sponsored YouTube channel. Thank you Dustin. You asked exactly the right question. There's nothing here which says anything about whether or not sales increased. And really what we wanna know is did sales increase, and not only did sales increase, but did sales increase by enough to pay off the campaign and more? Another thing I wanna alert you to here, agencies love to say things like 2,700% increase in Twitter followers. You gotta be careful with percentages because it all depends on what the baseline was. So if they started with zero followers to begin with, a 2,700% increase isn't so impressive. As my old calculus teacher used to say, it's rude to divide by zero. So anyway, let's get into the actual numbers to see if we can say something about ROI. So they claim the campaign is having a significant impact on sales, both for body wash and the franchise, and that Old Spice has month over month strengthened its position and is now the number one brand of body wash and deodorant. There's still no numbers here, so let's try to put some numbers to this. Okay, so here's the Twitter statistics again, they're starting from nearly zero so that 2,700% isn't super exciting, but here's some actual numbers. So what we're gonna do here is we've got the sales for Old Spice Red Zone Body Wash in 2010, and we're gonna compare that to the sales in 2009.
So to sort of control for seasonality, we're gonna just compare the same set of months in 2010 to the same set of months in 2009. And the argument here is that in 2009 there wasn't the ad campaign and in 2010 there was. So if we look carefully and try to do this math here, it looks like in 2010 there's about 100,000 additional bottles, maybe let's call it 200,000 additional bottles of Old Spice sold versus 2009. In April-May, it looks like 1.1 million versus 700,000. So that's 400,000 additional bottles. And let's say from June-July to June-July, 1.5 million to 700,000, that's 800,000. I total this up to 1.4 million additional bottles of Old Spice. And if we multiply that by the average price of a bottle, which is let's say $5, you end up with about $7 million in incremental revenue. Now a lot of you are putting very good concerns into the chat.
So one is that, you know, things have changed between 2009 and 2010. For example, a global financial crisis, and maybe during a global financial crisis people are, you know, buying private labels. And in 2010, as we're recovering, people are going back to buying name brands. That doesn't have anything to do with your advertising. On the other hand, some of you are also pointing out that maybe there's spillovers onto other Old Spice products. You know, that's also true, in which case we might be underestimating the total impact here. And you could also argue that, okay, well maybe if they stopped the advertising, we've established a new baseline. On the other hand you could argue, well if they stopped the advertising, maybe it's gonna go back to the original baseline. So we don't have enough evidence here to say too much about how profitable this was. Turns out there's other things happening at the same time here as well. There's a big change over time where men are switching from bar soap to body wash. So if you think about the dynamics of who's buying and not buying, over time the older men who were just using bar soap eventually exit the market as their time in this world expires. And the people who are newly selecting into buying body wash, the young people, are disproportionately buying body wash over bar soap. So there's a lot of things we're missing out on here. And I think on net I'm actually worried that we're overestimating the effect of advertising on sales here. But let's just take it as given that we had a $7 million incremental sales benefit. The cost of the TV campaign alone was nine and a half million dollars. Now keep this in mind, this was one of the most successful ad campaigns of all time by agency standards. It won a ton of awards, and it's not even obviously profitable. It might be profitable, but it's not obvious that it was profitable.
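Putting the back-of-the-envelope numbers from the last few minutes in one place (the bottle counts are rough reads off the chart, and the $5 average price is the assumption above):

```python
# Rough incremental-bottle reads from the 2010-vs-2009 comparison, as above.
incremental_bottles = {
    "earlier months": 200_000,       # call it ~200k extra bottles
    "Apr-May": 1_100_000 - 700_000,  # 1.1M vs 0.7M
    "Jun-Jul": 1_500_000 - 700_000,  # 1.5M vs 0.7M
}
total_bottles = sum(incremental_bottles.values())  # 1.4 million
avg_price = 5.00                                   # assumed average price per bottle
incremental_revenue = total_bottles * avg_price    # $7 million

tv_campaign_cost = 9_500_000                       # TV campaign alone
print(f"Incremental revenue:     ${incremental_revenue:,.0f}")
print(f"Net of TV campaign cost: ${incremental_revenue - tv_campaign_cost:,.0f}")
```

Taking the $7 million at face value, the campaign is $2.5 million short of covering just its TV spend, which is the point: even a celebrated campaign isn't obviously profitable.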
And so this sort of illustrates this idea that it's very difficult to hit this narrow window of making your advertising profitable and to be able to measure it. So the rest of the lecture today, we're gonna try to go through and see how we can measure the effects of advertising so we can try to hit this narrow window of profitability. So it turns out, you know, there were other things happening in addition to advertising. And here, I mentioned this before, we see the entire category of body wash going up over this time period. In fact, Gillette goes up by a higher percentage than Old Spice. Nivea goes up a little bit less, but still goes up a lot. Axe of course goes down. So if you ask yourself what's similar between Old Spice and Gillette during this time, it's not Mr. Mustafa's ads. It turns out it's a whole bunch of deep discounts. So the thing they have in common are multiple national drops of high-value coupons. They included a buy-one-get-one-free offer from Old Spice, up to $4 off a single bottle of Nivea for Men. It reflected unprecedented levels of promotional intensity in the category. How much of Old Spice's recent gains came from Mr. Mustafa's ads, and how much came from the coupons? It's impossible to know, said P&G spokesman Mike Norton. Impossible to know. We're spending millions and millions of dollars, and we're saying it's impossible to know.
So I hope by the end of this Masterclass, you'll believe me that it's not impossible to know. We just need to be deliberate in how we set up the campaign and how we do our measurement. So the classic dilemma in advertising comes from John Wanamaker. He says, "Half the money I spend on advertising is wasted. The trouble is I just don't know which half." So we're gonna try to answer some of this by the end of the lecture and come back to it and see what we think about it. So some people in the chat already mentioned something about this. Here's a quote from a more modern marketer: "With the advent of trackable internet advertising, it appeared that Wanamaker's dilemma might finally be solved. By being able to monitor what ads readers click on, it's possible to determine which advertising is most effective and which is wasted." So let's pause and meditate over this quote for a minute and try to decide if it's true. So you know, another way of saying this is that in the age of big data, we don't have to worry about this, we can just directly measure how much advertising moves sales. So can you think of any way that this might be wrong?
Bradley: Okay, so, Bechard says, right, clicks need to translate to actual sales. So there's a lot of clicks in which nobody buys. So in that case, just looking at clicks, we would overestimate how much advertising drives sales. Alternatively, on the other side, it could be that somebody sees an ad, doesn't click on it and ultimately buys, which would cause us to underestimate the impact of advertising on sales. So right away this quote is just not true on its face, and we don't even know which direction we might get it wrong. We could get it over or we could get it under. So this is where I like to say that big data doesn't solve our inference problems. Big data just gives you a whole lot more precision. So if you have big data and you don't have a strategy for getting causal inference, you're gonna be precisely wrong. All right, so we wanna know whether our advertising works. I'm gonna go over three main complications to measuring advertising effectiveness. Lots of you have gotten these intuitively in the chat, but I'm gonna try to be a little bit more specific and precise about it. So these are all three different versions of "correlation doesn't imply causality." One is reverse causality, and this comes from potentially even good targeting, that perhaps you're sending ads to the markets where your product is strongest. And so if you just compare places where there's a lot of advertising versus a little advertising, it's really the strong sales that caused the advertising and not the advertising that caused the sales. Second is self-selection. We talked about this with that earned media, so the people who are choosing themselves to watch or click the ad might be very different from the people who choose not to. And then finally we'll talk about other things happening at the same time. The fancy economics word for this is simultaneity, but really you can think about things like those coupons happening at the same time as the ad campaign.
So the first measurement issue, I'll call reverse causality. And I'm just gonna give you a very simple stylized example of how we can mistake correlation for causality through this reverse causality channel. So the Dorfman-Steiner Theorem is a famous theorem in the economics of advertising which says if you have a monopolist and the only things that the monopolist is allowed to change are advertising and prices, then the optimal advertising-to-sales ratio is equal to the advertising elasticity of demand divided by the price elasticity of demand. All right, so if we think that, you know, this alpha, this advertising elasticity, is constant and this price elasticity is constant across places, then the place that we're gonna advertise the most is mechanically going to be the place with the most sales, right?
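Written out (with $A$ for advertising spend, $S$ for sales revenue, $\varepsilon_A$ for the advertising elasticity of demand, and $\varepsilon_p$ for the price elasticity of demand), the Dorfman-Steiner condition is:

```latex
\frac{A}{S} = \frac{\varepsilon_A}{|\varepsilon_p|}
\qquad\Longrightarrow\qquad
A = S \cdot \frac{\varepsilon_A}{|\varepsilon_p|}
```

Holding the two elasticities fixed, optimal ad spend $A$ scales one-for-one with sales $S$: the highest-sales market mechanically gets the most advertising, even though the advertising didn't cause those sales.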
So if we increase the denominator on the left hand side, it's gonna make this whole thing smaller, and we're gonna need to advertise more. And if we compare places with the most advertising versus the least advertising, we're gonna get this false impression that the advertising caused the high sales. But in fact it was the reverse, the sales caused the advertising. The real question you're gonna want to answer is how much would have been purchased if you didn't advertise or if you followed some alternative advertising strategy? So if you think about this, this is exactly the problem with advertising at Christmas. Around Christmas time, advertising for often-gifted products increases. So you see a lot of ads for electric razors, for cars and for diamonds, at least in the U.S., and you see sales of these things increase, and you have to ask yourself, would the sales of these products have increased without the advertising? Absolutely they would have. Christmas caused most of these sales, not the advertising. Now advertising possibly caused some of them, but we have to be really careful with this reverse causality channel. Then you might say, well, that's kind of obvious. Does anybody actually do this? Well if you Google, you know, how do I set my advertising budget? Some of the first things that come up are things like this percentage-of-sales method. And so in this case, this company says, well what we really do is we just set our advertising-to-sales ratio at some constant, and if we always have 5% of our sales in advertising, we're again gonna have this reverse causality problem, just like we talked about before. Okay, so moving to the second measurement issue, which is selection. So I think this illustration from this now-defunct Dutch newspaper, The Correspondent, was really nice. So let's picture this. Luigi's Pizzeria hires three teenagers and hands out coupons to passersby.
After a few weeks of flyering, one of the three turns out to be a marketing genius. Customers keep showing up with coupons distributed by this particular kid. The other two can't make any sense of it. How does he do it? When they ask him, he explains, well, I stand in the waiting area of the pizzeria. Okay, so when you see that, it's quite obvious, it's plain to see that this guy's no marketing wiz. Pizzerias do not attract more customers by giving coupons to people who are already planning to order pizza five minutes from now. Right, so this is very much like the idea of that earned media and very much like the idea of people deciding to click on display ads. People who were already on their way into the store were probably going to buy anyway, and they're not comparable to the people who weren't already walking into the store. So here, if we think about display ads, the people who choose to click are more likely to buy. And that's exactly because they had more preexisting purchase intent, not because of the ad. And so I think this is exactly how you should be thinking about those free impressions that Old Spice earned. The final measurement issue I wanna talk about is simultaneity, which is just other things happening at the same time. This is where you wanna think about the advertising and the coupons happening at the same time, as well as the Deck Scrubber being introduced. And you wanna know how much you can attribute your sales to the ads or whether you just wanna attribute the sales to those coupons or to the Deck Scrubber.
So here's an example. Big cars are having somewhat of a renaissance right now thanks to better fuel efficiency coupled with falling gas prices. For General Motors' GMC brand, that renewed interest meant a spike in sales. Now the brand is planning to double its marketing investment to boost GMC through a new ad campaign, CMO Today reports. So in this case we've got all sorts of things happening at the same time. We have better fuel efficiency, we have falling gas prices, of course this isn't today, but when this happened. So all these things are happening at the same time, and the ads might be causing some sales and some of those sales might be long term, but it also is the case that these other things are causing sales as well. Yes, and so Carlos just asked in the chat, doesn't quality matter? Of course. So what we're trying to do now is to say okay, holding fixed the ad campaign that we've decided to run, can we measure how well it works? So the bottom line is this measurement problem is difficult, and we wanna figure out circumstances under which correlation is actually going to equal causation.
Bradley: So we want all else to be held equal. So we wanna be comparing equal quality of products, whether there's advertising or not. And we also wanna know what would have happened under an alternative strategy. So how are we gonna do that? Under what circumstances can we guarantee that all else is equal and we can evaluate what would've happened under an alternative strategy? Very good, AB testing through consumer segmentation. AB testing, exactly. We wanna run a randomized controlled trial. So you need something that looks like an experiment, or an actual experiment, to measure the effect. What randomization gives us is that the group of people that we randomly choose to see the ads are going to, by construction, be all else equal to the people that we've chosen randomly not to see ads. So we're gonna call this good variation, this something that looks like an experiment. So there's lots of potential ways that you can think about good variation. An AB test is the simplest way, but there's other ways you can think about it. So one is you could think about just shutting off all your advertising at some random moment in time. So if you wanna see how much a light switch works or doesn't work, what do you do? You choose a random time and you flip the switch, and you see if the light turns on and off. But importantly, when you do it that way, you need to make sure other things aren't happening at the same time, and you can only measure that effectiveness at a single point in time. The second case is you could do these AB tests, so you randomly allocate some advertising dollars with a control group to measure effectiveness. And finally we'll talk a little bit about taking advantage of obscure rules or limitations in targeting. We're gonna call these natural experiments. I'll give you one such example that I came up with in my dissertation. So here's sort of a traditional ad agency way to quantify the effects of promotions in sort of a non-modern measurement way.
They say, well we analyze the correlation between TV advertising and sales outcomes or digital advertising and sales outcomes. So you start a TV campaign and observe that sales go up, conclude that the campaign was successful, observe that a large number of people who click on our search ads purchase the product. So again, these are things that you should be wary of when your agency says this, because there's no real plan here to hold all else equal. So let me show you an example of a problem that this can cause. Here's an example of a display ad that was shown on Yahoo's homepage. How to read this chart is, day zero is the day that the ad is shown. And on the vertical axis, we see the probability of ending up on the Yahoo network. So the Yahoo network is what the ad is for. The ad shows up on Yahoo's homepage on day zero, and we see on that day when the ad shows up, the typical user's probability of making it to the Yahoo network is almost 90%. And before, it was only between 30 and 40%. So it looks like this is really effective advertising if we take this sort of traditional approach of analyzing this correlation. Now these things can be misleading, and I think the first red flag when you look at this picture should be that before day zero, on day minus one, you already see this upward trend.
That's always a red flag, because it's impossible for some treatment, like an ad, to have an effect on the past. That's just not how time works, right? So what you'd really like to do is have a control group of people who were on the same webpage where the ad was shown to some people, but who did not see the ad. So in fact, Yahoo did run this as an experiment and we have a control group, and here the control group is in red. So what we see is that the control group has essentially the same probability of making it to the Yahoo network as the treatment group. And you know, maybe there's an advertising effect that you can see there between the blue dot and the red cross, but it's much smaller than you would think just from the spike. So what's going on here? Well it turns out that if you're on Yahoo's homepage today, it's more likely that you were on Yahoo's homepage yesterday than the day before. So really what's happening here is there's this simultaneous correlation of who's on the Yahoo homepage, who's eligible to see an ad, and who sees an ad. So in this case, the correlation is really, really misleading and you really need that control group to see that the ad wasn't very effective. So what do you do here? Again, we need some sort of random assignment. We need some sort of clever random events. So one thing that TV stations often do is they move the timing of ads. You can use something like that, or you can create your own experiments. So for the rest of the Masterclass today, I'm gonna go through a few different examples from academic research and from sort of industrial research of how they've gone about doing a good job of measuring advertising effectiveness and how you might be able to apply the same thing to your applications. Back to that quote from earlier: big data's not enough.
If you have big data and you don't have some sort of random variation and random assignment in advertising, you're gonna have really, really precise correlations, which don't give you an accurate view of your true advertising effect. All right, so the first paper I'm gonna talk to you about is a paper published in "Econometrica" by Blake, Nosko, and Tadelis in 2015. So Chris Nosko was my former colleague here at Booth. He left to go work at Amazon, and now he's close to the top of the company at Uber. Really, really remarkable scholar. It was sad that we lost him, but certainly the tech industry gained a gem. Steve Tadelis is at UC Berkeley. Tom Blake I believe is now at Amazon. So Blake, Nosko, and Tadelis are at eBay, and they ask themselves, look, we're spending millions and millions of dollars on paid search ads. We wanna know whether or not they actually work as we intend them to work. Because some of it seems intuitively like it maybe shouldn't work and we wanna be sure. So their goal was to evaluate the effectiveness of paid search and eventually extend this methodology to other channels.
So they go to their higher-ups at eBay, and eBay says, well, we have an ad agency who can compute advertising effectiveness for us. Why don't you give them a shot and see what they come up with. If you're unsatisfied with what they say, we'll let you do some sort of a randomization, and we'll see what happens. So for those of you unfamiliar with paid search, here's what paid search looks like. If somebody searches for the word eBay on Google, oftentimes the first thing that comes up is a paid search ad. So in this case you see this yellow box that has this link, and it says it's an ad related to eBay. And so when you search for eBay, the first thing that comes up is this paid search ad for eBay. And the second thing that comes up is the natural search link. And so you could look at that and say to yourself, well obviously if we take away this ad, wouldn't the first thing that people see just be this natural link? And maybe it seems obvious that that's true, but eBay was spending tens of millions, even hundreds of millions of dollars on these brand keyword paid searches. They were also spending a lot of money on non-brand keyword paid searches. So this would be if somebody searched for the word convertible. All of the ads are these things in the red dots, and sometimes eBay would have an ad within this set of things in the red dots. Usually that would turn up on the sidebar somewhere over here, maybe it would be a little bit further down in the paid searches. And then if you would scroll down the page into the natural searches, you'd eventually find a link from eBay Motors for example. So brand keyword is for people searching the word eBay.
Bradley: Non-brand keyword is for people searching for products essentially. So what Google says, here's how to estimate the ROI of your campaign: take the revenue that resulted from your ads, subtract out your advertising costs, then divide by your total advertising costs. They make it sound so simple. Of course the very difficult part of this is the "that resulted from your ads" part. That's the causal statement that we want to evaluate, and that's what we're going to do in this exercise. So the ad agency comes in and they say, well we're gonna solve this problem by just adjusting for everything. We can control for anything that you can imagine that might be confounding. So we've got all of eBay's marketing spend, and we can see all the sales. We can see all the other types of marketing that they're doing. We've also got, you know, weather data from AccuWeather. We've got, you know, gold and silver prices, home prices, gas prices. We can just throw in the kitchen sink, and you should feel comfortable because we've got so much data, so much big data, that we can just control for stuff, and the correlation that we find is sure to be causal. And here's what they find. They say that paid search drives 12.4% of total eBay sales. And in particular they say that the paid search on the people who search for the word eBay is responsible for 9% of eBay's total sales. And the paid search on the non-brand keywords, so these searches for products, is responsible for 3.2%. So Chris, Tom and Steve saw this and they said this seems a little bit implausible. So let's think about the ROI, the return on ad spend, that is implied by these numbers. And in particular these numbers imply that eBay was getting a 1,222% return on investment for keyword paid search, for people who were searching eBay. That ad for eBay was bringing $12.22 for every dollar they spent. That seems pretty nuts. And so they just didn't really believe it.
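Google's formula is trivial to write down; all the difficulty hides in the attribution. A minimal sketch, with made-up numbers (the $5.00 figure is purely illustrative, not eBay's):

```python
def roi(revenue_from_ads, ad_cost):
    """Google's suggested formula: (revenue that resulted from your ads
    minus ad cost) divided by ad cost. The hard, causal part is deciding
    what `revenue_from_ads` actually is."""
    return (revenue_from_ads - ad_cost) / ad_cost

# Hypothetical numbers: $5.00 of truly attributable revenue per $1.00 spent.
print(f"{roi(5.00, 1.00):.0%}")  # -> 400%
```

The formula itself never fails; what fails is plugging in a correlational estimate of attributable revenue, as the agency did.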
They said, well on the one hand if that's really true, then we need to be increasing our ad spend by a lot, but we don't really believe it's true. So we're gonna run an experiment. So they wanna test these results. So they're gonna construct groups of individuals, cities, et cetera that look similar to each other. They're gonna randomly choose one, label it the treatment group, and monkey around with it. In this case, the first thing they're gonna do is what I call the light switch method. They're gonna turn everything off and see what happens. And then, based on the motivation from what they see, they're gonna run a full-scale randomized experiment. So here it is, clear as day, this is what happens when you shut off the brand keyword paid search.
So they turn off the paid search ad that comes up when you search for the word eBay. So we see right away that the number of people who are clicking on the paid search ad, you see this in the blue line, goes to zero after you turn it off. That isn't surprising, if it's off, you can't turn it more off. But then you also see, you know, there's this cautionary tale in what you see on the natural link clicks. You see that jump up. So if you're just looking at the clicks on the ad and see that decrease to zero, you're gonna think that turning off these ads had a massive effect, but it turns out there's substitution to this organic link. Now you could look at this really closely and squint and say, well the height of this red bar seems maybe a little bit smaller than the height of this blue bar. So maybe there was still an ad effect there. And so what they did, since they don't really have a control group, is look over at a different search engine. So these data all came from MSN's Bing, and they're gonna look at Google, and what you see when you look at Google is there's just some seasonality in searching for eBay that coincided with them dropping the MSN paid search. So this isn't even running any regressions, there's no fancy statistics here. This is just plotting the outcomes, and you can see, randomly shutting it off, it doesn't look like these brand keywords obviously increased sales. So we're gonna put it into a statistical framework and estimate the statistical effect here. If you just look at it without using those Google natural searches as a control, it looks like there's about a 5% effect of these ads on arrivals at eBay. But if you control for those natural changes in searches, it's down to a 0.5% effect. So just having a little bit of good variation here, you take what looks like a massive effect down to a quite a bit smaller effect.
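The adjustment just described, a raw before/after drop corrected by a control series, can be sketched with hypothetical numbers (none of these are the paper's actual figures):

```python
# Hedged sketch of the "light switch" comparison. All numbers are
# hypothetical, not from Blake, Nosko, and Tadelis.

# eBay arrivals via MSN/Bing (where paid search was shut off) and via
# Google (left on, used as the control), before vs. after the shutoff:
msn_before, msn_after = 100.0, 95.0        # raw drop looks like -5%
google_before, google_after = 100.0, 95.5  # control dipped too (seasonality)

naive_effect = msn_after / msn_before - 1         # what you'd naively credit to ads
control_trend = google_after / google_before - 1  # the seasonal dip
adjusted_effect = naive_effect - control_trend    # what's left for the ads

print(f"naive: {naive_effect:+.1%}, adjusted: {adjusted_effect:+.1%}")
# -> naive: -5.0%, adjusted: -0.5%
```

The point is only the logic: a control series that shares the seasonality absorbs most of what the naive comparison would have attributed to the ads.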
And then once you have a control group, it's taken it down to basically a zero effect. All right, so maybe that just comports with people's priors, but this seemed like pretty good evidence that eBay's brand keyword paid search wasn't generating much value. So now they're gonna run a randomized experiment for the non-brand keyword paid search. So somebody searches for used Gibson Les Paul guitar, they're gonna punch in Gibson Les Paul, and there's gonna be an eBay ad over here on the sidebar. So their experimental design here is they're gonna do a geo experiment, which means they're gonna chop the U.S. into regions, and they're gonna say, okay, some of these regions, we're gonna call them control, and they're gonna see the paid search as usual. In the treatment groups, they're gonna turn off the paid search.
Bradley: So we can look at treatment minus control in the before versus the after period of the experiment, and the test is gonna run for a month. So we'll see if, you know, little by little over time sales seem to drop off, or if nothing seems to change. And again, I'll show you the results without using any fancy statistics, just by plotting outcomes. So there's three different measures here. This is the treatment divided by the control on top, and the gray area is during the test, the white area is before the test, and essentially you see nothing happening. If we really thought that the paid search was doing a lot, you'd see the ratio of the on versus off going up in the treatment period. And then we've got on minus off, just the difference. You don't see really anything changing. And then if you wanted to look at it in percentage point terms, you can do the log of on minus the log of off. And again, we see basically nothing happening. So it looks like even these non-brand keyword paid search ads aren't really doing much. So that's a behavioral effect. It doesn't say what the ROI is. So we're gonna try to compute ROI. So here we put this into a statistical framework. The coefficient we care about is this interaction term, and it looks like there's about a 0.4% effect of advertising on sales. So if you were to cut all advertising, you'd lose 0.4% of sales, and the ROI of that is about negative 80%. It goes without saying that if your ROI is negative 80%, you'd be better off not running the ads at all. So here's their fancy regression table with lots of scary things. Don't worry about any of the Greek letters. Really I want to draw your attention to the first two columns, the estimated ROI. This is using the methods that the ad agency was using, just controlling for a whole bunch of stuff. They found between a 500% and a 2,500% return on investment.
Columns three through five are using these experimental methods that the authors are using, and they find between negative 63% and negative 89% ROI. So there's a massive difference between this well-designed experiment and looking at these correlations just using a whole lot of data. But when they dug a little bit deeper, it turned out that these ads didn't seem like a total waste. Here's a plot of the effect of advertising for different types of users. So on the horizontal axis here we have how many previous purchases that user made in the last 12 months on eBay. And on the vertical axis what we see is the advertising effectiveness in percentage terms. So for people who hadn't been to eBay in the last year, we see there's actually a pretty significant advertising effect, almost an 11% advertising effect. For people who had been to eBay once or twice, it looked like there was a modest effect. And then for everybody else it was basically nothing. So if you actually come and take my class, I'll go through theories of advertising as well. And what this result sort of tells you is that this ad is all about information. So for people who already know about eBay, these paid search ads don't really do much of anything. For people who don't have top of mind that eBay is where I might go to find a Gibson Les Paul guitar, these paid search ads might actually work. And so from this set of results we can say, well if we wanted to alter our targeting, we wanna avoid wasting eyeballs on these people who have been to eBay a lot and instead try to direct our ads specifically to those people who don't go to eBay very much. And it turns out that there's a lot of people who don't go to eBay very much. So this is the distribution of total buyers at eBay. The typical buyer has only ever been to eBay once; that's almost 25% of their buyers.
So this could be a reasonable advertising strategy: rather than use a bludgeon and do paid search on everybody, if you can figure out the search terms that new users are disproportionately using, that'll be much more useful. So from a simple experiment, we got a whole lot of richness. We got, well, one, if we can only advertise to everybody or nobody, we're better off advertising to nobody. But if we can target our ads, we're probably gonna be a lot better off targeting those ads at people who haven't been to eBay much recently.
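The geo-experiment arithmetic from a few slides back, a difference-in-differences on treated versus control regions, then an ROI from the estimated lift, can be sketched like this. Every number is hypothetical, chosen only to echo the 0.4% effect and the roughly negative 80% ROI from the talk:

```python
# Toy difference-in-differences for a geo experiment. `treated` regions had
# paid search turned OFF during the test. All numbers are hypothetical.
log_sales = {
    # (group, period): mean log sales
    ("control", "pre"): 10.000, ("control", "post"): 10.020,
    ("treated", "pre"): 10.000, ("treated", "post"): 10.016,
}

# DiD = (treated post - pre) - (control post - pre); in logs this is
# approximately the percentage effect of shutting the ads off.
did = (log_sales[("treated", "post")] - log_sales[("treated", "pre")]) \
    - (log_sales[("control", "post")] - log_sales[("control", "pre")])
print(f"shutting ads off moved sales by about {did:+.2%}")

# ROI of the ads themselves: incremental profit vs. what the ads cost.
margin = 0.5                             # hypothetical contribution margin
incremental_revenue = 0.004 * 1_000_000  # 0.4% lift on $1M of revenue
ad_cost = 10_000
roi = (margin * incremental_revenue - ad_cost) / ad_cost
print(f"ROI = {roi:.0%}")  # -> ROI = -80%
```

Note the two layers: the DiD gives a behavioral effect, and it only becomes an ROI once you put a margin and a cost next to it, which is exactly the step the talk walks through.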
All right, so what's the story going on here? The causal ROI that we measured was really negative and that correlational one was really, really positive. People who were clicking on eBay paid search ads by and large were finding their way to eBay in other ways if the ads weren't there. But there did seem to be this impact that was different on new versus experienced users. So lots of issues have come up with this project. Maybe this is different if we have a much less well-known brand. And I say yes, and even for that, the study is totally informative, right? If the brand is totally unknown, all of your customers are gonna be these people who've never been there before, and maybe we expect to see a bigger effect there. You know, there were a whole bunch of other comments from article readers we're not gonna go over today for the sake of time, but really these experiments were quite convincing. The one thing that they were concerned about is that in the long run maybe competitors are gonna respond. So it might be the case that if eBay stops advertising on its keyword, Amazon will pick up the slack and start advertising there.
Bradley: So the next paper I'm gonna tell you something about tries to deal with exactly this problem of how do we deal with competitor response in the context of an advertising experiment. So again here the takeaway was the correlational ROI suggested that ROI was really, really high, but random assignment suggested otherwise. In this case, the ad agency really, really wanted to sell them ads. And so their incentive was to try to make you think the ads were valuable. So what I always tell students is, if you're working with an agency that's trying to sell you ads, they're not evil, but they do have incentives that might not be the same as yours. And you need to know what questions to ask them, in terms of, did you run the experiment? How do we know what would've happened otherwise? So we're gonna skip this and talk about what happens if the competitor responds. So this is a paper that was written by Garrett Johnson, Randall Lewis, and Elmer Nubbemeyer that's in the Journal of Marketing Research. And basically their key insight here is that most ads online are won by an auction. And so when there's an auction winner, that winner gets the ad, but there's usually also a second-place bidder who might have gotten the slot if you hadn't bought the ad in the first place. So what they're gonna do here is track what happens when, instead of a control group showing no ad, the control group lets the second-highest bidder get that ad slot. And so you can measure what happens in the absence of your ad when a competitor gets that slot. So here's an example. In an experiment where you make the control group see no ads, you usually do something like a public service announcement. So say you're advertising for Macy's, your control group might see, you know, the Red Cross or something like that. When you do that, you have to pay for those Red Cross ads. So that's expensive.
But also that doesn't take into account what happens if a competitor ends up in that slot instead of something irrelevant. So instead what they do is they put what's called a ghost ad on these alternative impressions, and they say, well in this case when they don't run the ads, one of the positions is occupied by Zappos and another one is occupied by Disney. And Zappos, see, is actually a competitor of Macy's in the shoe market. Disney is largely irrelevant. And so now we can see what would've happened if they didn't run the ad and a competitor got in there. And also in this case, the thing that's really nice is that Macy's doesn't have to pay for these control ads, because that second-highest bidder is just gonna go ahead and pay the next-highest price. So some pluses and minuses of this. The plus is it gets you that net effect inclusive of this competitor response, and it's easy and inexpensive relative to a design where you have to buy those control ads. And it's exactly what you want if the question that you're asking is, should I run the next ad, holding targeting fixed? A limitation of this sort of methodology is that it holds targeting fixed. So it doesn't really say what would happen if you completely changed your strategy towards targeting somebody else, and it's not really informative for telling you how advertising worked. That being said, this is a super useful tool. So bottom line, the question that you wanna ask is, what would've happened if I had not run this ad campaign? And your statistical analysis needs to have a plan to recover exactly that. The best way to do that is a randomized and controlled experiment, or something that approximates such an experiment. So the next paper I'm gonna talk about doesn't have access to randomization, but it's gonna use what we call natural experiments to try to figure out how much advertising moves sales. And what this next paper is really trying to answer is Wanamaker's puzzle.
Remember what Wanamaker said was, half of all of my ad spending is wasted, I just don't know which half.
So we're gonna try to answer this question: how much does advertising generally work? And I think there's gonna be two pieces to this. One is gonna be asking, is there sort of an inherent law of physics, that advertising has some limitations? So how much is advertising inherently able to move the needle? I think this is a reasonable question to ask, because what advertising is essentially doing most of the time is interrupting content you want to watch with content you don't wanna watch. But the second piece of this question is, how good are advertisers currently at understanding how advertising works, targeting the right people, and using it as a complement to the rest of their marketing strategy? So if advertising generally doesn't work, we could conclude one of two things. We could conclude that, you know, advertisers are just doing a bad job, or we could conclude it's just difficult to make advertising work. So this leads me to a paper, one of my papers, that's joint with Gunter Hitsch, who also works here at Booth, and Anna Tuchman, who works up at Northwestern: "TV Advertising Effectiveness and Profitability: Generalizable Results from 288 Brands." So we're gonna be looking particularly at TV, and we're gonna be looking at past historical data. And the reason why we can't run experiments is we can't go back in time and change how TV serves ads to people. So we're gonna ask these two different questions. How much does TV advertising work in general? And then we're also gonna ask, how can we think carefully about doing measurement that's scalable across different brands? We don't want our results to be driven by our arbitrary choices, and we're gonna use a whole bunch of cool tools at our disposal from frontier marketing analytics. So we're gonna measure the distribution of advertising response curves across 288 brands.
So essentially we're doing like this eBay study, but we're doing it hundreds of times across a whole bunch of brands that we argue are representative of the brands that advertise on TV. And we're gonna ask how much does advertising shift choices? And you know, for some brands it might be more, and for some brands it might be less. We're gonna show you the whole distribution. And given that distribution, we're gonna compute a distribution of realized return on investment. So that is, given what we've estimated in terms of how much advertising affects choices, would it be worthwhile given how much advertising costs? So for our ROI, we're gonna come up with two different numbers. One is ROI on the margin. So that's, how much return is the brand getting on its last dollar spent? And so if we think about marginal ROI, what's the optimal marginal ROI? If I'm doing advertising perfectly, how much should I be getting from the last dollar I'm spending? Anybody know? Wanna put that in the chat? Derek, exactly right. We want the marginal ROI to be zero. For the last dollar that we spend on advertising, we wanna be getting exactly $1 back. For inframarginal dollars, we wanna be getting more than a dollar back, right? So we wanna keep advertising until it's not profitable to spend the next dollar. So for the marginal ROI, what we're looking for is zero. The other thing that we're gonna measure is total return on investment. So this is similar to what the eBay case measured. So we're gonna say how much profit did advertising bring in total, relative to doing no advertising at all. And so that's asking the question, to what extent would we be better off if we didn't advertise at all? So for total ROI, what we're looking for is clearly positive. We don't know what the optimal amount is, but we wanna see it be clearly positive. Marginal ROI, we'd like to see a zero. Total ROI, we'd like to see clearly positive. So our basic model structure looks like this.
Don't worry too much about what it looks like or the Greek letters or anything. But let me point out what we're estimating, cause I know before lots of you were asking questions about short versus long run, and we're gonna be very serious about that here. So the advertising we're looking at here is this capital A. We're gonna call that advertising stock. And so essentially what we're measuring here is a long run advertising elasticity. So if I permanently change my advertising by 100%, how much is that going to affect my sales? And that's how we're going to interpret this beta.
Bradley: You know, I can get into some more details of this with you guys later, or if you take my class when you get here eventually, but for now take it as given that what we're measuring here is a long run advertising elasticity, and we're gonna model that long run advertising stock as a weighted sum of advertising flows. So all of my advertising in the past contributes to what my current advertising stock is. We want causal effects of advertising, right? So we wanna be able to get around these issues of self-selection, reverse causality, and other things happening at the same time. And we're gonna have two main strategies that we're gonna use, and they're gonna be distinct from each other. And the reason we wanna use two different strategies is cause we don't have an experiment. So we don't have 100% certainty that our natural experiments simulate an experiment well. So we're gonna do two different natural experiments. One we're gonna call a baseline strategy, and one we're gonna call a border strategy, which I'm quite proud of. It's something I came up with in my dissertation. So the baseline strategy, what it does essentially is it takes very seriously the way the ad buying process works. So for most brands, when you wanna buy an ad on TV, what you do is you go to what are called the upfronts. So before the whole season of television, you buy from a menu of ads. So essentially you go to the station and you say, I wanna buy this many impressions. I want it spread out over these weeks. And you can even specify which weeks exactly if you want, and you can say which demographics you want. But there's only so much that you know months ahead of time. And so basically we are arguing that we know exactly what the advertiser knows at the time they buy the ads, and we control for those things.
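That weighted sum of past advertising flows is commonly modeled with a geometric carryover. A minimal sketch, where the carryover rate of 0.5 is a made-up illustration rather than anything estimated in the paper:

```python
# The advertising stock as a geometrically weighted sum of past flows.
# The carryover rate (0.5 here) is a made-up illustration, not an estimate.
def adstock(flows, delta=0.5):
    """A_t = a_t + delta * A_{t-1}: today's stock is today's flow plus a
    decayed carryover of everything that came before."""
    stock, path = 0.0, []
    for a in flows:
        stock = a + delta * stock
        path.append(stock)
    return path

# One burst of advertising, then nothing: the stock decays geometrically.
print(adstock([100, 0, 0, 0]))  # -> [100.0, 50.0, 25.0, 12.5]
```

This recursive form is algebraically the same as the weighted sum described above, since unrolling it gives A_t as the sum of delta^s times the flow s periods ago.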
And then what's generating variation in advertising, net of those things, is actually a lot of shifting around that happens by the station through the details in these ad contracts. So the station has the right to move advertising around as they see fit, based on the flexibility of some of these contracts. The other thing that changes is, if you want to buy advertising in real time, close to the day it airs, there's very limited slot availability, because most of these ads were sold up front, and exactly what slots are available in a given week is about as good as random. So that's what our baseline specification is going to be taking advantage of. The second strategy that we have, which I think is more naturally speaking a natural experiment, is what we're gonna call the border strategy. So TV advertising is done in the U.S. by these DMAs, designated market areas. And so here's an illustration of two different DMAs. In green is the Lexington, Kentucky DMA. In purple is the Louisville, Kentucky DMA. So if two people are watching advertising at the same time, watching the same channel, say two people are watching CBS News at the same time on the same day, everybody in this green area is gonna see exactly the same ads. Everybody in this purple area is going to see exactly the same ads as each other, but the people in green are likely seeing different ads from the people in purple. So what the border strategy does is say, we're gonna focus in on people who live exactly at these TV market borders. They live very close to each other and they look identical to each other in every way, but they're seeing different ads. And so if we're worried about these reverse causality problems, we're worried about these self-selection problems, we're gonna take care of this by getting all else equal, looking at people who are very much the same as each other, except for the fact that they just live on opposite sides of what is an arbitrary border.
So people on the Louisville side maybe really like potato chips a lot. People on the Lexington side over here don't like potato chips as much. But people right along the border like potato chips about the same amount as each other, but they're gonna see different amounts of ads. And so that's our natural experiment.
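A toy version of the border comparison, just to show the logic; the households, exposures, and purchase numbers are all invented:

```python
from collections import defaultdict

# Toy sketch of the border strategy: within a border pair, households are
# nearly identical but see different ad quantities because they sit in
# different DMAs. Every number here is invented for illustration.
households = [
    # (border_pair, dma_side, ads_seen, purchases)
    ("lex-lou", "Louisville", 8, 1.30),
    ("lex-lou", "Lexington",  4, 1.28),
    ("other",   "DMA-A",      9, 0.60),
    ("other",   "DMA-B",      3, 0.58),
]

pairs = defaultdict(list)
for pair, side, ads, q in households:
    pairs[pair].append((ads, q))

# Within each pair, relate the purchase gap to the ad-exposure gap;
# DMA-wide taste differences cancel out of the within-pair comparison.
for pair, (hi, lo) in pairs.items():
    d_ads, d_q = hi[0] - lo[0], hi[1] - lo[1]
    print(pair, round(d_q / d_ads, 4))
```

The key design choice this mimics is the within-pair differencing: any level difference in tastes between the two DMAs drops out, leaving only the ad-exposure gap to explain the purchase gap.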
Bradley: So the data we're gonna use, we're gonna bring in data from AC Nielsen, from our Kilts Center for Marketing here at Chicago Booth, which is one of the really awesome things about this place. We house all of this data. So we've got scanner data on prices and quantities for all of these products. Think about our analysis being at the brand level. So each observation is going to be an advertising effect for a brand, like regular Coca-Cola. And then we've got this advertising data. We've got advertising occurrence and viewership data, which are program ratings across these different TV markets and across time. So we want to be able to answer this question in sort of a generalizable way. So we're gonna start with the universe of brands in this data that advertise on TV, so that's about the top 500 brands. And we're basically just gonna drop the brands that don't ever advertise on TV. So our final sample is gonna contain 288 supermarket brands that are among the major advertisers. So our results are actually publicly available on an R Shiny website that you're welcome to visit if you like, and you can play with our results. I'm not gonna do that now for the sake of time; I'll just walk you through some of the more interesting results that we got. So here's our main result on long run advertising elasticities. We document very, very small long run advertising elasticities, actually quite a bit smaller than if you were to just look case study by case study at things that were published. In fact, only about a quarter of the estimates that we get suggest that advertising causes any sales in a statistically significant way. Two-thirds of our estimates are not statistically different from zero. And our median advertising effectiveness estimate is the same in both strategies. It's about 0.014. And how you interpret that is, if I were to end all of my advertising forever, the typical brand would lose 1.4% of their sales forever.
So whether or not that's worthwhile is an open question, and we're gonna compute that in the ROI section. But this number is quite small, and in fact, we can actually show you that it's robust to all sorts of fancy modeling methods. So we're gonna do a bunch of machine learning methods. We're gonna do a semi-parametric model regularized using the Lasso.
So here's an example of a brand where we estimate the advertising effectiveness at different advertising stock levels. In pink, we've got the semi-parametric model. In blue, we have our more parametric baseline model. And you know, for those of you familiar with machine learning methods, you can see there's sort of some overfitting going on with these little bumps, but by and large we're estimating a near-zero effect of advertising no matter what method we're using. And if you look at this in our histograms as well, brand by brand, the machine learning doesn't change anything. So it's not like using these fancy methods suddenly produces bigger advertising effectiveness. We're just finding not very effective advertising. All right, so John Wanamaker's puzzle we still haven't quite solved. We found that advertising effects are small, but that doesn't necessarily mean they're wasted. So how much is wasted is gonna depend on the cost of advertising, the profit margin per sale, and the total amount of advertising done. So there's a question from Peter in the chat of, does this vary for bigger companies or older companies versus smaller or younger companies? And that's a very good hypothesis. So I would posit that mechanically, return on investment would likely be higher for smaller companies in our framework, because they're starting from a lower baseline. And so we're estimating this sort of curved effect of advertising, where it has a bigger effect when you go from zero to something than from a lot to a little bit more. So we are gonna estimate a higher return on investment for these smaller, newer brands, but we're not actually gonna estimate a higher behavioral effect. We're gonna still estimate very small behavioral effects. All right, so in terms of measuring the return on investment, we're gonna do this marginal ROI, which again, we're looking for zero. And then we're gonna also be measuring this overall ROI.
How much did we gain in total, as opposed to if we didn't advertise at all? So here's our estimates for the marginal ROI. We've got three different panels here for different assumed profit margins, because we don't observe what the profit margins of these brands are. But a typical profit margin on a consumer packaged goods brand is about 30%. And so that's the middle panel. And so what this shows you is that for our median brand, we're finding a negative 87% return on investment. And remember, the optimum that we're hoping to find is a 0% return on investment at the margin. And the gray bars represent negative and statistically significant. So we can say with some statistical degree of confidence that most brands seem to be advertising too much. They're advertising so that on the margin they're getting less than that dollar back. In terms of the total ROI of their advertising spend, here is where we wanted to see clearly positive effects. And here we're not seeing it. So for the median brand, we're finding a negative 57% return on investment in total. So that's saying that in total, for the average dollar you spend, you're only getting about 43 cents back. So you'd be better off not advertising at all if you're the typical brand in our sample. And keep in mind these are long run advertising elasticities, not short term. Now the statistical certainty here is a little bit less clear. So the gray, those are negative and significant. So those are clearly companies that are advertising way too much and would be better off not advertising at all. The red here is positive and statistically significant. So we can say with some degree of confidence that these brands are not wasting all of their money. And the blue ones, their confidence intervals include zero ROI. So you know, there's some chance that they're getting some positive ROI, but there's also some chance that they'd be better off not advertising.
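The marginal-versus-total ROI distinction can be made concrete with a toy concave response curve. The functional form and every number here are hypothetical, not the paper's model:

```python
import math

# Toy concave sales-response curve; the functional form and all numbers are
# hypothetical, not the paper's model. margin = profit per $1 of sales.
margin = 0.3  # the 30% panel in the talk

def incremental_sales(a):
    # Diminishing returns: each extra ad dollar buys less than the last.
    return 400_000 * math.log(1 + a / 50_000)

def marginal_roi(a, eps=1.0):
    # Return on the *next* dollar of spend.
    d_sales = incremental_sales(a + eps) - incremental_sales(a)
    return margin * d_sales / eps - 1

def total_roi(a):
    # Return on *all* spend, versus not advertising at all.
    return (margin * incremental_sales(a) - a) / a

# Optimal spend: advertise until the last dollar just breaks even.
# Here margin * dS/da = 120_000 / (50_000 + a) = 1  =>  a* = 70_000.
print(abs(marginal_roi(70_000)) < 1e-3)  # True: marginal ROI ~ 0 at optimum
print(round(total_roi(70_000), 2))       # 0.5: total ROI still clearly positive
```

The sketch shows why the talk looks for zero on one measure and clearly positive on the other: at the optimum, the last dollar breaks even while the inframarginal dollars still earn a surplus, and pushing spend past that point drags both numbers down.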
So the lesson here is, it's just not obvious that you can throw ad dollars at something and it's gonna work. You're sort of shooting for this narrow window of effectiveness, where you're moving the needle a little bit and trying to make it profitable given the cost. So Gary asks, is it possible that advertising effectiveness is limited now, considering that brands have already advertised significantly in the past?
Bradley: Absolutely. So our sort of notion of long run advertising effectiveness takes into account that there's a flat part of the advertising response curve. If you get into that flat part of the response curve, your marginal return is gonna be lower. But we're here measuring the total response, including that steep part of the curve. So if you advertise too much, into that flat part of the curve, you're gonna be wasting more and more ad dollars. So the big takeaway here is the majority of brands would be better off if they advertised not at all, and would probably be even better off if they advertised a little, but not as much as they were advertising to begin with. So, important conclusions here. Our median ROI is about negative 90% on the margin. This means that the typical brand is advertising too much, and also the median ROI of all observed advertising is about negative 57%. We find 11% of brands seem to be doing pretty well. For 60% of brands we can't tell for sure if they made or lost money. And for 31% of brands we find that they almost surely did lose money. So the important conclusion here is that the economics of advertising is tough. Most brands can plausibly achieve some positive ROI, but they're threading this really tight needle. The economics just isn't super attractive for most brands. And I think a piece of this is unsurprising, given that there's this common wisdom about advertising, that it's being used to trick people. And I think, you know, there's scope for improvement in advertising, if advertising were thought of, you know, more critically as something that complements the marketing strategy process. And so a lot of you raised this in the chat, that, you know, a new brand, it might be more important for a new brand to get the word out, right? So they're on the steep part of that advertising response curve.
And if a brand thinks carefully about those economics, they're gonna do a much better job than if they just throw ad dollars at something. So you need to be deliberate about how your advertising strategy really complements this overall marketing strategy. Just throwing ad dollars at a problem is a bad solution. We wanna remember that we wanna target advertising the best we can. And if you have to target everybody, maybe a new product is a better thing to do that with, and not a really mature brand that's way out on that flat part of the advertising response curve. All right, so our main overall takeaway here is that the behavioral effects of advertising are small, and I think you also wanna be wary of agencies who give you one-off examples of success, cause you also have to consider the possibility of failure when constructing your expectation for how much advertising will work. And our results suggest here that Wanamaker was probably too confident when he said 50% of ad spend is wasted. We're getting closer to 57%.
All right, just to sort of sum up here before we get into Q and A. The main way to fail at measuring your advertising return on ad spend is through these three channels: reverse causality; selection, which is where, you know, people actively choose to view ads or click on ads; and then simultaneity, which is other things happening at the same time. I think it is useful to think about separating these three things out and not just blanket say correlation is not causation, because if you think one of these three things is happening more than the others, it can inform your experimental design. Our solution here is gonna be random assignment, either through experiments that you create or that nature creates for you, things like these border strategies. I showed you some examples of how advertising didn't work out as hoped: eBay paid search, and, you know, TV in general. And I've also shown you some state-of-the-art tools and ways of thinking that help us achieve better advertising measurement. So I showed you the idea of this light switch method and true randomized geo targeting in the eBay case. I showed you this ghost ads idea for if you want to correct for the fact that competitors are gonna jump in and steal your ad slot. And then finally we talked about these natural experiments, so either the TV station moving around your ad slot or this thing like the border strategy where we're looking at sharp changes in advertising that have nothing to do with people's underlying preferences. It's been a pleasure talking with you all today. I hope you've learned a little bit more about advertising measurement. I hope to see many of you in class in the future, and then we can actually go through in much more detail how advertising fits into marketing. I think a lot of you were asking questions in the chat that were very pertinent to thinking about the theory behind it.
And just because we didn't have time today to talk about all of it, we certainly will if I see you in class. So there you go.
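The random assignment idea summed up above, whether an eBay-style light switch or a geo experiment, can be sketched with simulated data. Everything here (the number of geos, the noise levels, the assumed true lift) is a hypothetical illustration:

```python
import random

random.seed(0)

# Simulated randomized geo experiment: flip a coin for each region, turn
# ads on for heads and off for tails, then compare average sales. The
# random assignment is what breaks reverse causality, selection, and
# simultaneity, since it has nothing to do with underlying preferences.
n_geos = 200
true_lift = 2.0                       # assumed true incremental sales per geo

geos = []
for _ in range(n_geos):
    baseline = random.gauss(100, 15)  # organic demand varies a lot by geo
    treated = random.random() < 0.5   # coin-flip assignment to ads on/off
    sales = baseline + (true_lift if treated else 0.0) + random.gauss(0, 5)
    geos.append((treated, sales))

treated_sales = [s for t, s in geos if t]
control_sales = [s for t, s in geos if not t]
estimated_lift = (sum(treated_sales) / len(treated_sales)
                  - sum(control_sales) / len(control_sales))
print(f"estimated lift per geo: {estimated_lift:.2f}")
```

Note that even with clean random assignment, a true lift of 2 against baseline noise of 15 is hard to pin down with 200 geos; the estimate can easily land far from the truth. Small effects plus noisy demand is exactly the tight needle described above.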
Kara Northcutt: Great, thanks Brad. Do you have a few minutes to just answer questions that have popped through?
Bradley: Sure.
Kara: So there's one that Sandy asked, she said she missed a little bit of it. And curious, is this through online advertising only? I am assuming advertising through streaming services is different from social media. Any response to that?
Bradley: Yeah, so I mean I think there are, you know, fundamental differences between ads that appear to you in the form of link clicks versus in the form of video. There are also fundamental differences between ads that are served through individual targeting mechanisms like online streaming versus linear TV, which is much more analog. So the nuances of this, I'm happy to talk through further when you get here and you take my class. There's gonna be a lot more of that.
Kara: I like that, leaves me {inaudible}. This is a very generalized question. If most ad campaigns do not work, why do most big brand companies go for it or continue doing it?
Bradley: This is an excellent question and this is like a classic University of Chicago question.
Kara: Yep.
Bradley: I'm thrilled you're asking it. So I'm chuckling about this cause there's this old adage at the University of Chicago that if you see a $20 bill lying on the street, it must be fake, because somebody would've picked it up if it was real. No, so I mean there's lots of reasons why I think there's so much advertising that doesn't work. One is I think that there's people who haven't taken my class and who subscribe to this common wisdom of advertising just being a means of tricking people. And that doesn't work super well. The other reason is, I think that there's just different incentives between the people setting the advertising budget and the shareholders in a lot of cases. So you can imagine if you're an ad agency and you're in charge of selling the ads, it's in your interest to convince companies to buy them, because your survival depends on it. And a lot of firms don't have the time or the bandwidth to really be holding the agencies' feet to the fire. And it's not just an external agency problem. You can also imagine that inside of a company there's a marketing section and even an advertising section, and they might not see it as worth their while to prove their own existence futile. So I think these sorts of agency issues are the primary reason why we see inefficient actions within firms, not just in advertising, but across different types of strategies.
Kara: Thank you, that's helpful. Things coming in fast and furious here in the chat. This is from Anuj. Do these types of aggressive marketing strategies slash campaigns make it difficult to create brand extension or line extension down the line?
Bradley: I mean, so there's some nuance to this, and this is related to my comment at the beginning of the lecture that really what we need to do is think about our advertising as a complement to those strategies. So if we're doing a line extension down the line, we wanna think about, okay, well, how can advertising help support that line extension as opposed to getting in the way of it. If you're just doing a mass brand strategy, you know, maybe it hurts, maybe it helps your line extension. Particularly, you know, if your brand advertising strategy has always been that you're a super high quality brand and you wanna do a line extension into a cheaper version, this is gonna generate some conflicts for your brand. So again, this is something that we're gonna dive into in a lot more detail in my class when you come here.
Kara: Absolutely, and for those who have asked some questions, as you can see, there's a lot in the chat, so feel free to repost. If you wanna glance at the one that just came in from Joanie, Brad, it's a longer one. Let me see if I can paraphrase here. I found LinkedIn very advantageous in marketing my company, especially video. Have you done research on smaller companies that are wearing multiple hats?
Bradley: Yeah, so I've actually got some big active research going now in collaboration with some Facebook collaborators. And the vast majority of companies who advertise on Facebook are tiny companies. And so that generates a whole bunch of interesting new methodological questions, because, you know, if you run an experiment that has hardly any observations, it's hard to learn much. And so that's one of the things we're doing there. But yeah, work in progress, you'll see some of this hopefully in the near future. I should also say I'm happy to answer questions about Booth more generally if you have a question that's not advertising specific, but I'm also happy to keep answering advertising questions.
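The small-sample problem raised here can be illustrated with a standard back-of-the-envelope power calculation for a two-sample test at 5% significance and 80% power. The effect sizes and noise level below are hypothetical, not numbers from that work in progress:

```python
from math import ceil

def n_per_arm(effect, sd, z_alpha=1.96, z_power=0.84):
    """Approximate observations needed per arm to detect `effect` against
    outcome noise `sd`: n = 2 * ((z_alpha + z_power) * sd / effect)**2."""
    return ceil(2 * ((z_alpha + z_power) * sd / effect) ** 2)

# Ad effects are typically tiny relative to the noise in purchase behavior.
print(n_per_arm(effect=0.2, sd=10.0))  # small lift: tens of thousands per arm
print(n_per_arm(effect=2.0, sd=10.0))  # big lift: a few hundred per arm
```

Because required sample size grows with the square of sd/effect, a tiny advertiser's experiment with a few hundred customers simply cannot detect the small lifts that realistic advertising produces.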
Kara: Yeah, you read my mind. So you mentioned the Kilts Center earlier. The Kilts Center, just very briefly, is one of our mini centers, and they work with students and faculty. Students take advantage of the Kilts Center, particularly those looking into product management type roles and marketing. There are a lot of roles within marketing, and that's really expanded with the Kilts Center focusing on tech as well. So they have mentors, there are a lot of conferences, and case competitions that students participate in. But Professor, from your perspective, like with the Nielsen data you mentioned, talk about what you value about the Kilts Center and how you interact.
Bradley: So the Kilts Center is I think a really, really nice and unique thing here at Booth, and the whole point of it is to center the Booth marketing approach in all aspects of the school. If you take a marketing sequence someplace other than Booth, you'll see it's quite different, in some ways less quantitative, in some ways less, you know, theory focused. And the Kilts Center sort of aligns these goals across what my primary job is, which is research, with also, you know, teaching and the student experience. So I've gotten a whole bunch of data for my research through the Kilts Center. They've negotiated with data providers, which you saw some of today, but they also sponsor these programs for students. There's a Kilts Scholars and Fellows program where we get to interact directly with students about our research and support the research being entered into the class pedagogy. So it's a pretty cool and unique thing here, how we think about marketing, and the Kilts Center is just really great at supporting all of that.
Kara: Yeah, that's great. That's really helpful. And what would you say, so some of the individuals in here might not be going into a traditional marketing role, but can you talk about the value? So we have a flexible curriculum at Booth. We essentially allow you to pick and choose classes, but we ensure you take classes within each of the function areas like marketing, strategy, finance, et cetera. So talk about the value no matter the role. Many of our alumni end up in general management. Why is it important to have this marketing context and education as part of their MBA?
Bradley: Absolutely, so I think, yeah, most people aren't gonna end up being marketing concentrators. But even speaking to, say, the finance concentrators, at the end of the day, very, very few of our students end up designing new financial products, right? At the end of the day, they're marketing financial products. They have customers, and they have to satisfy their customers' needs by providing the product or service that fits them, which is exactly what marketing is all about. So my view is that if your company has customers, marketing is very important to you, and marketing is the way to really rigorously think about how to meet your customers' needs. So you could fairly ask, what isn't marketing then? You know, maybe operations management, maybe supply chain management isn't so much marketing. That being said, people who are doing those also have customers. So I think marketing is super, super important, and I encourage everybody to take at least marketing strategy, but most people who take marketing strategy go on to take at least one marketing elective as well.
Kara: Yeah, absolutely. Yeah, we have data driven marketing. There are lab courses like developing new products and services where you work with companies on actual challenges, products, scaling, whatever issue they may be working on. It's kinda like a mini consulting project. There's a comment from Grace, she said, now I understand why these schools generally care so much about our quantitative skills. And I think you gave a really good example of how data is brought into all aspects of the MBA experience. But that of course doesn't mean you have to have a data science background coming in. You'll take foundational classes. The faculty really help guide you through that. But are there any tips you would give on the quantitative skills that would be helpful to brush up on for classes like yours or your peers' in the marketing realm?
Bradley: Yeah, I'll answer a different question first, and then I'll come back to that. One is that the focus of my marketing strategy class is more quantitative than others, but really the more important thing than your technical ability is your ability to think through these things rigorously. Can you think through how you want to do this in such a way that you can ask the right questions? I don't care so much that you can run a ridge or lasso regression, but I do care that you can say why you want to run a ridge or lasso regression. So I focus my teaching a lot more on thinking correctly, and I know all of you are smart enough that if you end up needing to run ridge or lasso regressions, you can go online and figure out how to do that. So thinking is the most important thing, as opposed to direct quantitative technical skills. I would encourage you to, you know, go back and review your basic statistics, your basic economics 101 thinking, that'll apply to basically all of your classes here at Booth, and, you know, very basic calculus if you can. But I give you intuition that doesn't even require the calculus. I don't think you should get in your heads about the math skills. I think you should be focused on thinking about the problem correctly. And if you come here and a professor's putting crazy math on the board that you don't understand, ask them to tell you more of the intuition and they will, because I think that's a consistent way that we think about things here.
Kara: Yeah, I couldn't agree more. When we talk about the Chicago approach, it's all about the critical thinking skills, teaching you how to ask the right questions. And we understand it's a rigorous program, but know that if we admit you, we know you can handle it, we know you're gonna be successful in the program, and there are many resources to help there.
Bradley: I will say, I will say I had a student in the past who like really, really struggled with the quantitative stuff, came to my office many times very upset that the quant stuff was really hard.
Kara: Yeah.
Bradley: I really encouraged that student to focus less on that and more on the critical thinking, and to answer questions as best as they could that way. The student ended up getting a B in the class. B is great. And the student comes back and talks to me basically twice a year to tell me how things are going and how much the stuff that they learned in my class is useful in their job. So yeah, another thing I'll say is, you know, I know most of you who will end up coming to Booth were crazy, crazy successful in your earlier educations. Don't come here and expect to get all A's, because all of your classmates are really smart.
Kara: Exactly, I always tell people that you're used to being top of the class, and it's okay, like you come to a place like Booth to challenge yourself, get out of your comfort zone. No one down the line will ever ask you what your GPA was at Chicago Booth, you know, you got a Booth MBA. There's a lot that comes along with that. And we also even have grade non-disclosure. So when going through any sort of recruiting or resumes, our students do not put their GPA, and that's a student policy. They vote on that basically year over year. Because again, we want you to take those challenging classes, as long as you get something out of the class and learn the concepts and can apply those. And, you know, to the professor's point, the grade isn't as important, so.
Bradley: I've never thought less of a student in my class for getting a B. I have thought less of students who got B minus and complained about it.
Kara: That's a very fair point. So, all right, I think we've gotten through most of the questions here, and just for the sake of time, any thoughts on, you know, you've been at Booth, I believe you mentioned, nine or so years. What is it about being faculty at Booth that keeps you energized and keeps you engaged? And I guess a two part question, how do you interact with faculty? There's a question about this, like in entrepreneurship and kind of across the different disciplines, how does that work? Any examples there?
Bradley: So I'll answer the second question first, and then the first question. You can interact with faculty as much or as little as you want, and as faculty have time or not. That's different across faculty. I would say that a general rule of thumb is that if you show an interest in a faculty member's work, they'll be excited about that, because people like it when people show interest in them and their line of work. So if you wanted to get involved in research, there's lots of faculty who can get you involved in research. If you want to, like, think about cobbling together a set of classes that doesn't really fit into a concentration but is important for you to have all those things, absolutely you can do that. Your academic advisor can help you with that. So I think you should ask the question not why, but why not. If there's something you really wanna learn, like you wanna learn Chinese and you wanna take a class in the college, that can be figured out. So all these things. So I guess I can finish with this, cause I think this is a good question. What's special about this place? This place I think is a really, really special place. You know, it's a little bit hard to articulate, but I think from the MBA teaching to the research to all the other programs and external speakers, there's sort of Booth speaking with one voice, that intellectual pursuit is worthwhile. And we want to actually have impact on the world through the way we think and the way we can create knowledge through research and methods that are useful to the students. Everybody's very serious about generating knowledge, and that's awesome.
So we just recently had Professor Doug Diamond win the Nobel Prize, and a really cool thing is, you know, he came up and he gave a little speech when we had our champagne toast, and his entire speech was about just how wonderful the atmosphere at Chicago Booth was for him to do his research, in a couple of ways. In one way because the other faculty always challenge you. So when I'm here, there's so many good brains that, you know, half efforting my research isn't something that's acceptable. Everybody is trying to hold you to the highest standard, which is really awesome. The other way that Professor Diamond raised that was really helpful and hospitable to his research was the MBA students. His research was very, very technical and esoteric, and the students basically wouldn't let him get away with that. They would ask him, you know, how is this useful to me? And so that really helped him. He's a theorist, and so most of his work is, you know, purely theoretical, but it shifted from being just solving a bunch of partial differential equations to being these simple, elegant models that really change how we think about the world. And that's why he ended up winning the Nobel Prize. Diamond and Dybvig '83 really changed the way we understand how banks work and the role of banks in society. So I think that's a microcosm of this place, that the students challenge the faculty to do things that are relevant, and the faculty challenge the other faculty to do things that are extraordinarily high quality. And you know, there's an old saying in real estate that you want the worst house on the block. I always feel like I want the worst brain on my hall. And so having these brilliant colleagues and wonderful students is a big piece of that. And then talking to my colleagues who work elsewhere, I think Booth is pretty unique on this dimension. So I think it's a special place.
I hope to see you guys here.
Kara: Great, that's really helpful. I really appreciate it. So on behalf of everybody that joined today, thank you, Professor Shapiro, for your insight. It's a great sampling. I think we've intrigued a lot of people to apply. And for all the audience members, we have a plethora of virtual and on campus visits and events going on over the next few months, including generalized admissions sessions. I know there were some questions in there; those questions will be answered, and I put my email in the chat, so feel free to reach out anytime. Please stay engaged. As I said, there are many ways you can connect with us from home. And also, coming to campus is a great way to kind of do the day in the life of a student. That's true for all of the MBA programs. And we would just love to continue connecting. So thank you again, Professor. We really appreciate your time, and everyone for joining today. Thanks, everybody, bye.
Bradley: Happy to, thanks.