Devin Pope:
All right. Welcome, everyone. Thank you for being here. We'll go ahead and get started. So we're excited to have you all here for our Think Better Series with Alex Imas. I'll introduce him in a minute. My name is Devin Pope. I'm a professor in the behavioral science group here at Booth. And I want to welcome all of our in person guests and also the hundreds of people who are visiting us via Zoom around the globe.
This is our last Think Better talk for the year. Next year we'll start it off again in the fall with Nick Epley, and then in the spring Liz Dunn will be the Think Better speaker. Now a bit of housekeeping. Think Better is a public lecture series hosted by the Roman Family Center for Decision Research, a center based out of Booth. And the purpose of this series is to help behavioral science expand and to discuss how it can affect society, shape policy, and improve individual lives.
The Roman Family CDR, the center, is home to PIMCO Laboratories that conduct groundbreaking behavioral science research on the Loop and Hyde Park campuses, in the online virtual lab, and in pop-up labs throughout the city. And you can get involved by participating in studies at any of these locations. I also want to make sure I mention that there's also a lab called Mindworks that's here in the Loop.
Someone wrote this for me: the world's first discovery center and behavioral science lab, located in the Chicago Loop, and it's celebrating its fifth year. You can go to mindworkschicago.org to learn more about how to visit and how to participate in experiments there. Both Mindworks and the Think Better speaker series help spread valuable insights from behavioral science and offer ways to navigate some of humanity's most challenging and common problems.
Okay. So I am excited to introduce Alex Imas, who's both a colleague and a friend of mine. Alex is pretty incredible. I'm not joking. I'm jealous of him on lots of dimensions, some of which I won't discuss, some of which I will. So the dimension I'll discuss is his academic breadth. So Alex is kind of this crazy person who happens to know more about psychology than basically almost all psychologists, knows more about behavioral economics than almost all behavioral economists.
He can do empirics. He can do theory. He's just got a bit of everything. And what's especially interesting is that recently he's become one of the world's leading experts on the economic impacts of AI. We'll discuss both his behavioral economics research and his new work on AI tonight. He is also co-author, with Richard Thaler, of the book The Winner's Curse: Behavioral Economics Anomalies Then and Now. He has a Substack that he's been writing called Ghost Elect... Help me out.
Alex Imas:
The Ghosts of Electricity.
Devin Pope:
Ghosts of Electricity.
Alex Imas:
It's a Bob Dylan line for those of you who know it.
Devin Pope:
I did not know that. I should have known that. And so we'll talk a little bit about that as well. But please join me. Let's go ahead and give Alex a round of applause.
Alex Imas:
Thanks. Thank you.
Devin Pope:
Okay. So Alex, here's my thought. I was thinking, let's start with a few questions about behavioral economics related especially to the book. Then we'll move to some AI stuff, and then we'll combine the two. We'll come full circle and do a little bit of AI and behavioral economics, and then we'll open it up to some audience question and answer. Okay. Does that sound okay?
Alex Imas:
Awesome.
Devin Pope:
Okay. So let me start off with something easy. You've been studying behavioral economics with Richard, both its past and its present. Behavioral economics has been around now, you could argue, for 40 to 50 years; most people would say since 1979, when Kahneman and Tversky published prospect theory. Tell me about the state of the field. What have we done well over the last 40 to 50 years? Where are we at? What are your thoughts on the field?
Alex Imas:
So I think the field started out mostly in the lab. Most of the early studies, including the Kahneman and Tversky 1979 paper, looked at laboratory subjects making very abstract decisions, choices over lotteries. And the reason for doing that was the standard economic toolkit: one of the great aspects of standard models in economics, like expected utility theory, is that they make very falsifiable predictions.
So the nice thing about that in social science is that you can take it and construct an abstract setting and say, "This is what expected utility theory predicts, and this is what happens." And because these are general models, it doesn't matter that these are subjects in the lab; as long as they're humans, these are the predictions of the models. And the early work was really focused on saying, look, these axioms of choice that are the bedrock of standard economic models, we can show that they're violated here, here, here.
So people are loss averse, for example. That's one of the findings from the original paper. It means that when you're given a lottery with a positive expected value, so most of the time you win but sometimes you lose, and the stakes are small, expected utility theory says, "Look, everybody should take it." And what they found is that people were still turning it down.
And so this was one of the anomalies that we discussed in the book. And so that's why it started in the lab and it kind of stayed in the lab for a long time. And what's happened since Richard was writing his anomalies columns, for example, that made the core of the original winner's curse, is that behavioral economics has gone from the lab into the field. And this has been actually due to you and folks like you, right?
Devin here, he didn't think I was going to introduce him, but he has really started this movement of going from lab studies to the field. The standard economic dismissal of behavioral economics was what was called the confused subject hypothesis: these are confused subjects, drunk university students being paid no money, it doesn't matter what they do, we care about market participants. And what did Devin say?
Well, Devin said, "I'm going to collect data on some market participants." And it turns out they also exhibited the anomalies, and in very important ways. They left a lot of money on the table. You have your famous golf paper: pros like Tiger Woods, who many of you have heard of, this is the expert of the expert of the expert, and they're making mistakes, and these mistakes are consistent with behavioral economics.
So it's much harder for economists to ignore the field if you can say, "Look, it's not just subjects in the lab. It's not just abstract settings. It's not just low incentives. It is the highest level of expertise with very large incentives and they're still making mistakes." So that's where the field has transitioned from starting with the humble beginnings of... You read this 1979 paper, there weren't incentives at all.
These were hypothetical choices; students were given a bunch of leaflets and made little choices and things like that. From that, to Tiger Woods, to Terry Odean and the field of behavioral finance, which has shown that this holds with institutional investors trading millions of dollars every single day. So that's where the field has gone.
Devin Pope:
I like this. So I think I agree that there's been a lot of these findings that were super early on in the field, like loss aversion, that have held up really well. Are there examples of anomalies that have not held up well? And what do we learn from that?
Alex Imas:
Yeah. So in this book, what we actually tried to do is to say, where has the field gone? But not only that, let's also go backwards and systematically examine the original anomalies. Anomalies such as loss aversion, or myopia in time preferences, where people make very impulsive choices. There are a bunch of different anomalies, and we'll talk about more of them as the conversation goes on.
These are the bedrock of behavioral economics. Are they robust? Are the original anomalies robust? So what we did in the book was basically take all of the anomalies and replicate them. We took the instructions, we ran the experiments again, and we have an online appendix that anybody can access to look at the exact replications and the instructions that we ran.
So any of you who don't believe me, for whatever reason: download the instructions and run the experiments yourself. Everything you need to do that is online, and it doesn't take a lot of money. And one of the phenomenal things, I think, was that basically all of them did hold up.
One part of prospect theory, diminishing sensitivity in the loss domain, was not super robust, but other people have already shown that, so that was not a new finding. But everything about time preferences, the winner's curse, cooperation in the public goods game, rejections in the ultimatum game, all of the anomalies that you think of as behavioral economics anomalies, they replicated beautifully.
Devin Pope:
That's a pretty positive picture. I mean, there's certainly been a lot of behavioral economics work over the last 50 years that's garbage, maybe not some of the early stuff, but...
Alex Imas:
Yeah, yeah, yeah.
Devin Pope:
You're not saying that every single thing that's ever been published in...
Alex Imas:
Sorry. So there's a self-selection of what made it an anomaly, right? Richard described it: why would he write an Anomalies column? He was writing the Anomalies columns for the Journal of Economic Perspectives, and those became the original Winner's Curse. What made it into an Anomalies column? Things that were not garbage. And essentially there were a lot of studies in the '80s that didn't replicate, or weren't that interesting, or whatever, and through the basic process of accumulating scientific knowledge, nobody really built on those studies.
They didn't become the bedrock of behavioral economics. Nobody built on them, so they never made it into the models in the first place. But there were some anomalies in behavioral finance, and we discuss this in the book, that we didn't include. Once you realize what those anomalies were, you'd realize why they didn't make it into the book and why they don't replicate.
Financial markets: once you have an anomaly, what does that mean? It means you can make money off of it. So when you write a paper saying there's a January effect, a lot of people start trading on it and the anomaly goes away. So by definition, the anomaly should go away, if we have well-functioning financial markets, through the process of publishing the paper.
And so if you could go back in time, you could say, "Hey, when you were analyzing the data to write the original paper, the anomaly was still there. It was replicable. Looking at it now, it's no longer there, because we have functioning financial markets."
Devin Pope:
Nice. Nice. Okay. Danny Kahneman was once asked, if he had a magic wand and could make one bias disappear, which would it be? Do you know what his answer was?
Alex Imas:
Overconfidence.
Devin Pope:
Ah, okay. You knew the answer. Without being too swayed by his answer, what are like the one or two biases that you see as the most pervasive and the most pernicious perhaps that we face today in our society?
Alex Imas:
That's a good question. I think belief-based biases, probably. Things like motivated reasoning, where people... One of the big social issues that I'm sure you've heard about is polarization. There are non-behavioral models to explain polarization, but to me one of the most parsimonious models is that people just seek information to confirm their priors, right? So confirmation bias. And this is related to overconfidence.
It generates overconfidence, actually, but it's at a lower level. And in our digital environment, you don't have to get all of your information from the same source as everybody else; you can actually choose your information sources. When you have things like confirmation bias and motivated beliefs, this leads you into a world where people are just getting information from sources that are going to confirm their beliefs.
And if you start at even small differences initially, maybe you almost agree, those differences can inflate if you have the opportunity to select where you get information from. So I think given the current moment, I think motivated reasoning, confirmation bias, these sorts of things are pretty concerning to me.
Devin Pope:
Nice. Before we get into AI, you mentioned how our online behavior or just having the internet be pervasive in our lives is something that might lead us to be more biased. We might go out and select certain types of news or something that fit our priors and that can enhance or exacerbate confirmation bias. Net, what do you think about today versus 1980 in terms of how hard it is to protect ourselves from biases versus the things in our lives that are causing us to fall prey to our biases?
Alex Imas:
So there's two sides there. So I think on the one hand, we know about these biases, theoretically. So there's information. If you ask the person on the street, have you heard of biased behavior or even a particular bias, a lot of them would say yes because of this sort of research that behavioral scientists have done that has been in the media, that has been talked about.
So on the one hand, there's awareness, which theoretically should lead to less biased behavior. But on the other side of the coin, the digital space that a lot of people now spend a lot of time in is basically a sandbox for platforms to create environments that could exacerbate the biases, as could the way that information is delivered across multiple sources in general.
So potentially that could exacerbate the bias. Now, which direction does it go? Are people more biased now than they were before? My sense is it's probably kind of a wash. On some dimensions they're more biased; they're more polarized, for example. On the other hand, many people now hold index funds and things like that. So as far as loss aversion and the household non-participation puzzle that people were more worried about back in the day, that might be less so.
Devin Pope:
Yeah. I like that. One bias that you mentioned briefly is present bias or this idea that we have self-control problems. It feels like that is one that is being pushed to its limits right now with having a phone in your pocket that can do sports gambling, pornography addiction, if that's something that's causing you to have problems in your life.
There can be a lot of things that are tempting in a way they weren't 40 years ago. It was very hard to do either of those activities for 10 hours a day 40 years ago, whereas you can today. Would you say that's an area where it's exacerbated and there's like nothing... I mean, we were aware of those issues and problems. It's not like awareness is helping that much on that front.
Alex Imas:
Right, exactly. So I think the whole idea of digital addiction, obviously that didn't exist before. And I think what technology really did was eliminate a lot of frictions. So before, in order to read the news or something like that, you had to wait until the evening news or something like that was on the TV, or you had to go to the store and get a newspaper. It came out in the morning. Now, literally right now as we're talking, we are missing out on information that lives in our pocket and is constantly getting updated.
Devin Pope:
My pocket's buzzing.
Alex Imas:
Maybe yours. I'm not that popular. I'm just talking about news, but then there's all sorts of other... You mentioned gambling. Before, what's the friction? You have to get in your car, go to a degenerate, seedy little casino, and spend a ton of money. People are like, "Where are you?" Now you could be playing with your kids, and then they turn around for a second and you're like, "Ah, the Knicks, the Knicks. More money on the Knicks." You can do whatever you want. Not that the Knicks are a great team either. Is the basketball season still on?
Devin Pope:
It is.
Alex Imas:
There we go.
Devin Pope:
The Knicks are up 3-2 in their series.
Alex Imas:
Okay, so the Knicks. Yeah, there we go. I didn't even know that.
Devin Pope:
Against your Philly 76ers.
Alex Imas:
So yeah, I think to answer your question, I think the...
Devin Pope:
Sorry, the Hawks. I just want to correct the record. I said the 76ers.
Alex Imas:
So the elimination of frictions exacerbates a lot of the things that have held those biases in check.
Devin Pope:
Yeah. I was thinking of one counter example, but tell me what you think about this one because I never really thought about this until I wrote down this question earlier today. But other than AI, I would say one of the leading candidates for something big that's happening in our world right now are GLP-1 drugs. And when I took behavioral economics classes, the examples for present bias was always like overeating, right?
Matthew Rabin would talk about a big piece of chocolate cake sitting in front of you and you have to make a decision whether to eat it or not. This feels like an example where technology has maybe basically... It could be that 20 years from now, you can't use this example in class anymore. Is that right? Should I be thinking about that as a technological adoption that has fixed one of the leading behavioral problems that we've got?
Alex Imas:
I think that's exactly how you should think about it. I mean, look, this is a new drug. We don't know what the full story is. The data that we have now make it seem like almost a miracle drug. There are side effects, obviously. There's nausea. It's not a particularly pleasant thing to be on. But it is as close to a self-control device for eating as we've ever had.
And that is technology. I don't know if you know about these drugs, but they came from the saliva of the Gila monster, which is this weird little lizard in Arizona, I think. I was talking to the PI of the lab at Penn; she was the one who first developed the drugs. And it's insane how you go from the saliva of a weird lizard to "Oh, I'm not eating cake today." Right? But that's technology.
Devin Pope:
Yeah. Anyway, so I think that would be one example of a place to maybe be hopeful where technology is going to just completely overpower any of the negative consequences that could come from it. Okay. Actually, let me ask one more question about behavioral economics, kind of a big picture question. A little story time for myself too on this one. My first behavioral economics introduction was a class from Matthew Rabin, who is a leader in the field and he was a professor at Berkeley while I was a student there.
And the first class that I went to, I knew there was going to be this wacky, cool, new field that Matthew Rabin was involved with, but I wasn't sure exactly what it was going to be like. And the entire first class he just talks about how great economics is. And it was kind of insane. I was like, "What's going on here?" And then at the very end of class, he's like, "Do you see this beautiful field we have?"
He's like, "We're going to now starting next class tweak it just a little bit. We're going to maybe relax some assumptions or add a parameter here or there." And so that was Matthew Rabin's approach to behavioral economics. So fast-forward to when I arrived here at Booth, I co-taught a PhD class with Richard Thaler, your co-author, and the first day of class we showed up into the classroom and we weren't very prepared as you might imagine. Emir Kamenica was in the class too and all three of us came to class at the same time.
Alex Imas:
I taught the same class, but I replaced you.
Devin Pope:
You replaced me?
Alex Imas:
Yes.
Devin Pope:
You did it with Richard and Emir?
Alex Imas:
I did it.
Devin Pope:
Okay, so maybe he was still doing this with you. Okay, I want to know what you think about this. So Richard said he would take the first class and do kind of an introduction. We were like, "It's only fitting that you do a little introduction." And he came in and there was like a little bit... But basically at some point, I'm paraphrasing, I hope this is all right, but he goes, "All right." This is to a group of PhD Econ students.
He said, "Everything that you've ever learned is wrong and we're here to retrain you." That's how he introduced or motivated this class. So for him, it was like the exact opposite of Matthew Rabin. It was a revolution and behavioral economics had arrived to fix this depleted field. Now, I think he was being a little bit excited.
Alex Imas:
Richard likes the controversy.
Devin Pope:
Yeah. Where are you at? Is behavioral economics a revolution that's taken place as one or a revolution that's still ongoing or is it a nice little tweak on economics?
Alex Imas:
So we talk about this in the epilogue. Behavioral economics in some sense has been a success, in that almost every type of department has at least one behavioral economist on the faculty. We're publishing papers in the top journals all the time. But in the way that might matter most, it's been a complete failure. And the reason is that if you open up a standard micro textbook as an undergraduate, there's zero behavioral economics.
You open up a macro textbook, there's zero behavioral economics. Literally you have to take a behavioral economics course if it is available to learn any behavioral economics, and that does not sound like a success to me. And so I think if it's a revolution, it's either ongoing, which would be nice, or it's failed. And I think my view is that it's ongoing and now there's this big debate even within behavioral economics, how radical should we be?
Matthew and folks that he's taught are still very much in the let's take the standard model and let's add the least amount of things to it. So it's usually one parameter. So like in present bias, you add one Greek parameter beta and that'll capture all of self-control problems and that's going to be the beta-delta model. And then let's take the model and do our standard economics with it except for this one tweak.
But then there's other folks who are like, hey, forget that whole thing and let's write down a completely new model of like how people make decisions where basically everything is because of people's perceptions of the problem. So it's all about beliefs, your belief about the informational environment. There might be some utility function in the background somewhere, but we're not even going to worry about it.
So that's a complete, super radical departure from the standard economic toolkit. And I think right now the revolution is not behavioral economics versus economics, it's within behavioral economics. I'm part of that. I think I'm like in between somewhere. I find myself being sympathetic to the folks where like, look, we have to think about like what people see when they see a decision problem before deciding how they actually make the decision.
But I'm also not all the way on the other side, because I still think utility and parsimony matter in economics. When I teach behavioral economics, I actually teach it like Matthew. My first class is about the beauty of economics and why it's such a successful social science. As I said earlier, it makes falsifiable predictions. You can also take a model and then build on it. It's a cumulative science, which is very rare in the social sciences.
In the sense that you read a paper from somebody at Rochester, somebody at Berkeley, somebody at UCLA, somebody at the University of Maryland, and they're all in their own intellectual environments, but they're all building on the same framework. They're all building the science. That's what you have in physics, that's what you have in biology, but it's very rare to see in social science.
And I think that that is what gives behavioral economics and economics its replicability, first of all. Part of the reason I think behavioral economics has had no credibility crisis or replication crisis is that experiments get replicated all the time. You don't need to run dedicated replication experiments. The way you often publish a paper in behavioral economics is: I think this matters here.
So I'm going to take your experimental instructions, which everybody posts online whenever they release a paper, and I'm going to add a tweak. So part of the experiment I'm going to publish in a good journal is a replication of your experiment, plus my tweak, and that's the paper. So replication is just part of the science. To write a paper, you replicate, and nobody even calls it replication.
It's called a control condition. And I think that the ability to build on a cohesive framework that everybody agrees on as important is a huge strength that I don't think we need to throw out with the bath water.
Devin Pope:
Nice. Can I add one small thing? "It's failed" is a pretty good line. But here's one pushback I'd give. I think one thing that behavioral economics has succeeded in, and tell me if you disagree, is that non-behavioral economists all think about behavioral economics now. Take the best public finance person in economics, someone like Raj Chetty: one could argue he's basically a behavioral economist.
Alex Imas:
Yeah. He gave a keynote at the Behavioral Economics Conference.
Devin Pope:
Perfect. Or take David Card, a top labor economist. He's written papers that are very, very behavioral. I think the analogy would be: you run for office and you lose, but you really move the agenda, or you cause the incumbent to think about your issues or even adopt some of them. And in that sense, it feels like behavioral economics has won the day a little bit. No?
Alex Imas:
No, that's interesting. Sorry, this is a bit of an aside. I was listening to a podcast where a Renaissance historian was on air, and she was saying that with revolts back in Medieval Florence, even when your revolt lost, depending on how close you came to winning, the government would become less authoritarian, because they were worried about the next revolt, because you guys almost won. So that's similar. Yeah.
Devin Pope:
Good.
Alex Imas:
We kind of won.
Devin Pope:
Long live the revolution. Okay. So in three sentences or less, explain why an academic job at Booth is so good. I'm leading somewhere with this.
Alex Imas:
Okay. Freedom. Resources.
Devin Pope:
Three sentences, or three words works too.
Alex Imas:
No, I'm going to... No, there's going to be...
Devin Pope:
I thought you were giving three words.
Alex Imas:
No, no, no. The freedom, resources, and intellectual environment to do groundbreaking and scientific research.
Devin Pope:
Nice. Good. But like for your own well-being.
Alex Imas:
Oh, it's bad.
Devin Pope:
No, no, no, no, no. It's incredible, right?
Alex Imas:
No, it's incredible.
Devin Pope:
You make money. You don't have a boss.
Alex Imas:
No, it's incredible. It's incredible. It's incredible. It's the best job in the world. We were just talking about this earlier. It's just the best job in the world.
Devin Pope:
Okay. So explain why you're leaving academia for a year to take a year off and do something else.
Alex Imas:
Okay. So I'm going on leave. I'm not leaving academia.
Devin Pope:
I'll let the audience decide if that's a distinction.
Alex Imas:
I've been doing a lot of AI work for the past two years. At Booth, we are, I think, the only business school in the country that has its own dedicated applied AI group, with its own faculty, tenure lines, and concentrations. Other schools have concentrations, but we have an actual group dedicated to AI. And I was one of the people involved in building that group. And I felt like it was time to spend some time in an AI lab and see what people are doing inside the actual institutions. So I'm spending a year on leave, getting my hands dirty, and looking at the social science.
Devin Pope:
Good. Alex sent our group an email when he told us that he was going to be on leave for the next year. And I think you said it more eloquently in your email. You were like, "Dude, this could be a groundbreaking thing that's about to happen and I have a chance to go and be part of this. And so why wouldn't I take some time and try to have a massive amount of influence?" Right?
Alex Imas:
Yeah.
Devin Pope:
It feels like that's the only thing that could possibly peel you away from a job so good, even for just a year. It's just for a year. Okay. Excited about this. So Alex is going to be one of the leading experts in thinking about the economics behind AI and the impacts that AI might have on economic markets, in particular the labor market. I'm not an expert on this, so now my questions are going to get much worse, but let me try this one, but feel free to riff on it.
What makes AI similar to previous technological advances that in some ways disrupted, but didn't destroy, our economy? Think electricity. Think the automobile. Think the internet. And what makes AI potentially different from those in a way that makes us want to think about it?
Alex Imas:
So AI is in a class of technologies called GPTs. There's a really nice paper called "GPTs Are GPTs." One GPT is the GPT in ChatGPT, the generative pre-trained transformer; the other stands for general purpose technology. So this is electricity. This is railroads. This is computing. Computing, information technology, was the most recent version of that. The reason it's called a GPT is because you use computers for going online, for sports gambling, but you also use them for accounting.
There's IT recording this conversation. It's a technology that you can use for many, many different purposes. And GPTs are disruptive; that's been historically true. The extent of the disruption depends on the technology. In the Industrial Revolution, you had a bunch of GPTs coming online at the same time, and they were hugely disruptive. So we think of the Industrial Revolution as leading to a larger pie, way more wealth, way better services.
But for a while there, people weren't sure whether this was such a good thing. Take the Luddites. People think, oh, those Luddites; it's kind of a bad word at this point. But the Luddites were actually a very well organized militia who basically said, "Look, this technology, which was the looms at the time, is going to get bigger and bigger and replace humans in the process of work. So we need to disrupt it so that doesn't happen."
AI is already solidly in the class of GPTs. So the big question now on everybody's mind is: is it closer to computers? Computers disrupted somewhat, but for the most part, computers increased productivity and increased the number and types of jobs. Computers were very generative; many more roles were created, and things like that. There are other technologies, particularly in manufacturing, that didn't eliminate jobs on net, but certainly displaced people.
The worry with AI is whether it's going to be more displacing or creating more jobs. And the reason why people think AI is actually different from any other GPT is because it's almost designed to emulate what a human can do. So it's almost like disruptive by design. So people are worried that, look, if we actually get to a point of something called AGI, artificial general intelligence, which is the ability to match human beings on every single task that a human can do, if we get to AGI, won't it just replace all humans in terms of labor?
Aren't companies just going to say, "Hey, I'm just going to use this model for my lawyer, for my accountant across the whole thing?" We've never been in a situation where the G part of GPT was so general. And so that's kind of the conversation that a lot of people are talking about is where is it going to be on that spectrum? And depending on where it is on the spectrum, how disruptive is it going to be for the labor market?
Because the thing to know is that every technology, on net, has created more jobs than it destroyed. So in some ways, AI would be extremely unique if it decreased the total number of human jobs, but some people are making the argument that it might.
Devin Pope:
Nice. That's actually super helpful. I like the way you're thinking through that. It feels like part of what you're talking about there, this idea that AI might become AGI and humans might lose our comparative advantage in almost anything, is something that is still quite a ways out.
Alex Imas:
Depending on who you talk to.
Devin Pope:
Could be a long way out. In the short term, as AI becomes able to do a lot of tasks like coding and other things, how should we be thinking about its disruption in the labor market? For people in this room who are maybe worried about job security: in the past, our economy's been able to adapt pretty quickly. Even with these pretty transformative general purpose technologies like the computer, I would say there hasn't even been a five-year period where people really felt like they were suffering because of it. I mean, certain sectors...
Alex Imas:
During the Industrial Revolution, there were decades where that was true, where people were suffering.
Devin Pope:
So is there anything in the last hundred years that would be like that?
Alex Imas:
I mean, there are specific occupations. There's a really nice QJE paper looking at phone operators. Phone operators were completely displaced over a 20-year period. What happened to phone operators is that many of them found jobs, but a substantial portion became either permanently unemployed or at least underemployed.
So you do have these specific examples. With computers, you really didn't have that; computers were in general just complementary to humans. But with other technologies, there were losers. Even though the general purpose technology created more jobs, that doesn't mean people didn't lose their jobs and weren't hurt as a result.
Devin Pope:
Yeah. And it feels to me like the extent to which it could cause short-term damage in the labor market is a combination of, well, how many jobs there are like the, what did you call them, the phone people?
Alex Imas:
The phone operators.
Devin Pope:
Phone operators. Thank you. That was a very small segment of the workforce, right? The concern would be that it could be 30%.
Alex Imas:
Yeah, that's the G part, how general the G is.
Devin Pope:
There's that, like how expansive it is, but it feels like the other thing to me is how quickly it can do it. I don't know with the phone operators, like if the phone came in and immediately...
Alex Imas:
No, it was 20 years.
Devin Pope:
It was over a 20-year period.
Alex Imas:
It was 20 years.
Devin Pope:
So there was time for phone operators to retrain and to adapt and for other jobs to start getting created because now we have phones. Are you more worried about how broad AI is able to go or how quickly it's happening or both?
Alex Imas:
I think it's both. The big difference between technologists and economists on this question is that technologists just see the technology, think it's going to be able to do that, and then assume, oh, it'll do that. Tomorrow this will happen. So if you told a technologist in like 1930, "Hey, check out this thing that could replace a phone operator," they'll tell you like, "Oh, tomorrow there won't be any phone operators."
But it took 20 years, because organizations and institutions are so complex that everything takes longer. Regardless of what your timeline is, it'll be slower, so you have to take that into account. Dario from Anthropic is very much on the record constantly talking about no more software engineers within 12 months. No, there are going to be software engineers in 12 months.
In fact, think about what a job is. A job is a bunch of different tasks. Think about the job that you have. What does a software engineer do? Sure, they're writing very specific code, but they're also going to meetings. They're managing different code bases. They're managing teams, or they're within a team. There's a whole bunch of different tasks within a job, and those tasks aren't plug and play.
You can't take one out and make the job 10% smaller. They're all interdependent. If you don't do one of the tasks well, the rest of the job suffers. If you're a cook, for example, you can make a meal great and then oversalt it. That meal sucks. It's a terrible meal, but that was just one step. You screwed up that one step and it doesn't matter how well you did everything else.
And a lot of jobs have that property. The tasks are interrelated. I can go on and on about what this means in terms of like what AI implies for automation, but the main thing is that this task-based structure means jobs are a lot harder to automate than tasks. And when technologists are thinking about automation, they're always talking about tasks, not jobs. And so it's going to be slower.
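The task-interdependence point above can be sketched numerically. This is a toy illustration only, assuming a multiplicative ("O-ring" style) aggregation of task quality versus a simple additive one; the function names and numbers are invented for the example, not from the talk.

```python
# Toy sketch of the "oversalted meal" point: when tasks are complements,
# job output behaves like the product of task qualities, not their
# average, so one botched task tanks the whole job.

def job_output_complements(task_qualities):
    # Multiplicative (O-ring style) aggregation: every task matters.
    out = 1.0
    for q in task_qualities:
        out *= q
    return out

def job_output_substitutes(task_qualities):
    # Additive aggregation: a weak task just averages out.
    return sum(task_qualities) / len(task_qualities)

# Nine tasks done perfectly, one done badly (the oversalting).
tasks = [1.0] * 9 + [0.1]

print(job_output_substitutes(tasks))  # near 0.91, looks almost fine
print(job_output_complements(tasks))  # 0.1, the whole meal is ruined
```

Under the complements view, automating nine of ten tasks well still leaves the job's value hostage to the tenth, which is one way to read why jobs are harder to automate than tasks.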
And on top of it, let's say you automate nine out of 10 tasks, but one is not automated. That job's still there, right? On top of it, that person might be more productive. And if a person's more productive, the company can produce more goods, which will lower prices. If consumers all of a sudden want to buy way more of that stuff, you might actually be hiring more people.
The statistic I'm talking about is what economists call the elasticity of consumer demand. That statistic is really key to all of these questions. If demand is very elastic, you might actually get AI to increase the labor force. If it's inelastic, then you might get a shrinkage.
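The role of demand elasticity can be made concrete with a toy calculation. This is a stylized sketch under strong assumptions (price falls one-for-one with unit cost, constant-elasticity demand, competitive markets); the function and all numbers are invented for illustration.

```python
# Toy model: does a productivity boost raise or lower labor demand?
# The answer hinges on the price elasticity of consumer demand.

def labor_change(productivity_gain, demand_elasticity):
    """Proportional change in labor demanded after a productivity shock,
    in a stylized model where price falls one-for-one with unit cost.

    productivity_gain: 0.5 means each worker produces 50% more.
    demand_elasticity: price elasticity of demand (positive number).
    """
    # Unit cost, and hence price, falls by roughly 1 / (1 + gain).
    price_ratio = 1.0 / (1.0 + productivity_gain)
    # Quantity demanded rises with elasticity: Q scales as P^(-elasticity).
    quantity_ratio = price_ratio ** (-demand_elasticity)
    # Labor needed = quantity / output-per-worker.
    labor_ratio = quantity_ratio / (1.0 + productivity_gain)
    return labor_ratio - 1.0

# Elastic demand (elasticity 2): cheaper goods pull in so much extra
# demand that employment rises despite automation.
print(labor_change(0.5, 2.0))   # positive -> hiring more people

# Inelastic demand (elasticity 0.5): employment shrinks.
print(labor_change(0.5, 0.5))   # negative -> fewer workers needed
```

In this sketch, unit elasticity is exactly the knife edge where employment stays flat, which is why the elasticity estimate matters so much for forecasts.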
Devin Pope:
So if we take all of this together, the way I'm thinking about it is, like you said, it's easy to replace tasks, but harder to replace a job than you might expect. Couple that with behavioral-type stuff, like people being slow to adopt things, right? I mean, look at companies: some companies are still not adopting technology that's been around for a very long time and that they probably should.
Alex Imas:
In Germany, you have to fax everything everywhere. Maybe somebody German might correct me, but I've heard that.
Devin Pope:
And then add to that that it could be creating other jobs at the same time. And add to that, I read a tweet that Derek Thompson posted this morning, I don't know if you saw it, but he said, "I used up all my tokens just trying to grab some data from a PDF this morning." How are we ever going to have the compute that's needed to actually replace very many jobs right now?
So technological adoption is probably going to be slower and have bottlenecks where we don't expect them. All of that, when I combine it, makes me think, but tell me if you've got a different opinion, that we shouldn't be super worried about a lot of short-run job loss, except in very, very specific sectors.
Alex Imas:
So I think we should be worried, but a lot of the focus is on white collar jobs, which tend to be very high dimensional and sit inside very, very complex organizations. There are other jobs that are low dimensional in the sense of having fewer tasks, and those are the jobs I'm most worried about, because there are actually quite a lot of them. Truck driving, for example.
We already have self-driving cars that are doing pretty well. Now, there are many additional barriers, some regulatory, some technical, to getting self-driving trucks. But if a job doesn't have a lot of dimensions, then if you automate it, the company suddenly saves all the money on that labor. The company has a ton of incentive to automate that job, and it can.
And there go several percentage points of the labor force. Then go to warehousing. If you Google what new warehouses look like in China, for example, they're fully automated. There are no people inside those warehouses; everything's robotics. Warehousing is a big job category in America. So that's what I'm worried about.
Devin Pope:
I mean, it seems like a big question is, is this happening over five years or 25 years?
Alex Imas:
Yeah.
Devin Pope:
And our ability to adapt is going to depend a lot on that answer.
Alex Imas:
Absolutely. Speed is key. And this is true whether we're talking about economics or behavioral economics or anything like that: to me, we need to be working on speed. There is no speed in any of our models.
Devin Pope:
Yeah. Okay, let's come full circle. We've got just a few more minutes left before we open it up to question and answer from the audience. AI and behavioral economics, go.
Alex Imas:
So I think behavioral economics is going to interact with AI in really interesting ways. Think about the topic of consumer behavior. Every business school has a department of consumer behavior, and in that department, scientists are looking at psychology and how that psychology affects what people buy, right?
But very soon, I would say much sooner than five years, many economic transactions will not be made by humans. People will tell their agents, or their agents will already know, to buy certain things for them. The agents will go on a platform and make those purchases for them, and they'll make transactions and negotiate with other agents, other people's agents. What does that mean for the field of consumer behavior?
Because guess what? AI agents are not prone to the tricks of human psychology. This is work that I've done already; we know they're not prone to many of the tricks of human psychology. So what does that mean for economic transactions? For economic outcomes? For ad agencies? Now you have AI-first ad agencies built around getting agents to buy the stuff.
Look at the people working at those companies, at a company like Google Open Ads. There are no psychologists or marketing people working there; it's computer engineers designing spaces for computer systems to make purchases. And that company is doing really well, because AI agents are already on the internet making purchases. So behavioral economics will then be about how people interact with their agents, and it will also be about how people interact with AI systems.
I mean, I don't have a ton of time, but we've talked about digital spaces potentially creating more scope for addiction, more bias, polarization, things like that. There is a world where, if everybody has their own agent interact with the digital space for them and the agent knows their preferences, you might actually get a realignment where their informational environment becomes less polarized.
They're less likely to go on websites that they don't want to go to. The key is them not wanting to go to those websites. So there is a world out there where actually there is an improvement on that dimension, but there's also a different world where it exacerbates things.
Devin Pope:
Yeah. Excellent. All right. Let's go ahead and open it up to some questions from the audience. I think we have some people with mics that are going to go find someone with their hand up.
Speaker 1:
Thank you. It's a super interesting topic. During the computer revolution, you could imagine extrapolating and saying, "Well, if this goes further, we're going to need a lot of network engineers, a lot of software engineers, a lot of systems engineers." Has there been credible work by people thinking about what future jobs could be created by AI, or where the spaces are that future jobs are going to come from?
Alex Imas:
So that's the big difference I mentioned earlier. There is hiring by these AI companies, but the stylized fact is that the hiring at these companies is so much slower than it was during the computer age. Because, as you said, back then you needed human beings to do a lot of those things. With AI, AI agents are doing a lot of the work that people at AI companies would otherwise need to do. You still need people in the loop, to check the code, to integrate things, to monitor the AI agents, but the workload, at least in the AI space, is much, much lower per person.
And so the worry comes from extrapolating from that to the rest of the economy; that's where you get these dire forecasts about a lot of unemployment. Now, the extent to which that will generalize depends on how much we're over-indexing on computer code. If the entire economy looks like computer code, then we're in trouble. If computer code is a special flower as far as jobs go, then the economy's going to be more robust.
Speaker 1:
Thank you.
Speaker 2:
Hi. So I work in technology. We employ hundreds of people. And like you said, we're not laying people off. They are using AI a lot, but they are doing their tasks better. But I have a lot of peers who are not hiring junior people because a senior developer is now much more productive and they feel like they can get away with just having a bunch of senior people and not juniors.
But there's a problem with that, because eventually we want to have senior people. So there's a huge, maybe it's a behavioral economics issue, where they're optimizing for the wrong thing. How do you think people should think about that: for their careers, the young people here, and also employers like me who are trying to optimize and are facing this challenge?
Alex Imas:
This is a huge issue. This is the junior-senior problem in AI, and it's been documented. What's happening is less about firing and more about less hiring, especially for junior roles. Now, the data look like they're picking back up for junior roles, actually. If you look at the latest data, even among software engineers across the spectrum, you're seeing positive signs.
Now, let's say that doesn't materialize. One thing that could happen is that the job basically changes. So there's no longer such a thing as what used to be called the junior person, and everybody gets trained into the senior role, with the help of AI, obviously. That's one model, but this is something people are very worried about, because that's exactly what the data...
There's a paper from two grad students at Harvard who have demonstrated exactly that: there's more hiring of seniors, at higher wages, they're doing really well, and then you have this dip for juniors. There are issues with whether this identifies AI versus things like the COVID overhang and all the other stuff going on in the economy, but yeah, that's the big worry.
Speaker 3:
I'm really interested in the relationship between people and AI and how biases get influenced by the agreeability of the models. I watched a video today where someone's $50 million housing deal almost got blown up because both the buyer and seller went to AI and asked it questions, and it just agreed with both of them and almost blew up the whole deal. So I'm curious how you see the agreeability of these programs influencing larger economic biases and things like that.
Alex Imas:
So one worry here is overconfidence. It depends on the model, actually. Some models are much more sycophantic than others. Claude, for instance, is not very sycophantic; it's actually sometimes kind of mean to me. Yeah, I have a love-hate relationship with Claude. But the worry is that if a model starts from a place of confirming you, of trying to please you, that creates much bigger issues.
Imagine you're the CEO of a company and you're using AI to do cost-benefit analysis and to manage your organization, and it wants to please you, so it puts too much weight on your compensation package and your bonus and things like that. That model is going to be bad for the organization. So I think these are issues that need to get worked out.
If they don't get worked out, I think the result will be that these models get applied less in organizational settings than they otherwise should be, and they'll lead to more overconfidence. The other end of the story is that this is a fixable problem. People have tried to fix it, by the way. So OpenAI released 4o. I don't know if you remember this, but it came out and people were complaining about this issue.
So 4o initially was kind of a mean model, and everybody went nuts, but actually all that happened is it just didn't suck up to you anymore. And OpenAI, what did they say? They said, "Eh, let's make it a little nicer." And that's what you have now. So people don't like non-sycophantic models, it turns out. These are products.
Speaker 4:
Hi, I had a question. In your most recent article on Substack, you wrote about what will be scarce, and you said that AI will cause more jobs to move into the relational sector. And I was curious, on that point about 4o, what you think about people using 4o or things like that as therapy, or in lieu of therapists.
Alex Imas:
Yeah. So there are kind of two parts. There's the expansion of supply: many people don't have access to therapists, and for those people, ChatGPT could be a therapist. But on the other hand, given a choice between ChatGPT 4o and a human therapist, which would they prefer? Or a human nurse, or a human doctor, or something like that? I haven't done the whole scope of occupations.
The project I'm actually doing now is calculating the elasticity of consumer demand for the relational components of jobs compared to AI. And at least in the cases we've seen, there's a willingness to pay to interact with a human for certain tasks in certain jobs. The claim I made in the essay is that as people become richer, because basic goods become cheaper, that income effect will push them to purchase more products and services from high income-elasticity sectors, which will be this sector where human beings are part of the value.
The fact that I'm interacting with a human nurse versus an artificial nurse is part of the value of interacting with that nurse. That was the point of the essay. I've had a lot of really interesting conversations with people pushing back on this idea, like I think you're doing now. And it's important to have that conversation, which is why we're collecting the data.
Devin Pope:
Good. Two more questions.
Anaka:
Hi, my name is Anaka. Thank you so much for the great talk. I wanted to understand your vision for how you think AI will help understand and model people's behaviors and preferences, especially for situations when people themselves may not know what they want. How can AI be used to understand and act upon and even make decisions like you were mentioning on behalf of those?
Alex Imas:
This is the question of alignment. So this is the, how do you align your agent with your underlying preferences? And this is, I think, kind of one of the number one questions that we have right now. How do you facilitate an agent interacting with you so it can elicit your preferences? So this is work that I'm actively working on and I know people at Anthropic are actively working on.
What is the best way to do this? For example, if you ask people how they want to make choices, over time, over risk, things like that, people are loss averse, they're myopic, all of the behavioral anomalies we document in this book. But you could instead say, "Look, this agent will make those same decisions for you. Tell the agent what it should do."
And people tell it not to be myopic, tell it to be risk neutral, not to have any of the biases. Then the agent does what the people tell it to do, and it gets completely unbiased outcomes for the person. Now, the reason this is a very limited example is: in how many questions and contexts can we really do that? We need a much richer, contextual way of eliciting preferences to solve that alignment problem.
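The "tell the agent to be unbiased" idea can be sketched with a toy comparison. This is a hedged illustration only, using a simple prospect-theory-style loss-aversion parameter; the gamble, coefficient, and function names are all invented for the example.

```python
# Toy sketch: a person choosing directly is loss averse, but an agent
# told "be risk neutral" evaluates the same gamble without the bias.

def loss_averse_value(gamble, lam=2.25):
    """Prospect-theory-style value: losses loom lam times larger.
    gamble is a list of (probability, payoff) pairs."""
    return sum(p * (x if x >= 0 else lam * x) for p, x in gamble)

def risk_neutral_value(gamble):
    # What the agent computes after being told to ignore loss aversion.
    return sum(p * x for p, x in gamble)

# A coin flip: win $110 or lose $100. Expected value is +$5.
flip = [(0.5, 110), (0.5, -100)]

print(risk_neutral_value(flip) > 0)   # agent accepts the positive-EV bet
print(loss_averse_value(flip) > 0)    # person, choosing directly, rejects it
```

The open question in the transcript is exactly the limitation of this sketch: simple parameters like `lam` can be switched off by instruction, but most real preferences are too contextual to elicit this way.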
But if you're coming up with a list of the five main problems the field has right now, this is going to be in the top five.
Devin Pope:
One last one. One final one right here.
Speaker 5:
Yes, thanks for a great talk. If I were to step back and oversimplify perhaps: the bullish case for AI is the productivity enhancement, which is the argument generally made for most technologies, that it will supersede any negative impact. But that's what was said about IT and the computer revolution, where productivity increases were everywhere except in the data.
Alex Imas:
They did show up at some point. They showed up in 2004 and going forward.
Speaker 5:
And so perhaps that's what's going to happen, but what is the risk that similar circumstances play out with AI?
Alex Imas:
So this is the big debate about when we're supposed to see the productivity numbers. If you ask economists, even the ones working in AI who are very optimistic, they're going to say the increase in productivity is going to be like one to 2%. If you ask technologists, they're going to say 30%. That's because you need to understand how the economy works.
It's a complex system. Even when you have a productivity shock in, let's say, this sector, it still needs to interact with sectors that haven't increased in productivity, which dampens how that productivity increase flows through the economy. This is called Baumol's disease; you can call it bottlenecks. There are tons of bottlenecks in the economy.
So the prediction, even in the most bullish case, in terms not of displacement but of productivity numbers in GDP, is slow and small. I mean, to a technologist that would look small. To an economist, one to 2% extra growth is huge and completely unprecedented; we'd be very happy with one to 2%. And there's a survey of a bunch of economists and technologists where the median expectation is one to 2% over the next 10 years.
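The bottleneck arithmetic behind those one-to-2% numbers can be illustrated with a share-weighted toy model. All numbers here are invented for the example, not estimates from the talk, and the share-weighting is a rough approximation rather than a full growth model.

```python
# Toy illustration of the Baumol/bottleneck point: aggregate
# productivity growth is roughly an expenditure-share-weighted average
# across sectors, so a fast AI sector gets dragged down by the rest.

def aggregate_growth(share_ai, growth_ai, growth_rest):
    # Share-weighted average of sectoral productivity growth.
    return share_ai * growth_ai + (1 - share_ai) * growth_rest

# Suppose the AI-exposed sector is 20% of spending and its
# productivity grows 10% a year, while the rest grows 1% a year.
print(aggregate_growth(0.20, 0.10, 0.01))  # roughly 2.8%, nowhere near 10%

# Baumol's disease: as the fast sector's output gets cheap, its share
# of total spending tends to shrink, lowering its weight further.
for share in (0.20, 0.15, 0.10):
    print(round(aggregate_growth(share, 0.10, 0.01), 4))
```

Even a very aggressive sectoral productivity boom yields modest aggregate numbers once the stagnant sectors are weighted in, which is the gap between the technologists' 30% and the economists' one to 2%.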
Devin Pope:
All right. Please join me in thanking Alex Imas.