Capitalisn’t: Is Technological Progress Good for Everyone?
- May 18, 2023
On this episode of the Capitalisn’t podcast, MIT’s Daron Acemoğlu joins hosts Bethany McLean and Luigi Zingales to discuss the key challenges of ensuring that technological progress benefits everyone, not just the wealthy and powerful. They discuss the rules, norms, and expectations around technology governance, the unintended consequences of AI development, and how the mismanagement of property rights, especially over data, can reinforce inequality and exploitation.
Daron Acemoğlu: We need to create incentives so that technologies are used not just for automation, but for empowering workers, increasing worker productivity, empowering citizens.
Bethany: I’m Bethany McLean.
Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?
Luigi: And I’m Luigi Zingales.
Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.
Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.
Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?
Luigi: And, most importantly, what isn’t.
Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.
Bethany: The biggest success of capitalism has been the remarkable improvement in the standard of living everywhere it was applied. From the time the first Homo sapiens walked on planet Earth to the middle of the 18th century, the per-capita income of humans was roughly flat. From the dawn of the Industrial Revolution to today, per-capita income in England increased 17 times. During the same time period, life expectancy in Europe more than doubled. This success has been replicated over and over again.
All of these gains were achieved by applying technological advances that were fueled by a capitalist profit motive. Thus, technological advances and capitalism are tightly connected, and the whole thing is a remarkable and inevitable success. Isn’t that the story?
Well, maybe not. Luigi and I were intrigued to learn that two professors at the Massachusetts Institute of Technology have written a book challenging the view that technological progress is necessarily good for everyone.
Luigi: And these are not just two random MIT economists: Simon Johnson is the former head of the research department of the IMF, and Daron Acemoğlu is the most famous economist of his generation. Not only did he win the Clark Medal, which is the best predictor of the Nobel Prize, but he is by far the most productive economist.
In fact, almost 20 years ago, I wrote a paper about the productivity of economists, classifying them by how many papers they published in top journals, adjusting for age, et cetera. The second-most productive was Jean Tirole, who is a Nobel Prize winner. But Daron was first. Not only was he the first, but he was also 30 percent more productive than Tirole, who is extremely productive.
And just to give you a sense, Daron was twice as productive as the 11th-most productive economist, who is the Nobel Prize winner James Heckman.
So, it is our pleasure to have Daron Acemoğlu with us discussing his forthcoming book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.
Bethany: Daron, in your book, you say that technology is generally viewed as a tide that lifts all boats. You call it the productivity bandwagon, but you make a different argument. You argue that how productivity benefits are shared depends on how, exactly, technology changes, and on the rules, norms, and expectations that govern how management treats workers.
Can you explain to our listeners what you mean by that? And maybe even start with what led you to this insight? What made you start thinking along these lines?
Daron Acemoğlu: Bethany, that’s the key question. I am a huge believer in what you started the show with, which is that we owe our relative prosperity today to technological progress. And it is also baked into the cake of economic models, because once technology improves, meaning that our capability to produce more goods and services improves, that will ultimately lift the living standards of workers, as well as capital owners and firm owners.
The reason why I started questioning this came from two sources. One, when you look at various historical episodes, you see very long periods in which workers do not benefit. They sometimes even lose out, in the grand scheme of things, from major technological improvements. And two, once you start thinking about how technology works, you see that there are many ways in which it may not really be so helpful to workers.
Let me start with that last point by giving two examples. One is, imagine the technological progress is that you, as an employer, can monitor me much, much more tightly. There is no guarantee that that’s going to be an improvement from my point of view. By monitoring me better, you can also force me to do things that I would not have done, or you can make me do that work for cheaper. So, the nature of technology matters.
And the second is something that’s on the minds of many people: automation. With advances in large language models, for example, everybody’s thinking about what’s going to happen to the future of work. When you have technologies that can actually perform the tasks that we used to perform as human laborers, is that necessarily going to be better for workers? In general, I think there is no guarantee that it will be. Automation is not the only type of technological progress, but it’s a very important block of technological progress, and one that doesn’t automatically lead to higher living standards for workers.
Bethany: If I were to play Pollyanna, I’d say that the big picture is clear. These two things have been linked together: technological innovation has fueled a rising standard of living for everybody. Maybe there’s been a lag time. Maybe the situation has been different each time, as to why there was a lag time and why the lag eventually went away. But over history, it’s worked. Why is that not a fair statement?
And then, secondly, is there any argument that what’s going on today is just different? That we’re just in uncharted territory because something has changed? So, even if that were true in the past, which you might not agree with, would it be different today?
Daron Acemoğlu: Great question, Bethany. And it’s a question that many economists ask. So, you’re in good company or bad company, depending on how you look at it.
Economists are an optimistic bunch. I share that optimism from time to time—not always, perhaps. So, when confronted with facts like, well, real wages probably fell during the first eight or nine decades of the Industrial Revolution . . .
There was a huge amount of hardship imposed on workers, children as young as five and six. Or huge improvements in the US economy in the first half of the 19th century came on the back of slaves, who actually suffered even more after the introduction of the cotton gin, which was revolutionary for American agriculture. The economists’ reaction is, yes, but things got better.
But the issue is, why did it get better? Did it get better automatically? Or did it get better because, A, certain institutional and technological choices were made in a different way? And B, were we also lucky? It’s very difficult to adjudicate this. If you really believe it was bound to get better, it’s very difficult to disprove that point.
But the more you look at the details of history, the more you see that it doesn’t look like an automatic process. There were always very powerful people pushing for a continuation of what was happening. And many of the transformative changes were somewhat uncertain, contingent. And also, you can see why they were dependent on changing circumstances.
For instance, I don’t think it is easy to make the argument that everything would have worked out fine for the English working class if they had remained under the Master and Servant Acts that made it illegal for them to leave their employers. They were completely disorganized, women and children were still working in factories, and yet things were somehow going to work out well? I think that’s a very difficult argument to make.
And when you look at the history of the labor movement, there was nothing automatic about these rights being gained. Take the Chartist movement, which made very, very modest demands. They purposefully did not push very hard on unionization because they thought it would be too controversial for the British ruling classes. And despite the fact that, in an age before radio or anything like that, they collected three million signatures, they were viciously suppressed. All of their leaders were put in jail, and none of their demands were heeded until there was further pressure. So, it doesn’t seem like this was an automatic process.
And the same thing when it comes to technology. I mean, I think the automation process was quite profitable for employers, so they could have pursued that path even further. And it was a series of things that happened, like, for example, the labor shortages in the United States, ideas about new technologies that were based on some breakthroughs, and different sorts of entrepreneurs. I think all of these make me believe that there wasn’t anything automatic about it.
And now, bringing that to the second part of your question, I don’t think this time is different. It is a very well-posed question. Because if, indeed, it wasn’t automatic, I think this time is just like the other times, and it’s going to depend on these contingent factors.
Bethany: So, I’m thinking about this through the lens of the horror of colonial exploitation, which you had from, let’s call it, 1850 to 1970, where Western countries benefited a lot, but workers from the rest of the world did not. And the last 40 years have witnessed just the opposite. Great improvement in the standard of living, not just in China, but in India, Bangladesh, Indonesia, Vietnam. Even a little bit in Africa. Does that challenge any part of your argument, or would you say, again, there are specific factors that are at work in causing that catch-up?
In other words, if I were to look at that and say, “Well, see? It’s inevitable, it all works out in the end,” through a Pollyanna-ish viewpoint, how would you challenge that in this case?
Daron Acemoğlu: The way that I would challenge that is twofold. First, if you look at that postwar era in greater detail, you see that there isn’t anything inevitable about it. China grew very little until Mao’s fall and the defeat of the Gang of Four. It’s only through a series of economic reforms, which enabled at least some limited amount of price-system incentives in agriculture and then in industry, that China became a place where investment could be made and technology could flow, and that started this whole process of very rapid economic growth.
The same thing in Africa. For several decades, African countries, because of institutional distortions, civil wars, and insecurity, actually experienced declines in their GDP per capita, not improvements. So, the institutional structure is quite critical for developing nations, even more so than the GDP figures themselves would suggest. You look at some of those growth episodes, and they’re fairly broad-based, like the ones that industrialized-world countries experienced in the 1950s and ’60s, in which wages grew at the same time as firm profits. But then you look at other ones where inequality is going through the roof.
Who benefits from that economic growth process is also going to depend on institutions, but it’s also going to depend on the nature of technology. One of the things that I really worry about with AI and broader automation is that it’s actually going to squeeze the developing world between a rock and a hard place. If you look at the typical rapid growth pattern, what countries like Taiwan, South Korea, Malaysia, Indonesia, and China went through, they had rapid, somewhat export-driven industrialization experiences. And that was fueled by the fact that they had abundant labor, and they were using that labor with some basic technologies. Human resources, in other words, are at the core of their rapid growth experience.
With automation technologies spreading in the West, the global division of labor is going to change in such a way that that growth path is no longer feasible. In fact, some of these countries are going to start losing the tasks that they are currently specializing in.
In other words, Bethany, I think if the future were to look like one in which working classes in the industrialized world have a hard time but lots of people in the developing world come out of poverty, then we can talk about, well, how do we trade off these two ends of the stick? But I am worried that workers around the world, everywhere, are going to suffer.
Bethany: You mentioned the word luck earlier when you were talking about the evolution of human economic history, and it seems to me that what you’re talking about now is somewhat the law of unintended consequences. And so, what kinds of challenges do those two things, those two components, pose for policymakers? If there is a large amount of luck at work, and there are these unintended consequences, how can we possibly make policies that can plan for things? And what kinds of challenges do they pose for you as an economist? Because I don’t think of economists as wrestling with luck or unintended consequences very often.
Daron Acemoğlu: Well, I mean, I think we really have to be much more cognizant of the importance of unintended consequences. Look at the age of AI. Most of the things that we should be worried about are the unintended consequences. I don’t think people are developing large language models in order to completely destroy our education system, but that could be one of the unintended consequences.
Tim Berners-Lee and other pioneers of the internet did not do it in order to damage democracy with misinformation. But that was one of its unintended consequences. And luck or critical junctures, or exactly how different visions shape how a given technology or technology platform is being used, I think those are really, critically important.
Henry Ford wasn’t intending to create new tasks for workers. What he wanted to do was mass production of cheap cars. But in the process of doing that, he actually created a whole new way of organizing factories based on electrical machinery, new tasks, engineers, clerical workers, that fueled both productivity growth and wage growth for several decades.
You can challenge me and say: “That wasn’t luck. Anybody else other than Ford would have done the same thing as well.” It certainly is a turning point. Whether it was lucky that Ford did it in one way and somebody might have done it in another way, that’s a bit harder to know. I mean, I certainly cannot conclusively answer that question, but I think those sorts of turning points are particularly important, and especially when it comes to things like union organization, who has political power, luck and contingent factors definitely become even more important.
Luigi: But, actually, one aspect I would like to see discussed more is the boundaries of property rights, precisely in this dimension. Because, as you probably know, John Deere has a system that makes it impossible for anybody else to repair its tractors. They completely vertically integrate repair and maintenance, using, by the way, legislation like digital rights management that makes it illegal, a crime even, to try to repair the tractor. So, probably, Ford would have tried to do the same if he could.
Now, in part it was technology, but in large part, it is the property rights that we allocate. If we enforce the property rights of John Deere too much, we are making it impossible for the benefits of technology to trickle down to the rest of the world. And I think that’s the part where we should intervene the most.
Daron Acemoğlu: I completely agree. I don’t think we should be absolutist on anything, and certainly not absolutist on property rights. And many times, the allocation of property rights is going to determine monopoly power and, sometimes, also political power. But I think property rights are also going to take on a completely different set of facets in the decades to come.
And again, I’m not saying this, in any way, as a property-rights absolutist, but who owns data and who controls data are going to be quite critical. And we’re probably already suffering the consequences of not having well-defined control rights and property rights over data. If you think of the large language models right now, such as GPT-3, GPT-4, and ChatGPT, they are fueled by the free use of a large amount of data, some of which was generated for very different purposes, such as the Wikipedia data. I think Wikipedia is a huge subsidy to these large language models, and the people who produced that Wikipedia data did it with an understanding, an intention, that was very different from what Microsoft, OpenAI, and Google are using these data for. I think those issues are quite distinct from the property-rights questions that economists normally worry about, but I think they’re going to become central.
Bethany: In your book, you talk about technocratic decision-making versus democratic decision-making, I think with a view that the latter is better, and the former can be a dictatorship of sorts. But if you have a world where we have workers involved in technology decisions, in order to prevent bad technology from being developed, I guess, in a way, that’s democratic. But doesn’t it become almost anticapitalist at some point? After all, the initial idea of the Soviet economy was to have the soviet, i.e., the factory council, make the investment decisions of firms. Is there a balance we need to find between technocratic and democratic decision-making, rather than having it be an either-or? Or is there a risk that this goes too far?
Daron Acemoğlu: Yep, absolutely. I think it’s a balance. In one of my earlier books with Jim Robinson, Why Nations Fail, we talk a lot about how the blocking of technological progress could be a very important brake on economic development and has been so throughout history. And that is absolutely a first-order concern that we have to worry about.
But that concern becomes excessive if you look at the world through the following lens: there is this completely rigid technology, and resistance is just a way of blocking it. The other lens is that technology is like a flow, and you try to make it flow in the right direction; then resistance is an enabler.
Of course, reality is going to have both elements. But even if I’m worried about the first element, I think getting rid of democratic voices is a really fraught proposition. Because there are so many instances in which powerful visionaries tell us: “You have to go all in on this technology. This is the only way to do it, and this is in the national interest, in the common good. And you’ll be foolish not to do it.” And they are often very powerful by their nature, and they’re very convincing, very persuasive.
I think we do need some sort of resistance against those visions. And I say that with full understanding that, today, we are in the midst of a period like that. We have very powerful visionaries, with an amazing story to peddle, who are telling us exactly that. So, I wouldn’t discard democratic processes.
Bethany: I’ve always been fascinated by what I see as the fine line between a visionary and a fraudster. And you just brought in another angle, which is the visionary and the dangerous man. And they are usually men, so I think I’m OK to use the word. At least the ones who get recognized are usually men. So, I think I’m OK with saying dangerous man.
Anyway, in the book you tell the story of Ferdinand de Lesseps—I might be mangling his name—and the Suez Canal and the Panama Canal. And I’d love for you just to quickly tell our listeners about that and what lesson you draw from it. Because, as you point out, once again, we’re living in the age of the visionary. And I think the lesson you draw from that is profoundly important.
Daron Acemoğlu: Well, thank you, Bethany. Yes, absolutely. And I think when you look at certain historical episodes, your instincts are completely right. There are fraudsters and there are visionaries. But I think the more common case, and the one that I find more dangerous, is that there are visionaries who are truly skillful. And their self-confidence, their hubris, and their influence over society grows because of their success.
Think of Elon Musk. I mean, he is truly a brilliant entrepreneur. He’s been very successful, with some government support, with some luck, in at least four different lines of business. But it doesn’t mean that we should take his vision of what society should be like—especially his vision of areas which are of critical importance to democracy and are really outside his specific skills, such as social media or how democracy should be organized—at face value and follow him.
And de Lesseps is actually a good example for us because he was truly a visionary person. He was far ahead of everybody else in seeing the role of canals and how he organized . . . some of it was schmoozing, but how he organized people, how he formed connections, how he leveraged technology, how he empowered technologists in order to solve important problems.
But then the more influential he became within European society, within French society, and the more hubristic he became in his belief in his vision’s inevitability and power, the more unhinged he became. And even when his plans were completely inadequate, for example, in the case of Panama, where his ideas of how to build a canal would not work . . . The conditions in terms of the elevation of the sea and the rivers that needed to be drained made the plan completely infeasible. The disease environment was not conducive. The labor costs and labor conditions were very different. None of that could deter him.
And I think, to us, that’s a parable of not just individuals, but also a general mindset that, OK, fine, we’ve cracked important problems in digital technologies. We’ve even made progress in things like AI. But it doesn’t mean that a small cadre of people should decide exactly what the future of work should be or what the future of democracy should be.
Luigi: You seem to be suggesting a greater role for the government in shaping the incentives for innovation. But in your book, you seem to ignore the perverse role of incentives in the government. A lot of the innovation that we see throughout history and even today is for military purposes. What we get are just the leftovers on the side, right? And so, can we trust the government to use innovation in the right direction? Or is it actually all distorted in an arms race that will make all of us worse off?
Daron Acemoğlu: That’s a great question, Luigi. I don’t know the answer to that. And that may be just one of these lucky things that Bethany and I were talking about a moment ago. If you look at the postwar history of the United States, you really cannot understand the direction of technology and the important big technological breakthroughs without the role of the government. From the internet to nanotechnology, biotech, antibiotics, sensors, and aircraft, much of it was done for defense-related purposes.
But at the end of the day, it was done in such a way that, although some of it was misused and turned into bombs, a lot of it also expanded the knowledge base and the scientific capabilities of the research community, of the labs, of firms, and acted as a stepping stone for private-sector innovations that built on it.
Now, was that just because we were lucky, and it could have gone another way, that we could have just built bigger and bigger bombs and got nothing for the consumers in the private sector? That’s a much harder question to answer.
But I think the more positive way to interpret that episode is that it shows government involvement can be OK as long as the government is not micromanaging too much, and as long as it’s not picking winners. And, in fact, it worked despite the fact that the US government wasn’t doing it well and was doing it for a completely different purpose. So, if we did it in a more rational and more limited way, perhaps we could get even better outcomes.
Renewables, I think, are a case in point. We have a completely nondefense-related justification for renewables. We have a measurement framework for what types of technologies we should support. We can do that without turning it into a bad form of industrial policy, where the government picks winners, deciding which companies are going to be successful and which companies are going to be profitable. And we can completely separate that from the defense industry. Is that a dream? I don’t know. I don’t think so, but of course, one can be skeptical as well.
Luigi: Now, one of the things that attracted my interest, because I think it is important and often forgotten, but you put your finger on it, is the funding of universities, which are becoming more and more funded by the private sector. And most presidents are now, basically, asked to do only fundraising. That’s their job. What is the alternative, how bad is the situation, and what can we do?
Daron Acemoğlu: I think that’s something that worries me as well. I’m not even sure how bad it is. Looked at from one perspective, it’s very bad. The future, whether you like it or not, is going to be in high-tech areas, including digital. And if you see where money in those areas comes from, it comes from a very small number of very, very powerful companies. So, that will then impose those companies’ visions and priorities onto the mindset of the faculty, administrators, and, most importantly, students in these fields. So that, I think, is the bad news.
The reason why I’m not sure, and this requires more research, is that people are also thinking more and more about the future of our society. And it may well be that the brightest minds in computer science, artificial intelligence, et cetera, at some point are going to say, “No, we don’t want to go down this path of just empowering Google and Facebook and automating work; we want to use our skills for something else.”
I’m generally skeptical that ethics by itself is going to work without other institutional norm-based interventions, but I think this is going to be a question that we need to investigate. Are we really making universities part of the overautomation and overdigitalization complex, by relying so much on private money from the most powerful corporations? Or is it that that money comes in, it’s fungible, and then we can use it for really pushing the more socially beneficial agenda? I think that’s a question we need to investigate.
Luigi: Daron, one of your implications is that we should tax capital more and tax labor less. Now, traditionally, economists have always thought that you should not tax capital, but I think you have a paper explaining why this intuition, which is so pervasive, is wrong. Can you explain it to our listeners?
Daron Acemoğlu: It’s a little bit technical, actually.
Luigi: But our listeners are very smart.
Daron Acemoğlu: I know, I know, I know, I know. I do know that.
First of all, you’re absolutely right, Luigi. Many economists subscribe to the view that capital should be taxed much more lightly or even should have a zero tax rate.
The idea here is actually twofold. One is that capital is a very elastic factor, and you don’t want to tax elastic factors. But second, capital is a very forward-looking investment, so if it faces tax rates in the future, that’s going to then discourage investment or distort saving decisions.
It turns out that both of those forces are baked into our models because we make simplifying assumptions. So, once we go away from these simplifying assumptions, it’s no longer true that capital is very elastic in the models that we use. And these forward-looking elements are not as powerful. So, it becomes, actually, an empirical question: how distortionary it is to tax capital, or how elastically capital responds. And when you look at existing estimates, they show, on average, that it’s not so different from labor.
If that’s the case, from a theoretical point of view, just leaving the other issues of automation, AI, and the future of work aside, it suggests that capital and labor should be taxed more or less symmetrically. But there’s an additional reason for worrying about these issues today, which is that if we are on the cusp of automating more and more work, then creating an additional cost advantage for capital, in the form of capital not paying taxes while labor pays fairly high taxes through payroll and federal income taxes, is going to be particularly bad, pushing toward overautomation and underinvestment in labor-complementary technologies.
So, both of these things come together. And precisely because we are worried about the future of work and generating enough meaningful work for people in the future in the age of AI, we think equalizing tax rates on capital and labor might be a good idea.
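A rough way to state the elasticity logic Daron describes is the textbook Ramsey inverse-elasticity rule; the notation below is a standard illustration, not taken from his paper:

\[
\tau_i \;\propto\; \frac{1}{\varepsilon_i}, \qquad \varepsilon_i \equiv \text{supply elasticity of factor } i.
\]

If capital’s elasticity \(\varepsilon_K\) is assumed to be much larger than labor’s \(\varepsilon_L\), the rule points toward taxing capital much more lightly; if, as the estimates Daron cites suggest, \(\varepsilon_K \approx \varepsilon_L\), the same rule points toward roughly symmetric tax rates on capital and labor.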
Bethany: If there’s one thing you could do to fix Silicon Valley, what would it be?
Daron Acemoğlu: I think we need regulation. The regulation has to be multipronged. I think we need to control how data is used. We need to create incentives, so that technologies are used not just for automation but for empowering workers, increasing worker productivity, empowering citizens. We also need to get rid of the monopolistic structure of the industry, such that a greater diversity of technologies is facilitated.
So, it’s truly multipronged. When you go into the details of how to change the data environment, I think you need to have very specific policy ideas that need to be discussed. One that I favor is considering a digital-ad tax. I think a lot of that industry centers too much on controlling information in order to monetize it via digital ads, and it’s not leading to the right type of technologies and the right type of environment. But we also need other ways of providing more positive incentives to these platforms.
Luigi: Anything else that we need to ask?
Daron Acemoğlu: No, this was great. Amazingly far-ranging and fun. I’m happy to do another hour if you guys want to do that.
Luigi: Be careful what you ask for. Thank you very much, Daron, this was great.
Bethany: Thank you very much.
Luigi: What did you get out of reading the book and talking to Daron?
Bethany: I’m always fascinated by questions of correlation versus causality, and I guess I have always thought, to use another cliche, that the proof was in the pudding: the fact that we’ve had such a growth in wealth over these periods when capitalism has been the defining system made me think it was probably correlation, and now . . . I mean causation, I’m sorry. Even I can’t get it straight. And now, I’m wondering if it’s actually just correlation.
I still think there’s more causality than he’s willing to give it, but I accept that a lot of this is a matter of pre-existing ideology, perhaps, rather than provable fact. What do you think?
Luigi: There is one part of the book I like very much, which is maybe obvious but very important and not obvious within the economics profession—maybe it’s obvious outside of the economics profession—that technology is something that can and needs to be shaped. Very often, economists tend to think about technology as manna coming from heaven. That’s wrong. It’s dangerous.
In fact, at the conference on antitrust we had, there was a debate. One of my colleagues at the University of Chicago said: “The only thing that matters in the long term is productivity growth. Basically, antitrust or anything else does not have any impact on productivity growth. So, the implication is, it’s irrelevant.”
I think that’s an attitude that is frequent among economists but is wrong. And I think, hopefully, this book can lay a serious foundation for rooting it out of the economics profession. The sooner it’s rooted out, the better.
Bethany: That seems like an incredibly narrow reading. Correct me if I’m wrong; I don’t mean to be disparaging to your colleague or your profession. But that completely misses the fact that the economy is society, and that economic wellbeing is critical to societal wellbeing. And if you have an economy that’s leaving people behind, you have a society that is fragile. And that seems to me . . . Sure, maybe, maybe productivity leads to growth, but if it’s leaving a segment of the population further and further behind, then you get war. That’s certainly not good for anybody.
Luigi: I completely agree. But I think the general argument, which at some level is true, is that what leads to increasing living standards in the long term is productivity growth. And so, if you don’t have productivity growth, it is very difficult for people to be better off. I think that that’s true.
If you have productivity growth, it’s much easier to do anything else. Dividing a larger pie is much easier than dividing a fixed pie and even easier than dividing a shrinking pie. And so, a lot of the political problems arise from dividing the pie, and a shrinking pie or a constant pie exacerbates the political element.
It’s a bit of a, if you want, naive view, but it is not completely wrong. I think it’s overstated because the conclusion is, oh, we shouldn’t do anything, we should let it go. That’s where Daron’s book is useful because it points out, no, we should do something.
Bethany: This is too dismissive of it, but it is a version of trickle-down theory, right? I get your argument about the bigger pie, but it also just assumes that there will be a mechanism by which it trickles down, that the benefits trickle down to everybody, without actually thinking through how they do and whether or not they do.
And that does . . . To your point, that seems to me to be what Daron’s book adds, which is this idea that the benefits don’t necessarily trickle down. There have to be mechanisms put in place to make them do so.
Luigi: What I’m a little bit less convinced by is whether they have been able to pinpoint what the conditions are to make this trickle-down work. Take China, OK, China has the same technology as the United States, more or less. Unions are not particularly strong in China, quite the opposite. But in the last 16 years, real wages have increased eight times. That’s a pretty dramatic increase in wages. And I think that that’s simple demand and supply. There’s been an enormous demand for production in China because it was very competitive, and thanks to the one-child policy, the supply of new people going to work in China did not increase much, and China does not have immigration. So, if you have increased demand and shrinking supply, prices go up.
Bethany: That seems like a powerful argument to me. And, of course, I tried to get at a version of that argument with a question I asked him that I think I butchered as I was asking it, but the question was about luck. If some of the things that have gone right are a matter of luck, then, as a matter of unforeseen consequences, by meddling with it, do we create other unforeseen consequences in which things might not go quite as well?
I think it is my philosophical worldview that there are always unintended consequences. And so, I suppose I’m also a little skeptical of the idea that deliberate policies always get deliberate results.
Luigi: I think you’re right. I don’t think you butchered the question. You asked the question very well, and I think he answered recognizing that there are a lot of unintended consequences. The question is whether doing nothing has more unintended consequences than intervening. And this is where I find the book not completely satisfactory, because I don’t really see the mechanism.
I asked him the question about the period after World War II. My fear is that, if you are in a world of, for lack of a better word, imperialism, where from the United States you can export your manufacturing all over the world, then the demand for manufacturing is very large and, as a result, there is pressure to increase the wages of workers. And whether you have unions, whether you have the wrong technology, whatever, I think demand and supply work, and you have increasing wages.
In a world in which you don’t have that—and, surprise, surprise, this is the world before 1850 in England, and it was the world between the two wars, and the world now to some extent—then you have problems. You don’t have enough demand for good jobs inside the developed countries and, as a result, wages suffer. But places that can export throughout the world, like China, are doing great.
Bethany: Maybe another way to say that, and it’s not a very satisfactory answer, is that all of this is less cause and effect than it is the wings of a butterfly, right? In the sense that similar-looking things can happen in different places, but if there are completely different preconditions in place, then you can get a totally different result.
And so, it’s not a simple cause-and-effect relationship as to what happens. There are just all these other factors, and we don’t even understand what those factors necessarily are in their entirety, or which ones matter the most in any given circumstance. That’s really interesting. But even the mere fact that he is raising the question, forcing you to think about this and to say it’s not automatic, per your point, is really valuable.
Luigi: I completely agree. And also, his willingness to challenge conventional wisdom and say, for example, you should tax capital, you should tax automation. For somebody teaching at the Massachusetts Institute of Technology, saying you should basically tax technology and automation is quite controversial.
You asked a terrific question about how you actually implement this change in technology, and the comparison with the Soviet, because the Soviets were actually created to control technology firms. And it’s not clear from the book, in my view, whether he wants this control to be at the macro level, like only through taxation, or at the micro level, at the firm level. And if at the firm level, who does it? Do we want to have codetermination, like in Germany, where you have the unions represented on boards and somehow impacting the choices? Not that Germany is doing particularly well economically, but maybe that’s what he wants; he’s not clear on that.
And the part that I tried to ask him, but I don’t think I did very well, is how this can happen in a competitive world. So, number one, there is the military competition. We want to maybe control AI, but if AI is part of a military competition with China, it’s not going to be controlled. In fact, it’s going to be pushed even more. And even product market competition. I can decide that I’m not going to adopt a certain technology, but if my competitors adopt that technology, unless I put up gigantic barriers to entry, or tariffs, I am not so sure that I can survive. So, I think there is an enormous pressure in a global world to actually be at the frontier, no matter what this frontier is.
Bethany: Yeah, I worry about that on multiple fronts. A smart friend of mine said that to me recently when I was talking about the idea of going slow with AI—per our conversation with David Autor, whom I want to circle back to on this point, too, because I think he has something else to add to this debate. This person said, “We can’t go slow because China will just run us over.”
There are so many ways—and it might make for a different episode, but there are so many ways in which things that are supposed to be a race to the top become instead a race to the bottom. And I guess it’s just because of the multifaceted impacts of so many things, but I don’t know that we can go slow on AI, because if the rest of the world doesn’t, then all we do is get left behind. And that’s per your point on competition, too.
And so, I think it’s another interesting challenge to the pervasive way of thinking that competition is supposed to be always good. It’s supposed to create a race to the top, and when does it? And when does it create a race to the bottom instead?
Luigi: You’re right, but when it comes to military competition, I’m not so sure that this is for the better.
Bethany: Right, probably not with military competition. But I definitely was thinking about our conversation with David Autor and how, if you get it right on the front end, then you don’t have to get it right on the back end. In other words, if you get wages right, then you don’t have to worry about the redistribution of the riches in an economy. And that had a really profound effect on me. And so, I can’t help filtering this conversation through that lens as well.
Luigi: You’re absolutely right, but I think this is what makes it so challenging to understand the impact of technology, because when it comes to getting wages right, there are only two ways to do it, in my view. One is controlling demand and supply, and the easiest way we have to control the supply is through immigration. And the second one is somehow reshaping technology. But this is where I’m not so sure there is a simple solution coming out of the book, because he gives this nice example that we should look at the marginal contribution rather than the average. But what does that mean in practice?
Should we ban ChatGPT based on what it does at the margin? You can make an argument that ChatGPT actually increases the marginal productivity of at least some people and, of course, reduces the average productivity of a lot of others. So, what do you do?
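For readers who want the marginal-versus-average distinction Luigi mentions in symbols, a standard textbook formulation (this notation is an illustration, not taken from the book) is:

\[
MP_L \;=\; \frac{\partial F(K,L)}{\partial L}, \qquad AP_L \;=\; \frac{F(K,L)}{L},
\]

where \(F(K,L)\) is output produced with capital \(K\) and labor \(L\). In competitive models, wages track the marginal product \(MP_L\), so automation can raise average productivity \(AP_L\) and total output while lowering \(MP_L\), which is why gains in average productivity need not show up in workers’ pay.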
Bethany: Yeah, I think this is definitely a flaw in the way I think in general, and probably a flaw in the way a lot of people think, which is that the ideas are so much more fun than the implementation. This is going to sound like a tangent, but I remember when I wrote a little book about Fannie and Freddie, there’s a line in it about how various Congresspeople would get really excited because, “Oh my goodness, housing markets are so critical to everything about our country and our economy. I’m going to get involved and fix this.” And then they’d get involved in Fannie and Freddie and very quickly be drowning in the complexity and the competing angles of what you do and what you don’t do. And they’d all just slowly back away, saying: “Don’t look too closely. It might erupt into flames if you stare at it.”
And I think that’s true of so many of us, and unfortunately, I’ve realized it’s true of me as well. I like the big ideas about, do something. And then, when it comes to the dirty details of, yes, but what precisely do you do? That’s where it always gets messy.
The part of his book I actually found the most fascinating, and I suppose it goes to my ongoing fascination with human nature, but I really loved the stories that he has in there about the Suez Canal and then the Panama Canal and the character of Ferdinand de Lesseps, and the way in which we tend to believe in great men.
I guess I’ve thought of that as more a feature of our modern markets, that we have these “great men” like Elon Musk who can do nothing wrong. And so, if Elon Musk says something about Federal Reserve policy, everybody bows down, even though what the hell does Elon Musk know about Federal Reserve policy? Anyway, but thus it’s ever been, right?
And so, that story about how this guy basically got the Suez Canal right, even though everybody said he was wrong, and then proceeded to do exactly the same thing, trying to build the Panama Canal, and it all went wrong, is just . . . I think it’s a really fascinating and instructive example to me, and one that has a lot of applicability to our modern times that, just because Elon Musk created Tesla does not mean he can run Twitter. Just because he created Tesla does not mean he could create the second electric vehicle company. And I think we do tend to believe past is prologue a little bit more than it actually is.
Luigi: Yeah, but first of all, maybe we overextrapolate. Actually, for sure, we overextrapolate, but there is some serial correlation. With Elon Musk, at least at the beginning, he did multiple firms right, so it’s not just one.
Now, can somebody always be right? No. So, I think it’s a bit in the middle. You have to be careful because, at some level, technological choices are left more in the hands of single people than other choices are. Even the terrible choices that Hitler made were made within the context of some institutional control, and people could have stopped Hitler when he was taking too much power. But if you are Elon Musk and you have all this money, it’s very hard to stop you. And so, unless you really come in big time with some regulation, individual people can have more say over technology than over any other political choice.
Bethany: His phrase, right? Techno-utopia and the sort of great man who’s going to lead us all forward out of the darkness, that just isn’t true. It hasn’t been true throughout history. And so, we should be careful about believing that it’s true now. And I think that, actually, that piece of it does connect because we all—certainly the business press, but everybody in general—tend to look at the pronouncements of Musk and Jeff Bezos and Bill Gates and say: “Oh, this is the way the world is going. Of course, they’re right about this.” And are they? Well, not necessarily so.
Luigi: But this is where I found his answer to my question about universities a bit puzzling, because I think that, correctly, he points out the risk. In the book, they point out the risks that universities are facing as a result of a lot of funding coming from a few concentrated places whose views carry disproportionate weight. The views of isolated entrepreneurs become the views of academia as a result of not only donations but also, as I have repeated many, many times, access to data, because unless you conform with me, you don’t have access to data, and without data, you can’t do research. And so, you end up being captured by these large institutions. But then he created this image of the lonely professor who fights against evil in spite of the funding, which is a bit of a great-man, or great-people, story without too much substance, at least in my view.
Bethany: I had never understood that, though, until you and I started this and started talking more: that issue of access to data and how closely tied it is to one of the major issues in journalism, which is access to sources, and how you figure out how to work without sources and how you figure out how to work without data. And the profound impact that playing nice has in both cases, which is really quite upsetting.
Luigi: I think that the big difference is that you journalists are trained to deal with sources. From day one, you know that number one, every time a source comes to you, he or she has an agenda. And you might use that information, but you should be very aware of the agenda of the source of the leak.
We economists are not trained in that way, and many of us tend to be naive. And when somebody gives you data, you say, “Oh, thank you.” And you write a paper, not realizing that you’re part of a bigger plan.
Bethany: Yes and no. So, yes, for investigative journalists, but for any journalist who works at a daily publication where they have a beat and have to cover an industry, it’s very, very difficult because they have to have access to the companies they cover. And if they do too many things that the companies don’t like, they will not have access to those companies. And nobody really trains you how to deal with that. I’m not even sure how well acknowledged that issue is.
There are those who don’t have to have access to corporations, and if the corporation doesn’t talk to you, you find your own way around it. And there are those who need access to the companies in order to do their daily jobs. So, that’s where the issue comes up.
You were really surprised by Daron’s willingness to be critical of Silicon Valley. Why were you . . . What surprised you so much about that?
Luigi: In economics, the term Luddite is an insult. This is what Larry Summers infamously used against Raghu Rajan in 2005, in response to his Jackson Hole speech.
Bethany: He did, that’s right. I had forgotten about that.
Luigi: And so, every time you take a position that is not optimistic about technology, you run the risk of being labeled a Luddite. I’m not saying that Daron and Simon Johnson are Luddites, but they flirt with the idea that you need to somehow stop the machines, or the AI, or the technology.
Bethany: Yeah, that’s interesting. I guess in my line of work, it is not as . . . I have often referred to myself as a Luddite, so maybe I don’t realize what an insult it is, or I’m willing enough to accept that technology doesn’t like me, so therefore, I don’t like it. So, it doesn’t feel quite as horrifying to me. But, yeah, it’s really interesting.
I guess that is the other aspect of his work that I really do think is thought-provoking. It’s not just the history and whether economic progress ostensibly caused by capitalism has been causation or just mere correlation. But really, also, it’s this idea that technological progress is not necessarily all it’s cracked up to be, and they’re entwined.
The ideas are entwined because the history of capitalism is the history of technological progress to some degree, but they are also two separate things. And I think it is worth pulling them apart a little bit and thinking about each of them and thinking whether . . . thinking about the notion that with either one, progress is inevitable. And it’s really interesting just to contemplate the idea that maybe it’s not inevitable.
Luigi: I think that maybe in capitalism, technological progress is inevitable; its benefits are not inevitable. But I’m not so sure that in other systems things are much better, because either there is no development, or, if you think about the Soviet Union, there was a lot of technological development for military purposes, not as much for ordinary human beings. So, I’m not so sure that the alternative, or the alternatives we know, are much better.
I think we are struggling to think about a world in which you can get the benefits of technology without having the costs. That’s the ideal, but it’s not easy to imagine.
Bethany: Yeah. Or, to put it differently, a world in which . . . or a society that is still capitalistic, but where capitalism works for the benefit of all, so that we have a more stable society. Because if it doesn’t work for the benefit of all, then we don’t have a stable society. And, ultimately, capitalism and society itself both get undermined.
So, I think where I’ve come out, at least based on our couple of years of doing this, is I still haven’t seen an economic system that I’d prefer to capitalism. But that doesn’t mean that capitalism can’t and shouldn’t be done a lot better.
Luigi: I completely agree.