Capitalisn’t: How Profit and Politics Hijacked Scientific Inquiry
- September 26, 2025
Modern capitalism and science have evolved together since the Enlightenment. Advances in shipbuilding and navigation enabled the Age of Discovery, which opened new trade routes and markets to European merchants. The US Department of Defense’s research and development agency helped create the precursor to the internet, which now supports software and media industries worth trillions of dollars. On the flip side, some of America’s greatest capitalists and businesses, including Thomas Edison, Henry Ford, and Bell Labs, gave us everything from electricity production to the transistor. Neither science nor capitalism can succeed without the other.
However, science’s star is now dimming. Part of this is due to political intervention, but capitalism has also played a part in science’s struggles. Corporations fund a significant portion of scientific research, yet this funding too often comes with undisclosed conflicts of interest, and corporate pressure may influence results in other ways.
Stanford’s John Ioannidis studies the methodology and sociology of science itself: how the processes and standards of empirical research can shape findings in ways that render them unreliable. On this episode of Capitalisn’t, Ioannidis joins cohosts Bethany McLean and Luigi Zingales to discuss the future of the relationship between capitalism and science.
John Ioannidis: Wrong incentives, wrong financial incentives for scientists, even in democratic societies, can be problematic.
Bethany: I’m Bethany McLean.
Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?
Luigi: And I’m Luigi Zingales.
Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.
Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.
Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?
Luigi: And, most importantly, what isn’t.
Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.
Bethany: Today’s topic may seem a little bit different for us because we have a guest who is a scientist.
Luigi: Why does a podcast about capitalism want to talk about science? Because capitalism and science were born roughly at the same time, they share a common cultural foundation, and there is a mutual dependency between the two. Modern capitalism cannot exist without the fruits of science. And modern science is supported by capitalist institutions that finance laboratories, R&D, and universities.
Bethany: Their success is intermingled in complicated ways. If per-capita income today is 27 times what it was 250 years ago, and if life expectancy is twice as long as it was 250 years ago, is that due to the success of capitalism, or the success of science, or both?
Luigi: And the question is, if science is in a crisis, how does this affect capitalism as we know it? In fact, we can argue that maybe it’s capitalism itself or capitalist incentives that are responsible for the science crisis. If we think that science is guided by disinterested inquiry, capitalism is guided by profit. To what extent is this profit motive impacting science and, indirectly, the success of capitalism?
Bethany: Or to what extent is it politics that are impacting both?
To discuss this topic, we are delighted to have with us John Ioannidis, a physician-scientist who is a professor at Stanford University and one of the world’s most-cited scientists, with more than 600,000 citations on Google Scholar.
Luigi: His most famous paper is “Why Most Published Research Findings Are False,” which actually started the field of metascience. In it, Ioannidis argued that a large number, if not the majority, of published medical-research papers contain results that cannot be replicated.
Bethany: He also, I think, has additional work showing that citations can themselves be bought. So, maybe we should be careful about citing his citations. Anyway, there is no better person to discuss with us the topic of a science crisis and why it matters.
John, I wanted to start with some of your history. What made you an unorthodox thinker? About 20 years ago, you published research showing that most published claims were false. What do you think it is about your personality that started you down this path?
John Ioannidis: I’m not sure it’s a personality issue. It’s just the natural evolution of what I was witnessing and the research that I was doing. It was just very common to see mistakes, flaws, difficult-to-replicate or implausible results, very weird claims. If anything, my interest was in rigorous methods. I am the kind of person who’s more interested in methodology rather than the results. The results are interesting no matter what. I think if you use rigorous methods, there’s no good result or bad result. They’re all results to be respected.
Luigi: May I suggest that the reason is that you had to spend, I think, six months in the military service in Greece after being a doctor? I had to do one year of military service in Italy after I got my undergraduate degree, and that made me an anarchist for the rest of my life.
John Ioannidis: That’s a very interesting possibility. I hadn’t really made the connection, but who knows? Obviously, the military is a crazy world, and any notion of rationality and reason probably disappears very quickly.
Bethany: I did not know that about you, Luigi. So, see, all these years into our podcast and I’m still learning something all the time.
Before we move on to more recent history, I’d love for you to talk a little bit about COVID and what happened to you in the pandemic when you voiced some unconventional views.
John Ioannidis: It’s what happened to many people who voiced any views, not conventional or unconventional. Even the definition of what is conventional or unconventional can be challenged depending on whom you ask.
I think that many scientists very quickly realized that they either had to completely silence themselves, distance themselves—“Don’t deal with that. It’s just a crazy world out there”—or if they continued to be engaged, they took sides. So, they said, “Wait, I need some protection.” But that probably led to some huge polarizing effect.
Personally, I didn’t feel that I should seek protection from anyone. I have always argued that politics and forces of this type should not subvert science. They should not affect scientific thinking and reasoning and evidence.
I was just reporting what I found, and that made some people very happy and some others very, very angry. But it’s sad that some people would just feel that numbers had a color, that they belonged to political parties, that they belonged to partisan groups, that you had to find one number in order to be a good Democrat or a good Republican, a good supporter of someone. That’s very sad, but—
Luigi: I think that one of the concerns was that, of course, COVID was extremely disruptive or potentially disruptive to economic activity. The economic incentives to minimize the effects of COVID were enormous. This is where trust in science came in big time. There was a fraction of the population that did not trust the science and did not trust the results. To be honest, I think that some scientists or pseudoscientists were claiming that the damages were much lower because they wanted to continue business as usual. I think that this created a fracture in the population about this topic.
John Ioannidis: I think it’s a very complex narrative, and I think what you point out is valid. There were not one or two voices. There were zillions of voices out there, some of them more rational than others, some of them heavily oriented toward conspiracy theories. But many people were dissatisfied with what they saw as science, or what was being sold to them as science or as “follow the science.” They felt that it was affecting them in multiple ways that were mostly negative.
After a certain point, I think that they generalized their negative response to anything that came out of science, that science is yet another conspiracy of the powerful, against me, against my life, against my family, against my world. It’s not for me. It’s not trying to help me. It’s just trying to make money, help some big tech, some big pharma, some powerful people. It’s exploiting me, it’s killing me, it’s harming me, not for me. Science is my enemy. Very sad. Extremely sad.
Bethany: But I think that leads into a bigger question that I was thinking about. What is science? How would you define it? I remember my father saying to me at one point . . . He was a doctor, and I remember him saying to me that all science is a process of asking questions. I wonder if you would agree with that or if you would define science differently.
John Ioannidis: That’s a good start. I think it’s a process of asking questions and trying to get answers that are falsifiable, that can be checked, that can be evaluated by other scientists, other people, that can be replicated, that can be refuted, that can see their evidence strengthened or weakened in a cumulative way and, hopefully, in a transparent way.
Now, science, in order to run well, should have some qualities. It has to be disinterested. It has to be universal. There has to be communalism, and there has to be a healthy, organized skepticism.
But I think that these values have been eroded at times. I think that they were very seriously eroded in recent years. Most people do not see science as disinterested any longer, because there are very strong conflicts of interest.
We have been struggling with openness to make science more of a community good, that people can look at datasets, they can look at the code and the algorithms that people are running to get different results and conclusions, protocols and how things are done, sharing materials and details about their methods. The large majority of science is not transparent, and it’s not possible to just run the machinery that has created it.
I think that universalism is something that has created challenges and advances and problems, because we do see more people doing science. Before the pandemic, I would argue: “Goodness, why do we have so few scientists? We need more science. We need everybody to be a scientist.” And I should have been careful about what I was wishing for and saying, because, yes, everybody became an expert during these years, and everybody was able to trash the best, most knowledgeable people with just a tweet, completely dismissing them.
Twenty years ago, the vast majority of research was done in fully democratic countries. Currently—this is some data that I released recently—less than 20 percent of published research comes from full democracies. The stronger players are either totalitarian regimes . . . China is currently publishing about three times more papers than the US. India, which is a flawed democracy, is catching up; it will be number two within a few years, outpacing the US. Many other totalitarian countries, like Iran, are seeing their paper productivity escalate geometrically, not exponentially. Are they real? Are they fake? We have evidence that some of that is really fake.
Then organized skepticism. Goodness, we’ve seen it weaponized. We’ve seen it go completely off target. We’ve also seen it suppressed and censored and attacked. It’s a very delicate . . . Good, organized skepticism is something to be respected and allowed to grow in a healthy way, and people have very different notions about what is healthy skepticism. We have so many influencers that completely overshadow the landscape of what it means to be a skeptic, a healthy skeptic.
Luigi: In that statistic, you can see the United States is still a full democracy, because when you see this—
John Ioannidis: No.
Luigi: So, the 20 percent does not include the United States?
John Ioannidis: It does not include the United States. The United States is a flawed democracy, in the last 10 years, roughly.
Bethany: Why does that matter for science? Why can science not be done effectively in a totalitarian regime? Why is science . . . It sounds like part of the argument you’re making is that science is best practiced in a democracy, or most fairly practiced in a democracy, maybe. Is that the argument, and why would that be the case?
John Ioannidis: It’s not really my argument. I follow that argument, but it’s something that has been expressed for many years, decades. The classics in sociology of science have traditionally said that you need democracy, you need freedom of speech, freedom of thought, freedom of writing and publishing, whatever you want to do and say and express, in order to have science. It makes sense that if you cannot really ask the questions that are important, or if you need to get answers that satisfy some regime, or some of the entities that support or benefit from that regime . . .
Luigi: I expected a slightly different answer, because you are one of the founders of evidence-based medicine against what you call eminence-based medicine. I expected you to say that totalitarian regimes are more naturally supporting eminence-based medicine. But can you explain to our listeners what evidence-based medicine is and why, from being a strong supporter, you started to become a critic more recently?
John Ioannidis: Eminence-based medicine means that you give more weight to people who have higher titles, more prestigious titles. They’re professors, and their opinions should carry more weight. Evidence-based medicine, conversely, is using results that are numbers, or it could be qualitative research at times, but it is evidence. It is not opinion.
Now, is totalitarianism in favor of eminence or evidence? It could be either way. I think that totalitarianism may use experts that are aligned with the regime, that are just easy to buy or just make them align their thinking and their beliefs, or perhaps even silence, if they’re contrarians.
But it could also work through an “evidence” approach. It could try to get studies, and perhaps with a little bias here and there, it could also create an environment where, seemingly, there is enough evidence for things to be done in a given way.
I think that eminence-based approaches have become less tolerated over time, as people in evidence-based medicine have tried to make the case that they’re not reliable—and they’re not. It’s less likely that people who want to subvert evidence will present crude, raw eminence in defense of whatever they do. Usually, they will try to do it by supplying things that look like good evidence but actually are not, then build the case along with the wrapping of opinion and policy and whatever else.
Bethany: But it seems that you’ve become something of a skeptic, or at least a little bit worried, about evidence-based medicine, too. Is that because we’re in a place where what is actually eminence is disguised as evidence? Is that because the boundaries between the two are more blurred than you would like to see, or has it been some more profound failure of evidence-based medicine?
John Ioannidis: I think we’ve seen profound failures of evidence-based approaches, or so-called evidence-based approaches. Somehow, eminence-based approaches hijacked evidence-based terminology. You have the most eminence-based approaches hijacking the name “evidence-based” and using it to their benefit.
Even in the field of cardiology, which has more evidence than any other medical field, more than 50 percent of the recommendations in the best cardiology guidelines are just based on eminent opinion, what experts believe. There’s no evidence for them, and when there is evidence, it is being oversold much of the time.
Also, many of the tools of evidence-based approaches have become so widely misused that you have to be very careful. I have been a very strong supporter of the need to perform evidence synthesis, bringing the results of multiple studies on a given topic together through systematic reviews and meta-analyses.
Again, I should have been very careful about what I wished for, because now we see a pandemic of systematic reviews. There are some topics for which we have 100 systematic reviews on the same topic. There’s absolutely no reason to have such redundancy. Sometimes, out of the 100, all of them are very poorly done. So, it is a factory producing lemon cars, and people just love them because they have the stamp of evidence-based medicine, supposedly, but they’re just products that should have no value and should be discredited.
Luigi: But there seems to be a little bit of an apparent contradiction between your support for evidence-based medicine and the fact that your most famous paper says that most published research findings are false. Because if most research findings are false, what is the evidence?
John Ioannidis: Well, it is the way that 20 years ago when I published that paper . . . Unfortunately, even now, I think if you take a random paper just from the 7 million papers that are published every year, if you pick a random one that makes a new claim, it’s more likely that it will be false than it is correct, even though we have seen some structural reforms and some improvements in several fields, but not all fields. Some of the worst fields that were the least reliable have been the ones that have grown completely unchecked, and to gigantic proportions, publishing amazing quantities of papers that are unreliable.
The notion that we need evidence-based approaches does not mean that every piece of evidence is to be believed and worshiped. Actually, it needs to be scrutinized, it needs to be checked, it needs to be perhaps reproduced or triangulated somehow, and then, perhaps, we can take it to the bank and make some use of it.
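[A minimal sketch of the argument behind that 2005 paper, in its own notation: let R be the pre-study odds that a probed relationship is true, 1 − β the statistical power, and α the significance threshold. The positive predictive value of a claimed finding is then

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha}.
\]

The illustrative numbers here are ours, not from the episode: in a field with low power (1 − β = 0.2) and long-shot hypotheses (R = 0.1), PPV = 0.02/0.07 ≈ 0.29 even at α = 0.05, so fewer than a third of claimed findings would be true; adding bias only lowers it further.]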
Bethany: What is more dangerous, a totalitarian society practicing science, or a society—and here’s where I’ll invoke Luigi’s hatred of the economy in the pandemic with my own hatred of the economy now—that has the wrong economic incentives for its scientists practicing science?
John Ioannidis: I think they can both be bad. I think that wrong incentives, wrong financial incentives for scientists, even in democratic societies, can be problematic. People are not rewarded for doing rigorous science. They’re not rewarded for questioning their own results. They’re not rewarded for finding something and then being skeptical about their own findings and perhaps dismantling their own findings, if they find that that was due to some bias or some winner’s-curse finding that gets refuted downstream. They’re rewarded to come up with extreme, extravagant—eventually, unbelievable, much of the time—results that do not really offer much for progress in science, let alone for society.
Luigi: Speaking of structural incentives, hopefully I don’t misquote you—and feel free to tell me that I’m completely wrong—but one of the things that intrigued me about what you’re saying is that there seems to be a structural tension between evidence-based medicine and the way we finance research, for example, in the pharmaceutical sector. The pharmaceutical sector finances and owns the data that, in principle, should bring about its own demise. You basically can’t get serious research done in this situation. Isn’t that the case?
John Ioannidis: There’s clearly a conflict here. We ask the pharmaceutical industry to come up with the research that will show that their products are good to go and good to market and good to buy and good to make profit. I think that this is a misallocation of resources.
I think that the testing of the products that pharmaceutical industries develop should be done in an independent way by independent stakeholders with public funds. I have even proposed ways that this could be financed. For example, it still could be financed with some money from the pharmaceutical industry, but that would go into a common pool where independent investigators would be able to use it to ask the most relevant questions about these products in a more rigorous way, in a more appropriate, unbiased way.
I think that currently most of the public funding, like NIH funding, for example, goes into supporting research that probably should be done by the industry, that is, research that is not fully basic science. I believe public funds should go to basic science. Basic science means you don’t know what you’re seeking and what you’re trying to find. You’re going into the jungle, and who knows what will come out of it? You cannot promise a deliverable. You cannot promise a product that will come out of it. Maybe hundreds of products will be based on that research eventually, but you just don’t know that. You need to be free to explore the jungle, and I think it is fine to have public funds go that way.
But the vast majority of public funds go to translational research, where, in theory, there are deliverables, there are products, drugs, devices, biologics, vaccines, tests, whatever, that are to be developed. I think this type of work should be done and funded by the industry. They have every reason to want to develop drugs, devices, biologics, vaccines, and tests that would be the best. They would have the best performance, the best effectiveness, the best accuracy. I think that they should do that research instead of having all the public funds go in that direction.
Instead, the final testing of which of these products is the best, which one is better than nothing, which one is better than the standard of care that we have now, that should be done by independent investigators. It is currently done by the industry that has every right to do whatever is in their remit to show that their products are the best, and they need to be licensed immediately, and they need to be sold widely, and they need to be even mandated, if possible, for everyone, and to make tens of billions out of it.
I cannot blame them. They’re just like painters who are asked to appraise their own paintings and give the price of which painting is best. What is the likelihood that they will not try to show that they are the best painter on earth?
Bethany: What is that trade-off with public funds? When you think about maybe an old-school model of, “Just go explore the jungle, and let’s see what ideas come,” versus a more new-school model of, “We need deliverables to make sure money is being well spent,” how do you toggle between those two ideas? Or should it be just this purity that if it is public funds, then it is the purity of exploring the jungle?
John Ioannidis: I think the common denominator should be transparency. When you go into the jungle, you can still have a travel log and record: “Today I saw a pink panther. Tomorrow I saw a python. The next day I fell into a trap.” Keep a log and people will know: “John went into the jungle. This is what he did, this is what he encountered, this is what he found.” Now, let’s see if others can find the same signals or the same patterns, and perhaps some of that is useful, and we can take it to the next step of experimentation.
Now, again, this needs to be funded by public funds. You need to have bold people, innovators, disruptors, people who want to venture, who want to go to the jungle, to run the risk of falling into a trap, to meet a panther, to meet a python, and fail. Very often fail. But one of them might succeed, and that, in the long run, is shown to have benefit. No industry is going to fund that. There’s no immediate profit for them. They cannot wait for 10, 20, 30, 40, 50 years, who knows, for that to be translated.
Then you have other things that need to be publicly funded, like the clinical trials or the evidence-based approaches that I mentioned, that should be independent from the industry, and there, transparency means a very detailed, pre-specified, accurate, complete protocol.
Luigi: As an economist, I’m embarrassed to say that you’re doing something that we economists should be doing, but we’re not, which is to bring incentives into research. What I find amazing is that economists think that everything is motivated by monetary incentives except research. They think that every human responds to incentives except the economists, who are angels, and they don’t respond to incentives. I think that it is very healthy to bring this evidence.
Let me provoke you with one idea that I’ve not seen. Maybe you have written a paper about this, but I don’t know all your papers because you’ve written so many. When there is a very strong incentive to find a particular result, and you know that a lot of researchers are trying to find a result, because the data are available, and you know that this is a good result to have, and the result is not there in the literature, can you say the lack of evidence is evidence of lack of an effect?
John Ioannidis: The classic answer: no. Lack of evidence is not evidence for lack of an effect. There are so many questions for which we have no evidence, or we have evidence so poor that you can very quickly say, “Goodness, that’s like having no evidence at all.” We need evidence for these questions for which we don’t have data to support one answer or another.
Luigi: But when you know that there are a lot of researchers that will try to get . . . and you know that null results rarely get published. Let me give an example. In the past, there was a lot of interest in showing that countries with the largest stock markets were growing faster. There is no paper I know that basically shows that. Can I infer that, basically, the correlation is zero? I know that a lot of researchers—and I’m one of those—have tried to find that correlation, and they failed. So, can I make that conclusion?
John Ioannidis: Sometimes you might. If something is so easy to address, so easy to tackle, so in front of your eyes, at the tip of your fingers, anyone with a laptop can access datasets and probe that correlation within minutes, if not seconds, and you see no literature, it’s not paranoid to think that many people have really just clicked on their laptops and have tried to get an answer, and there was not an answer that they felt was publishable.
I see that very often in some very common medical fields where you look at the literature, there’s very little, and you say: “Goodness, that’s something that tens of thousands of datasets can immediately address if someone wants to address it. What’s going on here?”
Now, if you take the paranoid step, this means that they found a negative result. It’s a paranoid step, but it is likely to be true. If it’s something that is so easy to do and no one has reported it, most likely it is null or something that people don’t want to hear about.
Conversely, if it’s a question that you can only answer with private access to the Hubble or Webb telescope, and everyone can know who has access to that, it’s clearly a very different story. You have to say that no one can really attack that question. With the one means that is available, we know that no one has looked in that direction of the galaxy, so there’s nothing unpublished.
It takes a little bit of understanding of how a field works and its whereabouts in terms of who is doing what, what kind of access people have to technologies and methods and tools to generate data, and also what is the culture in the field in terms of tolerating negative results.
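[A minimal Bayesian sketch of this reasoning, ours rather than from the episode: suppose K teams independently probed an effect, each with probability p of detecting and publishing it if it were real, and assume null results go unpublished either way. Observing zero publications then updates the odds that the effect exists by a factor of (1 − p)^K:

\[
\frac{P(\text{effect}\mid \text{no papers})}{P(\text{no effect}\mid \text{no papers})}
\;=\;
\frac{P(\text{effect})}{P(\text{no effect})} \times (1-p)^{K}.
\]

With even prior odds, p = 0.5, and K = 10, the posterior odds are about 0.001, so absence of evidence becomes strong evidence of absence, but only because K is known to be large, which is exactly Ioannidis’s caveat about knowing who has the means to probe the question.]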
There are some fields that have no problem with negative results nowadays, for various reasons. Perhaps, for example, you work in genetic epidemiology. You test every little polymorphism in our genome for association. There’s no reason to hide some of these polymorphisms from the publication of the results. You just show: “Here’s the entire sequencing, here’s the entire genome that I have tested, and these are the results that I have found. Some of them are significant. Others are not significant.” If you try to eliminate some, people will notice. They’ll say: “What’s going on here? Why are you showing me only 22 percent of what, obviously, you have tested?”
So, there are some settings where publication bias does not exist. There are even some settings where publication bias exists in an inverse mode where people prefer negative results. For example, studies of harms of medications. The industry does not want to hear anything about harms. They will do their best to suppress studies that show harm signals, downplay them, not publish them, make a story: “These are conspiracy theorists who are talking about harms.” The incentive is against positive results.
Or take people like me who want to show that results are null: I have a bias to show negative results, so who knows? If you have more people like me, maybe you will get more disincentives against positive results. I don’t want to hear about replication; that would mean that my papers are not replicated. I’m kidding, but who knows?
Bethany: Is there a way to draw a clear line between mistakes that may arise from very human cognitive biases, but are not mistakes because of bad incentives, and mistakes that are made because of bad incentives? How do you think about each category of mistakes?
John Ioannidis: I think the two categories interact. You may have cognitive biases, and this may make you more receptive to doing some research that also has financial conflicts or other, more overt conflicts that lead to a particular result, or, conversely, maybe you avoid that explicitly.
What we need is transparency. Transparency regarding the financial components should be the easiest to attain, but we know that we have not attained even that. I’m not going to dismiss a study because it is funded by a potentially conflicted stakeholder, but I just want to know. It could be done very well. Who knows? It may be the best study in the field, but I think it’s good to know that this is who has supported it and what kind of involvement they had. Was it that they just got the money and then they shut their eyes, or did they run the protocol? Did they run the analysis? Did they write the paper? Did they do everything and just ask a couple of Stanford and Harvard and University of Chicago professors to put their names on it, just to gain the eminence prestige?
Now, the allegiance and cognitive biases are more difficult to decipher. Again, I think we can do a little better in disclosing some of the possibilities, for example, advocacy and activism. Especially when it’s publicly declared previously, I see no reason why people should not say so. They should be proud. “I’m an activist for climate change to be taken seriously.” Good for you, but just tell me about it. “I’m an activist in trying to get rid of SARS-CoV-2.” Thank you very much. Just tell me that you’re an activist and what kind of activism you are proposing, because maybe what you propose is very reasonable or maybe it’s a little weird. But whatever it is, you should be proud that you’re an activist or you should disclose it. We don’t see that disclosed.
We had a situation with a paper published in Nature where roughly 100 authors had such conflicts of interest: intense, publicly declared activism for something that, as a theory . . . Zero COVID is equivalent to flat earth nowadays, but you had all these people believing that SARS-CoV-2 should be eliminated. Only one of about 100 disclosed it. Why? They were proud to do this. I think that they should have disclosed what they had done and what they were struggling to save the world from. Some of that could be disclosed.
Some other aspects are very difficult to disclose. I’m not an activist for the Mediterranean diet, but I do like the Mediterranean diet. Should I disclose that? If I do nutrition research, should I tell you what I ate today for breakfast or what I will eat for lunch? That’s getting very messy. But at least some aspects, I think, could be cleaner, more disclosable, and eventually more believable, because currently these disclosures are so empty that people justifiably say, “I don’t believe it.” It’s all just swept under the carpet.
Luigi: I’m sorry to say, but with a last name like Ioannidis, you don’t need disclosure about the Mediterranean diet. People just assume.
John Ioannidis: Everybody will notice.
Luigi: Yeah.
Bethany: We’re almost out of time, but I had a last question. You’d mentioned back at the beginning being an effective skeptic. How would you define or how would you recommend somebody go about being an effective skeptic?
John Ioannidis: Goodness. I would have to be an expert to start giving admonitions here, but I think we start from our own selves. I think that I study bias, so I must be very biased, so I always try to look at what I say and write. It’s more difficult to look at what you say, because it’s a done deal, like in a podcast. But at least what you write and what you publish, you have to look again and again and again, and try to get some contrarian to your views to look at it, and try to see what their arguments are, and where you think that you can meet their arguments, or rebut their arguments, or perhaps agree that they may be correct in what they challenge.
Be open to asking for replication, strategically placed replication. Be ready to consider alternative possibilities. Don’t be inflexible. Science is self-correcting, and I think that we should be happy with whatever the results are.
Don’t start a study with a preconceived notion of the result you want to get. If you do a study to get a particular result, you’re probably off on the wrong foot. We run a study because any result, in one direction or the other or null, is going to be very interesting. It will change our information landscape no matter what the result is.
I’m not sure. I think that it’s something that needs training, and it needs consistency and persistence to overcome these biases that are innate. They’re part of how we think and how we work.
Bethany: Well, I loved this whole concept of evidence versus eminence. I love frameworks; I love when you hear a framework that helps you organize the rest of your thought. That idea of asking whether you are hearing something because it’s evidence or because of eminence, how the two get conflated, and when eminence should confer evidence and when it should not: I found all of that a really, really fascinating framework.
My sister and I share this irritation, and we see it quoted everywhere now. Speaking of evidence, I have not tested this; this is pure opinion. But I think that 20 years ago, if you read the paper, it wasn’t all, “The experts tell you to do this. The experts tell you to do that,” and sometimes this notion of expertise is completely devoid of any underlying evidence. I was giggling to myself at this, but I thought it was an incredibly useful framework, even more broadly than for this podcast. Did you?
Luigi: I completely agree, and I actually was a little bit surprised that he didn’t use it more when it came to democracy. He was very humble. In a sense, the beauty, but also the threat, of evidence-based approaches is that they are equal opportunity in destroying the existing structure.
If you have a very strong political structure, it cannot afford to be attacked by the evidence, and so it becomes very difficult to have a system that is evidence-based. Because if 99 percent of what you do is evidence-based but 1 percent is not, the temptation to bring evidence to that 1 percent as well is too big. As a result, it will bring the political structure down.
Bethany: Also, another text and subtext of this conversation is the way in which what we think of as science, in its best incarnation, as this disinterested pursuit of truth is so interwoven with the times in which we live, the political environment in which we live, and just very human failings, from cognitive biases to incentive structures. That was fascinating, too, that science is trying to be this disinterested pursuit of truth in a world that, broadly speaking, has always made it very difficult.
Luigi: What I find particularly interesting is that he is not a radical, because when you realize this, you could easily radicalize and say: “We cannot trust anything. We cannot do this. We should ban funding. We should ban conflicts of interest. We should ban . . .” We know how some leaders have gone, without naming names, in this direction.
He actually takes a very scientific approach to that, which I will call the Bayesian approach, i.e., to say, look, we need to factor in these elements, bring some biases. We need to correct for the bias, but it doesn’t mean that the evidence is useless. It means that it needs to be debiased and still can be used. His papers provide a framework, which is a very useful framework.
Bethany: I wasn’t aware, until we began to work on this, of the growing number of scientific papers coming out of totalitarian regimes; really, I didn’t grasp the extent of it until this conversation. We didn’t get to this with him, nor is it necessarily his area of expertise, but I was thinking how frightening that is, even putting aside politics and misaligned incentives among nation-states, in a world of large language models, where all of that slop is going to be increasingly combined into more slop. That’s a really frightening corollary of this, isn’t it?
Luigi: It is. This is where you need to divide between people who are religious about science . . . The science is so perfect that it cannot be affected by anything. And maybe mathematics is like this. There is a famous result in optimal control called Pontryagin’s maximum principle, and it was reached by a Soviet mathematician. In fact, it was actually used to send rockets to space, but the fact that this guy was a communist doesn’t make the result untrue. Why? Because you can verify the proof once you see it.
When you do empirical science, there are so many steps in the process that make the results not completely independent of the researcher. I don’t think that we can say that Pontryagin’s principle was reached because the guy was a communist, but a lot of results that people present might be due to the fact that they’re communists, they’re vegan, they’re this, they’re that.
Bethany: Did this change your mind about how you think about your own field at all?
Luigi: Yes and no. I adopted a more cynical view a long time ago, before I even was aware of this research. In fact, one of the least-cited papers I have, and one of the ones I like the most, actually, is a simple paper in which I make the point that if we economists think that regulators are captured, not because they are bad people, but because of incentives, we should conclude that academics are often captured as well.
The reason why regulators are captured is career incentives and access to information. Basically, those are the two elements that make them captured. Those two things are very much present in academia as well. The only difference is that, as a regulator, especially now under President Trump, you can be fired. As an academic, if you have tenure, it is difficult to fire you. Not impossible, but very difficult. That’s the big difference.
But on the positive side, if you reach certain results, you help your career. That’s true in both places, and most importantly, you need information, and now in economics, particularly, you need data to actually do research. Exactly the same way in which regulators are biased by these career incentives and these data needs, I think academics can be biased as well. When I wrote this paper, I thought the paper was obvious, and not particularly interesting because it’s obvious. In fact, what I was shocked by is that it’s actually rejected by 99.9 percent of the profession.
In fact, the most surreal story is that I presented it a long time ago at a seminar at Harvard, and there were people from Harvard Business School and people from the Harvard economics department. I made the case both that Harvard business cases are biased and that academic research is biased. The interesting thing is that the people from Harvard Business School were very willing to admit that academic research was biased, but not the cases. The people from the economics department were very willing to admit that the cases were biased but not the academic research, and they were in the same room and did not understand the irony of that.
Bethany: Did anyone even acknowledge the irony?
Luigi: No, no. Actually, I learned the greatest line from Orwell—it’s always Orwell—who says, “It’s very difficult to see what is in front of you.”
Bethany: That brought to mind that our own Matt Lucky, in helping us prepare for this episode, wrote in his notes that what we know is conditional on who owns what. There’s certainly a lot of that through academia, through economic research, through science, but also, really, through journalism.
I think that it’s probably true that most journalists don’t admit that they’re biased, either biased for reasons that are not great, i.e., a pre-existing viewpoint, whether ideological or otherwise, or biased because of the inherent limitations of the journalistic profession: what you learn depends on who’s willing to talk to you and whom you think to ask. All of that creates biases, some bad in a moral sense, others simply unavoidable.
Luigi: But also, there is another source of potential bias that interacts a lot with libel law. And since this is under attack in the United States, I want to bring it up. You know much better than I that when you’re a journalist, you are trying to do your best to collect research, but you make mistakes. The question is, if you are massively penalized for one type of mistake, and not at all or very little for the other, you are going to report in a biased way. That’s human nature. You’re very much afraid of one mistake and not the other.
If every time you say something negative and wrong about a company, you can be sued, but if you say something positive, nobody sues you, then you’re going to bias in favor of companies no matter what. That’s the reason why I thought that the New York Times v. Sullivan decision, which says that the standard for libel is not merely that you’re wrong but actual malice, was so important in creating the freedom of the press in the United States: the true freedom of the press, where you challenge, of course, the political authority, and you also challenge the economic authority.
I don’t know if you know, but there is a concerted effort now to try to overturn this decision at the Supreme Court level. There are active groups that are trying to bring cases up to the Supreme Court because they know that Clarence Thomas is very much against New York Times v. Sullivan. They hope that they can bring a case in which they will overturn it, because that will basically kill any ability to criticize any company in the world.
Bethany: Wow. I actually did not know that. As soon as we’re done recording this, I’m going to go look into that, because that is terrifying. The reality is that bias already exists, because many reporters need access to the companies that they cover. If you write things the companies don’t like, they don’t give you access.
The same, actually, is true of Wall Street analysts. Eliot Spitzer did this whole intervention at the start of the 2000s because Wall Street research was corrupted by the fact that the research analysts were being paid by the investment banks, which were making money off buy ratings on stocks. He drew a hard line there.
But now the reality is the way research analysts make money through their firms is through access to a company. The bias still exists, and the incentive structure is still there. It’s just that what you do in order to cater to that is different.
New York Times v. Sullivan being overturned would obviously be terrible in all the ways you just said, but that bias already exists in journalists who write about companies. It’s bad enough already.
Luigi: No, of course it exists, but I think it would be much, much bigger in the future.
Bethany: I agree.
Luigi: Coming back to Ioannidis, what he opened up—I think this is a very important topic that cannot be addressed just in this podcast but deserves to be studied much more—is to what extent the very strong incentives that the system now provides for research are making research worse.
I think that this is very problematic, because as we said at the beginning, the entire growth of the system is dependent upon the production of better and better knowledge. If that production function is contaminated, we’re really undermining one of the major sources of growth of capitalist systems.
Bethany: It’s always been the case in science that there have been elements that have distorted the system and biased it. I think the problem with the complexity of our modern world is, can you be transparent? He was talking, at the end, about the difficulty of being transparent about some of our inner motives that might dictate what we do, but can you be transparent enough to have people be able to be legitimately skeptical about what you do, and have the means to look at it and think through it for themselves? Can you encourage a world where people open themselves up to that sort of criticism, because, of course, we all have a disincentive to do that? Those are, to me, really fascinating questions.
Luigi: I was really intrigued by his criticism of the way we fund research today. Kudos to him, because it’s not easy today to criticize the NIH. You’re immediately seen as pro-Trump or justifying what Trump does. But at some level, he’s saying that it might not be that crazy to cut out some of the NIH funding, especially if that funding is reallocated to something else.
Bethany: Yeah, if that funding is reallocated to something else, and if that funding is cut out sensibly and strategically and intelligently. I think we can both say those are big ifs in the current environment, right?
Luigi: Yeah.
Bethany: On that note . . .