[Illustration: AI diver with two types of protection. Credit: Dan Page]

AI Doesn’t Need One-Size-Fits-All Regulation

Neither US nor European regulators get the balance quite right.

The problem with European regulators, a German businessman recently told me, is that they are too scared of downside risks. “In any innovative new business sector, they overregulate and stifle any upside potential,” he said. In contrast, he argued, Americans care more about the upside potential, and thus hold off on regulation until they know more about the consequences. He said, “Not surprisingly, the United States has much more of a presence in innovative industries.”

Artificial intelligence is a case in point. The European Union enacted the world’s first comprehensive AI regulation in August 2024, establishing safeguards against risks such as discrimination, disinformation, privacy violations, and AI systems that could endanger human life or threaten social stability. The law also assigns AI systems to different risk levels, with different treatment for each: AI-driven social-scoring systems are banned outright, while high-risk systems are heavily regulated and supervised, with fines for noncompliance.

But Europe has little presence in the burgeoning AI industry, especially relative to the US or China. Those leading the charge in generative AI are US-based companies such as Anthropic, Google, and OpenAI; no European company meets the mark. Such a glaring gap seems to speak for itself. For now, the Trump administration’s America’s AI Action Plan, which seeks to limit red tape and regulation in AI, looks like the better approach.

The problem with the European way is that it burdens fledgling companies with the costs of regulatory compliance before the technology’s potential has become clear. A chatbot that spreads falsehoods or discriminates against certain ethnic groups is certainly not desirable, but there must be some tolerance for such errors in the early stages of a system’s development.

Moreover, when developers can explore a system’s positive possibilities more freely, they also have time (and perhaps resources generated from successful but error-prone launches) to figure out cost-effective ways to address issues that undermine the system’s reliability. Demanding near perfection from the outset does not safeguard society so much as it stifles the trial-and-error process through which breakthroughs emerge.


Of course, errors such as racial discrimination can be extremely costly, especially if made by chatbots that interact with millions of people. Recognizing this risk, one regulatory approach allows new products to be tested only in tightly controlled settings. Innovators can experiment with a limited group of users, and always under the regulator’s watchful eye. This “sandbox” approach helps to contain any harms from spilling over to the broader public—Europe’s main concern.

But sandboxes might also limit what can go right. Trials with small, restricted groups cannot capture the benefits of network effects, whereby products become more valuable as more people use them. Nor can they reveal unexpected breakthroughs that come when the “wrong” people adopt a product. (For example, online pornography drove early innovations in web technology.) In short, sandbox trials may keep disasters at bay, but they also risk stifling discovery. They are better than outright bans, but they may still cause innovators to bury too many promising ideas before they can scale.

What, then, are the costs of the laissez-faire American approach? Most obviously, the system can blow up because of rogue products, as happened with subprime mortgage-backed securities before the 2008–09 global financial crisis. Today, one hears similar fears about generative AI and the crypto industry (with FTX’s implosion cited as an early warning signal).

Historically, the US, with its deep fiscal pockets, may have been more willing to take such risks, while the fragmented EU may have been more cautious. But with fiscal space shrinking in the US, a rethink may be in the offing.

Even if the US wants to regulate more, though, can the authorities really pull it off? The American way is to wait until an industry is large enough to matter. But by that point, the industry will have grown powerful enough to shape any rules meant to rein it in. Consider crypto: Flush with cash, armed with lobbyists, and laser-focused on its interests, it has proven adept at swaying politicians—and public opinion—in its favor. The consequence is invariably underregulation, even when the risks to the public are glaring.

Risk-averse Europe, by contrast, steps in early, when an innovative sector is still small and its voice barely audible. At this stage, it is the incumbents—the banks threatened by crypto, for example—that dominate the debate. Their influence pushes the needle toward excessive caution and heavy-handed rules. The US tends to regulate too little, too late, whereas Europe does too much, too soon. Neither gets the balance quite right.

Even though there is a case for each side moving toward the other, it is worth emphasizing that regulation does not stop at national borders. In fact, the world may benefit from having somewhat different approaches. US chatbots can thrive in a relatively unregulated environment, experimenting and scaling quickly. But once they seek a global presence, they will run into Europe’s stricter standards. With sufficient resources and strong incentives, they will find creative, low-cost ways to comply, and those risk-reduction strategies may eventually flow back into the US, leaving the world with more, safer innovation.

That is the ideal scenario, anyway. Reality is likely to be messier. American companies could cause global harm before European regulators catch up. Europe may continue discouraging innovation before it starts, leaving the world with too little. But perhaps the greatest danger is if regulators on either side of the Atlantic export their own rule book, forcing the other to fall in line. The world may be best served if US and European regulators keep seeing regulations differently.

Raghuram G. Rajan is the Katherine Dusak Miller Distinguished Service Professor of Finance at Chicago Booth. Copyright 2025 by Project Syndicate.

