In 1965, the legendary consumer-rights crusader Ralph Nader, a 31-year-old lawyer at the time, published his first book, Unsafe at Any Speed: The Designed-In Dangers of the American Automobile. An explosive and unexpected best seller, Nader’s book took a gloves-off approach from its very first sentence: “For over half a century the automobile has brought death, injury and the most inestimable sorrow and deprivation to millions of people.”

In each chapter that followed, Nader unpacked a different facet of car design that needlessly or excessively endangered drivers, passengers, and pedestrians. He examined the science of collisions, for example, and showed how manufacturers were ordering engineers to prize style over safety; he discussed air pollution caused by cars in traffic-heavy cities like Los Angeles. His jeremiad was an impassioned call for regulation—and it worked. Within months of the book’s publication, the usually sluggish US government moved quickly to create the National Highway Traffic Safety Administration. By 1968, a flagship federal law was passed requiring all personal vehicles to be outfitted with seatbelts.


On the surface, it appeared that Ralph Nader had succeeded at making America’s roads safer. Or was it more complex than that?

Flash forward to 1975, when my adroit University of Chicago colleague Sam Peltzman published a paper titled “The Effects of Automobile Safety Regulation.” The unassuming title belied Peltzman’s striking conclusion: the decade of measures spearheaded by Nader to increase automotive safety hadn’t actually made people safer at all. As Peltzman put it, “The one result of this study that can be put forward most confidently is that auto safety regulation has not affected the highway death rate.” More surprising than this, perhaps, was his explanation for why. Drivers felt safer because of the legislated measures put in place to protect them, so they took more risks while driving, and in turn had more accidents. Since I’m so safe with my seat belt, a driver might reason (consciously or not), why not put the pedal to the metal? Seat belts make any individual driver safer in the event of an accident, but at scale, they also appeared to lead to more total accidents. It was as if one voltage gain had been wiped out by a consequent voltage drop—an unintended and shocking consequence.

While Peltzman’s paper was controversial at the time—unsurprisingly, it was politicized by pro- and anti-regulation advocates—much research in the intervening years has borne out similar conclusions in other domains. It turns out people have a tendency to engage in riskier behaviors when measures are imposed to keep them safer. Give a biker a safety helmet and he rides more recklessly—and, even worse, cars around him drive more haphazardly. And a 2009 study directly following the line of research pioneered by Peltzman found that NASCAR drivers who used a new head and neck restraint system experienced fewer serious injuries but saw a rise in accidents and car damage. In short, safety measures have the potential to undermine their own purpose.


This phenomenon—which came to be known as the Peltzman effect—is often used as a lens for studying risk compensation, the theory that we make different choices depending on how secure we feel in any given situation (i.e., we take more risk when we feel more protected and less when we perceive that we are vulnerable). This is why, in the wake of the 9/11 attacks and the rise in fear of terrorists gaining access to nuclear weapons, Stanford political scientist Scott Sagan argued that increasing security forces to guard nuclear facilities might actually make them less secure. The Peltzman effect also reaches into insurance markets, in which people who have coverage engage in riskier behavior than those without coverage, a phenomenon known as moral hazard. Clearly, this pattern of human behavior has potentially huge implications when taken to scale.

The most obvious takeaway here is that the seemingly free choices we make every day may in fact be shaped by hidden effects we are not aware of. (Also, you should wear a seat belt and drive safely!) But in the context of scaling, this illuminates another cause of voltage drops that we must avoid: the spillover effect. This is the unintended impact one event or outcome can have on another event or outcome, a classic example being when a city opens a new factory and the air pollution it produces impacts the health of residents in the surrounding area. That this effect occurs speaks to the inescapable web linking events, the things humans create, and the natural world. The term “spillover effect” has been applied in fields as far-ranging as psychology, sociology, marine biology, ornithology, and nanotechnology, but we will define it in a human sense, as the unintended impact of one group of people’s actions on another group. And nothing makes spillovers more likely and visible than scaling an endeavor to a wide swath of people. Remember the Murphy’s law of scaling: anything that can go wrong will go wrong at scale. Or to put it slightly less memorably, something unexpected has a much higher probability of occurring at scale than not at scale.

John A. List is the Kenneth C. Griffin Distinguished Service Professor in Economics and the College at the University of Chicago. Excerpted from THE VOLTAGE EFFECT: How to Make Good Ideas Great and Great Ideas Scale. Reprinted by arrangement with Currency, an imprint of Crown Publishing Group. © John A. List
