Yet while this arms race is real, we think there’s a much more powerful trend at work in AI today—a trend of diffusion and dissemination, rather than concentration. Yes, every big tech company is trying to hoard math and coding talent. But at the same time, the underlying technologies and ideas behind AI are spreading with extraordinary speed: to smaller companies, to other parts of the economy, to hobbyists and coders and scientists and researchers everywhere in the world. That democratizing trend, more than anything else, is what has our students today so excited, as they contemplate a vast range of problems practically begging for good AI solutions.
Who would have thought, for example, that a bunch of undergraduates would get so excited about the mathematics of cucumbers? Well, they did when they heard about Makoto Koike, a car engineer from Japan whose parents own a cucumber farm. Cucumbers in Japan come in a dizzying variety of sizes, shapes, colors, and degrees of prickliness—and on the basis of these visual features, they must be separated into nine classes that command different market prices. Koike’s mother used to spend eight hours per day sorting cucumbers by hand. But then Koike realized that he could use a piece of open-source AI software from Google, called TensorFlow, to accomplish the same task, by coding up a “deep-learning” algorithm that could classify a cucumber based on a photograph. Koike had never used AI or TensorFlow before, but with all the free resources out there, he didn’t find it hard to teach himself how. When a video of his AI-powered sorting machine hit YouTube, Koike became an international deep-learning/cucumber celebrity. It wasn’t merely that he had given people a feel-good story, saving his mother from hours of drudgery. He’d also sent an inspiring message to students and coders across the world: that if AI can solve problems in cucumber farming, it can solve problems just about anywhere.
That message is now spreading quickly. Doctors use AI to diagnose and treat cancer. Electrical companies use AI to improve power-generating efficiency. Investors use it to manage financial risk. Oil companies use it to improve safety on deep-sea rigs. Law enforcement agencies use it to hunt terrorists. Scientists use it to make new discoveries in astronomy and physics and neuroscience. Companies, researchers, and hobbyists everywhere are using AI in thousands of different ways, whether to sniff for gas leaks, mine iron, predict disease outbreaks, save honeybees from extinction, or quantify gender bias in Hollywood films. And this is just the beginning.
We see the real story of AI as the story of this diffusion: from a handful of core math concepts stretching back decades, or even centuries, to the supercomputers and talking/thinking/cucumber-sorting machines of today, to the new and ubiquitous digital wonders of tomorrow.
What does ‘AI’ really mean?
When you hear “AI,” don’t think of a droid. Think of an algorithm.
An algorithm is a set of step-by-step instructions so explicit that even something as literal-minded as a computer can follow them. (You may have heard the joke about the robot who got stuck in the shower forever because of the algorithm on the shampoo bottle: “Lather. Rinse. Repeat.”) On its own, an algorithm is no smarter than a power drill; it just does one thing very well, like sorting a list of numbers or searching the web for pictures of cute animals. But if you chain lots of algorithms together in a clever way, you can produce AI: a domain-specific illusion of intelligent behavior. For example, take a digital assistant such as Google Home, to which you might pose a question like “Where can I find the best breakfast tacos in Austin?” This query sets off a chain reaction of algorithms:
One algorithm converts the raw sound wave into a digital signal.
Another algorithm translates that signal into a string of English phonemes, or perceptually distinct sounds: “brek-fust-tah-koze.”
The next algorithm segments those phonemes into words: “breakfast tacos.”
Those words are sent to a search engine—itself a huge pipeline of algorithms that processes the query and sends back an answer.
Another algorithm formats the response into a coherent English sentence.
A final algorithm verbalizes that sentence in a non-robotic-sounding way: “The best breakfast tacos in Austin are at Julio’s on Duval Street. Would you like directions?”
And that’s AI. Pretty much every AI system—whether it’s a self-driving car, an automatic cucumber sorter, or a piece of software that monitors your credit-card account for fraud—follows this same pipeline-of-algorithms template. The pipeline takes in data from some specific domain, performs a chain of calculations, and outputs a prediction or a decision.
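The pipeline above can be sketched as a chain of ordinary functions, each one feeding its output to the next. Everything here is a toy stand-in: the function names are hypothetical, and a real assistant would use far more sophisticated algorithms at every stage.

```python
# A minimal sketch of the pipeline-of-algorithms idea. Each stage is a
# hypothetical stand-in for a much more complex real algorithm.

def digitize(sound_wave):
    # Stage 1: convert a raw sound wave into a digital signal.
    return list(sound_wave)

def to_phonemes(signal):
    # Stage 2: translate the digital signal into a string of phonemes.
    return "brek-fust-tah-koze"

def to_words(phonemes):
    # Stage 3: segment the phonemes into words.
    return "breakfast tacos"

def search(query):
    # Stage 4: a stand-in for the search engine's own pipeline of algorithms.
    return {"place": "Julio's on Duval Street"}

def format_sentence(result):
    # Stage 5: format the structured answer as an English sentence.
    return f"The best breakfast tacos in Austin are at {result['place']}."

def answer(sound_wave):
    # Chain the algorithms together: the output of each stage is the
    # input to the next. This chaining is what produces the illusion
    # of intelligent behavior.
    return format_sentence(search(to_words(to_phonemes(digitize(sound_wave)))))

print(answer("..."))
```

No single function here is "smart"; the domain-specific intelligence emerges only from the chain as a whole.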
There are two distinguishing features of the algorithms used in AI. First, these algorithms typically deal with probabilities rather than certainties. An algorithm in AI, for example, won’t say outright that some credit-card transaction is fraudulent. Instead, it will say that the probability of fraud is 92 percent—or whatever it thinks, given the data. Second, there’s the question of how these algorithms “know” what instructions to follow. In traditional algorithms, such as the kind that run websites or word processors, those instructions are fixed ahead of time by a programmer. In AI, however, those instructions are learned by the algorithm itself, directly from “training data.” Nobody tells an AI algorithm how to classify credit-card transactions as fraudulent or not. Instead, the algorithm sees lots of examples from each category (fraudulent, not fraudulent), and it finds the patterns that distinguish one from the other. In AI, the role of the programmer isn’t to tell the algorithm what to do. It’s to tell the algorithm how to teach itself what to do, using data and the rules of probability.
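Both distinguishing features fit in a few lines of code. The sketch below is a toy, not any real fraud system: a simple logistic-regression classifier learns from invented labeled examples, and then outputs a probability of fraud rather than a yes-or-no verdict. The features and data are made up for illustration.

```python
import math

# Toy training data, invented for this sketch.
# Each example: (features, label). Features: [amount in $1,000s, foreign? 0/1].
# Label: 1 = fraudulent, 0 = legitimate.
training_data = [
    ([0.02, 0], 0), ([0.05, 0], 0), ([0.10, 0], 0), ([0.03, 0], 0),
    ([5.00, 1], 1), ([8.00, 1], 1), ([6.50, 1], 1), ([7.20, 0], 1),
]

def sigmoid(z):
    # Squash any number into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    # Nobody tells the algorithm what makes a transaction fraudulent.
    # It adjusts its weights to fit the labeled examples it sees.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def fraud_probability(w, b, x):
    # The output is a probability, not a certainty.
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

w, b = train(training_data)
print(f"{fraud_probability(w, b, [6.0, 1]):.2f}")   # large foreign charge
print(f"{fraud_probability(w, b, [0.04, 0]):.2f}")  # small domestic charge
```

The programmer wrote no rule saying "large foreign charges are suspicious"; the algorithm found that pattern in the examples on its own.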
How did we get here?
Modern AI systems, such as a self-driving car or a home digital assistant, are pretty new on the scene. But you might be surprised to learn that most of the big ideas in AI are actually old—in many cases, centuries old—and that our ancestors have been using them to solve problems for generations. For example, take self-driving cars. Google debuted its first such car in 2009. But one of the main ideas behind how these cars work was discovered by a Presbyterian minister in the 1750s—and this idea was used by a team of mathematicians over 50 years ago to solve one of the Cold War’s biggest mysteries.
Or take image classification, such as the software that automatically tags your friends in Facebook photos. Algorithms for image processing have gotten radically better over the last five years. But the key ideas here date to 1805—and these ideas were used a century ago, by a little-known astronomer named Henrietta Leavitt, to help answer one of the deepest scientific questions that humans have ever posed: How big is the universe?
Or even take speech recognition, one of the great AI triumphs of recent years. Digital assistants such as Alexa and Google Home are remarkably fluent with language, and they’ll only get better. But the first person to get a computer to understand English was a rear admiral in the US Navy, and she did so almost 70 years ago.
Those are just three illustrations of a striking fact: no matter where you look in AI, you’ll find an idea that people have been kicking around for a long time. So in many ways, the big historical puzzle isn’t why AI is happening now, but why it didn’t happen long ago. To explain this puzzle, we must look to three enabling technological forces that have brought these venerable ideas into a new age.
The first AI enabler is the decades-long exponential growth in the speed of computers, usually known as Moore’s law. It’s hard to convey intuitively just how fast computers have gotten. The cliché used to be that the Apollo astronauts landed on the moon with less computing power than a pocket calculator. But this no longer resonates, because . . . what’s a pocket calculator? So we’ll try a car analogy instead. In 1951, one of the fastest computers was the UNIVAC, which performed 2,000 calculations per second, while one of the fastest cars was the Alfa Romeo 6C, which traveled 110 miles per hour. Both cars and computers have improved since 1951—but if cars had improved at the same rate as computers, a modern Alfa Romeo would travel at 8 million times the speed of light.
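The arithmetic behind the analogy is easy to check. The 1951 figures come from the text above; the roughly 10^17 calculations-per-second figure for a modern supercomputer is our assumption, chosen only to illustrate the scale involved.

```python
# Checking the car analogy's arithmetic. The 1951 figures are from the
# text; the modern-throughput figure is an assumed round number for a
# top supercomputer (~10^17 calculations per second).

univac_ops = 2_000    # UNIVAC, calculations per second (1951)
modern_ops = 1e17     # assumed modern supercomputer throughput
speedup = modern_ops / univac_ops

alfa_mph = 110        # Alfa Romeo 6C top speed, miles per hour (1951)
light_mph = 6.7e8     # speed of light, roughly 670 million mph

imagined_mph = alfa_mph * speedup
print(f"{imagined_mph / light_mph:.1e} times the speed of light")
```

Under that assumption, the imagined Alfa Romeo would indeed travel about 8 million times the speed of light.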
The second AI enabler is the new Moore’s law: the explosive growth in the amount of data available, as all of humanity’s information has become digitized. The Library of Congress consumes 10 terabytes of storage, but the Big Four tech firms—Google, Apple, Facebook, and Amazon—collected about 120,000 times as much data as this in 2013 alone. And that’s a lifetime ago in internet years. The pace of data accumulation is accelerating faster than an Apollo rocket; in 2017, more than 300 hours of video were uploaded to YouTube every minute, and more than 100 million images were posted to Instagram every day. More data means smarter algorithms.
The third AI enabler is cloud computing. This trend is nearly invisible to consumers, but it’s had an enormous democratizing effect on AI. To illustrate this, we’ll draw an analogy here between data and oil. Imagine if all companies of the early 20th century had owned some oil, but they had to build the infrastructure to extract, transport, and refine that oil on their own. Any company with a new idea for making good use of its oil would have faced enormous fixed costs just to get started; as a result, most of the oil would have sat in the ground. Well, the same logic holds for data, the oil of the 21st century. Most hobbyists or small companies would face prohibitive costs if they had to buy all the gear and expertise needed to build an AI system from their data. But the cloud-computing resources provided by outfits such as Microsoft Azure, IBM, and Amazon Web Services have turned that fixed cost into a variable cost, radically changing the economic calculus for large-scale data storage and analysis. Today, anyone who wants to make use of their “oil” can now do so cheaply, by renting someone else’s infrastructure.
When you put those four trends together—faster chips, massive data sets, cloud computing, and, above all, good ideas—you get a supernova-like explosion in both the demand and capacity for using AI to solve real problems.
We’ve told you how excited our students are about AI, and how the world’s largest firms are rushing to embrace it. But we’d be lying if we said that everyone was so bullish about these new technologies. In fact, many people are anxious, whether about jobs, data privacy, wealth concentration, or Russians with fake-news Twitter-bots. Some people—most famously Elon Musk, the tech entrepreneur behind Tesla and SpaceX—paint an even scarier picture: one where robots become self-aware, decide they don’t like being ruled by people, and start ruling us with a silicon fist.
Let’s talk about Musk’s worry first; his views have gotten a lot of attention, presumably because people take notice when a member of the billionaire disrupter class talks about AI. Musk has claimed that in developing AI technology, humanity is “summoning a demon,” and that smart machines are “our biggest existential threat” as a species.
You can decide for yourself whether you think these worries are credible. We want to warn you up front, however, that it’s very easy to fall into a trap that cognitive scientists call the “availability heuristic”: the mental shortcut in which people evaluate the plausibility of a claim by relying on whatever immediate examples happen to pop into their minds. In the case of AI, those examples are mostly from science fiction, and they’re mostly evil—from the Terminator to the Borg to HAL 9000. We think that these sci-fi examples have a strong anchoring effect that makes many people view the “evil AI” narrative less skeptically than they should. After all, just because we can dream it and make a film about it doesn’t mean we can build it. Nobody today has any idea how to create a robot with general intelligence, in the manner of a human or a Terminator. Maybe your remote descendants will figure it out; maybe they’ll even program their creation to terrorize the remote descendants of Elon Musk. But that will be their choice and their problem, because no option on the table today even remotely foreordains such a possibility. Now, and for the foreseeable future, “smart” machines are smart only in their specific domains:
- Alexa can read you a recipe for spaghetti Bolognese, but she can’t chop the onions, and she certainly can’t turn on you with a kitchen knife.
- An autonomous car can drive you to the soccer field, but it can’t even referee the match, much less decide on its own to tie you to the goalposts and kick the ball at your sensitive bits.
Moreover, consider the opportunity cost of worrying that we’ll soon be conquered by self-aware robots. To focus on this possibility now is analogous to the de Havilland Aircraft Company, having flown the first commercial jetliner in 1952, worrying about the implications of warp-speed travel to distant galaxies. Maybe one day, but right now there are far more important things to worry about—such as, to press the jetliner analogy a little further, setting smart policy for all those planes in the air today.
This issue of policy brings us to a whole other set of anxieties about AI, much more plausible and immediate. Will AI create a jobless world? Will machines make important decisions about your life, with zero accountability? Will the people who own the smartest robots end up owning the future?
These questions are deeply important, and we hear them discussed all the time—at tech conferences, in the pages of the world’s major newspapers, and over lunch among our colleagues. We can’t tell you the answers to these questions. Like our students, we are ultimately optimistic about the future of AI. But we’re not labor economists, policy experts, or soothsayers.
We can tell you, however, that we’ve encountered a common set of narratives that people use to frame this subject, and we find them all incomplete. These narratives emphasize the wealth and power of the big tech firms, but they overlook the incredible democratization and diffusion of AI that’s already happening. They highlight the dangers of machines making important decisions using biased data, but they fail to acknowledge the biases or outright malice in human decision-making that we’ve been living with forever. Above all, they focus intensely on what machines may take away, but they lose sight of what we’ll get in return: different and better jobs, new conveniences, freedom from drudgery, safer workplaces, better health care, fewer language barriers, new tools for learning and decision-making that will help us all be smarter, better people.
Take the issue of jobs. In America, jobless claims kept hitting new lows from 2010 through 2017, even as AI and automation gained steam as economic forces. The pace of robotic automation has been even more relentless in China, yet wages there have been soaring for years. That doesn’t mean AI hasn’t threatened individual people’s jobs. It has, and it will continue to do so, just as the power loom threatened the jobs of weavers, and just as the car threatened the jobs of buggy-whip makers. New technologies always change the mix of labor needed in the economy, putting downward pressure on wages in some areas and upward pressure in others. AI will be no different, and we strongly support job-training and social-welfare programs to provide meaningful help for those displaced by technology. A universal basic income might even be the answer here, as many Silicon Valley bosses seem to think; we don’t claim to know. But arguments that AI will create a jobless future are, so far, completely unsupported by actual evidence.
Then there’s the issue of market dominance. Amazon, Google, Facebook, and Apple are enormous companies with tremendous power. It is critical that we be vigilant in the face of that power, so that it isn’t used to stifle competition or erode democratic norms. But don’t forget that these companies are successful because they have built products and services that people love. And they’ll only continue to be successful if they keep innovating, which isn’t easy for large organizations. Besides, we’ve read a lot of predictions that the big tech firms of today will remain dominant forever, and we find that these predictions usually don’t even explain the past, much less the future. Remember when Dell and Microsoft were dominant in computing? Or when Nokia and Motorola were dominant in cell phones—so dominant that it was hard to imagine otherwise? Remember when every lawyer had a BlackBerry, when every band was on Myspace, or when every server was from Sun Microsystems? Remember the dominance of AOL, Blockbuster Video, Yahoo, Kodak, or the Sony Walkman? Companies come and companies go, but time marches on, and the gadgets just keep getting cooler.
We take a practical outlook on the emergence of AI: it is here today, and more of it is coming tomorrow, whether any of us like it or not. These technologies will bring immense benefits, but they will also, inevitably, reflect our weak spots as a civilization. As a result, there will be dangers to watch out for, whether to privacy, to equality, to existing institutions, or to something nobody’s even thought of yet. We must meet these dangers with smart policy—and if we hope to craft smart policy in a world of “hot takes” and 280 characters, it is essential that we reach a point as a society where we can discuss these issues in a balanced way, one that reflects both their importance and their complexity.
Nicholas Polson is Robert Law Jr. Professor of Econometrics and Statistics at Chicago Booth. James Scott is associate professor of statistics at University of Texas at Austin. This is an edited excerpt from their book, AIQ: How People and Machines Are Smarter Together, reprinted with permission from St. Martin’s Press.