The scope and reach of fake news have made it a matter of concern for voters in the United States and other democracies as well as a subject of considerable study and debate among academics, journalists, and politicians. New York University’s Hunt Allcott and Stanford’s Matthew Gentzkow, studying fake news’ influence on the 2016 US presidential election, find that just 156 false election-related articles were collectively shared more than 37 million times on Facebook alone. A 2016 Ipsos poll conducted on behalf of BuzzFeed found that five of the most successful fake-news headlines circulated on Facebook prior to the same election fooled respondents 75 percent of the time.
It may be tempting to look to algorithms to help fight the spread of fake news. After all, social media algorithms that decide what people see help to power that prolific sharing. Couldn’t those same algorithms, or others, help to vanquish the spurious articles that have fueled so much disinformation about politics, the pandemic, and other divisive topics? Given the deluge of articles being circulated online, what else but an algorithm could sort through it all?
Unfortunately, artificial intelligence cannot solve this problem—at least not with something as simple and comprehensive as an algorithmic “fake news detector.” The reasons why reveal not only the issues with such a product but also the limitations of A.I. in general. It is tempting to think of A.I. as a kind of superintelligence that is smarter than humans, but algorithms are just tools, like hammers, and not every problem is a nail.
Fake news seems like a perfect problem for A.I. to solve. There are massive amounts of data, both of the real and fake variety, and there’s a clear business and social need to tag news correctly. The daunting volume of the data means it needs to be processed and tagged efficiently—far more efficiently than humans are capable of. Sounds like a job for machines.
But there’s a reason that Facebook and YouTube have armies of people helping filter content: it’s not as simple as just “turning on an algorithm,” because of how algorithms learn. An algorithm has two parts: a trainer and a predictor. The trainer is based on information the algorithm has “seen”—called, as you might guess, training data—and the predictor is applied to new sets of data, those we want the algorithm to evaluate.
Imagine an algorithm designed to recognize stop signs. We would give it training data, which by nature include both an input (images) and an output (labels indicating whether or not the images contain stop signs). Once the algorithm has ingested these training data and identified patterns between the images and their labels, we can give it new data (unlabeled images), allow it to produce the output (the labels), and check its accuracy.
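To make the trainer/predictor split concrete, here is a minimal sketch in Python using scikit-learn. It assumes the images have already been reduced to numeric feature vectors, and the toy data and variable names are ours, invented for illustration; a real vision system would be far more elaborate, but the train-then-predict structure is the same.

```python
# A minimal sketch of the trainer/predictor split, assuming images have
# already been converted into numeric feature vectors (X) with labels (y).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for "images": 200 examples, 64 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)  # 1 = contains a stop sign, 0 = does not (illustrative)

# Split into training data (what the trainer "sees") and new, unseen data.
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.25, random_state=0)

# The "trainer": ingest labeled examples and learn input-output patterns.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The "predictor": apply the learned model to data it has never seen,
# then check its accuracy against the held-out labels.
predictions = model.predict(X_new)
print("accuracy on new images:", accuracy_score(y_new, predictions))
```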
In the case of fake news, the input could be news stories, social media posts, videos on YouTube, and other digital content, and the output would be a label indicating real or fake. Once the trainer learns how input predicts output, we could deploy the predictor in the real world, where it is given only inputs it hasn’t seen before and asked to predict the outputs.
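A bare-bones text version of the same structure might look like the sketch below, assuming we already had trustworthy real/fake labels, which, as the next paragraphs explain, is exactly the hard part. The headlines and labels are invented purely for illustration.

```python
# A bare-bones text version of the same idea: stories in, real/fake label out.
# The headlines and labels here are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_stories = [
    "City council approves new budget after lengthy debate",
    "Scientists confirm miracle fruit cures every known disease",
    "Local school wins state robotics championship",
    "Celebrity secretly replaced by clone, insiders claim",
]
training_labels = ["real", "fake", "real", "fake"]  # labels someone had to supply

# Trainer: turn text into features (TF-IDF) and fit a classifier on the labels.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(training_stories, training_labels)

# Predictor: deployed on stories it has never seen, it outputs a label for each.
new_stories = ["Mayor announces road repairs", "Aliens endorse presidential candidate"]
print(detector.predict(new_stories))
```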
Our fake-news-detecting algorithm runs into trouble when it attempts to generate the output label, however. Ideally, our training data set would include input data matched to output labels of real or fake. But the problem is we don’t actually know which items are real or fake. We don’t know the ground truth; we only know what humans judge to be real or fake. Our label is a label of human judgment.
Maybe this sounds absurd. Of course we can determine whether something is real or fake news! We pride ourselves on our ability to discern the truth.
But let’s consider another hypothetical algorithm, this one intended to categorize pitches as balls or strikes in baseball. The input data we use to train the algorithm would be videos of pitches from baseball games; the output label we apply would be whether the pitch was called a ball or a strike. But that label is not a reflection of the ground truth—that is, whether the pitch was actually a strike—but rather a label of the umpire’s judgment. We won’t have an algorithm that calls pitches, but an algorithm that predicts how an umpire would call pitches.
Now let’s return to fake news and think about a now-infamous video of US Representative Nancy Pelosi doctored—specifically, slowed down—to show her speaking with slightly slurred speech. That video stayed up on social media for days and racked up millions of views before being taken down. Fact-checkers labeled the video fake because, among other evidence, they were able to compare it to footage of the same event run at normal speed that showed Pelosi talking normally. But that still isn’t the ground truth—that is the fact-checkers’ judgment that the video is a fake.
To get at the ground truth of news about someone, we’d likely need a first-hand account—but of course we’d be relying on the person to be honest. Even if we could trust the information we received, doing this for every piece of news wouldn’t scale, and would defeat the purpose of the fake-news algorithm, whose entire value proposition is evaluating media on our behalf.
Human-judgment labels aren’t inherently bad, and they are useful in a lot of A.I. applications. To return to our baseball example, many pitchers would likely care less about whether a pitch actually is a strike than whether it will be called a strike. Nevertheless, it’s important to realize that we aren’t building a true fake-news detector, but rather a “what humans perceive to be fake news” detector.
Since the label “fake news” doesn’t exist as we originally conceived of it, we have to alter the functionality of the algorithm by changing the choice of output in the training data. So how can we use human-judgment labels to tag news stories as real or fake in our training data?
We could have individual people tag news articles, but we may not want what the general population thinks is fake news. (We want a fake-news detector precisely because we think the general population can’t easily and accurately determine what is real or fake.)
We could count how many times a news item is mentioned and set a threshold for the number of social media posts needed to call something real (a rule sketched in code below). But then fake-news sites and their producers could game the algorithm by having bots repost the article over and over, so that the volume goes up even though the story isn’t any more accurate.
We could evaluate on the basis of the quality of news sources, but then some people may say that Fox News or MSNBC is a credible source and others may say it’s not.
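To see why the popularity-threshold option above is so easy to game, consider this toy sketch; the cutoff and share counts are hypothetical.

```python
# Hypothetical illustration: a share-count threshold as a labeling rule,
# and how bot reposts can flip the label without changing the story.
SHARE_THRESHOLD = 10_000          # arbitrary cutoff chosen by us

def label_by_popularity(share_count: int) -> str:
    return "real" if share_count >= SHARE_THRESHOLD else "fake"

organic_shares = 2_500
print(label_by_popularity(organic_shares))                 # -> "fake"

bot_reposts = 50_000
print(label_by_popularity(organic_shares + bot_reposts))   # -> "real", same story
```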
This fake-news detector sounds a lot harder than originally pitched, which tends to be the case with A.I. projects: the functionality sounds simple, but when you dig into the datafication process and actually try to build out a blueprint with all the necessary components for an algorithm, it gets a lot more complex.
We can’t build a blanket fake-news detector, but perhaps there’s a way to pivot the functionality of our algorithm. We could get human labelers and segregate them by political preference, age, gender, and other characteristics, and then have separate training sets for each group. Users could select who they are and what news they want. This isn’t really a fake-news detector anymore; it’s more of a personalized news filter.
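One way to picture that pivot in code: rather than a single model, we would keep a separate trainer and predictor for each self-identified group, each trained only on that group’s labels. The sketch below is a minimal illustration; the group names, stories, and labels are placeholders.

```python
# Sketch of a "personalized news filter": one model per user group, each
# trained on that group's own labels. Group names and data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data, segregated by the group that did the labeling.
labeled_by_group = {
    "group_a": (["story one ...", "story two ..."], ["real", "fake"]),
    "group_b": (["story one ...", "story two ..."], ["fake", "real"]),  # same stories, different judgments
}

# Train a separate predictor per group.
models = {}
for group, (stories, labels) in labeled_by_group.items():
    models[group] = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(stories, labels)

# A user picks who they are; their group's model filters their feed.
user_group = "group_a"
print(models[user_group].predict(["story three ..."]))
```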
Another function would be for a news agency—let’s say CNN—to train the algorithm based on what stories that agency considers fake or real, and then build a “CNN fake news detector” to help reporters. CNN would probably want more of a triage functionality than a binary fake/real prediction: the algorithm could predict with a certain probability that a story is fake based on previous CNN stories. Each story the algorithm looks at during the training process could be separately sent to a CNN reporter to evaluate (without the reporter seeing the algorithm’s label), and the algorithm would improve on the basis of the feedback it receives. Once the algorithm is accurate enough to satisfy the news producers, it could be used as a filtering mechanism when the probability of a story being fake is high enough.
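In code, the difference between a hard fake/real call and a triage tool is roughly the difference between asking a model for a label and asking it for a probability that is then compared against a threshold. The sketch below assumes a classifier like the text pipeline sketched earlier, already trained on stories labeled by the news organization’s own fact-checkers; the threshold value and function names are our own illustrative choices.

```python
# Triage sketch: instead of a hard fake/real call, surface a probability and
# only escalate stories above a chosen threshold. Assumes `detector` is a
# trained classifier (e.g., the TF-IDF pipeline sketched above) fit on stories
# labeled by the newsroom's fact-checkers.
ESCALATION_THRESHOLD = 0.80   # tune to the newsroom's comfort level

def triage(detector, story: str) -> dict:
    # predict_proba returns probabilities for each class, in detector.classes_ order.
    probabilities = dict(zip(detector.classes_, detector.predict_proba([story])[0]))
    p_fake = probabilities.get("fake", 0.0)
    return {
        "p_fake": round(p_fake, 3),
        "send_to_reporter": p_fake >= ESCALATION_THRESHOLD,  # reporter verdict then feeds back into training
    }

# Example call, assuming `detector` is the trained pipeline from the earlier sketch:
# print(triage(detector, "Senator caught on tape admitting election was staged"))
```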
So perhaps there is a use for A.I. when it comes to detecting fake news. It’s a great application for an algorithm because it can handle information at scale much better than a human can. But the decisions made in how you train the algorithm and what data you use are not trivial—they dictate the functionality entirely.
The next potential pitfall involves deploying the predictor. It’s important that the distribution of input data we use in training matches the distribution of input data we will use in deployment. That is, if we don’t use social media posts in training data, we can’t expect to get good results if we deploy the algorithm to evaluate social media posts. The algorithm won’t be able to make good predictions when it doesn’t have previous similar data to refer to. Machine learning is “just” statistics.
This is another example of how the choices we make with data up front dictate the functionality down the road. If we want our fake-news detector to find fake stories on social media, we need to use social media in our training data. And we need to go further and be more precise about that: if we want our algorithm to evaluate fake stories on Twitter, we need our inputs to include a significant number of fake and real stories from Twitter. If we want our algorithm to evaluate fake images on Twitter, we similarly need our inputs to include a significant number of real and fake images from Twitter.
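One practical way to check for this kind of mismatch is to score the predictor separately on held-out data from the training source and on data from the deployment source, and compare. A schematic sketch, with placeholder arguments:

```python
# Schematic check for train/deployment mismatch: compare accuracy on held-out
# data from the training source against data from the deployment source.
# `detector` is any trained classifier; the datasets passed in are placeholders.
from sklearn.metrics import accuracy_score

def compare_domains(detector, held_out_articles, held_out_labels, tweets, tweet_labels):
    in_domain = accuracy_score(held_out_labels, detector.predict(held_out_articles))
    out_of_domain = accuracy_score(tweet_labels, detector.predict(tweets))
    print(f"accuracy on news-site articles (training source): {in_domain:.2f}")
    print(f"accuracy on tweets (deployment source):           {out_of_domain:.2f}")
    # A large gap is a warning that the training data don't represent
    # what the algorithm will actually see once deployed.
```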
We’ve established that we can’t build a fake-news detector that makes a determination with objective certainty about every piece of content in every context and for every user. But there’s room for a more nuanced product tailored to the needs of a given set of users. Here’s one version of what it could look like:
Functionality: Assist CNN reporters by predicting whether news stories on Twitter are real or fake, thereby helping them determine whether they should follow up on a story. The algorithm provides a recommendation, which is sent to a reporter for validation.
Input data: Twitter news stories from 2022
Output label (training data labeling process): Existing CNN fact-checkers and reporters label each story as real or fake; this process doesn’t stop until there is a critical mass of both real and fake news stories in the training dataset.
Payoffs (how much the algorithm should punish an incorrect answer): Since this functionality is a triage setup as opposed to a straight prediction model, sending a fake story to a reporter may not be as big a deal as missing an important real story, so we should punish false negatives (calling a news story fake that isn’t) more strongly than false positives.
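In practice, payoffs like these are typically encoded as class or error weights when the model is trained. The sketch below uses scikit-learn’s class_weight option; the 5-to-1 weighting is our own illustrative choice, not a figure from the design above.

```python
# Encoding the payoff structure: punish misclassifying real stories (false
# negatives, calling a real story fake) more heavily than flagging fakes
# unnecessarily. The weights are illustrative choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Errors on stories whose true label is "real" cost 5x as much as errors
# on stories whose true label is "fake".
weighted_detector = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight={"real": 5.0, "fake": 1.0}),
)

# Trained on the fact-checker-labeled 2022 stories described above
# (placeholder variable names), the model now errs on the side of passing
# stories along to a reporter rather than silently dropping real news:
# weighted_detector.fit(labeled_2022_stories, fact_checker_labels)
```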
This sample algorithm still isn’t without challenges. For example, we specified data from 2022 because the content on Twitter changes over time. There’s no guarantee that training on 2022 data will be helpful in 2023. We also don’t know what happens if the reporters and fact-checkers at CNN disagree on whether a story is real or fake—do we get three opinions and take the majority? Do we collect more opinions and average them? There are multiple questions we still need to answer about this design for the algorithm, but at least we now have refined functionality based on what’s actually possible.
The difficulty of creating a general-purpose fake-news detector highlights an important and often-overlooked phenomenon in A.I. development. Perhaps because the phrase artificial intelligence implies a technologically derived superintelligence, and perhaps because in the right settings A.I. applications can far outpace human performance, it’s easy to forget that they are, fundamentally, tools created by humans. They can fail because humans can fail.
In some cases they may fail because of a technical shortcoming, but since most of the attention in the field has been directed at machine learning engineering, in our experience most failures don’t stem from problems with the engineering itself. Failures tend to arise instead from the critical design decisions that sit upstream of coding challenges, like defining the data to use and setting the objective for the algorithm. Those are just as essential to the successful and effective use of A.I. as good code is.
Paying attention to these considerations can show us why ideas that seem perfect for A.I. are often more complicated than they first appear. That doesn’t mean we shouldn’t pursue A.I. applications, but it does mean we need to be vigilant as we build out the solutions. If we carelessly build algorithms without thinking through the full end-to-end set of design decisions, we miss that the key to success in A.I. is the data that drives it. In that case, not only will we fail to solve problems such as fake news; we could even make them worse.
Naila Dharani is principal consultant for Chicago Booth’s Center for Applied Artificial Intelligence.
Jens Ludwig is the Edwin A. and Betty L. Bergman Distinguished Service Professor at the University of Chicago Harris School of Public Policy.
Sendhil Mullainathan is the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth.
Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” Journal of Economic Perspectives, Spring 2017.