
AI’s Random Responses Can Improve Decision-Making

A Q&A with Chicago Booth’s Veronika Ročková about useful variation in the output of LLMs.

AI chatbot responses can be random and varied, and most of us think of that variability as problematic. Are we wrong?

Randomness is something that people are not used to coping with, but we should acknowledge it as an inherent part of our world. Whenever we have a conversation, there’s going to be randomness involved—if you asked me the same question twice, I would answer in a slightly different way each time. Generative intelligence embodies randomness—we have some random input and transform it into some random output. I worry sometimes that people are not aware of the degree of randomness. 

How did you turn that randomness into an area of research? 

In statistics, we like to distinguish between a point prediction—a guess at some unknown quantity—and a distributional prediction, where we acknowledge all possible outcomes and attach probabilities to those outcomes. 

For statistical inference, it’s much better to have distributional predictions because they embody uncertainty. In our research, we decided to leverage the uncertainty in these black-box AI systems. Although the information might be variable, or even biased, we can still use it in the form of a distributional prediction. We acknowledge that there is uncertainty, and we try to assess the degree of it and then incorporate that into an analysis of real data. 
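To make the distinction concrete, here is a minimal sketch (with made-up outcomes and probabilities, not numbers from the research) of how a distributional prediction carries more information than a point prediction:

```python
# Hypothetical example: a distributional prediction assigns a
# probability to each possible outcome, while a point prediction
# keeps only one outcome. All labels and numbers are invented.
distributional = {"condition_A": 0.6, "condition_B": 0.3, "condition_C": 0.1}

# The point prediction collapses this to the single most likely outcome...
point = max(distributional, key=distributional.get)

# ...while the distributional prediction also tells us how much
# uncertainty remains around that single guess.
uncertainty = 1 - distributional[point]
print(point, uncertainty)
```

The point prediction here is "condition_A", but the distribution additionally tells us there is a 40 percent chance that single guess is wrong, which is exactly the information an analysis of uncertainty needs.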

Your work involves Bayesian inference. What is that and how does it help us make decisions under uncertainty?

Bayesian statistical inference hinges on the idea that we start with some prior beliefs, and as we accrue data, we update these beliefs with evidence. This updating is done through a mathematical formula attributed to Thomas Bayes. It boils down to the coherent probabilistic updating of information. Probability is the main instrument of statistics—we measure uncertainty with probability much as we measure temperature with a thermometer.

What we obtain from Bayesian inference is uncertainty quantification. We start with some prior beliefs, then collect data, modify or shift that prior distribution toward evidence in the data, and end up with a posterior distribution that encapsulates the uncertainty after having gathered more information. 
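The updating described above is Bayes' rule. In symbols, with $\theta$ standing for the unknown quantity, the posterior distribution is proportional to the likelihood of the data times the prior:

```latex
% Bayes' rule: prior beliefs about \theta are shifted toward the evidence
% in the data, yielding the posterior distribution.
p(\theta \mid \text{data})
  = \frac{p(\text{data} \mid \theta)\, p(\theta)}{p(\text{data})}
  \propto p(\text{data} \mid \theta)\, p(\theta)
```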

The fact that there’s a range of these outcomes and probabilities attached to them is important, because people typically rely on a single prediction. That is far riskier than using multiple scenarios. We know how likely these scenarios are, and we can incorporate this uncertainty in decision-making through weighted averages. That leads to less risky decisions. 
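As a toy illustration of decision-making through weighted averages (all actions, payoffs, and probabilities here are invented for the example), each candidate action can be scored by its probability-weighted payoff across scenarios:

```python
# Hypothetical decision problem: three scenarios with probabilities
# attached, as a posterior distribution would provide.
scenarios = {"low": 0.2, "medium": 0.5, "high": 0.3}

# Payoff of each candidate action under each scenario (made-up numbers).
payoffs = {
    "expand": {"low": -40, "medium": 30, "high": 90},
    "hold":   {"low": 10,  "medium": 15, "high": 20},
}

def expected_payoff(action):
    """Probability-weighted average payoff over all scenarios."""
    return sum(prob * payoffs[action][s] for s, prob in scenarios.items())

for action in payoffs:
    print(action, expected_payoff(action))
```

Relying on a single scenario (say, "medium") would hide the downside risk of "expand"; the weighted average scores each action against the full range of outcomes.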

You’re using Bayesian inference with AI to see if you can improve diagnoses for skin diseases. How does that work?

We run patient characteristics through AI and get it to predict, multiple times, what conditions they might be connected with. Then we treat those predictions as proxies for true diagnoses and use them to enhance analyses of limited datasets, such as samples of patients with rare diseases. Essentially, we’re using generative AI to create a prior distribution that can be combined with data to improve medical predictions. 
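One way such a pipeline could look, sketched with a conjugate Beta-Binomial model and entirely hypothetical numbers (this illustrates the general idea of turning repeated AI predictions into a prior; it is not the researchers' actual method):

```python
# Sketch, not the paper's method: query a generative model repeatedly
# with the same patient profile, treat the frequency of a diagnosis in
# its answers as a prior, then update with a small real dataset.

# Suppose 14 of 20 repeated AI queries name the condition (hypothetical).
ai_queries, ai_hits = 20, 14

# Build a Beta prior from those counts, discounted by a trust weight
# that reflects that AI output may be variable or biased (hypothetical).
trust = 0.5
alpha_prior = 1 + trust * ai_hits
beta_prior = 1 + trust * (ai_queries - ai_hits)

# Limited real data: 5 similar patients, 2 confirmed cases (hypothetical).
n, k = 5, 2

# Conjugate Beta-Binomial updating gives the posterior in closed form.
alpha_post = alpha_prior + k
beta_post = beta_prior + (n - k)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(round(posterior_mean, 3))
```

The point of the sketch is the division of labor: the AI's repeated, variable answers supply the prior, the small real dataset supplies the evidence, and Bayes' rule combines the two with the uncertainty made explicit.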

Veronika Ročková is the Bruce Lindsay Professor of Econometrics and Statistics in the Wallman Society of Fellows at Chicago Booth.
