Chicago Booth Review Podcast: Should AI Disagree with You?
- February 04, 2026
- CBR Podcast
When you search the internet or use artificial intelligence, do you want it to agree with you, or are you open to having your mind changed? Chicago Booth’s Oleg Urminsky tells us about his research, which suggests that we often search in a narrow way that ends up giving us results that confirm our views. Should search engines instead aim to open us up to opposing opinions?
Oleg Urminsky: One interesting thing with AI chatbots and LLM technology is, we're seeing these new versions coming out. And then one version comes out and people are like, "It's too sycophantic, and it just agrees with whatever I say." And then OpenAI says, "Got it. We'll change it." We have more of an intuition that this is a fluid tool that can be designed in different ways.
Hal Weitzman: When you search the internet or use AI, do you want it to agree with you, or are you open to having your mind changed?
Welcome to the Chicago Booth Review Podcast, where we bring you groundbreaking academic research in a clear and straightforward way. I'm Hal Weitzman.
Today, Chicago Booth's Oleg Urminsky tells us about his research, which suggests that we often search in a narrow way that ends up giving us results that confirm our views. Should search engines instead aim to open us up to opposing opinions?
Oleg Urminsky, welcome to the Chicago Booth Review Podcast.
Oleg Urminsky: Thank you. Thank you for having me.
Hal Weitzman: Well, we're delighted to have you to talk about searching on the internet, something that we all do. We're talking about the narrow search effect, which is also something that we probably all do. Tell us what is the narrow search effect, and what does it mean for those of us who spend all our days either on Google or ChatGPT?
Oleg Urminsky: The idea is very simple that anytime we're searching for information, we come to that question with some preconceptions, beliefs, attitudes. Without necessarily intending to, those get reflected in how we frame the question we ask. If I think that caffeine is good for you, but I want to understand more about the health effects of caffeine, when I go into Google, I'm more likely to frame that in a positive way. If I think caffeine's bad for you and something you should avoid, I'm more likely to search for what are those bad things.
Hal Weitzman: What are the bad, negative effects of-
Oleg Urminsky: Exactly.
Hal Weitzman: ... versus what are the positive effects of? What I love about this research is we hear so much about how the algorithm is evil, and it's telling us to do terrible things, or confirming our views of terrible things, but we don't really think about how we are shaping the responses. We don't tend to think much about our own inputs; we blame the machine rather than thinking about ourselves and how we interact with the machine.
Oleg Urminsky: Right.
Hal Weitzman: Yeah, go ahead. Talk about that.
Oleg Urminsky: Yeah, I agree. It's a general issue in lots of technology that the technology is designed to try to be helpful to us and around our needs, but often it takes a simple view of what those needs are and leaves out some of the needs.
Google was designed around this proposition that they could basically deliver two things. One is to identify, among all the stuff on the internet, the stuff that's the most relevant, and then also do some quality control. If there's something that seems really relevant through keyword matching, but other people aren't linking to it, maybe that's lower-quality, less useful information. Google's proposition from the beginning was, "We're going to show you the most relevant stuff with some vetting for quality." That's super useful, but it leaves out some of how we interact with these tools.
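To make those two ingredients concrete, here is a minimal toy sketch, in Python, of a relevance-plus-quality ranking. The scoring functions, example pages, and link counts are illustrative assumptions, not Google's actual algorithm.

```python
# Toy sketch (not Google's actual algorithm): rank pages by keyword
# relevance, weighted by a crude link-based quality signal.

def relevance(query: str, page_text: str) -> float:
    """Fraction of query words that appear in the page text."""
    words = query.lower().split()
    return sum(w in page_text.lower() for w in words) / len(words)

def quality(inbound_links: int) -> float:
    """Stand-in for link-based vetting: more inbound links, more trust."""
    return 1 + inbound_links ** 0.5

# Invented example pages: one framed around benefits, one around risks.
pages = [
    {"url": "site-a.example/benefits", "text": "caffeine health benefits focus energy", "links": 80},
    {"url": "site-b.example/risks", "text": "caffeine health risks anxiety sleep", "links": 60},
]

query = "health benefits of caffeine"
ranked = sorted(pages,
                key=lambda p: relevance(query, p["text"]) * quality(p["links"]),
                reverse=True)
print([p["url"] for p in ranked])  # the benefits-framed page ranks first
```

In this toy setup, a query framed around "benefits" pulls the benefits-focused page to the top; relevance to the framing dominates unless the quality signal differs dramatically.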
Hal Weitzman: Right, okay. When we think about polarization, for example, which is a human thing, the machines haven't necessarily caused polarization-
Oleg Urminsky: Right.
Hal Weitzman: ... but we're at a stage now where people often disagree about what the basic facts of what's happening are. Even the old journalistic motto of "just stick to the facts" is hard to follow because it's not clear what the facts are. In that example, what's the interaction between user behavior and the search engine's optimization for relevance? How does that exacerbate something like polarization?
Oleg Urminsky: Yeah. The scenario you sketched out where we're even a little unclear on what the facts are, is exactly the situation where this matters the most. Because traditionally, we've thought of the cure for that situation as like, "Okay, well, everyone should have their say on how they view the facts." And then we'll have a society-wide conversation and figure it out together. When that conversation is happening on the internet, there's a question of like, "Okay, but which slice of that conversation are you going to see?"
Again, if I'm framing my request in a way that is in line with my prior beliefs, and then if the search engine or other information technology is really optimizing on that relevance, then it's going to take the slice of that conversation that it sees as most relevant and show it to me, but that can be the slice that's most confirming of what I already believed.
Hal Weitzman: Right.
Oleg Urminsky: Right.
Hal Weitzman: We've all had that experience of saying, "Should I start an afternoon tea shop or whatever?" You put that into ChatGPT and it says, "Yes, that's a great idea."
Oleg Urminsky: Exactly.
Hal Weitzman: Perfect thing to do at your stage of life. Of course, it's a terrible idea. We've heard about much more dangerous, nefarious examples. We've heard about echo chambers, filter bubbles. Is your narrow search effect different from those? Does it contradict them? Is it complementary to them?
Oleg Urminsky: It's very complementary. The idea of the filter bubble is that, if the technology platform is doing algorithmic personalization, as it learns more about you, without you even asking it to, it's going to choose the things that it predicts are most relevant to your interests. That can create this situation where you're only seeing a slice of what's out there.
And then the echo chamber is the same idea when you're choosing who you're getting information from. On social media, who you choose to follow is going to shape the information environment you're in. This is complementary: a third way in which you might only be seeing, or disproportionately be exposed to, a part of what's out there, which is just the framing of the query. The question you ask, the search query on Google, the question you post to ChatGPT, can again narrow the set of information you're being shown.
Hal Weitzman: Okay. In my example where I asked ChatGPT, "Should I give up my job and start a tea shop," is it sensing something about that query that says, "Yes, Hal needs a change in career," or needs a break, or loves tea, or whatever? That would be a great idea for him, but objectively that's a crazy idea. What cue is it taking, do you know?
Oleg Urminsky: Yeah. One of the things that's particularly interesting about LLM-based technology being used as an information source is that these are prediction machines. The LLM doesn't know if it's a good idea or a bad idea to start a tea shop. There's information out there on the internet and other sources that it's been trained on that has relevant parts of the answer, but it's not in the business of making evaluations. What it's in the business of doing is predicting. Predicting what kind of text would be a good fit to this conversation. That's the core LLM technology.
And then there's a layer on top of that, which is the reinforcement learning from human feedback training, which says: there are a bunch of things that seem like a continuation of this conversation that I could say next. Which one do people like? Which one do people give a thumbs-up to? Either because the information, let's say, on the internet that starts off with that question tends to then answer in the positive, or because when the LLM is generating its responses, people prefer the more positive, upbeat responses. It's going to do what is predicted to be a good continuation of the conversation and what people want to hear.
And then if you ask it, "Give me five reasons why I'm going to lose my retirement savings if I start a tea shop," first of all, that's content that appears on totally different webpages. As it's navigating this enormous information space, it's going to find itself in a different corner of the internet where different texts would be predicted to come next. Plus, in this reinforcement learning training, in the situations where it gets posed that question, the negative answer is going to get more positive feedback: people will like the negative information more because that's what they're looking for.
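As a rough illustration of that "prediction plus preference" picture, here is a toy example in Python. The candidate continuations, scores, and preference rule are all invented for the sake of illustration; no real chatbot works from a hard-coded table like this.

```python
# Toy sketch of the two layers described above: a base model proposes
# candidate continuations with "fit to the conversation" scores, and a
# preference layer (a stand-in for the human-feedback training) boosts
# the continuation people are predicted to rate highest.

question = "Should I quit my job and start a tea shop?"

# Made-up candidate continuations and base scores.
candidates = {
    "That could be a wonderful change; here's how to start.": 0.55,
    "Most new food businesses fail; here are five risks to weigh.": 0.50,
}

def preference_bonus(question: str, answer: str) -> float:
    """Illustrative assumption: raters tend to prefer upbeat answers
    that go along with the framing of the question."""
    upbeat = any(w in answer.lower() for w in ("wonderful", "great", "exciting"))
    return 0.2 if question.lower().startswith("should i") and upbeat else 0.0

best = max(candidates, key=lambda a: candidates[a] + preference_bonus(question, a))
print(best)  # the upbeat continuation wins after the preference layer
```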
Hal Weitzman: Right. But if this is a problem, and what your research is highlighting is that it's not a problem with the algorithm per se, it's a problem with how I use it and the terms that I put in, shouldn't I just make my terms broader, and then it will be a more useful tool?
Oleg Urminsky: Yes. We could think about addressing this in two ways. One way is to make people aware, so that they're better calibrated in using these tools. In our studies, we played with that a little bit. There are limitations in how we were able to test it, but it didn't seem to work very well. Instead, what we found a stronger effect of is just changing the behavior of the platform. If you talk about this in terms of Google, you get a lot of reactions like, "Just give the answer. Why are you trying to change Google?" Because Google has, at least to users, seemed pretty constant over time, even though they're constantly making changes in the background, it creates this sense that there's one right answer.
One interesting thing with AI chatbots and LLM technology is we're seeing these new versions coming out. And then one version comes out and people are like, "It's too sycophantic and it just agrees with whatever I say." And then OpenAI says, "Got it. We'll change it." We have more of an intuition that this is a fluid tool that can be designed in different ways. There's a little more openness now to-
Hal Weitzman: Right.
Oleg Urminsky: ... asking the question of, "Oh, okay, there are decisions in how to configure and train these things." We should think through those decisions. That's the spirit of our paper: you can design these tools to give you narrow answers optimizing on relevance, or you can design the tool to give you a broader set of answers and say, "Let me give you a bunch of information." Some of it is optimized on relevance, because most of the time that's what you're going to want, but then we supplement that with additional information, where we're not just choosing the second most relevant thing, but instead trying to span the space of potentially relevant information, to help the user be better oriented about the full range of information out there on this topic.
Hal Weitzman: I'm just wondering though, hearing you speak that, that there's an assumption underlying all of this, which is the internet should be true.
Oleg Urminsky: Exactly.
Hal Weitzman: That Google should be the oracle.
Oleg Urminsky: Right.
Hal Weitzman: It should give us the answer to the meaning of life, and the same with ChatGPT, it should be true. Rather than seeing it behave as it's supposed to behave and give, as you say, a probable answer, we think, oh, it's wrong. It's hallucinating. It's making a mistake. Maybe you could say, "Well, these are tools." If you read the op-ed page of The Wall Street Journal, you would not say it's true. These are opinions; they're based on certain facts, but there are other facts that might lead you to different opinions. Maybe the problem is not with the machine, but with the way that we use the machine. That's all I'm wondering.
Oleg Urminsky: Yeah, yeah. We're in this time of rapid change in terms of AI technology, and we have some conflicting intuitions about what the benefits of AI are. Some of the ways that we talk about AI, particularly when we're talking about AI risk is like, "What if it becomes super smart?" That can create a perception among the general public sometimes that like, "Oh, it can figure out the truth better than we can." That's actually not what it's good at, unless it's specifically designed with particular parameters. If there's a system of rules, it can analyze that very effectively, often better than humans. But in the much more fluid AI chatbot world, what it's really doing is predicting what would be a good continuation to this conversation.
Hal Weitzman: Have you ever wondered what goes on inside a black hole, or why time only moves in one direction, or what is really so weird about quantum mechanics? You should listen to Why This Universe? On this podcast, you'll hear about the strangest and most interesting ideas in physics, broken down by physicists Dan Hooper and Shalma Wegsman. If you want to learn about our universe, from the quantum to the cosmic, you won't want to miss Why This Universe?, part of the University of Chicago Podcast Network.
Oleg Urminsky, in the first half, we talked about the narrow search effect that you found: when people put in narrow inputs, sure enough, they get narrow responses that reflect their narrow questions. You talk about directionally narrow search terms. What makes a search directionally narrow, as opposed to a broad search?
Oleg Urminsky: Absolutely. The directional part of the narrow search is this idea that there's a preconception in some direction. If I ask for, what is the height of the Eiffel Tower, and then Google comes back and just gives me literally that height with no other information, that's a very narrow search result. But that's great because I was asking a very specific question, and it answered exactly that question.
When we think about the health effects of some common food, if I ask questions that are framed in positive or negative terms, and the response I get overemphasizes, relative to the full picture, the positive or negative direction I asked about, that's what we mean by directionally narrow.
Hal Weitzman: Okay. Give us some examples, because in your research, you studied searches, exactly as you say, for information on the health effects of foods and drinks, and you asked participants about their motivations when they were making these searches. What did you find out there? Were they trying to confirm something they already believed, or were they just looking for a quote-unquote "objective response"?
Oleg Urminsky: Yeah. One risk in any research where you're trying to understand the intersection of psychology and technology is understanding people's goals. There are things we could say like, "Okay, we've decided as our priority that this outcome is better, and the technology doesn't lead to that outcome, so that means the technology is bad." That would be a mistake, because we're assuming that what we're calling the better outcome is what the user actually wants. We'd be ascribing what is, if it's a problem at all, a user problem to the technology.
One of the things we looked into in our studies is maybe when people go to Google, they're looking to win an argument with a friend. You think caffeine's good for you. I think caffeine's bad for you. We debated last week, and I want to arm myself with some new facts to go out there and win the argument the next time we talk. If that was the case, then what we're calling this negative consequence, this lack of belief updating that happens from narrow search, is actually not a problem. It is my goal.
We asked people what their goals were. We did choose a set of topics where we thought the goals were more likely to be people just wanting to be fully informed. If you ask questions about hot-button political issues, the answers would be quite different.
Hal Weitzman: Right.
Oleg Urminsky: But the results confirmed that when people were searching for information about the health effects of caffeine on the internet, they weren't looking to have their views reinforced. They were looking to have a better understanding. That gives us some confidence in calling the lack of belief updating due to narrow search a problem.
Hal Weitzman: Just talk through that example. If I type in what are the health effects, I'm keeping it quite bland-
Oleg Urminsky: Right.
Hal Weitzman: ... the health effects of caffeine, what did you find tends to come up? By the way, when you talk about Google, you're talking about traditional Google search, not the AI version of Google, is that right?
Oleg Urminsky: Yes, yeah. This paper actually started out years ago-
Hal Weitzman: Right.
Oleg Urminsky: ... as a search engine paper. Just when we felt that we fully understood what was going on with search engines, the world dramatically changed. We were like, "All right, we need to broaden our scope," but it created interesting opportunities to compare traditional search with these new AI-based information tools. We find quite parallel results with both.
What would happen on Google is you'd get a bunch of webpages that talk about the health benefits of caffeine. If you search for health risks of caffeine, you would get a bunch of different pages. One amusing thing we noticed is that sometimes they came from the same website, just a different page on the same website, because the website is just trying to create whatever information people want to see. But you would get this slanted view. It would mostly be information about health benefits if that's what you searched for. Some of the sites might acknowledge that there are some risks, and vice versa, but the overwhelming perspective you would get is like, "Okay, so it's mostly benefits."
Hal Weitzman: Okay. But that's if I put in, what are the benefits of caffeine?
Oleg Urminsky: Right.
Hal Weitzman: If I put in, what are the risks, I get the negative. But I'm asking it, what if I put in, what are the health effects of caffeine?
Oleg Urminsky: Right. If you put in what are the health effects, you get a mix.
Hal Weitzman: Okay.
Oleg Urminsky: If you put in what are the health benefits and risks, you again get a mix. It's not that the technology is preventing us, in most cases, from getting a balanced view. It's that when the information on the internet evolves in such a way that there are these different pockets of information that don't overlap very much, for example because different organizations or different websites focus on different things, that's when we can get this effect where the framing of the question you ask steers you to one of those pockets of information.
Hal Weitzman: You said that sometimes it's conscious, you and I having a debate about caffeine and I say, "It's bad for you," and you say, "It's good for you," I just want to get some ammunition.
Oleg Urminsky: Right.
Hal Weitzman: But in the case where I genuinely want to understand what the health effects are, do people tend to phrase it as risks or benefits? In other words, I asked you, was that intentional, that narrowness, or was it unintentional?
Oleg Urminsky: We're building on a classic literature in psychology on conversations between people, which finds that people tend to ask questions of other people that reflect their preconceptions and biases. It's not that they always do it, and it may even be that for some topics the most common framing is a balanced search term, but the people who don't ask in a balanced way tend to ask in a way that reflects their preconceptions.
Hal Weitzman: Okay. In which case, we're going to get the responses that we want.
Oleg Urminsky: Exactly.
Hal Weitzman: All right, okay. But you did find, when you, as you say, broadened it out to include ChatGPT, that in one of your studies, you got ChatGPT to acknowledge that there was an opposing viewpoint. But nevertheless, people's beliefs afterwards didn't really reflect that doubt. Tell us about that.
Oleg Urminsky: Yeah. The AI setting is really interesting because it's quite different. If you search on Google, you just get a list of webpages, you choose which ones to click through. Whereas AI in some ways, feels a lot more like a conversation. In particular, ChatGPT gives you, unless you instruct it otherwise, it'll give you a fairly lengthy response to these kinds of queries. It'll give you a couple paragraphs.
What we found, and this is a design choice on the part of OpenAI, is that it'll tend to answer your question in maybe three paragraphs with details. And then at the end say, "But of course, there is the other thing," whatever that is. If you ask about health benefits, it'll say, "These are the things that have been viewed as potential health benefits of caffeine. However, of course, it is possible to have too much caffeine, and there's guidance you can get on how much caffeine is too much caffeine." If you ask about health risks, it'll say, "Oh, there are all these potential problems with caffeine. However, for most people, consumption of a modest amount of caffeine is fine."
It's very much framed as a caveat. That we thought was a really interesting question, which is, is it enough just to acknowledge that there's more there, or is it really that people are processing the relative weight of the evidence they're being exposed to? Our results suggest that adding a little caveat at the end is not enough.
Hal Weitzman: Right.
Oleg Urminsky: It really is. You've told me so much information...
Hal Weitzman: Like the small print on drug labels that say-
Oleg Urminsky: Exactly.
Hal Weitzman: ... it may cause death or whatever.
Oleg Urminsky: Right.
Hal Weitzman: It's not really going to put you off.
Oleg Urminsky: Yeah.
Hal Weitzman: Okay. And then you replicated this again using Bing-
Oleg Urminsky: Right.
Hal Weitzman: ... which not many people use. You added a Bing condition to your studies. This was AI-assisted search, the in-between.
Oleg Urminsky: Exactly.
Hal Weitzman: Tell us what happened there, because I know that Bing sometimes attempts to reformulate these narrow queries into broader ones. What happened?
Oleg Urminsky: Yeah. That was a really interesting thing. Bing was the first search engine that tried to incorporate AI into search, in a way that actually turned out not to be the direction that most AI-assisted search has gone. Now the model most are following is to have an AI summary before the search results.
Hal Weitzman: Right.
Oleg Urminsky: But what Bing was doing was trying to improve the search process and give some summary information, while the focus was still on giving you search results. What was very nice about how they initially set it up is that it was quite transparent about what it was doing. We would type in the directionally slanted search terms we wanted to test, and we saw that in some cases it would reformulate them into a balanced query. This is actually something we've been interested in following up on: another way to think about the role of AI is that it enables, at scale, taking questions that are directionally slanted and creating the more balanced version of the question.
But we would ask, "Why is nuclear energy dangerous?" and Bing would reformulate it into, what are the risks and benefits of nuclear energy, and then show us the results for that. We saw that in those topics where Bing reformulated the query into a balanced search, we no longer saw this narrow search effect.
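Under heavy simplifying assumptions (a hand-picked word list and a single rewrite template), that kind of reformulation might be sketched like this in Python; it is not how Bing actually implements it.

```python
# Minimal sketch: rewrite a directionally slanted question into a
# balanced "risks and benefits" query before searching.

POSITIVE = {"benefits", "advantages", "good", "safe"}
NEGATIVE = {"risks", "dangers", "dangerous", "bad", "harmful"}
FILLER = {"why", "is", "are", "what", "the", "of", "a"}

def balance_query(query: str) -> str:
    words = query.lower().replace("?", "").split()
    if not (POSITIVE & set(words) or NEGATIVE & set(words)):
        return query  # non-directional query; leave it alone
    topic = " ".join(w for w in words if w not in POSITIVE | NEGATIVE | FILLER)
    return f"what are the risks and benefits of {topic}"

print(balance_query("Why is nuclear energy dangerous?"))
# -> what are the risks and benefits of nuclear energy
```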
Hal Weitzman: Okay. You actually modified your own custom-designed search engine-
Oleg Urminsky: Right.
Hal Weitzman: ... to get these broader results, like the ones you're talking about. Tell us about that.
Oleg Urminsky: Yeah. Some of our studies just had people interact with Google or see ChatGPT output. Towards the end of the project, we wanted to go deeper into the guts of how these things work. What we did is, we designed a search engine that used the Google API but, without the user's awareness, changed the search terms for some people into more balanced searches. And then with ChatGPT, we created short-answer chatbots, where we took the user's query but wrapped instructions to ChatGPT around it.
In one case saying, "Give a narrow answer specifically to the question that's being asked." In the other, we said, "Give a broad answer that takes into account other relevant information and different points of view." Importantly, the user didn't know this was happening. This parallels what is already happening, and likely will be happening even more, which is that different AI information tools will be customizing some of this behind the scenes, and then people will choose in the marketplace which ones they find more useful, which ones they like better.
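The chatbot side of that manipulation can be sketched roughly as follows; the wrapper wording is illustrative rather than the researchers' exact instructions, and call_llm is a hypothetical placeholder for whatever chat-completion client is in use.

```python
# Sketch of the prompt-wrapping manipulation: the user's query is embedded
# in either a "narrow" or a "broad" instruction before being sent to the
# model; the user only ever sees their own query.

NARROW = ("Give a short answer that addresses only the specific question "
          "being asked.\n\nQuestion: {query}")
BROAD = ("Give a short answer that also takes into account other relevant "
         "information and different points of view on this topic.\n\n"
         "Question: {query}")

def wrap(query: str, condition: str) -> str:
    """Build the hidden prompt for a given experimental condition."""
    template = NARROW if condition == "narrow" else BROAD
    return template.format(query=query)

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: plug in a real chat-completion client here.
    raise NotImplementedError

user_query = "What are the health benefits of caffeine?"
print(wrap(user_query, "broad"))
```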
Hal Weitzman: I see, so I could get one that's going to challenge me a little bit more.
Oleg Urminsky: Right.
Hal Weitzman: Yeah, interesting.
Oleg Urminsky: Yeah. But what we saw is that, when we customized the AI chatbot so that it gave broader answers, there was more belief updating.
Hal Weitzman: Okay.
Oleg Urminsky: Right.
Hal Weitzman: By belief updating, you mean people were just more open to having their minds changed.
Oleg Urminsky: Exactly.
Hal Weitzman: Or open to more different kinds of information than exactly what they were looking for. Presumably, you think that's a good thing.
Oleg Urminsky: We think so.
Hal Weitzman: Right.
Oleg Urminsky: With an important caveat, which is broadening search or information provision more generally is beneficial to the degree that the information pool that it's drawing from is of equal quality. If someone is searching for like, "Where was Obama born," and broadening search means you're going to start scooping up some conspiracy theory websites about Obama, claiming that Obama was born in Kenya, then we're actually degrading the information that's being shown to people.
Hal Weitzman: Yeah, because you said at the beginning that the search engine works by ranking the-
Oleg Urminsky: Exactly.
Hal Weitzman: ... the, what would you call it? Not the authority, what's the right word? The seriousness of the website, the usefulness.
Oleg Urminsky: Usefulness. Even that has limitations, which is that it's a popularity measure. First of all, it's gameable, and Google has been in this long-running battle with people trying to goose their search results on Google. But if there is low-quality information that is popular, that's being linked to by lots of sites, so that it appears to be repeated on lots of different sites or platforms, in the Google model that can rise up and be promoted. In the AI world, that can happen as well. The AI is trained on Wikipedia and published academic articles, but also on the full range of news sites, some of which are not really news, and Reddit forums, and things like this. If the information ecosystem has lots of inaccurate information, that can get swept in as well.
Hal Weitzman: Okay. One of your suggestions is that search engines should just broaden the results. You said you thought that would happen naturally, that there'd be a market for a more open-minded type of search and a less open-minded type of search, I guess. Do you think they should broaden results as a default, or only when people are specifically looking to have their minds changed?
Oleg Urminsky: Yeah. Certainly, my co-author and I are the last people to be able to say, "Here's the right balance on this continuum between narrow and broad." Our recommendations are twofold. One is, it would be good for society to have a better understanding that this is one of the parameters being chosen in the development of any information technology, something to consciously evaluate, as a consumer of information technology, in line with your goals. The other is that there's lots of research showing, in various ways, that things with a little more friction happen less than things you make easier. Our modest proposal is for information providers to think about ways to not make whichever level of narrowness or breadth they've chosen too sticky a default, but instead to make it easy for people to switch.
That would have two effects. One is just that the fact that it's easy means it's more likely to happen. But also, having the option to choose a broader search is a helpful reminder to the user that, as you said before, there isn't one true set of search results. The set of search results reflects these decisions.
An interesting thing about Google is that they have this button that I don't think many people use: "I'm feeling lucky." What that does is, if you type in a search term and click on it, it automatically takes you to the first result. If you're very confident that what you want is so specific as to be the first website that comes up, you have that flexibility. Our suggestion is that, on the other side, there could be another button that says, "Broaden my results." Maybe people wouldn't use it very often, but it would still be beneficial to have: give me broader results than the way I framed the question might lead you to think I want.
Hal Weitzman: Okay. It's fascinating, so interesting. Thank you very much, Oleg Urminsky-
Oleg Urminsky: Thank you.
Hal Weitzman: ... for coming on the Chicago Booth Review Podcast.
Oleg Urminsky: Thank you very much for having me.
Hal Weitzman: That's it for this episode of the Chicago Booth Review Podcast, part of the University of Chicago Podcast Network. For more research, analysis and insights, visit our website, chicagobooth.edu/review. When you're there, sign up for our weekly newsletter so you never miss the latest in business-focused academic research. This episode was produced by Josh Stunkel. If you enjoyed it, please subscribe and please do leave us a five-star review. Until next time, I'm Hal Weitzman. Thanks for listening.