Chicago Booth Review Podcast: Why Students Lie About Using AI
- August 13, 2025
- CBR Podcast
When students are asked if they use artificial intelligence to do their work, many say they don’t. But when they’re asked if other students use AI, many more say that they do. Should we conclude that they’re not being honest about their own AI use? Chicago Booth’s Alex Imas has conducted research on students and AI. Why is using AI such a taboo? And how should schools and colleges respond to its inevitable creep?
Alex Imas: Other faculty think that AI has no place in the classroom right now, and they tell students there will be harsh consequences for using AI if it is detected. So I think they're just getting a lot of mixed messages, and based on those mixed messages, I think the overall idea is that there should be some shame, or some norm, against using AI to do your work for you.
Hal Weitzman: When students are asked if they use AI to do their work, many say they don't, but when they're asked if other students are using AI, many more say that they do. Should we conclude that they're not being honest about their own AI use?
Welcome to the Chicago Booth Review Podcast, where we bring you groundbreaking academic research in a clear and straightforward way. I'm Hal Weitzman, and today I'm talking with Chicago Booth's Alex Imas about his research on students and AI. Why is it such a taboo, and how should schools and colleges respond to the inevitable creep of AI?
Alex Imas, welcome to the Chicago Booth Review Podcast.
Alex Imas: Thank you, I'm happy to be here.
Hal Weitzman: Now, we are here to talk about AI and students using AI maybe or anyone using AI and saying maybe that they're not using AI. I mean, it's very interesting, but how did you as a researcher become interested in this topic?
Alex Imas: Well, I've read articles and kind of anecdotally heard a lot of people talk about why they shouldn't use AI in instances where they feel bad about using AI, particularly in situations where there's a manager and an employee and the manager thinks you shouldn't be using AI, but the employee is going to be really helped by using AI.
Hal Weitzman: Why? Is it the idea that people feel like they're cheating?
Alex Imas: Yeah, yeah, exactly.
Hal Weitzman: They're going to make themselves obsolete if they use it, if they admit to using it.
Alex Imas: That's part of it, and there's the element of plagiarism that's inherent in AI. I mean, it's trained on a bunch of data that other people have created, and some people think that essentially when you use AI, you're just plagiarizing the entire time. I don't personally share that view, but that is a view that exists.
Hal Weitzman: Okay.
Alex Imas: So I was interested in kind of seeing, are people underreporting AI use because they're ashamed of using-
Hal Weitzman: Is it taboo?
Alex Imas: ... reason, yeah.
Hal Weitzman: So in this research you discuss the challenge of accurately gauging how people are using AI. And so maybe we can start here: what are the current statistics, or are there any reliable statistics, on the use of AI, particularly in education, and how trustworthy are those statistics?
Alex Imas: Right, so one of the things that kind of started me on this path is looking at these statistics, they're kind of all over the place.
Hal Weitzman: Right.
Alex Imas: So there are some statistics that say 40% of people use it, and some statistics that say 80% of people use it. And to have that sort of range, a 40-percentage-point range between estimates of use of a technology, you could say, look, these are completely different populations; maybe one population does use it at 40% and the other at 80%. But it just seemed like there was something unreliable about the measurement itself that was generating those differences.
Hal Weitzman: How are they measuring it?
Alex Imas: Essentially, all of the measurements have been self-reports. They just ask, do you use AI? And some ask, what kind of AI do you use? How do you use it? But the basic question is a self-report: you have to admit to the researcher or to the survey company that you yourself are using AI for a particular purpose. In education the purpose is kind of clear, it's to help you with your homework or, at the extreme, to write an essay for you.
Hal Weitzman: Right, and so this is very context specific, I was thinking, with your research. If you are in education, you're in a class and the teacher says, "Who used AI to write their paper?" I can see that nobody puts their hand up. But if you're in a typical workplace, where most employers nowadays want their people to use AI, maybe you might get over-reporting. So the context must matter, right?
Alex Imas: Absolutely. And that's something we talk about in this paper: the way you can potentially identify when people are ashamed of using AI is to ask the questions two different ways and see a bias in one direction. So for example, take the workplace. Let's say you're encouraged to use AI, but you personally, for whatever reason, don't want to use AI; maybe you don't know how to use AI.
Hal Weitzman: Environmental impact.
Alex Imas: Right, environmental impact, that's one of them. So then the technique of asking how much you use AI and how much everybody else uses AI would go in the opposite direction: you would over-report your own use because you think you should use AI more than you actually do.
Hal Weitzman: Okay, so now you are kind of hinting at the technique you use in this paper, which is very interesting. Maybe before we get to the methodology, which is really fascinating and I think generally very applicable, tell us about what you found, what are the big headline findings here?
Alex Imas: Well, the big headline finding is this: we did a relatively large, representative survey of the student population. So we looked at undergraduates, from freshmen all the way through senior year, and we looked at different majors; we tried to sample across humanities and STEM majors and things like that. And we basically first asked, do you use AI ever? So just the basic question of whether you yourself use AI, that's the standard self-report question.
Hal Weitzman: And just to be clear, so you're asking students in your educational life, do you use AI to do your homework, et cetera?
Alex Imas: Exactly, yeah. We don't specify exactly for what; we just asked a very general question. But the idea is, yes, this would be for purposes of education. And then we ask them what type of system they use, if they use it at all. We also ask a question of to what extent, so do you use it one day a week, zero days a week, seven days a week? And then we also ask: to what extent do your peers use AI? So take a representative friend, to what extent are they going to be using AI?
So the exact same questions about yourself versus other people, and we see a huge difference. For people themselves, about 60% admit to using AI, and they tell us a lot of information about what models they use. But when we're asking about other people, that number jumps to almost 90%. And that's both the yes-or-no question and the extent: for example, the median amount that people say other people use it is four to five days a week, whereas for themselves, zero to one day a week is kind of the largest category that we [inaudible 00:06:32] in AI.
Hal Weitzman: Okay, so now let's dig into the methodology. You ask: what is your use, and what do you reckon is the use of everyone else in the same environment as you?
Alex Imas: Right, exactly. And this is a classic indirect questioning technique that's been used to study situations where people might not want to admit doing something. You ask the individual, to what extent do you use it? And maybe people underreport because they feel bad about saying yes. But if you ask them how much their friends use it or engage in some activity, they don't feel bad about saying their friends do. And if everybody kind of says, oh yeah, I don't do it, but all of my friends do, then you get a sense that everybody's using it. So you can compare these sorts of percentages and get a more accurate number on the extent to which a behavior is being engaged in, or, in this case, a technology is being used.
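To make the indirect-questioning comparison concrete, here is a minimal Python sketch. The respondents, field names, and rates are purely illustrative assumptions and are not taken from the survey described here.

```python
# Minimal sketch of the indirect-questioning comparison (hypothetical data):
# each respondent answers about their own AI use and a typical peer's use,
# and the two rates are compared.

responses = [
    {"self_use": True,  "peer_use": True},
    {"self_use": False, "peer_use": True},
    {"self_use": True,  "peer_use": True},
    {"self_use": False, "peer_use": True},
    {"self_use": False, "peer_use": False},
]

self_rate = sum(r["self_use"] for r in responses) / len(responses)
peer_rate = sum(r["peer_use"] for r in responses) / len(responses)

# When there is a norm against admitting use, peer reports tend to sit closer
# to true prevalence, so a large positive gap suggests underreporting.
print(f"Self-reported use: {self_rate:.0%}")
print(f"Peer-reported use: {peer_rate:.0%}")
print(f"Gap: {peer_rate - self_rate:+.0%}")
```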
Hal Weitzman: So if 60% of people will say, yes, I use AI, and then you ask them about their friends and they say 90% are using AI, how do you estimate what the actual figure is?
Alex Imas: So, the idea is that it's going to be closer to 90%. Now, is it 90%? We can't say with a hundred percent confidence that it's 90%, obviously. We don't have access to the ground truth. But what we do in the paper is take a separate sample of people and give them this data: look, 60% of people self-report using AI, and 90% say their friends use it. Why do you think this gap exists? And we ask direct questions, and we also ask them to just fill in the blank, to freeform tell us why this gap exists.
And so we use topic modeling to analyze the text data, and then we also look at the direct reports, and both sets of data say the same thing: this gap exists because people don't want to admit to using AI. So what that says is the correct answer is closer to 90%. Now, again, is it 90%? Unless we have people's actual use of AI on their computers, we can't say for sure. But certainly it seems that it's much closer to 90%.
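As an illustration of the kind of text analysis described here, below is a hedged sketch using one common topic-modeling approach, latent Dirichlet allocation via scikit-learn. The free-text answers are invented, and the paper's actual pipeline may differ.

```python
# Hedged sketch: LDA topic modeling over invented free-text explanations of the
# self-report vs. peer-report gap. Not the paper's actual data or pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

answers = [
    "people are embarrassed to admit they use AI for homework",
    "students fear being accused of cheating so they deny it",
    "everyone uses it but nobody wants to look like they plagiarize",
    "professors ban AI so admitting you use it feels risky",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(answers)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words in each inferred topic, e.g. a shame/embarrassment cluster
# versus a rules/consequences cluster.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_words)}")
```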
Hal Weitzman: But if I asked you to give me an estimate, it would be?
Alex Imas: I would say it's 90%, given my own experience as an educator. Sometimes you go from the front of the class to the back of the class for whatever reason, and on your way back down you see what people are doing on their laptops.
Hal Weitzman: Are you talking about teaching classes at the University of Chicago?
Alex Imas: Yeah, while teaching at the University of Chicago. And I mean, ChatGPT is pulled up literally in class on all the computers.
Hal Weitzman: But in the old days it was Amazon, they used to be sitting there shopping, now they're sitting there... At least they're learning.
Alex Imas: Well, in my day, it was AIM Instant Messenger, they were chatting within the class.
Hal Weitzman: That's right, you'd be chatting. That's right. So let's get to the question of why this is happening, you talk about social desirability bias. What do you mean by social desirability bias?
Alex Imas: Yeah, social desirability bias is a bias that's identified in survey research where people might underreport engaging in some activity. Students might say they don't drink and drive, or they don't binge drink, or something like that, because they don't want to appear to the researcher, or even to themselves, like somebody who engages in that activity.
So this tends to be an activity where there's some sort of norm against it. There could be a moral norm, where I don't want to admit I'm a bad person, or there could be a norm with real legal consequences, where if I admit to it, somebody might come after me. With AI, it's kind of both in some ways: a lot of classes have AI policies saying you shouldn't be using AI, and there's also some sort of social norm, you don't want to seem like you're plagiarizing all your work, you want to seem like you're doing the work and working hard and things like that.
And so when you have these sorts of situations, there's a social desirability bias to shade your answer in a particular direction, to underreport, and your survey self-reports will be biased. So there are techniques to get around this. One technique, other than the one we use, says, look, there's going to be some randomization device, and you can hide your own answer behind that randomization device; that gets people to tell the truth because their individual answer won't be attributable to them. This doesn't really work super well when the social desirability bias is internal, where you don't want to admit to yourself that you engage in some activity. It works better when there's some sort of social sanction against it.
Here, the indirect questioning technique was designed to be a general-purpose way of getting around social desirability bias: in most situations you can ask two separate questions, how much do you use it and how much do you think others use it, and identify the direction of the gap and, quantitatively, how big the gap could be.
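For the curious, here is a minimal sketch of one standard randomization-device design mentioned above, the forced-response variant of randomized response. The coin probability and usage rate are illustrative assumptions rather than the design of any particular study.

```python
# Hedged sketch of a forced-response randomized-response design: each respondent
# privately flips a fair coin; heads = answer truthfully, tails = answer "yes"
# regardless. No single answer reveals the respondent's behavior, but the true
# rate can be recovered in aggregate. Numbers are illustrative.
import random

def simulate_survey(true_rate: float, n: int, seed: int = 0) -> float:
    """Return the prevalence estimate recovered from randomized responses."""
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        truly_uses = rng.random() < true_rate
        heads = rng.random() < 0.5
        answer = truly_uses if heads else True  # tails forces a "yes"
        yes_count += answer
    p_yes = yes_count / n
    # P(yes) = 0.5 * true_rate + 0.5, so invert the relationship:
    return 2 * p_yes - 1

print(simulate_survey(true_rate=0.9, n=100_000))  # prints roughly 0.9
```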
Hal Weitzman: Okay, so social desirability bias exists everywhere. Is it something particularly common or relevant in educational settings?
Alex Imas: No, no, no, not at all. I mean, social desirability bias is an issue in almost all sensitive survey research. So it's an issue when you're trying to see the prevalence of a disease, the prevalence of certain behaviors like sexual proclivity or something like that. So social desirability bias is just a general issue for survey research where you're trying to get an accurate estimate of something and people have some sort of incentive through this norm to under report or over report for whatever reason.
Hal Weitzman: Are you the first researcher or group of researchers to have taken AI in particular and used these techniques to reveal its actual use?
Alex Imas: Yes.
Hal Weitzman: And why is that? Because it seems pretty obvious when you explain it that if you ask people, are you using AI in certain contexts, you won't get an accurate answer.
Alex Imas: I mean, AI is like two years old.
Hal Weitzman: Well, that's what I was going to ask you. So tell us a bit more about when you collected these data.
Alex Imas: So we basically fielded the survey this winter, so everything came in very quickly and very recently. We fielded the survey, wrote up the results right away, and then posted it online for other people to use and read right away. So everything came together very quickly; this hasn't even been published yet. We just want the world to know what we found, which I think is super exciting, and I want other people to consume it, to learn from it, and to potentially replicate it in other contexts.
Hal Weitzman: If you're enjoying this podcast, there's another University of Chicago Podcast Network show that you should check out. It's called Entitled, and it's about human rights, co-hosted by lawyers and law professors Claudia Flores and Tom Ginsburg. Entitled explores the stories around why rights matter and what's the matter with rights.
Alex Imas, in the first half we talked about your fascinating research on social desirability bias and the use of AI in educational settings. And you talk in this paper about AI shaming, why people feel like they don't want to admit to using AI in an educational setting. You said you don't actually agree with that, you think it's not helpful. But why do you think AI use is particularly sensitive for students?
Alex Imas: I think right now they're just getting so many mixed messages. Some faculty that I know embrace AI; they want students to use AI, they think this is the future, so students need to learn how to use AI in order to be effective participants in the economy, which to me makes a lot of sense. Other faculty think that AI has no place in the classroom right now, and they discourage it on every part of the syllabus; they tell students there will be harsh consequences for using AI if it is detected. So I think they're just getting a lot of mixed messages, and based on those mixed messages, I think the overall idea is that there should be some shame, or some norm, against using AI to do your work for you.
Now, "to do your work for you" is a statement that has some additional content. You could use AI to help you study, to just generally be kind of a study partner: you bounce ideas off of it, and when you're confused about something, the AI explains it to you, so you almost have a tutor. That's one way to use AI, and I think most people would agree that that is a very appropriate and super fruitful use of AI in education.
But there's another use of AI, which is to say, here's an essay I have to write, here's the prompt, I put in the prompt, I get the essay, and I turn in whatever it has written. And most people would agree that that latter use is not a very good use if the point of being in the class is learning how to write the essay. There's also a spectrum in between those sorts of uses, and I think students don't have a good idea of what AI should be used for and is appropriate for and should be encouraged, versus the type of AI use that you should try to avoid if you're actually trying to get an education at the University of Chicago.
Hal Weitzman: Okay. So that raises the whole question of policy: the policy of institutions, how they think about designing and delivering education, and the policy of policymakers who have to regulate education and decide what education is and how institutions support it. The headline is kind of fun, as we talked about in the first half, but you and I are in the education business, and it raises a lot of interesting issues about how we are going to integrate this tool into our delivery of education. I don't know if you have any thoughts about that, relating to this and the particular AI shaming that you expose in this paper.
Alex Imas: So I think AI shaming is probably a short-lived to medium-lived phenomenon. As I said before, we're like two years into the AI boom. We don't know where we're going to be in two years; we don't know where we're going to be in one year, or six months, it's moving so fast. So I think we're going to get to an equilibrium where AI gets incorporated in a specific way into the education pipeline and there's not so much shaming or norms against using AI. It's kind of like a calculator: there's no shame in using calculators, because there are specific places where you're supposed to use one, students use it, and everything's great. And when students shouldn't use it, the curriculum is designed so that they can't use it.
So I think a similar thing will develop with AI, where classes will shift so that certain parts of the class where the professor doesn't want AI to be used, for whatever reason, are designed in a way where AI can't be used, or isn't effective at all, or can be detected very easily. Whereas in other cases where AI should be used, such as in the tutoring sphere, bouncing ideas off of it and things like that, then AI should be encouraged, and there could be some record of how the AI is used. So my guess is this sort of thing is going to be short-lived. But even so, we should still have accurate measures of how much it's used, which is kind of the goal of the paper.
Hal Weitzman: Okay, so you think it will all settle down and does there need to be institutional level guidance for students and for instructors about how to use AI?
Alex Imas: I think there should be some guidance; I think there's just a lot of confusion at every level about how students are using AI and, in the case of students, about how they should or could be using AI. This is, again, a very new technology. You go to the OpenAI website, they have six different models. Each of those models, if you work with the technology, is good at different things; they're all useful models. But you need to invest the time to know what to use AI for and what models to use, what they're appropriate for, what they're not appropriate for. It would help to have some institutional guidance for both the faculty and the students as far as how AI use should be encouraged, when it should be discouraged, and how you can most effectively use AI for education.
Hal Weitzman: I mean, you use the example of the calculator, I'm wondering if when the internet... I guess I was a student when the internet came into popular use. I'm trying to remember, was there a shame about using the internet rather than books?
Alex Imas: There was definitely a shame about Wikipedia.
Hal Weitzman: Would there have been a shame in the old days about using the printing press rather than-
Alex Imas: Memorizing everything?
Hal Weitzman: ... calligraphy. Well, calligraphy like people who actually used to write out the Bible or whatever. Was there a shame about having a Bible that was printed rather than written?
Alex Imas: Well, when books became very prevalent, you saw these opinion pieces saying books will rot your brain, you shouldn't read too much because it's going to rot your brain.
Hal Weitzman: Right.
Alex Imas: So I think there is a sense in which, with AI, this could be a moral panic.
Hal Weitzman: They said the same about novels.
Alex Imas: Yes.
Hal Weitzman: Novels were also going to rot your brain.
Alex Imas: Exactly, so I think there is some sense in which this is true. But again, there's so much uncertainty about where things are going. It could be the case that this technology just kind of runs away from us completely and we are actually in a worse place, I don't know. To me, that seems like an unlikely scenario compared to one where companies develop tools where the AI model is trained to be a tutor specifically for that tool, students want to use this tutor because it's actually helpful, and then that is the way AI is used in education. That is a potential outcome in this space.
Hal Weitzman: Okay. Now I want to ask, you talked a little bit about your teaching and how you're in the classroom and see everyone using it. What are they using it for? They're just typing in what you're saying and-
Alex Imas: Oh, I have no idea what they're using it for.
Hal Weitzman: They're probably shopping.
Alex Imas: Yeah, they're probably shopping.
Hal Weitzman: They're using it to look at what they should buy on Amazon. But have you changed your own class because of students using AI?
Alex Imas: Oh, yeah. Very, very, very significantly.
Hal Weitzman: How so?
Alex Imas: So my class used to be designed, I have a-
Hal Weitzman: [inaudible 00:20:48], yeah.
Alex Imas: I teach a behavioral economics course, so it's a paper-based class: people read papers and they turn in deliverables. So they turned in idea briefs that they write based on the papers, they turn in-
Hal Weitzman: And you're talking about academic papers, they read academic papers-
Alex Imas: Academic papers, that's right.
Hal Weitzman: ... and then they have to give a recommendation to policy makers or whatever.
Alex Imas: So that's actually exactly one of the assignments. Another assignment is to come up with your own idea for a research topic, a three-pager, based on the academic papers that you've read. I had a take-home midterm; I wanted students to have enough time to spend on the midterm in order to write cohesive essays and things like that. And then the culmination of the whole class was a final paper, a twenty-pager on a particular topic that they actually thought about and potentially even collected data on. So almost the entire class was take-home. I think in 2021 that was the case; in 2022, that was still the case.
But it's impossible for me to run that class in the current environment. So I no longer have any kind of idea briefs or policy papers that they would turn in to me, just because I don't want to be in a position where I'm reading 140 essays that say "I delve into..." Delve is a word that the AI uses a lot-
Hal Weitzman: Is that right, is it delve? Okay.
Alex Imas: So I don't want to be in that position, so those assignments are out. I basically divided the class into two components. One, I want to know that when a student exits my class, they have the information, like the basic models of behavioral economics, in their heads. For those assessments I need to have in-class midterms, which is what the class has become: in the middle of the class we have an in-class midterm that lasts a certain amount of time, and that allows me to assess how much they've internalized. The final project is still there; there's still a paper component, but there's also a presentation component. Students have to write the paper and record a presentation of themselves presenting the results. Now, they could still write the paper with AI and things like that, I can't do anything about that. But from the presentation, I at least know they've done the work of internalizing the material.
Hal Weitzman: They've internalized it, right, exactly.
Alex Imas: Yeah. So that's kind of how the class has changed. The material hasn't changed, but the way that I'm assessing things has changed substantially.
Hal Weitzman: Right. So you referred to, it was delve, wasn't it? Yeah, delve is your personal AI detection tip. But you have a paper, don't you? You've got other research surveying different kinds of AI detection tools. So is it possible to detect AI?
Alex Imas: Yeah, so one thing that has been keeping up with the models is the detectors. I think if you tried to use a detector maybe six months ago, they were pretty bad; you'd get very high false positive rates and pretty high false negative rates, so they were almost useless. The current set of detectors is very, very, very good. We did a big audit across types of writing: we have essays, we have ads, we have reviews of products, we have tweets, where the text is either written by people, which we know, or generated by AI based on the work that we know is written by people. So we have text that's very short, under 50 words, all the way up to 2,000 or 4,000 words.
And you might be worried that the detectors are worse on some dimensions and better on others, but they're actually quite good across the board. Some detectors are better than others, which you'll see in the paper. But in general, if you use a good detector, the best detector, you basically detect AI 99.9% of the time. And the thing that you should be really worried about, accusing somebody who's not using AI of using AI, that rate is super low, basically negligible.
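To spell out the two error rates under discussion, here is a small illustrative sketch; the labels and detector flags below are made up and are not results from the audit.

```python
# Hedged sketch of the two detector error rates: the false-positive rate
# (flagging human-written text as AI) and the false-negative rate (missing
# AI-generated text). The data are invented stand-ins.

def error_rates(true_labels, flagged):
    """true_labels: True if a text is AI-generated; flagged: detector said 'AI'."""
    human_flags = [f for t, f in zip(true_labels, flagged) if not t]
    ai_flags = [f for t, f in zip(true_labels, flagged) if t]
    false_positive_rate = sum(human_flags) / len(human_flags)      # humans wrongly accused
    false_negative_rate = sum(not f for f in ai_flags) / len(ai_flags)  # AI texts missed
    return false_positive_rate, false_negative_rate

# Illustrative: 4 human texts, 4 AI texts; one AI text missed, no false accusations.
truth = [False, False, False, False, True, True, True, True]
flags = [False, False, False, False, True, True, True, False]
fpr, fnr = error_rates(truth, flags)
print(f"False positive rate: {fpr:.1%}")  # the rate Imas says should be near zero
print(f"False negative rate: {fnr:.1%}")
```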
Hal Weitzman: Okay, so we can use detectors. The bottom line is a lot of students are lying about using AI, and we will know because the detectors are very good.
Alex Imas: Yes, my colleague teaches a class, and most of her assignments were still kind of take-home. So she took these assignments, I gave her one of these detectors, she ran them through, and 40% of them were fully AI-generated.
Hal Weitzman: Wow, they hadn't even changed anything. Because you and I were talking before we started recording this about what is human content and what is AI-generated content, and what constitutes each, which is a whole other podcast. But you are saying that 40% of people didn't even change a comma?
Alex Imas: Yeah, they just put in the prompt and this thing generated the essay and then they turned in the essay and that was it, 40%. Then at that point, what are you evaluating of those students? It's really hard to get any signal if that's how they're doing-
Hal Weitzman: So if that's you and you're listening to this, you're busted, don't do it again.
Alex Imas: Yeah, this is your only warning, this podcast.
Hal Weitzman: Okay. Alex Imas, thank you very much for coming on the Chicago Booth Review Podcast.
Alex Imas: Thank you so much for having me.
Hal Weitzman: That's it for this episode of the Chicago Booth Review Podcast, part of the University of Chicago Podcast Network. For more research, analysis, and insights, visit our website at chicagobooth.edu/review. When you're there, sign up for our weekly newsletter so you never miss the latest in business-focused academic research.
This episode was produced by Josh Stunkel. If you enjoyed it, please subscribe, and please do leave us a five-star review. Until next time, I'm Hal Weitzman. Thanks for listening.