AI’s Expanding Role: Progress, Pitfalls, and the Future of Human Decision-Making

Sanjog Misra and Alex Imas presenting at the event

At a Booth Alumni Club event, Professors Alex Imas and Sanjog Misra explored AI’s growing influence on workplaces, education, and decision-making, highlighting both its potential and its pitfalls.

From bias in hiring algorithms to AI-generated academic research, the discussion unpacked how this transformative technology is reshaping human behavior and challenging traditional notions of expertise.

At a Booth Alumni Club event hosted by the Center for Applied AI on Tuesday, February 4, Alex Imas, Professor of Behavioral Science and Economics and Vasilou Faculty Fellow, and Sanjog Misra, Charles H. Kellstadt Professor of Marketing and Applied AI, explored the evolving role of artificial intelligence, with a focus on its integration into workplaces, education, and decision-making systems. The event, held at the Gleacher Center, Booth's downtown location, brought together alumni, students, and faculty for a discussion on AI's broader implications: how it is shaping human behavior, reinforcing existing biases, and challenging traditional ways of thinking.

Imas opened the discussion by illustrating AI's unprecedented growth. Adoption rates have far outpaced those of past technological shifts, and AI is now influencing fields beyond traditional automation. He demonstrated OpenAI's Deep Research, a tool that generates detailed reports and academic papers, raising questions about AI's ability not only to assist professionals but also to produce original insights. While this represents a leap in efficiency, he cautioned that it also raises concerns about accuracy, over-reliance, and the evolving definition of expertise.

Misra, drawing from his role as faculty director of the Center for Applied AI, framed AI's growth as an opportunity rather than a threat. He compared the current moment to past technological transitions, such as the rise of electricity or the internet, where initial disruptions ultimately led to new ways of working. AI, he argued, will likely follow the same path, shifting how humans engage with knowledge and decision-making rather than replacing them outright. "We're still in the early stages," he noted, "and much of what AI will enable remains unknown."

A central theme of the discussion was AI’s effect on learning and cognitive development. Imas presented new data showing that a significant number of students use AI tools for academic work, sometimes without fully acknowledging their dependence on them. This raises important questions about the long-term impact on learning—will AI enhance problem-solving abilities, or will it erode critical thinking by automating too much of the cognitive process? He and his colleagues are launching a study to measure AI’s effects on knowledge retention and skill development over time.

The conversation then shifted to AI’s influence on human decision-making, particularly in hiring and evaluation. Imas presented his research on AI-generated recommendation letters, which revealed that AI models tend to describe male and female candidates differently even when given identical resumes. The language used for men emphasized leadership and technical skills, while descriptions of women focused more on collaboration and supportiveness. This subtle yet systematic difference resulted in hiring managers assigning lower salaries and fewer job opportunities to female candidates.

Misra saw this as both a challenge and a potential solution. While AI can reflect human biases, he suggested that it also offers a new level of transparency. By analyzing AI’s outputs, researchers and companies can identify bias patterns that would otherwise go unnoticed. “AI gives us a way to measure bias at scale,” he explained. “That’s something we’ve never really had before.” If properly designed, AI could be used to detect and correct these disparities in hiring, lending more consistency to decision-making processes.

A broader debate emerged around the potential risks of AI-generated misinformation. Audience members questioned whether AI systems could be manipulated, either by corporate interests or through subtle shifts in how information is processed and distributed. Imas pointed out that because AI models are trained on vast amounts of existing data, they can be influenced in ways that are difficult to track. Misra, on the other hand, argued that AI is no more vulnerable to manipulation than other information systems; rather, it forces us to confront long-standing issues of bias and misinformation in human decision-making.

The session concluded with an acknowledgment that AI’s role in society is still taking shape. While it presents clear risks, particularly in terms of bias and over-reliance, it also offers new ways to improve transparency and efficiency in decision-making. “AI is a tool,” Misra said. “How we use it will determine whether it helps us or harms us.” The discussion left attendees with a more nuanced understanding of AI—not just as a technology, but as a force reshaping how people interact with information, work, and each other.

Miss the conversation? Watch the recording here.
