Exploring the Crossroads of AI, Research, and Society.
- January 29, 2026
- Center for Applied Artificial Intelligence
The room buzzed with the anticipation characteristic of movers and shakers eager to be at the forefront of innovation and societal shifts. What followed was not a polished keynote presentation or a scripted exchange, but a lively, evolving, and nuanced conversation about artificial intelligence as it actually exists today: complex, powerful, and capable of democratizing resources in ways deeply consequential to society.
Convened by the University of Chicago Booth School of Business and the Center for Applied Artificial Intelligence (CAAI), Innovating Intelligence brought together entrepreneurs from Bay Area startups, seasoned business executives, frontier AI researchers, and academic leadership. After an evening with Madhav Rajan, dean and George Shultz Professor of Accounting for Chicago Booth and chief global strategist for the University of Chicago; Sanjog Misra, Charles H. Kellstadt Distinguished Service Professor of Marketing and Applied AI and faculty director for CAAI; and Ted Sumers of Anthropic, alumni and other attendees were left not with consensus, but with a collective reckoning with the future of AI.
The takeaway? Artificial intelligence is scaling faster than our global institutions, and the choices made now will echo for generations.
It takes a special kind of deep, intentional thinker to elucidate the path forward in the face of the unprecedented and the unknown. Alumnus John ‘JG’ Chirapurath, MBA ’01, president of DataPelago and lifelong technologist, was uniquely suited to setting the tone for the evening’s discussion. Chirapurath has spent decades scaling systems inside the world’s largest technology companies; his career arc includes founding a startup at Booth, leading Microsoft Azure, and returning to entrepreneurship with a new venture. He is a member of CAAI’s Advisory Council, a group that supports the center’s strategic initiatives.
For Chirapurath, AI is not a novelty, but the latest chapter in a long story about scale. Building, he reminded the audience, is not the same as scaling. Scale changes everything: how systems behave, how failures compound, and how values—intended or not—are amplified. The technical challenges of AI, he argued, are often solvable.
The harder questions sit elsewhere, in more ambiguous, gray areas: How do we scale intelligence without scaling bias? How do we ensure systems serve people, rather than reorganizing people around systems?
Crucially, he did not claim to have answers. Instead, he positioned the evening as a collective inquiry—one that required perspectives beyond the confines of the business, startup, or academic sectors alone.
Data, Cognition, and Human Oversight
First up: a researcher whose knowledge is rooted deep in industry. Ted Sumers is currently a research scientist at Anthropic, where he works to refine and investigate the inner workings of AI systems. Trained as an engineer, Sumers entered the startup world only to realize that data—not hardware—was the true substrate of modern systems. During his years at Uber working with real-time sensor data and physical-world modeling, he was exposed to a core paradox: the more sophisticated systems become, the more they depend on fragile human assumptions.
That realization pulled him out of industry and into a PhD in cognitive science. To Sumers, humans remain the most powerful example of intelligence. Human intelligence is adaptive, social, and capable of transferring knowledge across domains through language and culture. When large language models (LLMs) began to show emergent capabilities, Sumers found that the tools he had built to study human cognition transferred surprisingly well to AI systems.
At Anthropic, his focus is no longer on making models smarter, but on making them legible across systems and interpreters. “Human oversight is the central bottleneck of AI deployment,” he argued.
“We have limited attention, limited time, and an explosion of AI-mediated actions. The question is not whether AI will act, but how humans can meaningfully supervise, audit, and learn from those actions at scale.” For many technologists, Sumers’ insights offer a reframing. What if progress is no longer defined by performance benchmarks, but by observability, accountability, and trust?
This reframing seemed to resonate with the audience of alumni, whose mixed sentiments about AI mirrored the ongoing global discourse.
Teaching AI Without Teaching Obsolescence
As faculty director, Sanjog Misra is the driving force behind the Center for Applied AI, and he teaches core courses for Booth’s new Applied AI concentration.
From inside the classroom, Misra described a transformation that would have seemed implausible a decade ago: AI is no longer a niche interest within the business school. It is foundational. Beyond its growth in course offerings and Applied AI faculty appointments, there has been a broader philosophical shift at Booth.
Generative AI, Misra emphasized, is only one slice of a much larger landscape. Booth’s curriculum has evolved from statistics to machine learning to large-scale decision systems. And now it includes a honed focus on AI systems that can reason, adapt, and even reflect on their own performance. Such a significant evolution has led to a reckoning with education itself. Traditional assignments—like summaries and take-home analyses familiar to many alumni in the room—lose meaning now that they can be easily reproduced by AI.
But rather than banning these tools, Booth has leaned into them. Students are encouraged to use AI, but are then expected to explain, defend, and critique their outputs in real time. The signal is clear: the scarce skill is no longer producing answers, but asking good questions and demonstrating understanding.
Guided by Madhav Rajan, the conversation traced AI’s impact across regulation, work, and education, repeatedly returning to a central tension: how to navigate a powerful transformation with lasting ramifications and no simple solutions.
On regulation, Sumers argued that concentrating AI decision-making in the hands of a select few challenges democratic values, making regulation a societal necessity rather than an optional strategy. Misra agreed in principle but warned that poorly timed or narrowly designed rules can backfire and limit innovation. Privacy regulations, for example, may protect the privileged while limiting the development of tools capable of providing social benefits for more vulnerable groups. Global competition adds another complication, since unilateral regulation of AI can become a strategic disadvantage. In sum, every intervention carries tradeoffs.
That same complexity complicated the evening’s debate over the future of productivity and jobs. Despite rapid AI adoption, productivity gains remain uneven across sectors and job types. Sumers likened this to the slow impact of electricity, which required decades of organizational redesign and social adoption before delivering its full benefits.
Yet, at the individual level, AI already lowers the barrier to beginning tasks, encourages experimentation in approach, and enables people to perform beyond their formal training. Still, these gains are not evenly distributed. AI acts as a multiplier of judgment and skill for seasoned professionals, reshaping hiring and career ladders and potentially reducing demand for junior roles while increasing the need for oversight and evaluation. How will the next generation of workers inherit the nuance needed to build on the advancements made by current leaders?
Environmental costs added another layer of unease for curious attendees during the Q&A portion of the event. Current AI systems are energy-intensive, with substantial room for efficiency improvements in the processing power required per prompt or request.
Sumers argued that optimizing existing AI models to reduce their energy load may be the source of near-term gains, alongside breakthroughs in renewable energy. Awareness of these costs exists, but aligning them with societal goals remains a central question and challenge.
The conversation closed with humility. Panelists urged Booth alumni, as well as current and prospective students, to focus less on chasing tools and more on understanding, with specificity, the core problems AI may be able to address, and on learning how to ask the right questions. AI, they concluded, represents not just a technical shift but an institutional and moral one, forcing society to decide not only how to build intelligence, but how it will ultimately be used.
This event was jointly hosted by CAAI and the Chicago Booth Alumni Engagement team. Stay up to date on upcoming opportunities to join the conversation around AI innovation at Booth by joining fellow alumni on Chicago Booth Connect.