As artificial intelligence and related technologies improve, people have found all kinds of ways they can complement, or replace, human labor in many different fields. But can robots and algorithms take over for humans in providing religious guidance? Chicago Booth’s Joshua Conrad Jackson and his coauthors explored how people responded to sermons prepared or delivered by nonhuman means and found that such messages generally decrease religious commitment. Jackson says that perceptions of authenticity may be what give humans an edge in this field—but that changing attitudes about algorithms could eventually dull that edge.
(bright music)
Narrator: As we enter the era of AI and examine the effects it will have on society, one area is often overlooked: religion. What challenges might AI pose to the credibility of religious institutions if they use it, or robots, in place of preachers, pastors, and imams?
Joshua Jackson: So the story of how I got interested in this—it’s kind of a serendipitous story. For a long time, I had been interested in religion and how religion had evolved, why people might stay committed to their religion, why people might deconvert. And one big part of this story has to do with religious elites, so the people that we listen to services from, the people who we trust to communicate religious passages to us. And a big finding in past research has been that credibility really matters for religious elites. You have to think about these people like priests, imams who are communicating beliefs that empirically are quite hard to believe. And so you have to think that they believe these things authentically, and often it really helps when they can sacrifice on behalf of the beliefs that they’re purporting to have.
Narrator: That’s Chicago Booth’s Joshua Jackson. He and his coauthors examined the role of religious credibility and what would happen if AI or robots delivered sermons to followers. What impact would this have on those followers’ religious commitment? The study was conducted in three settings: the Buddhist Kodaiji Temple in Japan and a Taoist temple in Singapore, where attendees listened to a sermon from either a person or a humanoid robot; and an online study in which American Christians read sermons they were told were composed by either a human or ChatGPT. After exposure to the sermons, participants in all three groups were asked about their level of religious commitment. The researchers used various measures, such as donations, likelihood of spreading the message of the sermon, and strengthened religious identity, to determine what impact the use of robots or AI had on religious commitment.
Joshua Jackson: And what we find is that across all of our measures of religious commitment, people gave less and people were less committed after watching a robot preacher versus a human. For example, in Kodaiji Temple, people donated less to the temple after they had seen MindAR versus a human monk. We found the same thing in the Taoist Temple, and Christians expressed lower commitment to their Christian identity after reading the sermon that they thought had been composed by ChatGPT.
Narrator: These findings suggest that AI or robots in religious institutions fall short compared with human involvement because of the perception that the robots cannot authentically believe in or practice what they’re preaching. The researchers suggest that these findings could have far-reaching implications for other areas of society besides religion, where credibility is key.
Joshua Jackson: It’s important in teaching, where you need a teacher who you think authentically holds the beliefs that they’re teaching their students. It’s important in politics, where you have a politician who’s trying to convince you to adopt some sort of moral or political sentiment that you may not wanna agree with, and you would be more likely to adopt it if you felt that they were a credible figure, if they were credibly expressing a belief they held authentically. We think that the domains that I just listed, those domains where credibility is really important, are gonna be domains that AI has a very difficult time infiltrating because in some domains we just need a human voice. And it’s really a question of trust. In the domains where we need to trust the person who’s telling us to do something, that’s gonna be a place where robots have a difficult time replacing humans.
I’ll end with the caveat, though, that attitudes about AI are changing really rapidly. A few years ago, there was a paper published by a colleague of mine here at Booth displaying a phenomenon called algorithmic aversion. People didn’t trust algorithms. Now there’s a phenomenon called algorithm appreciation, which describes how people actually trust algorithms in most cases more than proven experts. And that just shows how quickly people’s attitudes about AI are becoming more favorable. And so while I think that AI is having a difficult time making inroads in credibility-enhanced professions right now, that could change in the next 10 years.