The Hidden Cost of Choice in AI Hiring

AI in the Labor Market

You are about to step into a job interview and are given a choice: speak to a human or an AI agent. Which do you pick? And what does that choice reveal about you?

About the Research

Based on the research paper Choice as Signal: Designing AI Adoption in Labor Market Screening by Brian Jabarian, University of Chicago; and Pëllumb Reshidi, Florida State University.

When an applicant arrives for a job interview, the first step is no longer guaranteed to be the standard introductory handshake. As the hiring landscape shifts to accommodate the rapid development and deployment of AI, some job seekers now face a choice between a human interviewer and an AI agent. Which intermediary an applicant chooses becomes valuable information in itself: a previously unobservable form of self-selection, as applicants strategically pick the evaluation method under which they expect to appear stronger.

A firm infers overall productivity from the wealth of information generated during the hiring process, analyzing credentials, tests, and interviews. An applicant's choice of human or bot becomes an additional signal of strengths and weaknesses that an employer can strategically factor into hiring decisions. This information can benefit firms, as more comprehensive screening helps identify strong matches, resulting in better hires and lower turnover rates. However, it can also deepen inequality.

Choice benefits firms and highly skilled applicants but disadvantages less experienced applicants, compared with traditional screening in which applicants are simply assigned an interviewer. Giving applicants the choice between human- and AI-led interviews alters how screening technologies shape who gets hired and where talent goes, raising important questions about the unintended effects of AI adoption in the workplace. By treating screener assignment as a design problem, the paper examines how the gains are distributed across firms and applicants.

By measuring welfare for both firms (hiring quality and employment retention) and workers (employment rates), the researchers draw conclusions about one of the many ways AI is reshaping the labor market.

Testing the Invisible Signal

Leveraging a partnership Jabarian has developed with PSG Global Solutions, the authors conducted a large-scale experiment involving 70,000 applicants moving through the hiring process for real customer service positions.

Applicants were randomly assigned to one of three groups: i) a hiring process with only a human interviewer, ii) a hiring process with only an AI voice agent, or iii) the choice of proceeding with either a human or an AI interviewer. The two no-choice arms allow comparisons between hiring processes with and without applicant choice, while the choice arm reveals whether more confident or higher-ability applicants systematically select one screening method over the other.

Using data on applicant preference (i.e., AI versus human-led interviews), human-made hiring decisions, and post-hiring employee retention rates, the researchers developed a model of applicant behavior evaluating three scenarios: i) a benchmark case in which firms ignore screener choice, ii) a case in which firms incorporate choice without applicants knowing it is being considered, and iii) a full strategic-equilibrium case in which both firms and applicants understand that the choice of an AI- or human-led interview inherently conveys information.

Importantly, the model shows that allowing for choice does not have a single uniform effect on firm or applicant welfare. In some cases, choice benefits both parties, while in others, it benefits one party at the expense of the other. These effects depend largely on two key factors reflected in the aforementioned scenarios: whether firms treat applicant choice as a factor during their evaluations, and whether applicants understand that their choice may impact how they are assessed.
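The contrast between the first two scenarios can be sketched with a small Monte Carlo simulation. This is a stylized illustration, not the paper's model: it assumes ability, tastes, and interview noise are standard normals, that applicants lean toward the AI screener when ability plus taste is high (so the choice itself carries information), and that the firm forms simple conjugate-normal estimates of ability.

```python
import random
import statistics

random.seed(42)

N = 100_000
BAR = 0.5  # firm hires when its estimate of ability clears this bar (assumed)

# Stylized population: true ability plus a private taste for the AI screener.
# Self-selection rule (an illustrative assumption, not the paper's estimate):
# applicants pick the AI interviewer when ability + taste is positive.
people = []
for _ in range(N):
    a = random.gauss(0, 1)            # true ability
    t = random.gauss(0, 1)            # idiosyncratic taste for AI screening
    picks_ai = a + t > 0
    s = a + random.gauss(0, 1)        # noisy interview score
    people.append((a, picks_ai, s))

# Pool statistics a firm could learn from past cohorts.
ai = [a for a, c, _ in people if c]
hu = [a for a, c, _ in people if not c]
mu = {True: statistics.mean(ai), False: statistics.mean(hu)}
var = {True: statistics.variance(ai), False: statistics.variance(hu)}

def post_ignore(s, c):
    # Prior N(0,1) and score noise N(0,1): posterior mean is s / 2.
    return s / 2

def post_use(s, c):
    # Conjugate-normal update treating each self-selected pool as an
    # (approximately) normal prior N(mu[c], var[c]).
    return (mu[c] / var[c] + s) / (1 / var[c] + 1)

def run(policy):
    hired = [(a, c) for a, c, s in people if policy(s, c) > BAR]
    welfare = sum(a - BAR for a, _ in hired)  # firm surplus over the bar
    return hired, welfare

hired_ig, welf_ig = run(post_ignore)
hired_us, welf_us = run(post_use)

hu_rate_ig = sum(1 for _, c in hired_ig if not c) / len(hu)
hu_rate_us = sum(1 for _, c in hired_us if not c) / len(hu)

print(f"avg ability, AI-choosers:    {mu[True]:+.2f}")
print(f"avg ability, human-choosers: {mu[False]:+.2f}")
print(f"firm welfare, choice ignored: {welf_ig:,.0f}")
print(f"firm welfare, choice used:    {welf_us:,.0f}")
print(f"hire rate of human-choosers:  {hu_rate_ig:.1%} -> {hu_rate_us:.1%}")
```

In this toy setup the choice alone is strongly informative (AI-choosers average well above human-choosers in ability), conditioning on it raises firm welfare, and the human-choosing pool, which skews lower-ability, is hired at a lower rate, mirroring the inequality result described above.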

Key Takeaways:

Applicant choice creates a powerful new signal.

Allowing applicants to choose between a human or AI interviewer reveals additional information about their confidence and strengths, which firms can use alongside interview performance when making hiring decisions.

When firms ignore applicant choice, firms lose and workers gain.

Offering choice without using it as information reduces firm welfare and increases worker welfare: more applicants are hired overall, including both strong and weak matches. Applicants exploit their ability to choose, generating a selection bias that works to their advantage; if firms fail to incorporate the information these choices reveal, the bias goes uncorrected.

When firms use choice as information, but applicants are unaware, inequality emerges.

Firms that incorporate an applicant's screener choice greatly improve hiring accuracy and increase their own welfare as they select stronger matches. However, these gains consistently benefit high-ability applicants while low-ability applicants are systematically disadvantaged.

In full strategic equilibrium, inequality persists.

When both firms and applicants understand that choice is informative, both sides act strategically. Firms and high-ability applicants gain slightly, while low-ability applicants remain worse off compared to systems with random assignment.

Hybrid human-AI screening delivers the best overall outcomes.

Simulations show that combining human and AI screening produces the highest overall welfare, improving match quality.

The Future of Fairness

This research reveals a counterintuitive truth about AI adoption in hiring: how technology is introduced matters as much as the functionality of the technology itself. When applicants can choose between a human or AI interviewer, that choice is indicative of underlying beliefs that can sway the hiring process in unequal ways.

Choice is not automatically empowering; once firms use it as information, weaker candidates often end up worse off than if everyone were randomly assigned a screener. Which groups benefit most depends on how the information embedded in choice is interpreted.

Future policy designed to protect vulnerable applicants may need to limit how such personal data can be used, or require random assignment of screening methods to promote more equitable hiring. As the labor market adapts to AI-augmented hiring, design is a vital consideration: the same technology can produce vastly different outcomes depending on how it is deployed. This research highlights that future work must move beyond the question of "human or AI?" to identify which populations benefit, which are harmed, and which designs balance fairness and efficiency as AI continues to evolve.
