How Some Experiments Use Emails to Control for Systemic Bias
- February 20, 2024
Social scientists have long run audit experiments to bring lab-style control to field settings. Researchers might send largely identical résumés to recruiters, or credit reports to bankers, varying only one thing—perhaps the age, race, or gender of the applicants. The real-world subjects are then asked to make decisions on the basis of the documents: Do they grant a job candidate an interview, or a loan applicant a mortgage?
The responses can reveal bias if, say, a résumé with a man’s name on top elicits more interview offers than the identical résumé with a woman’s name attached. This would be an example of direct discrimination, or bias evident at a single point in time. But the design doesn’t control for the effects of systemic bias, points out Chicago Booth’s Erika Kirgios.
Audit experiments—such as the ones conducted by Booth’s Marianne Bertrand and Sendhil Mullainathan in 2001 and 2002—can equalize the résumés of fictional job applicants to isolate the impact of race. The trouble is that in reality, the average Black job applicant in the United States doesn’t have a résumé or credit report similar to the average white applicant’s, due to unequal access to education and other opportunities.
“To have perfect experimental control, we send out the exact same résumé and ignore what in sociology they call cumulative disadvantage,” says Kirgios. “The fact is, you often don’t have the same résumé.”
For this reason, Kirgios has been using a twist on the well-accepted methodology. Instead of attaching different genders or races to identical résumés or other accountings of experience, she now does so with short emails that ask for help of some variety. For the past decade, researchers have been conducting these “correspondence studies,” which use emails and letters rather than résumés or other application materials.
In one project, Kirgios and her coauthors—University of Maryland’s Aneesh Rai, Harvard’s Edward Chang, and University of Pennsylvania’s Katherine L. Milkman—sent emails ostensibly from students asking for career advice to nearly 2,500 white, male city council members across the United States. The recipients were about 25 percent more likely to respond to an email purportedly from a woman or minority writer when it mentioned the requester’s demographic identity.
Kirgios says that in a short email, it’s generally possible to avoid showing evidence of cumulative disadvantage. “I send out pretty average emails, and now anyone can write those,” she says.
ChatGPT could potentially be used to further smooth out differences in language, spelling, and anything else that could indicate someone’s background and circumstances. Kirgios hasn’t used ChatGPT to generate emails but has left open the possibility that she could in future research. “It doesn’t create a circumstance that is unrealistic given the world we’re living in,” she says.