What Are Human Rights in an A.I. World?
- November 14, 2023
Concerns about artificial intelligence displacing workers have tended to dominate discussion of how the technology will affect the future of work. Yet A.I. has big implications not just for who works and how much, but also for how people work and how they’re managed. And experts warn that without the right implementation, A.I. could have profoundly negative effects, such as diminishing worker privacy and facilitating wage and employment discrimination.
Employers are already using A.I.-powered surveillance systems to track their workers in numerous ways. Proponents say this increases efficiency, productivity, and even safety—for example, Amazon uses A.I.-enabled surveillance to ensure its drivers aren’t taking their eyes off the road, committing potential traffic violations, or engaging in other risky behaviors.
However, unease about the use of A.I. surveillance has been growing. A Pew survey released in April found that more Americans opposed than favored companies using A.I. for tasks such as recording what workers are doing at their computers and tracking their movements while they work.
In its Blueprint for an A.I. Bill of Rights, released in October 2022, the White House notes, “Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.” Some states have enacted restrictions on such employee monitoring or require that workers be notified.
A.I. is also increasingly being used to scan résumés for relevant keywords and other information. At a January hearing of the US Equal Employment Opportunity Commission, EEOC chair Charlotte A. Burrows cited statistics suggesting that as many as 83 percent of employers, and almost all Fortune 500 companies, use automation to screen or rank potential candidates. Panelists at the hearing said A.I. tools could “unintentionally” discriminate by, for example, building an employer’s past hiring bias, never made explicit but evident in its decisions, into the algorithm.
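To see how that can happen, consider a minimal sketch in Python. The data, feature names, and numbers are entirely invented, and this is not any employer’s or vendor’s actual screening system; it only illustrates how a model trained on biased historical hiring decisions can reproduce that bias even when it never sees the protected attribute directly.

```python
# Toy illustration (invented data, not a real screening tool): a model trained on
# historical hiring decisions can reproduce past bias even when the protected
# attribute is withheld, because a correlated "proxy" feature carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical candidates: a protected attribute (group 0 vs. 1), a job-relevant
# skill signal, and a proxy feature that correlates with group but not with skill.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
proxy = group + rng.normal(0, 0.5, size=n)

# Historical labels reflect biased past decisions: group-1 candidates were
# hired less often at the same skill level.
past_hired = (skill - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

# The screening model never sees `group`, only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)

# Its recommendations still favor group 0, because the proxy stands in
# for the omitted protected attribute.
scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", scores[group == 0].mean().round(3))
print("mean score, group 1:", scores[group == 1].mean().round(3))
```

The point of the toy example is that the bias enters through the historical labels rather than through any explicit rule, which is one way an algorithm can discriminate “unintentionally.”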
The European Union’s proposed A.I. Act, which would broadly regulate how A.I. is used in member countries, includes protections for workers subject to these tools, noting they could “appreciably impact future career prospects, livelihoods . . . and workers’ rights.”
Research from the University of California at Irvine’s Veena Dubal looks at the experience of gig workers, such as those who drive for Uber or Lyft, to demonstrate how A.I. algorithms could lead to wage discrimination. There is no set hourly wage for most gig work; instead, workers are offered jobs they can accept or decline, and rates vary depending on external conditions. Rideshare companies collect information on drivers that allows the companies to predict which worker might accept a lower rate, Dubal says, or even “predict the amount of time a specific driver is willing to wait for a fare,” making it harder for drivers to clear a daily earnings goal.
“These methods of wage discrimination have been made possible through dramatic changes in cloud computing and machine learning technologies in the last decade,” Dubal writes. She proposes an outright ban on using algorithms to set wages.
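A stripped-down sketch of this dynamic, using hypothetical numbers and an invented acceptance model rather than anything drawn from an actual rideshare platform, shows how personalized pay can emerge once a system can estimate each driver’s willingness to accept a given fare:

```python
# Toy sketch (hypothetical model and thresholds, not any company's actual code):
# if a platform can estimate each driver's probability of accepting a given fare,
# it can offer each driver roughly the lowest fare that driver is likely to accept.
import numpy as np

def acceptance_probability(fare: float, driver_floor: float) -> float:
    """Estimated chance a driver accepts `fare`, given their inferred 'floor'.
    In practice this would be a model fit on the driver's accept/decline history;
    here it is a simple logistic curve for illustration."""
    return 1.0 / (1.0 + np.exp(-(fare - driver_floor) / 1.5))

def personalized_offer(driver_floor: float, min_accept_prob: float = 0.8) -> float:
    """Lowest fare (searched in $0.25 steps) predicted to be accepted with
    probability at least `min_accept_prob`."""
    for fare in np.arange(5.0, 40.0, 0.25):
        if acceptance_probability(fare, driver_floor) >= min_accept_prob:
            return float(fare)
    return 40.0

# Two drivers doing identical work: the one inferred to be willing to work
# for less is simply offered less.
print(personalized_offer(driver_floor=12.0))   # ≈ 14.25
print(personalized_offer(driver_floor=18.0))   # ≈ 20.25
```

In this sketch, two drivers doing the same work receive different offers solely because the system infers that one of them will settle for less, which is the pattern Dubal describes as algorithmic wage discrimination.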
Veena Dubal, “On Algorithmic Wage Discrimination,” Working paper, April 2023.