(gentle music)
Narrator: Around the world, companies are switching to algorithms for their hiring needs. These algorithms handle data more efficiently and have more computational power than humans, potentially leading to better decisions about whom to interview and hire. And not just better decisions, but also choices that are free from bias, an issue that has haunted hiring agencies for years. But are these algorithms truly bias free, or are companies still unknowingly discriminating in their hires?
Rad Niazadeh: One of the reasons companies tend to use algorithms is that algorithms don’t have any explicit bias. There is nothing written in any part of the code that says you should discriminate against a particular demographic or sensitive societal group. However, the issue is hidden somewhere else. These algorithms use data, and many times the way that data is collected is already biased. Companies such as LinkedIn or Hyre think about how to employ algorithms so they don’t suffer from the same issues. The algorithm is definitely not going to suffer from explicit bias, but the data it uses might suggest that, on average, someone coming from the South Side of Chicago is less likely to be successful than someone coming from a Whiter neighborhood of Chicago. So even though these algorithms don’t have an explicit bias, because the data they use is biased, they can create a more complicated form of implicit or hidden bias in their decisions. And this has been a concern for scholars and industry: how to identify this form of discrimination, and how to actually change these algorithms to mitigate the issue.
Narrator: That’s Chicago Booth’s Rad Niazadeh. He and his coauthors designed their own hiring algorithm to address algorithmic hiring bias and noninclusion. To do this, they imposed ex ante fairness constraints: guardrails that test the algorithm’s average behavior and make sure there is no statistical evidence of discrimination in the hiring process. These constraints can also shape hiring outcomes, ensuring the final selection meets the organization’s diversity goals.
Rad Niazadeh: One example of these average constraints that can calibrate the outcome of the algorithm to be nondiscriminatory is what’s known as demographic parity. Imagine we have different demographic groups, say male and female. We would like an interview process that, on average over time, interviews the same fraction of females as males. Another example is a company setting a quota in an average sense over time: there is a particular, let’s say, underrepresented group of individuals, and in order to give them the opportunity to get good job interviews and good positions, we want to make sure that a certain fraction of them, on average over time, is in the interview pool or in the final selection pool.
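To make the idea concrete, here is a minimal sketch of how such average-over-time checks might look in code. The group labels, tolerance, and function names are illustrative assumptions for this example, not the researchers’ actual formulation.

```python
from collections import Counter

def satisfies_demographic_parity(interviewed, groups=("female", "male"), tolerance=0.05):
    """Check that, averaged over the interview history, each group's
    share of interviews is approximately equal.

    `interviewed` is a list of group labels, one per interview granted.
    `tolerance` is an illustrative slack around the target share.
    """
    counts = Counter(interviewed)
    total = len(interviewed)
    if total == 0:
        return True  # vacuously satisfied before any interviews
    target = 1.0 / len(groups)
    return all(abs(counts[g] / total - target) <= tolerance for g in groups)

def satisfies_quota(interviewed, protected_group, min_fraction):
    """Check an average-over-time quota: at least `min_fraction` of
    interviews so far went to `protected_group`."""
    total = len(interviewed)
    if total == 0:
        return True
    share = sum(1 for g in interviewed if g == protected_group) / total
    return share >= min_fraction

# Example: has the interview stream stayed within the guardrails?
history = ["female", "male", "male", "female", "female", "male"]
print(satisfies_demographic_parity(history))    # True: an even 50/50 split
print(satisfies_quota(history, "female", 0.4))  # True: 3/6 >= 0.4
```

Note that both checks constrain the average behavior of the process, not any single decision, which is what makes them ex ante constraints.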
Narrator: The researchers also went beyond nondiscriminatory constraints, creating what they call socially aware constraints. These constraints are not necessarily designed to induce fairness and parity, but to give an advantage to underrepresented individuals.
Rad Niazadeh: One example I can mention is subsidy constraints. In a related context such as school admissions, individuals from all over the world apply to the same school and have to pay application fees to submit their applications. But they may not be able to afford that, and in many scenarios schools tend to waive these application fees for people applying from underrepresented countries. Now the question is: given a certain budget, in an average sense over time, how should they allocate that budget to different applicants in order to, again, give opportunity to people from all demographic groups?
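As a rough illustration of that budget-allocation question, here is a greedy sketch; the fee, budget, group names, and priority ordering are invented for the example, and the paper treats this as an optimization over time rather than a one-shot greedy rule.

```python
def allocate_fee_waivers(applicants, budget, fee, priority_groups):
    """Greedy sketch: spend a fixed waiver budget on applicants from
    priority (underrepresented) groups first, then on everyone else.

    `applicants` is a list of (applicant_id, group) pairs; each waiver
    costs `fee`. Returns the list of applicants whose fee is waived.
    """
    waived = []
    # Illustrative ordering: priority-group applicants come first.
    ordered = sorted(applicants, key=lambda a: a[1] not in priority_groups)
    for applicant_id, group in ordered:
        if budget >= fee:
            waived.append(applicant_id)
            budget -= fee
    return waived

applicants = [("a1", "US"), ("a2", "underrep"), ("a3", "underrep"), ("a4", "US")]
print(allocate_fee_waivers(applicants, budget=150, fee=75,
                           priority_groups={"underrep"}))
# ['a2', 'a3'] -- the budget covers two waivers, spent on the priority group
```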
Narrator: After redesigning the algorithm to incorporate socially aware constraints, the researchers asked the question: To what extent would an organization or a business benefit from using this algorithm? The researchers ran a simulation to find out.
Rad Niazadeh: We asked the following question: If we just care about the short-term benefits of hiring, forgetting about the downstream outcomes, how much are we going to hurt the short-term objective of the platform by incorporating these constraints? Obviously, if you restrict hiring to satisfy a certain form of constraint, that can only decrease the short-term utility of the search. But the question was: by how much? It turned out that in many practical instances, it doesn’t harm the hiring that much. In fact, there are many ways of hiring, all of them creating the same amount of utility, and our algorithm does the tiebreaking in a smart way, adjusting both the interview process and the selection process so that not only is it diverse, but it also doesn’t hurt the short-term utility of the search much. More importantly, it turned out that if we really believe society is symmetric, that there are no inherent differences between demographic groups, then the simulation outcomes show that adding these constraints benefited the firm in the long run. Basically, having a more diverse set of employees, whether at your company or, in other contexts, at your school, will maximize the long-term utility of the search. And this is something that is hard to model in theory. It’s even hard to measure in practice, because the thing about long-term outcomes that makes them interesting is that they’re not immediately observable: you need to give someone the opportunity and then wait to see how they perform in your company for four years before you can evaluate it. But our simulation was capable of simulating this trajectory and predicting what would happen down the road. And the result was maybe not surprising, but a pleasure to see: adding more diversity to the search process can actually benefit the firm in the long run.
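A toy version of the short-term half of that experiment can be sketched in a few lines. The score model, the crude parity rule, and all parameters below are simplifications assumed for this illustration, not the researchers’ algorithm, and the long-run benefit (which requires modeling downstream outcomes) is not captured here.

```python
import random

random.seed(0)

def simulate_round(n_candidates=100, k_hires=10, constrained=True):
    """Draw candidates from two symmetric groups with i.i.d. scores,
    then hire the top k, optionally forcing equal group representation."""
    candidates = [(random.gauss(0, 1), random.choice("AB"))
                  for _ in range(n_candidates)]
    candidates.sort(reverse=True)  # best scores first
    if not constrained:
        hires = candidates[:k_hires]
    else:
        # Take the top k/2 from each group (a crude demographic-parity rule).
        hires = []
        for group in "AB":
            hires += [c for c in candidates if c[1] == group][:k_hires // 2]
    return sum(score for score, _ in hires) / k_hires

rounds = 2000
unconstrained = sum(simulate_round(constrained=False) for _ in range(rounds)) / rounds
constrained = sum(simulate_round(constrained=True) for _ in range(rounds)) / rounds
print(f"avg hire score, unconstrained: {unconstrained:.3f}")
print(f"avg hire score, parity rule:   {constrained:.3f}")
# With symmetric groups the two averages come out nearly identical:
# the parity constraint costs almost nothing in short-term utility.
```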
Narrator: Even though the researchers showed positive outcomes, they say it’s important to set hiring rules carefully. For example, if there’s already a big gap between different groups due to historical disadvantages, strict quotas for hiring people from underrepresented groups can backfire.
Rad Niazadeh: If I’m forced to hire the same number of people from Group X and Group Y, and Group X looks very weak on paper, then under that constraint I might end up hiring no one at all, because hiring no one is not efficient, but it is perfectly fair. These unintended consequences of adding socially aware constraints are something a decision-maker should be aware of. The type of constraint you pick should therefore differ across applications and be adjusted properly.
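A tiny numeric illustration of that degenerate case, with groups, scores, and the quality bar invented for the example:

```python
def hire_under_strict_parity(pool, k, threshold):
    """Hire up to k people per group, only those above a quality bar,
    under a rule that forces equal numbers from both groups."""
    hires_by_group = {
        g: [c for c in pool if c[1] == g and c[0] >= threshold][:k]
        for g in ("X", "Y")
    }
    # Strict parity: both groups must contribute the same count,
    # so the binding count is the smaller group's.
    n = min(len(v) for v in hires_by_group.values())
    return [c for v in hires_by_group.values() for c in v[:n]]

pool = [(0.9, "Y"), (0.8, "Y"), (0.3, "X"), (0.2, "X")]
print(hire_under_strict_parity(pool, k=2, threshold=0.5))
# [] -- no Group X candidate clears the bar, so strict parity
# forces zero hires overall: perfectly "fair," totally inefficient
```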
Narrator: So what’s stopping the researchers’ algorithm from becoming widely used? The researchers suggest that companies need to think carefully about their long-term goals. If a diverse workforce is part of that vision, they should identify and address the barriers that are in their way.