MBA Masterclass Psychological Sources of Bias
with Professor Jane Risen
Explore how psychological biases affect behavior in the form of discrimination, stereotypes, prejudice, and implicit bias, and how to help prevent them by shifting one's mindset.
- February 11, 2021
- MBA Masterclass
Kara Northcutt: Hey, everyone. Thanks for joining us today. We'll get started in just a couple minutes as everyone comes in. If you want to, feel free to let us know in the chat where you're joining us from today, it's always great to see where everyone is geographically. I am in the Logan Square neighborhood of Chicago right now. What about you, Jane?
Jane: I am in my office in Harper Center in Hyde Park in Chicago right now.
Kara: Okay, good.
Jane: Look at all these.
Kara: India, fantastic. Russia, fantastic.
Jane: Oh, my.
Kara: This is great. Great representation. I wouldn't mind being in some of these warmer climates.
Jane: I know.
Kara: Pretty chilly here in Chicago, as it tends to be, obviously, but still a little bit of a shock to the system. And again, for those who just joined, we'll probably give one minute, and we will get started. And throughout today's session, I see a couple of hand raises. What we'll use today, primarily, during the session is the Q&A, and I'll repeat this as well. And obviously we're using the chat for now, but we'll use the Q&A portion to be able to pose questions to Professor Risen. So I'll mention that again once we get officially started. For those of you in London, if you haven't, you'll have to check out our new London campus when you get a chance. All right, we are starting to level out, and I wanna be very cognizant of time today. So we are going to go ahead and get started. So good morning, good afternoon, depending on where you might be in the world. We're thrilled to have you with us today. My name is Kara Northcutt. I am a Senior Director of Admissions at Chicago Booth, and on behalf of the evening, executive, weekend, and full-time admissions teams, I'm really pleased to welcome you to our psychological sources of bias masterclass with Professor Jane Risen. Today's session is part of both our diversity-focused events that we have going on throughout the week for the admissions teams, as well as our masterclass series, which is really here to give you that classroom experience and a chance to get to know the faculty without having to actually come to campus. When we can, we will, of course, resume those in-person visits, but we will continue this masterclass series, at least two or three a quarter, going forward, because it gives so much accessibility, and we're really able to share the Booth story much more widely and give you all a bit of that academic experience. Since this is a 60-minute session, we'll just get right to it. Feel free to type any questions that you have for Professor Risen into the Q&A throughout the session. 
If time permits, I will moderate a 10- to 15-minute Q&A toward the end of this session. With that, it is now my pleasure to introduce to you Professor Jane Risen, who is Professor of Behavioral Science and John E. Jeuck Faculty Fellow. Jane, you can take it from here. Thank you very much, everybody. Enjoy.
Jane: Thank you, Kara, and thank you to everyone who's joining us today. I'm delighted to be able to talk to all of you. Today, you are gonna be dropped right into the middle of class three. And so there would've been a couple of classes earlier in our quarter together, but I thought that this would make a good sort of standalone lecture. And so in week one, if you remember, of course you weren't there to remember this, but in week one, we sort of introduced ourselves to the topic of diversity, equity, and inclusion in the workplace and to the class itself. Last week, week two was set up to think about the question of why. Why do we care about this? Why are diversity, equity, and inclusion important? In some diversity classes, that could be sort of the frame of the whole class; that's not the frame of this class. This class is much more of a how-to, so we wanna understand why diversity, equity, and inclusion are important, but in a sense, I'm gonna take that a little bit as a given. We're gonna assume that it's important.
We're gonna assume that that's what we're striving to do. And so for much of the class, we're gonna spend time thinking about how we do that. There are a lot of organizations that have identified this as something that they care about, something that's important, and yet they're still struggling to make progress, and so the course is more oriented in that direction. And specifically, early in the class, like in week three, we're gonna start to think about different sources of bias. And so in week three, today's class is gonna focus on psychological sources of bias. In the following week, we would think about more systemic or institutional forms of bias. And the reason we unpack where the bias comes from is because that's what's gonna make it easiest for us to think about solutions. How do we actually fix things? Well, we have to know why things aren't working right now. And so this week is focused on psychological sources of bias and setting up questions around what we can do to interrupt that bias, and make our workplace diverse, inclusive, and equitable for everybody. So every week I always have sort of goals for the particular class, as well as an outline of what we're gonna do. You're joining me on a webinar, which means that we can't do some of the activities that we would usually do in class, so we're not gonna do the social categorization activity. We're not going to do our case on gender and free speech. Those would be part of class three, but instead, what I'm gonna do is pull out a lecture portion, and I'm going to talk about psychological sources of bias, and then answer any questions that you may have afterwards. So to zero in on the goals of just this portion of the lecture, my goals are first to help you understand how psychological biases can unintentionally affect behavior.
So we'll think about discrimination, stereotypes, prejudice, and implicit bias. And the reason we do that is because we wanna prevent psychological biases from affecting behavior, and so we need to understand them. And so here, we'll think about three kinds of categories or levers that we can pull to try to prevent these biases from creeping into our behavior, and so that will be focused on our mindset, debiasing over time, and decoupling. Let's jump in with some definitions, just to understand how these terms differ from one another. So discrimination is a behavior. It's when different actions limit the social, political, or economic opportunities of members of particular groups. Stereotypes are beliefs. This is when you attribute a trait to a person or to a group based solely on the fact that they are members of that group. Lastly, prejudice is an attitude. So your attitudes are how positively or negatively you evaluate things in your environment, and prejudice is when you hold a negative attitude. So if you have a negative attitude, a negative evaluation of certain groups and the members of those groups, that would be a form of prejudice. Let me jump in with a few examples of discrimination at play, where we can see it clearly. So I imagine many of you are familiar with this one. This is a paper written by two Booth faculty members, where they sent fake resumes in response to help-wanted ads in Chicago and in Boston, and they took the same resume, but changed the name that was on the resume. And so they changed the name to have either a resume with an African American-sounding name, or a resume with a white-sounding name, and then they just tracked the callbacks that these applications received. And what they found was that white names received 50% more callbacks compared to those exact same CVs that were attached to a name that sounded African American or Black. And this is equivalent to eight additional years of experience. 
So to get the callback rate equated across the races, you would've needed eight more years of experience on your CV to sort of match that amount of discrimination. And this was true across occupations and industries. So that's an example of discrimination that happens in hiring. Here's another example, in which science faculty members were the participants in the study. So faculty members in the sciences were given application materials, and the application materials were identical again, except for the name that was attached. It was either assigned a male name, John, or a female name, Jennifer. And what the scholars found was that the faculty members rated the male version significantly more competent and more hireable, and they offered more mentoring than for the identical application when it was female.
They also selected a higher starting salary for the male. On these last two slides, I've had this word gateway in the heading, and I'm not sure if you all noticed that, but what I'm gonna do here is just take a quick moment to think about this term, which is part of a theme that I introduce in the first class, which is a distinction between gateways and pathways. This is language that comes from Dolly Chugh and her book, "The Person You Mean to Be," which is one of the required books for the class, and I like this distinction. So when we're trying to create an equitable work environment, we can think about both the gateways, which is where we're trying to create equal access to the organization. And gateways include hiring, being admitted to school, being promoted, being compensated fairly, all of these formal processes. All organizations have had to figure out how to hire, how to promote, how to pay. And these formal processes are places where we can see people being treated differently. And so if it's not equitable at the gateways, we are not creating a truly diverse, equitable, and inclusive environment. But it's not just the gateways, and this is an important thing to keep in mind: we also have to think about every other moment of the day when you're at work, when you're not being hired or promoted, but all those moments in between that get you to that promotion. And so we also need to think about our pathways. This is about ensuring equal opportunities for success along all the pathways in your organization, and so these end up being much less formal. So things like being heard, and having your voice valued and respected, getting credit, or getting second chances after making a mistake, being accepted for who you are, feeling like you belong without having to cover aspects of your identity or assimilate to some other norm. 
Being connected to the right people in the organization, so that you have the information that you need to navigate the organization, so that you have the opportunities that you need, and so that you have the support and backing of the people that you need. So when we think about creating an equitable work environment, we need to think both about the gateways and the pathways. The gateways tend to map onto diversity and our diversity metrics, whereas the pathways tend to map onto inclusion and more of our inclusion metrics.
The gateways, again, as I said, are where we sort of make our formal decisions. Our formal processes are often what's happening in the gateways. The pathways are often much less formal parts of our company. And so I showed you discrimination that can happen at the gateways; those last two studies were both hiring studies and compensation decisions. That's the gateways. Let me share one more example with you where we look at a pathway. So what I'd like you to imagine is that you received an email like this one. It says, "Dear Professor So-and-so, I am writing you because I am a prospective doctoral student with considerable interest in your research. My plan is to apply to doctoral programs this coming fall, and I am eager to learn as much as I can about research opportunities in the meantime. I will be on campus next Monday, and although I know it is short notice, I was wondering if you might have 10 minutes when you would be willing to meet with me to briefly talk about your work, and any possible opportunities for me to get involved in your research. Any time that would be convenient for you would be fine with me, as meeting with you is my first priority during this campus visit. Thank you in advance for your consideration." This email was sent to thousands of faculty members, across hundreds of institutions and universities, and many, many academic departments. And everybody received the same email, but what was varied was, again, the name that was signed to it. And so the scholars used names that were clearly gonna be identified as White, Black, Hispanic, Indian, or Chinese. And for each of those racial or ethnic groups, also male or female, and then they looked to see who responded. 
And what the scholars found was that faculty members were significantly more likely to reply to the white male students who sent the email compared to all the other groups, and the willingness to respond to the email also translated into their willingness to say, "Yes, let's meet." And so, again, we see a significant difference between the likelihood that a white male is given a meeting compared to all the other groups. And they looked at this by discipline, and I just want to note here: business is right there at the top, in terms of the most extreme forms of bias compared to these other disciplines. This is something we need to be incredibly aware of, not just in general, but specifically in the spaces that many of us are interacting in. So... This is clearly discrimination when it's aggregated so that we can see all these faculty responses. You can see that this is discrimination because some people are being given a leg up, and other people are not. I'm considering this a pathway, because this is not your chance to be admitted to this PhD program, but just having the chance to talk to a faculty member is part of the pathways that set people up to be admitted into those PhD programs. And so all the other moments of the day, all the other moments of our work life are also important beyond just those gateway decisions. I like the example from this most recent study I've shared with you, in part because it's so easy as an individual to see how this could happen. If you are a faculty member who gets an email in your inbox, you are not thinking about this as an act of discrimination. This is just an email that you've gotten along with hundreds of other emails, and the question is, do I have time? How many other emails did I get this week? How busy am I? And so for an individual faculty member, it doesn't feel like an act of discrimination, but as soon as we aggregate the data, we can see quite clearly that it is. 
Economists have talked about discrimination as coming from two sources, and so it's kind of labeled in that way. One is belief-based or statistical discrimination. This is discrimination that is thought to be caused by the stereotypes that people hold, the beliefs that they hold leading to discrimination. The second category is referred to as taste-based discrimination, and this is discrimination that comes from the source of prejudice, the negative attitudes that people hold about other groups. The reason that we care about whether discrimination is based on your stereotypes or based on your attitudes is because when we think about trying to undo that discrimination, we need to think about what is the right way to educate or to change somebody's stereotypes or attitudes, if we want to eliminate the discrimination. Let me take a moment or two to just unpack each of these terms, and share a little bit more with you in terms of how they actually lead to discrimination. How does it actually work? So first we'll think about stereotypes, and when it comes to stereotypes, they can come in lots of different sorts of shapes and sizes. So for example, one important distinction is between descriptive and prescriptive. Stereotypes can be descriptive, when it's the belief about how I think people in different groups behave. It's descriptive of what I kind of see in the world, or understand to be true. Prescriptive stereotypes, on the other hand, are beliefs about how you think people should behave based on their group membership, where you expect certain traits to emerge. And then when people deviate from that, it can be problematic for them. They can be punished for deviating from stereotypes to the extent that those stereotypes are prescriptive in nature. We can also think about stereotypes as being positive or negative. I'm gonna turn to positive stereotypes in just a slide or two. They can also be true or false. And when I say that, I mean it in two ways. 
First, they can be true or false about any particular given individual, and also they can be true or false in terms of whether they sort of accurately capture a base rate for the whole group. So stereotypes can be accurate about a group or about an individual, but also they can be completely inaccurate about a group or about a person. So, how do these different stereotypes work? First, let me just sort of share how a descriptive stereotype is gonna work. So this is just how I think people are. And so what I've written up here on the slide are some stereotypes that are commonly associated with men and women.
So female stereotypes usually include things like caring, warm, deferential, emotional, or sensitive, whereas male stereotypes include competent, assertive, decisive, rational, objective. And so the way that these stereotypes are going to influence behavior is that if you don't have that much information about someone, and actually even when you have a whole lot of information about someone, as long as there are any gaps in your knowledge, those knowledge gaps can and often will be filled in with stereotypes. And so when we see the same behavior from different people, it can be interpreted differently to match the stereotype that I have in my head. But as I said, stereotypes can also be prescriptive in nature. And here's where it creates sort of a particular difficulty or onus for members of groups who may not always want to live up to those particular stereotypes. And in particular, I think one place that the research has examined this is in thinking about how this can create a double bind, in particular for women in leadership positions. So men are still seen as the default when it comes to leadership, and what that means is that the norm has this masculine stereotype flavor, and women are then judged against it. And so what research has borne out is that when women act consistent with gender stereotypes, they're not viewed as being as competent as men are, because they're viewed in terms of these feminine stereotypes. But if women act inconsistent with gender stereotypes, so that they are seen as more competent, because they are no longer acting according to the stereotypes that some people hold to be prescriptive, they end up being considered less feminine, and by extension, often less likable. And so what this means is that for men, success and likability are generally positively correlated, but for women, they're often negatively correlated, so that as you achieve more success and are seen as more competent, you may be seen as less likable. 
Ellen Pao has, I think, a really nice quote about this idea of what you're supposed to be, the pressure that comes from these prescriptive stereotypes, and just the small kind of tightrope that people are often asked to walk. She says, "I found the number of things I was supposed to be extremely daunting. As one of the only women or people of color in a workplace, you're trying to do so many things at once. You're trying to be professional so people treat you with respect. You're trying to be fun so they like you. You wanna be humble, but you want to be confident. No matter how hard I worked, I couldn't figure out a way to succeed."
And so when there are prescriptive stereotypes, and when we narrow the range of behavior that we allow for people, we make it incredibly difficult for people to find their own footing and succeed in the way that might be best suited for them. I wanted to turn to positive stereotypes just for a moment, because this is a place where some people think that they're nice. Whoa, I'm saying something kind or complimentary about another group. I wanna make it clear: positive stereotypes are also quite problematic. One reason is because they tend to be prescriptive, and so when people deviate from these positive stereotypes, that can lead them to be sort of punished for that violation. It's also the case that positive stereotypes are used to keep people in their place. You're allowed to be good, you're allowed to succeed, but only in domains that people from your group have traditionally been successful in, for example, right? That's the message that comes through when you harp upon positive stereotypes. And research shows that when you use positive stereotypes, members of that target group don't like it, and see you as being prejudiced, so I think that's important to keep in mind. When Black participants heard a white confederate, or actor, in a study make a positive but stereotypical remark, they liked him less and considered him more prejudiced. Similarly, this was studied with women and with Asian participants, and here, they looked a little bit more at the underlying mechanism. And what they found was that, again, when people made comments about positive stereotypes, they weren't liked. And this time, what we can see is it's because people assumed that if you are making positive stereotypical remarks, you would also make negative stereotypical remarks. 
And so when people use stereotypes, even positive ones, they're perceived to be the kind of person who relies on stereotypes, and that person is then perceived as more prejudiced than someone who does not use positive stereotypes, or any sort of stereotype at all. Next, I wanna turn to attitudes. So stereotypes are the beliefs that we hold, ascribing traits. Attitudes are our evaluations.
And first we're gonna think about this in its explicit form, and the way that this is usually measured is something as simple as: how do you feel about these other groups? How do you feel about each group, from zero, coldest, to a hundred, warmest? And a feeling thermometer like the one I've presented here is often the way we study people's explicit attitudes. This is from a few years ago, and you can already see an extreme partisan divide where Republicans don't like Democrats, for example, and Democrats don't like Republicans. And you see each of these groups holding prejudice because they are seeing the other side as negative, and my guess is this would be even more extreme today. We can also look at, this is, again, a feeling thermometer that went from zero to a hundred, and this is broken down by participants' religion. So all of the different ratings were given by people of different religious groups, and they are being asked to rate all sorts of different religions going across the columns. And I wanna draw your attention to two rows here, the responses of participants who are Catholic and who are Jewish. I draw your attention to them because if you look through these rows, almost nothing dips below 50. And so if you look at this, you could say that these respondents aren't really holding prejudice; they're neutral or better toward all of these different religious groups, so maybe we wouldn't call this prejudice. Nevertheless, if you kind of dig into this, you can see that there's still a lot of variation in the extent to which they like different groups. And in particular, I'll just highlight here how Catholic respondents rate other Catholics, and how Jewish respondents rate other people who are Jewish. And what you see here is, although they may not be prejudiced against other groups, you certainly see that they are reserving higher numbers for members of their own group. 
And the reason I point this out is because I fear that when we use language like taste-based discrimination, we're thinking about this just in terms of outgroup hostility, just in terms of not liking other people. When in fact, ingroup favoritism, I think, is psychologically much more common, and much more powerful here. So ingroup favoritism is also a form of taste-based discrimination; even if it's not actively hostile, we're still going to see discrimination. This is a quote from Marilynn Brewer. She says, "Ultimately many forms of discrimination and bias may develop, not because outgroups are hated, but because positive emotions such as admiration, sympathy, and trust are reserved for the ingroup, and withheld from outgroups." And even though liking people who are like you doesn't seem nearly as malicious or nearly as nefarious as actively disliking other people, it still causes discrimination.
We need to keep this in mind. This still causes discrimination. One particularly disturbing example is from Rivera's work, looking at how firms hire from the interview. So she studies elite professional service firms, like law firms and consulting firms, and what she found was that once people got to the interview stage, the criterion cited as being most important in hiring from the interview was the extent to which people were similar, or fit, in terms of lifestyle markers, things like their leisure pursuits, their experiences, or their self-presentation style. Obviously there was some selection that was done before to get you to the interview stage, but just pause to think about that. That means that people are making their decisions, these partners at these firms are making decisions, based on whether you like wine or whether you ski, or what fraternity or sorority you might have been part of, or how you wear your hair or what clothes you wear, and that is obviously unrelated to the skills that you need to perform well at the job. And so we can see that even if it doesn't seem as problematic, even if we kind of understand psychologically why people have an affinity towards those who are similar to themselves, we still have to recognize that it's going to cause discrimination. When we select people based on how similar they are to ourselves and to other people who are already in a place, in a space, we are just going to reproduce those same people and those same experiences, and so it still causes discrimination. One reason that this happens is because our stereotypes and our attitudes, our biases, are not always top of mind in an explicit way. And so I wanna turn to the topic of implicit bias, which is a term I imagine everybody has probably heard, so let me just define it in the way that I wanna use it here. This is bias in favor of or against one group compared to another that occurs automatically. 
And when I say automatically, I mean without requiring awareness, without requiring intent, and even without requiring cognitive resources, or sort of active working memory. And this holds for both attitudes and stereotypes; both of these can be held automatically. The IAT, the Implicit Association Test, is the most common tool for measuring implicit bias. It's not synonymous with implicit bias, but it's a tool for measuring it. There are other tools out there, but this is by far the most popular. And all of the students, if you were enrolled officially in class, you would have completed an IAT before class today. I encourage all of you to go online and try taking them. There are lots of different tests listed, and you can sign in as a guest. You don't have to give them any information, and you can learn something about the associations that are living in your head.
That's what the score is telling you. So the score is not telling you if you're a good person or not; the score is telling you which categories are associated in your mind, at least at the time of the test. And so, by going as quickly as you can through this categorization task, researchers can determine, can see, which associations generally go together. So for example, if Black and bad were associated in your mind, you would be quicker to respond when Black faces and bad words are paired on one side of the screen compared to if they were on opposite sides of the screen, and so these scores tell us what is going together. And if we look across lots and lots of people, we can see the attitudes and the stereotypes that sort of live in society. So for example, when it comes to attitudes, generally speaking, we see that people are more favorable towards younger than older, towards thin than towards overweight, towards lighter skin than darker skin, and towards those who are physically able compared to those with disabilities. When it comes to stereotypes, darker skin is associated with weapons and with violence, Asian characters are associated with foreignness, and women are associated with humanities more than with math and science, and with family more than with work and career. These aren't necessarily stable, so the numbers there are just a snapshot in time. Implicit attitudes can change over time. I thought you might be curious to see how some of that has happened over the last 12 or so years, where the IAT has been so popular that there have been so many people taking it online. What you see is that implicit sexuality attitudes have changed dramatically over the last 12 years. Implicit racial attitudes and implicit skin tone attitudes have moved towards neutrality, but at a much, much slower pace compared to the sexuality IAT scores. 
And then if you look at the three below, you'll see there's actually been almost no change in the last number of years in terms of people's attitudes towards people who are older, people with disabilities, or people with heavier weights. Now, why do we care about this? In some sense, if the bias just lived in our head, we wouldn't have to think so much about it.
The reason we care is because the biases that we hold in our heads affect the way that we interact with other people, and that's why it matters so much. So there have been several meta-analyses done over the years to look at this relationship between people's IAT score and how they behave in socially sensitive situations. I've listed two numbers there to give you a sense of the range of the effect size. When the authors who developed the IAT did the meta-analysis, they found that the effect size was larger. Researchers who were hostile to the IAT found a lower number, but one that was still meaningful and statistically significant. And just to give you a sense of some of the studies in these analyses, you'll find individual papers that show that people with more bias on the IAT make different decisions when they're looking through applications from members of different groups, or you'll see that the IAT score predicts nonverbal behavior. When people are interacting with someone from a different group, they'll engage in more awkward or exclusionary nonverbal behavior to the extent that they're more biased. The bias that lives in my own head can not only affect how I evaluate someone or how I treat someone, but by extension, it can then affect their behavior. And so a beautiful study was done looking at minority employees and how they performed at work. These were cashiers, and what the researchers found was that the minority employees performed worse on the days when they happened to be randomly assigned to work with a manager who was biased. So the managers took an IAT, and when the more biased managers were assigned to work with employees, the employees performed worse compared to when they were able to work with a non-biased manager. And so this shows us that the performance of employees is going to depend, at least in part, on whether they have the opportunity to work with someone else who is non-biased. 
And because the IAT is noisy, when you aggregate into larger sets, you actually find even stronger relationships. So instead of looking at individual behavior and individual IAT scores, we can also group the data in a more aggregated way. And so what the research shows us is that the places and the spaces that tend to have the most implicit bias are also the places and spaces where we see the most disparate outcomes across different groups. Of course, this is correlational, but what you see here, for example, is that countries with strong male-science IAT scores have greater gender achievement gaps in eighth-grade math and science, US cities with higher levels of racial bias have greater racial disparities in shootings by police, and also greater Black-white gaps in terms of infant health outcomes. So this has been a little bit of a bummer so far. I know I've been focused entirely on the bad news. People have biases.
They sometimes don't even realize they have them, and they can affect our behavior, but there's good news, which is that most people are trying their hardest to overcome their biases and their stereotypical preconceptions to treat everyone fairly. And so, even though these biases exist, and it's important for us to acknowledge and recognize how they exist, because that's what's going to allow us to do something, there is good news here, which is that there are ways to prevent those biases from affecting our behavior. And that's what we're gonna think about next, and for the rest of today's lecture. And so we're gonna think about this in three categories. The first is the mindset that you have. If we can think productively about biases, then there's a greater chance for sort of learning and continuing to grow in this space. The second is debiasing, and this idea is how can I change the associations that are living in my head? If I don't like which associations are there, how can I try to eliminate those over time? And the third, which is where we're going to spend the most time in the class is thinking about decoupling. And this is something that we can do right away, because instead of trying to go in and just pull the biases out of your head, what we're trying to do in decoupling is break the link between the biases that may live in our mind, and the behaviors that we engage in. And so if we can think about best practices, we can think about how to make sure that we don't let those biases creep in. So let's take each of these categories in turn, we'll start with mindset. This is a space for humility. If you think that you are objective, the research suggests you're more likely to be biased, recognizing that you could be wrong about basically anything, just any sort of intellectual humility is going to go a long way in allowing you to continue to learn. Second, we need to think about being mindful.
So, we know that biases are more likely to creep in when we can't be deliberative, and so we're going to suggest that you should slow down, pause, and reflect. When we go with our gut instinct or an initial impression, it's much more likely that we're going to rely on these stereotypical or prejudiced attitudes that we might hold, and so it's important to slow down and be as deliberative as we can be. Having a learning orientation is also incredibly important. So a learning orientation means that my goal is to learn, and it means that I can continue to improve, and that's true for other people as well, right? If you recognize that there's always more to learn and discover in this space, then you're always going to be on the lookout for more opportunities to interrupt bias. Maybe even more importantly, if you have a learning orientation, it can protect you from the need to feel defensive. If someone points out a mistake or a misstep that you may have taken, and you're in a fixed mindset, if you think that nobody changes, then that mistake could kind of impugn your character forever, and that would be really hard. Instead, if you have a learning orientation, you can recognize that a misstep is just a misstep on the way to improving and on the way to learning even more. And it's much easier to really listen to someone and learn about the impact of what you did if you recognize that this is something that you can continue to learn and grow in. Finally, I wanna remind you to keep your own internal motivation to be equitable front of mind. When we have our goals top of mind, it's a lot easier to follow them. They don't sort of disappear on you, and so keeping this front and center is going to make it easier to actually act on it. Okay, so that's the category of mindset. Next we wanna turn to debiasing, and debiasing is a long-term strategy.
There have been studies showing that a short, five-minute task can produce a change in an IAT score.
But if you retest that person a few hours or a day later, they're back to wherever they were, kind of at baseline. So there are ways to change your short-term thinking, to change what's in your mind at the time of the test and thereby change your IAT score, but that's not the ultimate goal, right? The ultimate goal is to change these associations that are living there, and so we need to think about this in terms of a long-term goal. So if we have intentional, meaningful, long-term exposure to counter-stereotypical exemplars and counter-attitudinal exemplars, that will help break the associations that are currently living in our mind. And so, how do we do that? Well, one piece is about where we choose to live, where we spend our time, how we interact with people. If you can get yourself into diverse and inclusive environments, if those are the spaces that you interact in, you're gonna have the opportunity to interact with lots of different people who are from lots of different backgrounds and lots of different roles, and you'll have a chance to break down some of those associations because of the variety of people you have a chance to interact with.
Of course, we don't only learn from our direct experience; it also comes indirectly, so you may wanna think about how to curate a diverse and inclusive representation in the media that you consume. So the TV shows that you're watching, or the movies you're watching, the books you read, the music you listen to, those are places where we learn about people in the world. And if you're intentional about making sure that you have a diverse and inclusive representation, again, that's going to help you break down associations that have built up over a lifetime. And when you think about this sort of personally, another thing you can do is just reflect on your own life to think about the particular groups or the particular places in your life where you think this may matter the most, right? So you can engage in a self-audit to think about how you may be letting your implicit attitudes or implicit stereotypes affect your behavior, so that when you can identify those spaces, you can think more deliberately about how to interrupt the bias that might occur there. So every class, I assign a life exercise to students, which is something that they're asked to do out of class. And here is the life exercise for week three, following on this idea of a self-audit. Students are asked, and I'd encourage all of you listening today to try this as well, to engage in a self-audit, perhaps prompted by an IAT score that doesn't match your explicit attitudes, and sometime in the next week, to collect data from your own life to consider how implicit bias could be affecting your own behaviors, decisions, interactions, and judgments. And so some students look at, for example, the media that they consume, and recognize how that could be leading to bias.
Some people think about the businesses they frequent, or they look at the contacts in their phone to see who it is they are generally socializing with, but all the students have a chance to reflect on their own life, collect some data, and see whether there are places where it's going to be important for them to think about interrupting bias. The final category that I wanna talk to you about is decoupling. This is the idea that we're gonna break the link between the biases that live in our head and how we behave. And so a phrase that I like a lot is taking the bias out of the process rather than trying to take it out of the people. We know it is really hard to go in and pull bias out of people's heads. As I shared just moments ago, we can do it on a very short-term basis, but there isn't long-term change when we try these kinds of quick fixes. So instead it may be better to try to pull the bias out of the process itself.
So if we can interrogate our processes, we can see where bias could be sneaking in, and think about how to change those processes to keep them out. So one, I think, compelling example that I imagine many of you are familiar with is what happened when orchestras started using blind auditions. So what you found was that the percentage of women in the orchestras rose quite dramatically. Of course, before the orchestras believed that they were evaluating people simply based on their musical talent, but what you learn from putting musicians behind a screen is that actually there was more involved in that original evaluation. They were clearly evaluating based on gender, because when the gender could no longer be determined, when the performers were behind a screen, the number of women went up. I use this example because I think it's such a clear example of how you can decouple bias from the process. If I don't even know what group someone comes from, then it's really easy to think about how whatever biases I might hold in my head would not apply to my behavior. But of course, blinding is not logistically possible in lots and lots of situations. It's also not always the ideal way to interrupt bias, but it is one concrete way, and I think it's the easiest way to understand, but we wanna think about other ways too.
And so when it comes to hiring, beyond blinding, there are other best practices. So having predetermined job criteria, standardized evaluation forms, structured interviews, and a diverse search committee, for example, these are best practices so that we can pull out the bias that would otherwise be living in our hiring processes. We would unpack all of these much more later in the class, when we specifically talk about interrupting bias on the gateways and the pathways. Here, we're just getting a sense of the ways that we can do it, so that we understand that there are specific solutions that target some of the particular problems that we have. Beyond hiring, which is an issue on the gateways, we can also think about the pathways and all the other more informal moments at work, and think about how to pull bias out of those processes too, and so there are good best practices here as well. So, for example, how do you run your meetings? Well, if we rotate office housework, if we have a policy for interruptions, if we mine for the stolen idea, and if we schedule meetings appropriately, all of these are deliberate ways that we can make sure our meetings are not full of bias. So just to put all of this on the same slide here, you can see the reason we try to understand psychological sources of bias is because we want to prevent it from affecting our behavior, and these are three sort of categories that we can think about. One is just getting into the right mindset so that we can learn more. Another is thinking about debiasing the associations in our head over time.
And the last is thinking about how to break this link between the biases in our head and the behavior we engage in by adopting processes that make it harder for those biases to creep in. And so we can turn to best practices that will give us both formal and informal ways of working that are going to work for more of our employees. So let me just wrap up with one final slide. I always try to end all of the classes with take-home lessons that can kind of build over time, so that by the end of the quarter, you have your list of take-home lessons. For today, what I want you to take away from this is that psychological sources of bias make it more difficult for certain employees to succeed. That bias is not necessarily mean-spirited, intentional, or even something people are aware of, but it's still bias. And then to prevent bias from affecting behavior, we want to adopt a productive mindset, debias our minds over time, and pull bias out of our processes. And so we can interrogate our processes to think about how bias could be sneaking in, which allows us to then do our best to pull it out. And I will pause here, and I am happy to take questions. I could see the chat filling up, but I didn't try to look at it too carefully for fear of losing my way in my own slides, but I'm very happy to hear from people now.
Kara: Thank you, professor. Yeah, I was just gonna say the dialogue in the chat has been really, really great, and it would probably remind you of the debates that happen in your classroom. So if you look at that, most of the questions were posted in the Q&A, so I pulled some, but if you happen to see something you wanna address, absolutely feel free; I tried to pull some themes. And there were some questions around applying this to team cohesion, working in teams. So for example, how can you strike the right balance between team diversity and team cohesion? I have seen some diversity efforts negatively affecting that cohesion. So thoughts on that? I think a lot of it was obviously working in diverse teams, cross-functional teams, et cetera.
Jane: So first, I appreciate the question so much, because... I think there is sometimes a tendency to think of diversity as magic. And that's often the way it's discussed, and that's problematic, because it sets all the wrong expectations. Diversity is not magic. Putting people who are different into a group doesn't magically mean you're gonna get better ideas and your profits are gonna go through the roof, right? Diversity has the potential to do all sorts of amazing things for you, right? So the list of ways in which diversity can lead to a competitive advantage is great. Better problem-solving, better creativity in groups, individuals will think more rigorously, you can reach new customers, there's all sorts of good things, but it's a potential, because it depends on the climate and how you actually manage it. And this is why we spent time in class one on the idea that diversity doesn't stick without inclusion, so we have to be thinking about the piece of inclusion. We have to think about the climate, and so when we're setting up groups, it's really easy to look at the literature and see examples where diverse groups lead to better performance, and examples where diverse groups lead to conflict, and therefore worse performance. And so, I think... There's lots of people who have written on what allows diverse groups to thrive. Some of this requires time, so there's research that shows at the beginning, diverse groups can struggle until they sort of learn to navigate certain conflicts, and then they end up thriving. So some of it's an issue of time, the longer that you work together, the better it can work. Sometimes it has to do with the ways in which the diversity is represented.
If the diversity is kind of spread out, if there are lots of different fault lines, and so there are some men and some women and some white employees and some employees of color, and it's all mixed up, that can be better than if you have, for example, all women of color and all white men, where the lines are sort of starker in terms of combinations.
So certain combinations can lead to better or worse outcomes, but mostly I think it has to do with having a learning and inclusion mindset. And so when the goal is to learn from one another and leverage each other's differences, that's when we see groups working much better together. And so I think that's an important piece to keep in mind. It's not that it will just magically happen. It has to be that for certain tasks, diversity will provide benefits, and those tasks require a climate where people can really learn and engage with one another, rather than just kind of push the same narrative. Sorry, one last thought, I know I'm going so long on just this one question. One last thought is that it also depends on the numbers. And so I will say this, sometimes diverse groups perform very well, and homogeneous groups often perform very well. Sometimes it's those in-between, what we would call skewed groups, that actually struggle the most. And so when you're thinking about groups and assigning groups, instead of... splitting things evenly, so that you might end up with all your groups just having a token, one or two people who are different from the rest of the group, that is likely to lead to bad group dynamics. Instead, I would recommend creating as many truly diverse groups as you can, and then if the other groups are still homogeneous, that's probably better than creating lopsided groups where certain voices end up being marginalized. And so that skewed-group dynamic is actually often more challenging than a truly diverse group or a truly homogeneous group. And so I think that's one way to think about navigating this: as you diversify more, you can get caught in this middle space that can be difficult. And so that's one solution to keep in mind, that you may not want everything evenly distributed, because we know that skewed group dynamics are particularly challenging to navigate. Sorry, Kara, next question. That was a lot.
Kara: No need to apologize, no, that's great. I mean, I'm keeping in mind that most of these participants are in industry and working, and dealing with these things day-to-day, so that's very, very practical and helpful advice. And this next one is somewhat similar, but I think it will give individuals some practical ways to think about how to apply this to their day-to-day, personal and professional. So going back to the beginning, the large experiments shown at the front of the class seem great for evaluating whether discrimination is happening, but may not be feasible to execute within all organizations and companies. Are there practical frameworks for how to diagnose whether and/or how much discrimination is happening in your company, and which direction it's going, that sort of thing? So any thoughts on that, like frameworks for companies that are trying to address these issues?
Jane: Yeah, so that's wonderful. That's a great question, and that's a big part of the class. It's thinking about behavioral design. So, we spend a lot of time thinking, how do we use behavioral design to create healthier organizations when it comes to diversity, equity, and inclusion? And so you're not gonna run giant experiments like the ones that I showed you, I agree. But we can still look at our own data to diagnose what's going on. So hiring is one place where it's often very easy to kind of look into your numbers and get a sense. Now, the question, of course, is always what's the right benchmark or comparison standard? So in that first stage, do I get as many applicants from various groups compared to peer organizations? That's a harder thing to assess, but what is a lot more straightforward is to see what happens throughout the hiring funnel. From the applications to a long list, to a short list, to the interviews, to the people you hire, you can look at that process and see whether, along the way, you end up losing candidates from certain groups. And the reason to do that is because it can point you to a pain point. So, you can imagine two companies that both hire in ways that they are trying to change over time, right? So maybe they're hiring more men than women, and they wanna change that over time. One company could look into their process and realize that everything was pretty equal up until the interview stage, whereas the other company might look and see, no, no, no, it was skewed all the way back at the applications. That points to a very different place for where you're gonna try to go in and make changes in your process.
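The funnel diagnosis described here can be sketched as a simple pass-through-rate calculation. The following is an illustrative sketch only; the stage names, group labels, and counts are hypothetical, not data from the class:

```python
# Hypothetical hiring-funnel counts by group at each stage (illustrative data).
funnel = {
    "applications": {"group_a": 400, "group_b": 400},
    "long_list":    {"group_a": 120, "group_b": 118},
    "short_list":   {"group_a": 40,  "group_b": 38},
    "interviews":   {"group_a": 20,  "group_b": 19},
    "hires":        {"group_a": 10,  "group_b": 4},
}

stages = list(funnel)

# Pass-through rate for each group at each stage-to-stage transition.
rates = {
    (prev, nxt): {g: funnel[nxt][g] / funnel[prev][g] for g in funnel[prev]}
    for prev, nxt in zip(stages, stages[1:])
}

for (prev, nxt), by_group in rates.items():
    summary = ", ".join(f"{g}={r:.0%}" for g, r in by_group.items())
    print(f"{prev} -> {nxt}: {summary}")
```

In this made-up data, the groups pass through at roughly equal rates at every transition except interviews to hires (50% versus about 21%), which points the diagnosis at the final decision stage rather than at the top of the funnel.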
When it comes to something like your climate, though, you can also just ask people. Getting people to complete a climate survey can tell you a lot about whether people feel like they belong in your organization, and whether people feel like they are connected to the people they need to be connected to. And so collecting data in a climate survey, now, it's most useful to the extent that you can look at different groups to see how people feel, but I know that, back to the point of your question, if the numbers are too small, you can't start dividing your data that way, or else people become identifiable. But for larger organizations, you would be able to break it down in different ways, and just see whether employees from one background feel more comfortable, feel like they belong to the organization more than others, and can we get a sense of why someone might not feel a sense of belonging, and what might we be able to do about it? So I do think there's a lot of data you can collect. Some of it you're probably already collecting, right? Like in hiring, and then you just have to look at it. And some you may need to intentionally collect, and I make a big push for this all the way through: we're not gonna fix inclusion unless we're measuring it. We're not going to get better at this otherwise, and so we have to measure it. In some form, I think a climate survey is the easiest way. You could measure interruptions in your meetings, for example, right? There's software that allows you to do that, or you could get a consulting firm to help you look at the networks in your organization. So you can look at whether it's Slack channels or emails, and just see who's connected to whom, and that might be useful for you to diagnose people who may be left out and are needing more support. So there's lots of different tools. I think a climate survey is probably one of the easiest entry points.
I will also say one of the reasons that it's easy is that collecting data is something that a lot of organizations would be open to, and it doesn't require that you admit that something is wrong, right? You can collect data, because all you're admitting is that there's the possibility that something could be wrong, and we want an opportunity to learn and see if there is something that we need to do to change. And so sometimes collecting data is the easiest first step to convince organizations to do, because it's not admitting a problem. It's just admitting the possibility of a problem, and then giving you the tools you'll need to actually try to make things better.
Kara: Thank you. We'll do one last question, and then I do wanna thank everybody for all the great questions. Unfortunately, we don't have time for them all, but this will not be the last session of this nature going forward, I assure you of that. So finally, any tips on how to interact effectively with a biased person when you encounter them? And this could be in your personal or professional life; it happens to everyone in all situations.
Jane: So... We do a section in class on confronting prejudice, and I think I'm gonna sort of lean on that a little bit in answering this. I think one of the most important things is to pause and reflect on what your goal is. Is your goal to change the mind of this person? If it is, then there are lots of good techniques that we know from the persuasion and influence literature; usually trying to yell at them and tell 'em that they're a terrible person is not the best way to get them to change their mind. But I guess I wanna at least get you to pause and think about whether that is the goal, and I'll mention this 'cause I think it's especially poignant in online interactions. If someone posts something offensive and you wanna respond, it's not clear whether your response should be directed at that person, or really whether your response should be directed at everybody else that's watching. So sometimes what you say is not with the purpose of changing that one person's mind, but with the purpose of letting everybody else know that these are not the norms, this is not the way that we are going to interact in this space, and so thinking about who your audience is matters a lot. If there is someone, and they are your audience, and you do wanna change their mind, there's a suggestion I really like from Dolly Chugh's book. She just has this line, it says, "I see things differently. Are you open to hearing my point of view?" And if they say no, it's not worth your time and energy. I mean, for the most part. There may be some people you just have to interact with. If they say yes, then that may be a conversation that's worth having. But I think that sometimes we jump into debates, and this is such an emotionally draining space to be in that you don't want to put your effort places where it's not going to be rewarded.
And so we do need to think carefully about that, and so I think it's about having a sense of whether the person is my intended audience, and then thinking about whether this is someone who is open to changing their mind, before deciding what the right approach is.
Kara: Great, thank you. That was the perfect way to end the session, I believe. And I cannot thank you enough, Professor Risen, for sharing all of the research, insights, and practical applicability as it relates to our inherent biases and their psychological sources. And we just greatly appreciate how you're bringing this into the Booth classroom and making it a part of the dialogue for all of our MBA students. So thank you everyone for joining us, and once again, thank you Professor Risen. We greatly appreciate this, and we hope you all have a great rest of the day. Bye everybody, take care.
Jane: Thanks, everybody.