People form instant and powerful impressions of each other based on facial features, and computers are increasingly analyzing facial images for various applications. Chicago Booth’s Alexander Todorov and University of Chicago’s Wilma A. Bainbridge and Ben Zhao discuss biases relating to faces and the implications of facial-recognition technology.
Hal Weitzman: First impressions are shaped by deep biases that lead us to associate faces with traits such as competence and trustworthiness. At the same time, the use of facial-recognition technology is becoming more widespread, prompting concerns over privacy. So is it ethical to use facial imaging in decision-making, and how should we regulate the use of facial-recognition software? Welcome to “The Big Question,” the video series from Chicago Booth Review. I’m Hal Weitzman, and with me to discuss the issue is an expert panel. Alexander Todorov is the Leon Carroll Marshall Professor of Behavioral Science at Chicago Booth. Wilma Bainbridge is an assistant professor in the Department of Psychology at the University of Chicago. And Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. Panel, welcome to “The Big Question.” Alex Todorov, let me start with you. How, currently, are businesses using facial imaging in their decision-making, or how are they thinking of using facial imaging?
Alexander Todorov: So there are a variety of different uses, and we don’t know much about all of them. One has to do with facial recognition and identification. The use that I’m particularly concerned about, which is ethically problematic, is using facial images to profile people, to try to infer stable personality characteristics like competence and extraversion, attributes that can help you get a job or not get a job. And this is where the use of this kind of technology is deeply problematic. At the same time, it’s very accessible and will become more and more accessible.
Hal Weitzman: So just to be clear, you’re not talking there just about facial recognition in recognizing the face. You’re talking there about making a judgment based on what I see in your face.
Alexander Todorov: Exactly, exactly. A lot of my research in the past has been on what we colloquially call first impressions. So we know that when people look at an image—and notice I’m talking about a specific image, not a person—different images of the same person can generate very different impressions. Yet, when people are looking at a given image, they actually arrive at consensual judgments about whether the person looks competent or trustworthy. And then you can extract the visual features from the face, and you can actually manipulate faces with modern technology. So you can create a fake, hyperrealistic face and manipulate its perceived trustworthiness, competence, attractiveness, any attribute you like: familiarity, age. And, in that sense, the technology is already there. It’s accessible, but it is problematic because it, in a sense, confuses our impression of the face with what the person really is like.
Hal Weitzman: Mm hmm, but some of the things you mentioned, like age, I’m guessing we might be able to guess accurately from a person’s face, more or less.
Alexander Todorov: Yes, that’s right. That’s right.
Hal Weitzman: But other things, like personality characteristics.
Alexander Todorov: That’s right. So there are different characteristics that people can extract. Things like age, masculinity, femininity, thinness, these are things that you rapidly extract, and they’re less problematic. There are things that are deeply, deeply subjective, like familiarity. Wilma works on this. And then there are things that are socially constructed and also highly subjective, like perceived trustworthiness and perceived competence. Attractiveness, to a small extent, but everything else that we are talking about: complex, stable personality characteristics.
Hal Weitzman: So can you tell if somebody’s trustworthy just by looking at an image of their face?
Alexander Todorov: Well, the short answer is no, but under many conditions, you can find correlations between appearance and some behavioral measures. But there’s a lot of noise in the measurements. And then you need to have an account: Where is this correlation coming from? So if you look at a lot of the studies, whenever you find correlations, they’re very weak. And particularly if they’re at the level of the individual, they will account for very, very little of the variation in the behavior. And, as a general rule, we really tend to overweight subjective impressions in a decision. So imagine you’re hiring a job candidate. You have their résumé, their past history. You have letters of recommendation. These count for much more than your subjective impression. Yet the subjective impression will often feature prominently and will be weighted more heavily than these more objective criteria.
Hal Weitzman: But, putting the ethics aside for one moment, you’re saying it doesn’t actually work.
Alexander Todorov: No, it doesn’t. I mean, for example, let’s take unstructured job interviews. People practice them all the time. There’s a difference between structured interviews, which have been validated, where you have specific criteria and you want to make sure that the job candidate matches those criteria. That’s different than, OK, we have a recruitment interview, I’m gonna spend 30 minutes with you. From dozens and dozens of studies, we know that the correlation between impressions from these unstructured interviews and professional success is about 0.15. So it’s practically useless. I mean, it just has a very, very low predictive utility. If you ask people, they think it’s something like 0.60. So they hugely overestimate the diagnostic validity of these interviews. They’re just subjective impressions. We know that they have very, very low validity. Yet people tend to rely on these impressions.
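A quick way to see how weak a 0.15 correlation is: squaring a correlation gives the share of outcome variance it explains. A minimal sketch of that arithmetic (the 0.15 and 0.60 figures are the ones Todorov cites; everything else is just illustration):

```python
# Rough illustration of how little predictive power a 0.15 correlation has.
# The 0.15 (actual) and 0.60 (believed) figures are the ones cited above;
# variance explained is simply the correlation squared (r^2).

def variance_explained(r: float) -> float:
    """Share of outcome variance accounted for by a predictor correlating at r."""
    return r ** 2

actual = variance_explained(0.15)    # unstructured-interview validity
believed = variance_explained(0.60)  # what people believe the validity is

print(f"actual:   {actual:.1%} of variance in professional success")
print(f"believed: {believed:.1%} of variance in professional success")
```

A validity of 0.15 explains roughly 2 percent of the variance in professional success, while the believed 0.60 would explain 36 percent, which is why unstructured interviews feel far more diagnostic than they actually are.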
Hal Weitzman: And, like you say, even if it’s not a formal business practice, I suppose the first thing that a hiring manager does is probably check out on LinkedIn or social media who the person is, where they’re instantly given a photo. Of course, it’s an image that the person wants to project, but there may be all sorts of things, presumably, that are subconsciously read into that, even if the hiring manager has the best possible intentions.
Alexander Todorov: That’s exactly right. I mean, we’ve done a lot of studies. You can flash a face for less than a hundred milliseconds, and that’s sufficient time for people to form these sorts of impressions. In fact, you don’t need more than 160–200 milliseconds. If you give people additional time, it just increases their confidence. So these are very rapid, automatic impressions.
Hal Weitzman: Wilma Bainbridge, let me bring you in, ‘cause you’ve done research on what it is about faces that makes some faces more memorable than others. Tell us what you found.
Wilma A. Bainbridge: Yeah, so you would think that, because we’ve all seen different faces throughout our lives, there would be big individual differences in which faces we remember and forget. But one surprising finding from our lab is that we tend to remember and forget the same images. And so that means there are some faces that really stick in memory and some faces that are easily forgotten. In other words, you can ascribe a memorability score as an attribute to a face image. And one thing that’s surprising is that this is something we, as people looking at faces, have pretty low insight into. Just like what Alex was saying, we’re pretty bad at guessing how memorable a face is. But we have computational methods that can take in an image, predict how memorable it will be, and then predict how many people would actually remember that face.
Hal Weitzman: OK, and tell us, so how did you find that? Just tell us a little bit more about what the methodology was.
Wilma A. Bainbridge: Yeah, so basically we found that if you test a wide range of people on the internet on a wide range of faces, there are some that most people are remembering and some that most people are forgetting. And you asked earlier what sorts of attributes relate to memorability. It’s not something straightforward. It’s not like the most attractive faces are the most memorable; actually, those are very poorly correlated. So you can be very attractive but very forgettable, and you can also be very attractive and memorable. What we find is that faces seem to be memorable for different reasons. It’s not any one combination of attributes that makes a face memorable. One face might be memorable because of some visual feature, like where the nose is, and another face might be memorable for something else, like how trustworthy it looks. But, still, we are pretty bad at guessing how well we will remember a face from our first impressions.
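The consistency Bainbridge describes, that different viewers remember and forget the same images, is typically checked with a split-half analysis; the sketch below simulates that logic with synthetic data (the viewer counts, score range, and hit model are invented for illustration, not taken from her studies):

```python
# Toy split-half consistency check for memorability, using synthetic data.
# Each image gets a "memorability score": the fraction of viewers who
# correctly recognized it on repeat viewing. If memorability is an attribute
# of the image (not the viewer), scores computed from one random half of
# viewers should correlate with scores from the other half.
import random

random.seed(0)
n_images, n_viewers = 200, 100

# Assumed ground truth for the simulation: each image has a latent
# memorability p; each viewer's recognition hit is a coin flip with that p.
latent = [random.uniform(0.3, 0.9) for _ in range(n_images)]
hits = [[1 if random.random() < p else 0 for _ in range(n_viewers)]
        for p in latent]

half_a = [sum(row[:n_viewers // 2]) / (n_viewers // 2) for row in hits]
half_b = [sum(row[n_viewers // 2:]) / (n_viewers // 2) for row in hits]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(half_a, half_b)
print(f"split-half consistency: r = {r:.2f}")
```

If memorability were purely idiosyncratic to each viewer, the two halves’ scores would be uncorrelated; a high split-half correlation is what licenses treating memorability as an attribute of the image itself.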
Hal Weitzman: OK, but we could look at a face and say, that’s a memorable face?
Wilma A. Bainbridge: So as scientists, we can quantify a face image and say, that is a memorable face. But the average person isn’t that good at predicting what they will remember and forget.
Hal Weitzman: But is there a connection between memorability and your positive disposition toward that face?
Wilma A. Bainbridge: No, not necessarily, because memorability is not that related to things like attractiveness. There’s a tiny correlation where more negative, emotional faces tend to be more memorable, so it might be that more threatening faces are more memorable, but this is such a weak effect that you can totally have a positive, friendly, attractive, memorable face too.
Alexander Todorov: Wilma’s bringing up a very interesting phenomenon, that there’s a big mismatch between people’s intuitions and objective memorability. And you get the same thing with impressions. Somebody might be able to grasp something about a person based on their attire, how they dress, their grooming, which actually carries lots of information about our personalities, and yet they might not know that they’re good at it. And people who are really terrible think that they’re extremely good at forming impressions. This is what we call metacognition, and often the correlation between how good you think you are and how good you actually are is essentially zero.
Hal Weitzman: Which, I think, was one of the main findings of psychology in general, right?
Alexander Todorov: In social psychology, yes. (all laughing)
Ben Zhao: So is this subjective per person, or is it actually uniform across people?
Wilma A. Bainbridge: Uniform across people is what we’re measuring. We actually have a neural network on our website where you can upload a picture, and it’ll tell you the memorability score of that picture. And it will do a good job of predicting how likely you are to remember it, no matter who you are.
Ben Zhao: And I just wonder, typically speaking, there’s always a long-tailed distribution with these kinds of effects, and I’m very curious as to how much that varies across specific populations. I would imagine perhaps specific people have certain experiences, you know, childhood upbringing, or just familiarity with the faces around them as they grow up, that makes them more likely or less likely to remember certain types of faces or even facial expressions. I wonder if that plays into it.
Wilma A. Bainbridge: Yeah, so there’s definitely a role of your own experiences in what you remember and forget, but we find that the memorability of the image accounts for half of what you remember and forget, and the other half is other factors, like your prior experiences. We are, right now, looking at how your familiarity with categories influences it, but most of us are experts in faces, so faces aren’t the best set of images for that sort of question.
Ben Zhao: Very cool.
Wilma A. Bainbridge: Yeah.
Hal Weitzman: OK, but, Ben Zhao, I wanted to carry on with you actually ‘cause your expertise is all about facial recognition and the software that’s used. And, given all the flaws that we are building into things like facial recognition software, it seems very scary that we would extrapolate things from faces. Is it inevitable, though? It seems like it would be hard to put the toothpaste back in the tube.
Ben Zhao: Yeah, one of the most damaging parts, if you will, of the internet is that everything is permanent, right? Generally speaking, once you tweet something, once you post something, once you share something, there’s always gonna be a copy somewhere. That’s part of the challenge with internet privacy: once you share some content, even if you regret it five minutes later, it may be too late. When you look at people like us, who have been around for a little while, we all have significant internet presences and footprints, if you will. And so our images are out there in all sorts of formats. For us to try to think about privacy and facial recognition, and how to corral those images and limit who has access, is very, very difficult. And that’s why companies like Clearview.ai, or other companies like them, can easily go and scrape billions of images without us knowing about it, because they are supposedly in the public domain, and build incredibly detailed facial profiles of all of us, and then make a business out of it, right? So someone sees a random stranger at the grocery store and says, I wonder who that is, takes a photo, sends it off to some service, and they will say, yeah, so-and-so from our database matches, and here’s all their internet information and perhaps incredibly detailed personal information to go along with it. So yeah, it’s certainly a real privacy issue. And companies like Clearview have been debated hotly ever since, I think, Kashmir Hill wrote about them in the “New York Times” in 2020, almost a couple years ago. And various governments have already come out very clearly against this type of facial recognition as a generalized tool.
Hal Weitzman: So these are companies, not to pick on one, but these are companies that will take public data and package it and sell it. I suppose, given what we’ve heard about the assumptions that people build into those images, and also that, you know, if it’s LinkedIn, that’s one thing, you can control your image, but there are many images, I’m guessing, of you guys giving talks or at some kind of event, where a photo has been taken of you and it gets posted and there’s nothing you can do about it, which may carry all sorts of implications that you did not intend to put out into the world. Is it just inevitable, then, that at some point, we’re all gonna be facially profiled in some sense? And just like the rest of our data is being sold, our faces are gonna be sold as well?
Ben Zhao: Yeah, I guess it’s possible. Whether it’s inevitable is something that, hopefully, is not decided yet. But we are developing tools here at UChicago to try to push back the tide, if you will. One of our tools basically allows you to digitally alter your images in very subtle ways that are basically imperceptible to the human eye, so that when someone takes these photos and builds a facial-recognition profile of you, they’ll actually get a wrong impression of what you look like. And so, when someone does give, say, this database a true photo of you, it’ll actually cause a mismatch, return an error, and basically misclassify you as someone else.
Hal Weitzman: And it’s fascinating that you created that. Why was it necessary to create that?
Ben Zhao: Oh, it was very funny, because I had thought for a long time about the downsides of the ubiquity of machine learning and deep-learning tools. Weaponized machine learning, and what all that might entail, was a line of thinking that we were exploring. And so my students and I discussed this idea of a dystopian future: what would happen one day if our images were used against us in a sort of Skynet-type scenario, “Terminator” and all that? And so we decided to work on defenses, preemptively, against such a future. And as we were ready to submit our manuscript for publication, literally, I think two weeks before, the Clearview.ai article came out in the “New York Times.” It was just coincidence, but it certainly made our motivational job easier for the paper. But it’s unfortunate that we’re already in that dystopian future that we thought we had to motivate and sell people on. But, yeah, we’re there.
Hal Weitzman: And so how many people have used your tool so far?
Ben Zhao: You know, I don’t track downloads daily, and there are lots of ways that you can get it, the source code and so on. Thousands of projects have been built on top of this already, I think. But just in binary software tools, we’ve had probably, by now, north of 800,000 downloads, perhaps closer to a million. I haven’t checked recently, but somewhere around that range.
Hal Weitzman: OK, but this is a very, I mean, it’s very interesting, and just maybe just give us a word about how it works. You kind of talked a little, but tell us a little bit more.
Ben Zhao: Yeah, sure, sure. At a very high level, deep-learning systems try to extract meaningful features out of the image, to memorize or produce a vector in some high-dimensional space, some representation of what you look like, and then use that as a match, right? And, a lot of times, these features are nonsensical. They may be things that don’t correlate one-to-one with something visual that we see. And so it turns out that, because we have the tools, because we have the understanding of the deep-learning models, you can go in, and this is a known vulnerability of all deep-learning systems, what are called adversarial perturbations. Basically, there’s a distortion, or a decoupling, between what you see at the input level and what the model actually learns. And sometimes you can use that to your advantage. So in our case, what we can do is create superficial small changes that are still resistant to things like blurring or compression or rescaling, but are so subtle that they’re not really visible. And yet they’re designed specifically so that the model picks them up and greatly distorts them, because of the way the model is trained to work on facial understanding. So you identify these vulnerabilities, these missteps, if you will, and then you magnify them. And in doing so, we can make you, Hal, for example, look to the model like your favorite hunky actor.
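The idea Zhao describes, small input changes that the model magnifies, can be sketched with a toy projected-gradient attack. The real Fawkes tool from his group optimizes against deep face-embedding networks; here a random linear map stands in for the feature extractor, and the shapes, step sizes, and budgets are all illustrative assumptions:

```python
# Toy sketch of an adversarial perturbation against a linear "feature
# extractor" (a stand-in for a deep face-embedding model). We nudge an
# image with a small, bounded per-pixel change so that its *features*
# move toward a decoy identity, even though the pixels barely change.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))       # toy feature extractor: features = W @ pixels
x = rng.uniform(0, 1, size=64)      # "your" image, flattened to 64 pixels
decoy = rng.uniform(0, 1, size=64)  # someone else's image
target = W @ decoy                  # feature vector we want to mimic

eps = 0.03                          # per-pixel perturbation budget (L-infinity)
x_adv = x.copy()
for _ in range(100):                # iterative signed-gradient steps (PGD-style)
    grad = 2 * W.T @ (W @ x_adv - target)     # gradient of ||W x - target||^2
    x_adv = x_adv - 0.005 * np.sign(grad)
    x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the budget

before = np.linalg.norm(W @ x - target)
after = np.linalg.norm(W @ x_adv - target)
print(f"max pixel change: {np.abs(x_adv - x).max():.3f}")
print(f"feature distance to decoy: {before:.2f} -> {after:.2f}")
```

The pixel changes stay within a small per-pixel budget, yet the feature vector drifts measurably toward the decoy identity, which is exactly the decoupling between the input level and the learned representation that such attacks exploit.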
Hal Weitzman: We look very similar, actually.
Ben Zhao: Yeah, well, you know, yeah, exactly. So that will basically tell the model that, in fact, you know, this is Hal’s, you know, facial profile. He’s actually over here next to Tom Cruise. And so the actual position of where Hal’s face usually would be in the feature space, you know, will be empty or filled by someone else. And so someone using the service would come in with a real photo of you taken on the street, or, you know, perhaps from online, and it would map to the correct position but produce someone else as a result.
Hal Weitzman: I see, OK. So this is a way of fighting back, using jujitsu, using the technology against what others are trying to use it for. But it does indicate that we’re sort of fighting on our own, then, not as a society. The technology, obviously, always moves faster than the ability to regulate it. You talked a little bit about Clearview and how regulators have responded. Are there general principles that you think regulators should have in mind in responding to advances in facial-recognition software?
Ben Zhao: There are so many different ways that regulators can approach this, right? You can look at it from a data-provenance-and-control sort of way, where there’s more explicit ownership of images, so there’s more of a notion of permission or watermarking or control over your images and where they spread. That’s difficult because of the internet and what is already out there today, right? Another approach would be to put the onus on the model creators to actually prove that they have obtained permission to use these images. And that, again, has some effect, but it only applies to the well-behaved entities, if you will. The ones that are intentionally skirting the law, who are trying to avoid regulatory practices, are the ones most likely to ignore these kinds of rules. So yeah, it’s actually quite challenging. You know, carrot and stick, carrot and stick, and there’s technology in the middle. It’s a very, very cloudy issue. And it’s not at all clear there are even good solutions out there.
Hal Weitzman: Right, no doubt complicated by the fact that you could be based pretty much anywhere in the world and be doing this work.
Alexander Todorov: But you have to apply it to a specific image, right? Which already makes it a bit of a losing proposition, right? Because there are already hundreds of images of you stored out there.
Ben Zhao: Sure, sure. There is a sense in which volume does matter. So for younger people who have a more limited footprint right now, children, perhaps, if you start applying these kinds of noise effects, if you will, to your images and consistently do it for a while, the percentage of images that are controlled or protected will eventually dominate over the other images that are not authorized. And so the machine-learning model will, in essence, be forced to choose the bigger group and switch the label over to the protected version. If that is the case, then it can still work. But yeah, I think for some of us, it’s a lost cause. And then there’s the question of trying to future-proof this type of technology. That’s always hard. Someone can always come around five years later and say, oh, I just found a loophole in your protection mechanism, and now the new model is gonna be equipped with that, and all your past protection has been invalidated. So that’s always an ongoing challenge.
Hal Weitzman: Alex Todorov, we started by asking the question about ethics and facial imaging. And now we’ve brought in the facial recognition software. I mean, what are the ethical issues here? What are you worried about? And what do you think regulators should keep an eye on?
Alexander Todorov: Well, there are just tons of different issues, and it depends on the level. One of the issues that Ben brought up is, of course, privacy: what’s being done with our images, what kinds of information are collected, who is creating a profile. And there are even more pernicious uses of the images, where, in fact, they can simply perpetuate bias. For example, let’s say you built a model that predicts trustworthiness, whatever you call this. But imagine that this model happened to correlate with ethnicity, for deep social and historical reasons. You can end up just furthering more discrimination rather than solving anything. And there are the same issues that, again, Ben brought up in terms of what these machine-learning algorithms do: they’re like a black box. You don’t know exactly what features the algorithm is using to make a particular prediction. And if you’re making these predictions in terms of whether you should hire a person, whether they should get a loan, I mean, there are lots of important decisions that affect, literally, people’s lives. And if you’re using these kinds of black-box algorithms, there’s no easy way to have a justification. You should be able to justify your decision. In the old days, we knew that people have lots of different biases, but you still needed to say, well, here are the criteria I used to arrive at this decision. And this could be questioned, right? If you suspect that there’s some kind of prejudice behind the decision. But it gets much harder when you outsource these sorts of decisions to technology where you’re working with millions of parameters and it is computing something. The output is great, but you don’t really know what the mapping is from the input to the output. And that is deeply problematic.
Hal Weitzman: I was gonna say, who do you think is the best person to regulate this? Should there be regulation coming from on high, or should companies just be aware of this and maybe ban the use of faces in any hiring practices? Or should this be addressed in the courts? What’s the best way of dealing with it, do you think?
Alexander Todorov: I’m not sure I’m the right person to discuss these issues, but, again, as Ben said, there are the good actors and the bad actors. Ideally it should be regulated by the companies, but I doubt that all companies would actually regulate themselves. And if you have some sort of governmental regulation, there’s a trade-off between how deep the regulation goes and the development of the technologies. So there’s no easy solution. I know that in law enforcement, people have used a lot of face recognition, but we know that every now and then there are errors, and the errors occur particularly with the faces of people from ethnic minorities, partly because these algorithms often are not sufficiently trained on images representing ethnic minorities, and that’s a deep problem. It’s a problem also because, psychologically, we really believe that we are face experts, that once we see a face, we recognize it, and that comes with a set of biases. I’m pretty sure in most police departments, the idea is that if you’re using this face-recognition tool, it’s just another piece of evidence, not sufficient on its own to lead to an arrest or some other heavier consequence, but often that’s not what happens. So again, you need to have some kind of very explicit rules for it to be treated as just another piece of evidence. It’s striking, but if you look at the period since the introduction of DNA evidence, the most common reason for false convictions is eyewitness testimony. And it’s not a general failing: people are very good at recognizing familiar faces, but we’re actually terrible at recognizing unfamiliar faces. In fact, our deep-net face-recognition algorithms are better than humans.
And 10 years ago, when we gave a lecture in psychology, we’d say, computers are much better at serial tasks, like solving an algebra problem, but if it’s pattern matching, like face recognition, we are much better. That’s just not true anymore. That’s history. But it is still the case that these algorithms make errors. And you need to know what kinds of errors they are and where we would expect them. They cannot be treated as, OK, here is the real truth.
Hal Weitzman: And I suppose the use of the algorithm could compound those biases you talked about in the beginning. If, sort of, you know, a machine confirms what I always knew about this particular person.
Alexander Todorov: Certainly. I mean, once you have a bias in place, it’s easy to perpetuate it. Imagine an economic transaction with two partners in a cooperation, where if I trust you, I invest in you. The majority of people actually reciprocate and return the investment. We know this from experimental games in economics. Well, what happens if I have two potential partners, and one looks trustworthy to me, yet I’m not so sure? Well, I invest in the trustworthy-looking person, he or she reciprocates, and I get this self-confirming evidence: I had the right impression, and here is the behavior that confirms it. But, ultimately, this is the predominant behavior. Everybody does this in this situation. So in a sense, you never test your intuition. I never learn anything about the other person, who might be much more trustworthy and more cooperative, because I never gave them a chance. I never learn about the distribution of their behaviors, and I get confirming feedback, false confirming feedback, that my initial impression is actually accurate.
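The self-confirming loop Todorov describes can be simulated directly: give both partners the same true reciprocation rate, but let the first impression decide who ever gets invested in (the 0.8 rate and the round count are illustrative assumptions, not figures from the discussion):

```python
# Toy simulation of self-confirming first impressions in a trust game.
# Both partners reciprocate at the same true rate, but the investor only
# ever invests in the one who *looks* trustworthy, so the impression is
# never tested against the other partner.
import random

random.seed(1)
TRUE_RECIPROCATION_RATE = 0.8   # identical for both partners (assumed)

observations = {"looks_trustworthy": [], "looks_untrustworthy": []}
for _ in range(1000):
    chosen = "looks_trustworthy"             # the impression drives every choice
    reciprocated = random.random() < TRUE_RECIPROCATION_RATE
    observations[chosen].append(reciprocated)

rate_seen = sum(observations["looks_trustworthy"]) / len(observations["looks_trustworthy"])
print(f"observed reciprocation from the 'trustworthy' face: {rate_seen:.0%}")
print(f"observations of the 'untrustworthy' face: {len(observations['looks_untrustworthy'])}")
```

The investor walks away with strong “evidence” for the impression while holding zero observations of the alternative partner, so the belief can never be falsified.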
Hal Weitzman: Thank God I didn’t work with him.
Alexander Todorov: That’s right.
Hal Weitzman: Yeah, right, ‘cause I knew he didn’t look right.
Ben Zhao: So, Alex, I was actually curious, because you just talked about some of the human biases and how frequently we are subject to them. And black-box machine-learning models are obviously problematic, especially the bigger they get. Is there any validity to the argument that, since we know humans are inherently flawed and biased, why not, in essence, as you say, outsource it to the other biased model? At least in those circumstances, there are some tools, not a whole lot, but some tools that can quantify the level of bias and perhaps point out its direction. Whereas with human biases, not only is it in many cases very difficult to understand the bias and where it’s coming from, but also, when you do point it out, people get very defensive, and there’s a natural reaction to fight back.
Alexander Todorov: Yeah, this is a great point, and I agree. In fact, even before the sophisticated algorithms we are talking about here, there are some 50 years of research in psychology showing that you can take very simple algorithms, like a regression model, something that says, OK, I have five predictors, I just wanna weigh them, and I may weigh them equally, and compare its accuracy with the accuracy of a human judge. Often, as with clinical psychologists, you end up with the computer algorithms being better on average, right? And, in fact, you can build a model of the judge. I can say, tell me what’s important to you, and then I can build a model of you, and the model of you is better than you. And that’s true because the model is consistent. It doesn’t care. It doesn’t get hungry at 11 a.m., right? It doesn’t get tired. It doesn’t get upset. So the model is consistent. But the difference is that in the past, you knew exactly what the model was: here’s a simple model, I have 10 predictors, and I know exactly what the weights are. So you could say, well, maybe this particular predictor is actually using race in an inappropriate way, so I need to rethink everything. And I think now we have this layer of complexity where you just don’t know. But, on the positive side, there might be algorithms that actually can avoid some of these biases. I mean, the big issue is how you build these sorts of algorithms that can actually get past the bad biases and lead to better decisions. It’s just that it’s much harder to pinpoint, oh, that’s what this algorithm is doing. And it’s just a very different world than 20 years ago.
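The “model of the judge” result Todorov summarizes, associated with Dawes’s work on improper linear models, can be sketched with a toy simulation: the judge and the model share the same equal-weights policy, but the judge adds random inconsistency (all distributions and noise levels here are invented for illustration):

```python
# Toy version of the "model of the judge" result: an equal-weights linear
# model beats the human judge it was built from, because the model applies
# the judge's own policy consistently while the judge adds random
# inconsistency from case to case.
import random

random.seed(2)
n_cases, n_predictors = 2000, 5

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

cases = [[random.gauss(0, 1) for _ in range(n_predictors)] for _ in range(n_cases)]
outcome = [sum(c) + random.gauss(0, 2) for c in cases]   # true outcome + world noise
judge = [sum(c) + random.gauss(0, 2) for c in cases]     # same policy + inconsistency
model = [sum(c) for c in cases]                          # equal-weights model of the judge

print(f"judge vs outcome: r = {pearson(judge, outcome):.2f}")
print(f"model vs outcome: r = {pearson(model, outcome):.2f}")
```

The model wins not because its weights are better, they are identical to the judge’s, but because it applies them the same way every time.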
Wilma A. Bainbridge: Yeah, and to add to that point, one important consideration is what we're training these algorithms with. Because, ultimately, it's still us, as humans, choosing what images to feed into the models, and often the images we're choosing are also biased. If we're using prior photographs of criminals, then there's a bias built in by the legal decisions we've made about those people in the past too. So it's tricky to come up with a totally unbiased set of data to create these models from.
Hal Weitzman: OK, but we were starting to get to a more positive note, because it can get very dystopian very fast here. There are some positive uses: the more we know about facial imaging and facial recognition, the more we can use it for positive ends. Tell us, Wilma Bainbridge, about some of the more positive aspects of this.
Wilma A. Bainbridge: Yeah, so going back to the example that Alex brought up about eyewitness testimony, these sorts of models can also help us counteract these biases in some ways. For example, let's say you have a suspect in a crime who turns out to be totally innocent, but they have a face that causes a lot of false memories. Witnesses will tend to say, oh, I did see that face at the crime, just because that face causes a lot of false memories. 'Cause we've found in our work that there are faces that cause lots of false memories as well as faces that cause accurate memories. So if you know this in advance about that face—
Hal Weitzman: Just explain what that means for a second. There are certain faces that will lead the viewer to think they have seen that face before, in some context, even though they haven't.
Wilma A. Bainbridge: Definitely, yeah. You might have that sense if you see someone on the street and feel like you've met them before, but you never have; there are some faces that cause that consistently for many people. And it's not just faces. There are some images, in general, that cause this feeling that you've seen them before when you actually never have. This can be really problematic in things like eyewitness testimony, where you're really relying on an accurate memory from that witness. If they're shown a face that causes many false memories, they might incorrectly identify that face. So one great thing is that, using these sorts of models, if we can identify the sort of bias that might exist in a suspect's face to begin with, then we can choose members of a lineup, or other photographs in a lineup of photographs, that also cause a lot of false memories, so that this sort of bias is equal across all of the different faces. Then, if a witness picks a photo, it's more likely because of an accurate memory for that person, and not driven by a false memory for just that one face. So you can balance out these biases across people.
Hal Weitzman: Ben Zhao, let me ask you the same question. I mean, there are positive things that come out of this technology. We don’t have to be necessarily scared of it.
Ben Zhao: Right, right, yeah, of course. I think, given the ubiquity of these types of algorithms and models, a lot of things will be made much easier, whether it's authentication or just simplifying your daily passwords, for example, which is oftentimes a challenging task. For certain types of authentication purposes, simple facial recognition will make life a lot easier. And in certain contexts, on your phone, for example, that could be reasonable security, in the sense that you would have to compromise someone's phone before compromising their facial recognition for that particular app. So that does add some help in terms of security, and it certainly simplifies some of the other challenges we have with implementing more secure policies.
Hal Weitzman: Not to mention it could help you organize the many millions of photos that we all now have, either on our phones, or somewhere in the cloud.
Ben Zhao: Many of which we’d like to forget, but yes. (all laughing)
Hal Weitzman: So Alex Todorov, maybe I can ask you a bigger question, which is: How do we think about the positives and negatives of facial imaging generally, and of facial recognition software in particular?
Alexander Todorov: Well, it's a complex question. Clearly, the technology is moving forward, and it's just going to get better and better. One of the important things, and Wilma brought up this point, is that there are always two factors. One is the power of the computational algorithms, which keeps growing and growing. The other is what you train the system on, which is almost as essential. If you introduce biases in the training set, they're going to be there, and you're going to perpetuate them. So you have to really think about this. Is awareness enough? Absolutely not. There's a lot of work in psychology about awareness of bias, stereotypes, and prejudice, and it's not sufficient. It is the first step, but you need to have incentives. You need to have specific criteria. Much more needs to be introduced in order for it to work. It's not sufficient to say, well, I'm aware of that. And, often, people will say, well, I'm aware, but it's not me. We call it the bias blind spot in social psychology: everybody's biased, but I'm not. So awareness alone is not enough. You need other protections to prevent biases popping up in these algorithms. And this was the problem for some people, especially in the beginning; I think people are much more sophisticated now. They would say, oh, look, it's just math. Here's a set of inputs, a set of numbers, and it tells you whether a person has a criminal inclination or not. But you look a little bit at how the algorithm was trained, and a bunch of things suddenly come up that explain its amazing performance, not because there's no bias, but because bias has been incorporated inside.
Hal Weitzman: Sure, so algorithms can certainly exacerbate and aggravate biases when they reflect the inputs, as you say, that we put into them.
Alexander Todorov: I mean, it starts from the researchers, right? It's the human researchers. And, often, this might be introduced completely unintentionally. Even in everyday life, it's not as if most people are blatantly prejudiced. Most of the time, people wanna do the right thing, but the information is ambiguous, and a bias can nudge you one way or another. That's the situation where biases really play a role. Similarly, when you're training an algorithm, if you haven't given it enough attention, if you don't have a sufficient number of examples representing parts of the real world that are important but missing in your model, those gaps are going to be reproduced in the decisions of the model.
Ben Zhao: One of the things, Alex, you were talking about awareness and its role, and I mentally draw an analogy to some of the changes that we've seen in privacy. For a long time, users were not aware of the value of privacy and what privacy leaks mean to them. So you had goofiness back in the early days of Facebook and so on, where people were selling photos of themselves for pennies or whatever. But then you saw more awareness. And, for a long time, we were pessimistic about the role of companies and how they would handle privacy, because it was always seen as a limitation, a drawback that you would have to penalize your revenue stream for, right? And then, lo and behold, some number of years later, Apple takes this role and turns it around and says, OK, now we understand that users value privacy, and therefore we're gonna offer it as a feature rather than a bug. And that, I think, offers some hope, because I look at that and say, if we are aware of the role of ethics and bias in deep-learning models and how they're misused, then there's hope that once we, as a society, understand more of it and demand it from big tech, they will turn around and make it a priority, thereby helping their revenue but also aligning their interests with ours.
Alexander Todorov: Yeah, I totally agree that awareness is the first step. If you don't have it, nothing's gonna happen. If you have it but you stop there, again, nothing is gonna happen. And that's a good example of a case where, in fact, something happened. So, yeah, I agree.
Hal Weitzman: Well, on that somewhat optimistic note, I'm gonna wrap it up, 'cause our time, unfortunately, is up. My thanks to our panel, Alexander Todorov, Wilma Bainbridge, and Ben Zhao. For more research, analysis, and commentary, visit us online at chicagobooth.edu/review. And join us again next time for "The Big Question." Goodbye. (light piano music)