To study the automatic judgments people make when assessing facial traits, scientists need faces to show their lab participants. Traditionally, researchers who study impressions have taken one of two approaches: they use real photographs, which limit how much a face's features can be manipulated, or artificially generated faces, which don't look very real.
Scientists from Princeton, the University of Chicago, and the Stevens Institute of Technology offer a new approach. Combining deep generative image models with more than 1 million facial judgments of over 1,000 images, they have created face models that allow researchers to generate synthetic but photorealistic and demographically diverse faces that can be tuned along sets of perceived attributes, such as age or weight, and even evoke more subjective judgments such as perceived trustworthiness or perceived intelligence.
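The tuning the researchers describe is characteristic of latent-space editing in deep generative models: each perceived attribute corresponds to a direction in the model's latent space, and sliding a face's latent vector along that direction changes how the generated face is judged. The sketch below illustrates only this general idea; the names (`trust_direction`, `tune`) and the dimensionality are illustrative assumptions, not the researchers' actual code or API.

```python
import numpy as np

# Minimal sketch of attribute tuning in a generative model's latent
# space. Assumption: a judgment model has supplied a unit direction
# along which a perceived attribute (e.g., trustworthiness) increases.
rng = np.random.default_rng(0)
latent_dim = 512  # typical for StyleGAN-like generators (assumed)

face_latent = rng.standard_normal(latent_dim)      # one synthetic face
trust_direction = rng.standard_normal(latent_dim)  # stand-in attribute axis
trust_direction /= np.linalg.norm(trust_direction)

def tune(latent, direction, strength):
    """Shift a latent vector by `strength` along an attribute direction."""
    return latent + strength * direction

# Nudge the same face toward or away from perceived trustworthiness;
# a generator network (not shown) would render each latent as an image.
more_trustworthy = tune(face_latent, trust_direction, 2.0)
less_trustworthy = tune(face_latent, trust_direction, -2.0)

# The edit moves the latent exactly `strength` units along the axis,
# leaving everything orthogonal to the attribute untouched.
print(round(float(np.linalg.norm(more_trustworthy - face_latent)), 3))
```

Because the direction is a unit vector, the `strength` parameter acts as a single dial for the attribute, which is what lets researchers generate controlled stimulus sets that vary one perceived trait at a time.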
“When we make an API (application programming interface) available to the scientific community, they’ll have a lot more power over the kinds of images they use. And it will open a whole new set of questions that were never possible to answer before,” says Chicago Booth postdoctoral scholar Stefan Uddenberg, a researcher on the project. The same technology could have commercial use too, as it could potentially allow photographers, ad agencies, and countless others to identify which of their face photos are likely to be considered trustworthy or smart, for example.
These models were born of the researchers' frustration at never having enough faces, or the right type of faces, for study, Uddenberg says. Building new face databases is expensive, and the artificially generated faces researchers often use instead don't necessarily convince anyone that they're real. "They look like bald heads on black backgrounds, like mannequin heads," Uddenberg says. The goal was to create easily transformed images that look like actual photos.