Scientists reveal the surprisingly simple math behind the brain’s visual perception of faces.
Family, friends, colleagues, casual acquaintances: how does the brain process and recognize the sea of faces we wade through every day? New research from Caltech reveals that the brain uses a simple and elegant mechanism to represent facial identity. The findings point toward a future in which brain activity could be monitored to reconstruct what a person sees.
The work was conducted in the laboratory of Doris Tsao (BS ’96), Professor of Biology, Leadership Chair and Director of the Tianqiao and Chrissy Chen Center for Systems Neuroscience, and Howard Hughes Medical Institute (HHMI) Investigator. A paper detailing the work appears in the June 1 issue of the journal Cell.
The central finding of the new work is that even though there are an infinite number of possible faces, the brain needs only about 200 neurons to uniquely encode any face, with each neuron encoding a specific dimension, or axis, of facial variability. In the same way that red, blue, and green light combine in different proportions to create every color on the spectrum, these roughly 200 neurons can combine in different ways to encode every possible face; the resulting range of faces is known as the face space.
Some of these neurons encode aspects of the skeletal shape of the face, such as the distance between the eyes, the shape of the hairline, or the width of the face. Others encode features of the face that are independent of its shape, such as the complexion, the musculature, or the color of the eyes and hair. Moreover, a neuron’s response scales with the strength of its feature; for example, a neuron might show its strongest response to a large inter-eye distance, an intermediate response to an average inter-eye distance, and a minimal response to a small inter-eye distance. However, single neurons are not mapped onto specific nameable features. Instead, each neuron codes a more abstract “direction in face space” that integrates different elementary features. By measuring where a face lies along each of these different directions, the brain can then perceive the identity of the face.
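This projection-style tuning can be sketched in a few lines of NumPy. Everything here is illustrative: the 50-dimensional space matches the dimensionality discussed later in the article, but the randomly chosen axis and the baseline and gain values are invented for the example, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "face space": each face is a point in a 50-dimensional
# space, and each neuron has a preferred axis (a unit vector) in it.
DIM = 50
preferred_axis = rng.normal(size=DIM)
preferred_axis /= np.linalg.norm(preferred_axis)

def neuron_response(face, axis):
    """Ramp-shaped tuning: the firing rate rises linearly with the
    face's projection onto the neuron's preferred axis."""
    baseline = 10.0  # spikes/s, illustrative value
    gain = 5.0       # spikes/s per unit of projection, illustrative value
    return baseline + gain * float(face @ axis)

# A face that lies farther along the axis drives a stronger response.
weak = 0.5 * preferred_axis
strong = 2.0 * preferred_axis
print(neuron_response(weak, preferred_axis), neuron_response(strong, preferred_axis))
```

The key design point is that the response depends only on the dot product `face @ axis`, so the neuron measures one abstract direction rather than one nameable feature.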
“This new study represents the culmination of almost two decades of research trying to crack the code of facial identity,” says Tsao. “It’s very exciting because our results show that this code is actually very simple.”
In 2003, Tsao and her collaborators determined that certain regions in the primate brain are most active when a monkey is viewing a face. The researchers called these regions face patches; the neurons inside, they named face cells. Over the past decade, research has revealed that different cells within these patches react to different facial features. For example, some cells respond only to faces with eyes, while others respond only to faces with hair.
“But these results were unsatisfying, as we were observing only a shadow of what each cell was truly encoding about faces,” says Tsao. “For example, we would change the shape of the eyes in a cartoon face and find that some cells would be sensitive to this change. But cells could be sensitive to many other changes that we hadn’t tested. Now, by characterizing the full selectivity of cells to faces drawn from a realistic face space, we have discovered the full code for realistic facial identity.”
Two critical pieces of evidence demonstrate that the researchers have cracked the full code for facial identity. First, once they knew which axis each cell encoded, the researchers were able to develop an algorithm that decoded novel faces from neural responses. Specifically, they could show a monkey a new face, measure the electrical activity of face cells in its brain, and recreate the face the monkey was seeing with striking accuracy.
Second, the researchers hypothesized that if each cell were essentially responsible for coding only a single axis in face space, each cell should respond identically to an infinite number of faces that look extremely different but all have the same projection onto that cell’s preferred axis. Indeed, Tsao and Le Chang, postdoctoral scholar and first author on the Cell paper, found this to be true.
“In linear algebra, you learn that if you project a 50-dimensional vector space onto a one-dimensional subspace, this mapping has a 49-dimensional null space,” Tsao says. “We were stunned that, deep in the brain’s visual system, the neurons are actually doing simple linear algebra. Each cell is literally taking a 50-dimensional vector space—face space—and projecting it onto a one-dimensional subspace. It was a revelation to see that each cell indeed has a 49-dimensional null space; this completely overturns the long-standing idea that single face cells are coding specific facial identities. Instead, what we’ve found is that these cells are beautifully simple linear projection machines.”
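The null-space property Tsao describes can be demonstrated directly: displacing a face by any vector orthogonal to a cell’s preferred axis (a 49-dimensional family of directions) leaves the projection, and hence the model response, unchanged. This is again a minimal sketch with a randomly chosen axis standing in for a real cell.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 50

# One cell's preferred axis (unit vector) in the 50-dimensional face space.
axis = rng.normal(size=DIM)
axis /= np.linalg.norm(axis)

face_a = rng.normal(size=DIM)

# Build a very different face with the same projection onto the axis:
# move face_a by a vector from the 49-dimensional null space, i.e. any
# direction orthogonal to the axis.
delta = rng.normal(size=DIM)
delta -= (delta @ axis) * axis  # remove the component along the axis
face_b = face_a + 10.0 * delta  # large displacement, very different face

proj_a = face_a @ axis
proj_b = face_b @ axis
print(proj_a, proj_b)  # identical projections -> identical cell response
```

Although `face_a` and `face_b` are far apart in face space, a cell that computes only this projection cannot tell them apart, which is exactly the 49-dimensional null space in the quote.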
“Our results could suggest new machine-learning algorithms for recognizing faces and provide new tasks for training networks,” adds Chang. “They give us a model for understanding how objects in general are coded within a large brain region. One can also imagine applications in forensics, where one could reconstruct the face of a criminal by analyzing a witness’s brain activity.”
The paper is titled “The Code for Facial Identity in the Primate Brain.” Funding was provided by the National Institutes of Health, HHMI, the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, and the Swartz Foundation.