How do humans recognise a face from any given angle? That is a puzzle scientists at MIT have been trying to solve, and as a step towards that goal they have developed a new machine learning system that effectively models what could be going on inside our brains when we recognise faces.
Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), describes the work as a study of social intelligence, an important part of human intelligence. The paper, published in Current Biology, sets out a theory of how our visual system learns to compute invariant descriptions of an object, and applies it to faces: the system can recognise a face it has seen only once, from a single viewpoint and scale, when that face appears at other viewpoints. For example, a machine trained to see a face only at 90 degrees nevertheless learns to recognise the same face when it is shown at a 45-degree rotation.
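The invariance-by-pooling idea behind this kind of theory can be illustrated with a toy sketch. This is not the paper's actual model: here the transformation is a circular shift of a vector (a hypothetical stand-in for face rotation), the `signature` function and the template set are illustrative names, and the point is only that pooling a new face's similarity to stored transformed views of *template* faces yields a description that does not change when the new face is transformed, even though that face was "seen" only once.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy dimension of a "face" vector

# The transformation group: all circular shifts of a vector.
# This stands in for the set of rotated views of a face.
def all_views(t):
    return np.stack([np.roll(t, s) for s in range(len(t))])

# Template faces whose transformed views were memorised "during development".
templates = rng.standard_normal((4, D))
stored = np.stack([all_views(t) for t in templates])  # shape (4 templates, D views, D)

def signature(x):
    # One number per template: pool (max) that template's stored views'
    # responses (dot products) to the input x.
    return np.array([(views @ x).max() for views in stored])

new_face = rng.standard_normal(D)   # a face "seen" only at this one pose
shifted = np.roll(new_face, 3)      # the same face at a different pose

# The pooled signature is identical for every pose of the new face.
assert np.allclose(signature(new_face), signature(shifted))
```

Because the pooling runs over the whole set of shifts, shifting the input only permutes which stored view wins the max, leaving the signature unchanged; that is the sense in which one stored experience of template faces buys invariance for faces never seen before.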
Researchers say that their system, which is based on a computational model of the human brain's face-recognition mechanism, captures aspects of human neurology that previous models have missed. This, the scientists say, is reflected in the system's ability to spontaneously develop an intermediate processing step that represents a face's degree of rotation.
The scientists are not claiming to understand completely what goes on inside our brains when we process visual input to recognise a face, but given the evidence gathered through their machine learning system, it is reasonable to assume that they are on the right track.
Poggio has been working in this area for years and has long believed that the brain must produce “invariant” representations of faces and other objects that are indifferent to the faces’ or objects’ orientation in space, their distance from the viewer, or their location in the visual field.
In 2010, Winrich Freiwald, an associate professor at Rockefeller University, published a study describing the neuroanatomy of the macaque monkey's face-recognition mechanism in much greater detail. The study showed how information from the monkey's optic nerves passes through a series of brain regions, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face's orientation, an invariant representation.
Neurons in an intermediate region, however, appear to be "mirror symmetric": sensitive to the angle of face rotation but not to its direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it is rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it is rotated 30 degrees, and so on.
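The three response patterns described above can be sketched as a toy model. This is an illustrative caricature, not the researchers' system: the Gaussian tuning curves, the preferred angles, and the function names (`region1`, `region2`, `region3`) are all assumptions made for the sketch. It only shows how pooling a unit's response with its mirror image produces mirror symmetry, and how pooling over all views produces invariance.

```python
import numpy as np

# Preferred (signed) rotation angles for the view-specific units, in degrees.
PREFS = np.array([-90, -45, -30, 0, 30, 45, 90])

def tuning(pref, angle, width=15.0):
    # Gaussian tuning curve: response of a unit preferring `pref` to `angle`.
    return np.exp(-((pref - angle) ** 2) / (2 * width ** 2))

def region1(angle):
    # First region: view-specific units, one per signed preferred angle.
    return tuning(PREFS, angle)

def region2(angle):
    # Intermediate region: each unit pools a preferred angle with its mirror
    # image, so it responds to the magnitude of rotation in either direction.
    prefs = np.array([0, 30, 45, 90])
    return np.maximum(tuning(prefs, angle), tuning(-prefs, angle))

def region3(angle):
    # Final region: a single unit pools over all view-specific responses,
    # firing the same way for any orientation of the face.
    return region1(angle).max()

# +45 and -45 degrees drive DIFFERENT first-region units...
assert region1(45).argmax() != region1(-45).argmax()
# ...but the SAME intermediate unit (the one preferring |45| degrees)...
assert region2(45).argmax() == region2(-45).argmax()
# ...and the final unit responds identically to either rotation.
assert np.isclose(region3(45), region3(-45))
```

The asserts trace exactly the progression described in the text: view-specific firing, then mirror-symmetric firing, then an orientation-invariant response.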
This is the behavior that the researchers’ machine-learning system reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”