11 Jan 2017
World innovation news
Information and Communications Technologies
New Artificial Intelligence for Facial Recognition
Like RoboCop, we may soon be able to quickly identify faces in a crowd. A report entitled “View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation”, written by Joel Z. Leibo, Qianli Liao, Fabio Anselmi, Winrich Freiwald and Tomaso Poggio (researchers affiliated with the Massachusetts Institute of Technology, MIT) and published on December 1, 2016, in the journal Current Biology, presents a computational model of the human facial recognition process.
The researchers developed a learning system based on a neurological model of visual information processing and trained it to recognize individual faces by feeding it a bank of pictures. They found that the system developed an intermediate representation, never explicitly programmed, that emerged spontaneously from the learning process: it can represent a face rotated about 45° from center, in either direction (left or right).
A cross-disciplinary research framework
This initially unplanned step mimics a characteristic that had been experimentally observed in the face recognition process of primates. The researchers view this as an indication that their system and the brain follow the same processes. Tomaso Poggio, Professor of Brain and Cognitive Sciences at MIT and Director of the Center for Brains, Minds and Machines (CBMM), emphasizes that models built from biological systems are merely caricatures (stylizations) of reality, and that it is difficult to draw scientific conclusions from behavior that emerges during learning. Furthermore, in a new article, the researchers prove mathematically that this particular type of machine learning system, intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably produce such intermediate representations.
Poggio adds that the latest article illustrates the dual scope of their research: on the one hand, advances at the intersection of machine learning and computer science; on the other, a deeper understanding of the neurophysiology of primate cognition. Indeed, their research at the CBMM also focuses on the “algorithms” used by the brain and the neurophysiological circuits that implement them.
How does facial recognition work in primates?
Poggio has long believed that the brain produces “invariant” representations of faces and other objects, that is, representations independent of the objects' spatial orientation, their distance from the observer, or their location in the visual field. The brain's ability to extract the visual information of isolated objects by factoring out these spatial variables was the subject of the article entitled “The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex”, published in the journal PLOS Computational Biology in October 2015. Functional magnetic resonance imaging (fMRI) scans performed on human and monkey brains also support this view. Indeed, in 2010, Freiwald published a detailed study describing the neuroanatomy of the facial recognition mechanism of macaque monkeys. He showed that information from the monkey's optic nerves passes through a series of brain regions whose responses differ according to the orientation of the observed face. Neurons in the first region fire only in response to certain specific facial orientations, while neurons in the last region respond independently of orientation (the invariant representation).
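The idea of an orientation-invariant representation can be sketched in a few lines of Python. This is a toy illustration, not the authors' model; the Gaussian tuning curves, the chosen angles, and the use of max-pooling are all assumptions made for the sketch:

```python
import numpy as np

# Toy invariance-by-pooling: early units are tuned to specific head
# orientations, and a later stage pools over them (here with max),
# yielding a response that no longer depends on the face's orientation.

angles = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])  # units' preferred orientations

def tuned_responses(face_strength, face_angle):
    """Orientation-tuned units: Gaussian tuning around each unit's
    preferred angle, scaled by how strongly the face drives the units."""
    return face_strength * np.exp(-((angles - face_angle) ** 2) / (2 * 30.0 ** 2))

def invariant_response(face_strength, face_angle):
    """Max-pooling over the tuned units discards orientation."""
    return tuned_responses(face_strength, face_angle).max()

# The same face seen at different orientations gives the same pooled response.
r_left = invariant_response(1.0, -45.0)
r_right = invariant_response(1.0, 45.0)
r_front = invariant_response(1.0, 0.0)
```

The pooled output is the same whether the face is frontal or turned 45° to either side, which is what “invariant representation” means in this context.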
Moreover, neurons in an intermediate region appear to be “mirror-symmetric”: they are sensitive to the angle of rotation of the face but not to its direction (left or right). A neuron that fires when a face is rotated 45° to the left fires just as strongly when it is rotated 45° to the right. All three regions lie within the face-processing areas of the visual cortex.
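The mirror-symmetric tuning described above can be illustrated with a hypothetical unit that responds to the magnitude of the rotation angle but not to its sign. This is a sketch, not the recorded data; the Gaussian tuning shape and its parameters are assumptions:

```python
import numpy as np

# Hypothetical mirror-symmetric unit: tuned to how far the face is rotated,
# blind to whether the rotation is to the left or to the right.

def mirror_symmetric_response(angle_deg, preferred=45.0, width=20.0):
    # Using abs(angle_deg) makes +45 deg and -45 deg indistinguishable.
    return float(np.exp(-((abs(angle_deg) - preferred) ** 2) / (2 * width ** 2)))

r_left = mirror_symmetric_response(-45.0)   # face turned 45 deg left
r_right = mirror_symmetric_response(45.0)   # face turned 45 deg right
```

Because the unit sees only the absolute angle, the two responses are identical, mimicking the symmetry Freiwald observed in the intermediate face patch.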
Integration of Hebbian learning into the system
The researchers' machine learning system is termed a neural network because its architecture roughly resembles that of the human brain. The first machine was trained to produce invariant representations by memorizing a set of orientations of some model faces. Poggio was dissatisfied with the result because it deviated somewhat from the neurophysiological functioning of the brain. To improve it, he integrated Hebbian learning into a new machine. In essence, connections between modules of the machine that are repeatedly active together are strengthened, much as happens between neurons, improving their memory function and optimizing information processing. As expected, this approach generated invariant representations. In addition, as mentioned earlier, layers of the network spontaneously reproduced the mirror-symmetric responses of the intermediate regions of the brain's visual processing pathway.
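The Hebbian principle (“cells that fire together wire together”) can be sketched as a simple weight update: the connection between two units is strengthened in proportion to their joint activity. Below is a minimal numpy illustration; the learning rate, network size, and random inputs are assumptions for the sketch, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                                # learning rate (assumed)

inputs = rng.standard_normal((100, 8))    # 100 presentations of 8 input units
w = 0.1 * rng.standard_normal(8)          # weights onto a single output unit
w0 = w.copy()                             # keep the initial weights for comparison

for x in inputs:
    y = w @ x                             # output unit's activity
    w += eta * y * x                      # Hebbian update: delta_w = eta * y * x

# Each update adds eta * y**2 >= 0 to w @ w, so the weight norm only grows;
# practical systems add normalization (e.g., Oja's rule) to keep it bounded.
```

The update has no supervision signal; structure in the weights arises purely from correlations in the inputs, which is why representations like the mirror-symmetric layer can emerge spontaneously rather than being programmed in.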