How our brain recognises faces decoded

BOSTON, Dec 5:
MIT researchers have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.
The researchers designed a machine-learning system that implemented their model, and they trained it to recognise particular faces by feeding it a battery of sample images.
They found that the trained system included an intermediate processing step that represented a face’s degree of rotation from centre (45 degrees, for example) but not the direction of that rotation, left or right.
This property was not built into the system; it emerged spontaneously from the training process.
Yet it duplicates an experimentally observed feature of the primate face-processing mechanism, which the researchers take as an indication that their system and the brain are doing something similar.
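As a toy illustration only (not drawn from the study itself), one simple way a unit's response can come to depend on how far a face is turned but not on which way is to pool its similarity to a stored view together with that view's mirror image. The sketch below, in Python, assumes faces and templates are 2D image arrays; the pooling rule and similarity measure are our own choices for illustration.

```python
import numpy as np

def similarity(face, template):
    """Toy similarity: cosine of the angle between flattened images."""
    f, t = face.ravel(), template.ravel()
    return float(f @ t) / (np.linalg.norm(f) * np.linalg.norm(t) + 1e-12)

def pooled_response(face, template):
    """Pool similarity to a stored view and to its left-right mirror.

    Because the two templates are reflections of each other, the pooled
    response (for a roughly symmetric face) depends on how far the face
    is turned from centre but not on whether it is turned left or right.
    Hypothetical illustration, not the MIT group's code.
    """
    mirrored = template[:, ::-1]   # flip columns: the mirror-image view
    return max(similarity(face, template), similarity(face, mirrored))
```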
“This is not a proof that we understand what is going on,” said Tomaso Poggio, professor at Massachusetts Institute of Technology (MIT) in the US.
“Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it is strong evidence that we are on the right track,” said Poggio.
The new study includes a mathematical proof that the particular type of machine-learning system the researchers use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediate representations that are indifferent to angle of rotation.
The machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain.
A neural network consists of very simple processing units, or nodes, arranged into layers, with each node densely connected to the nodes in the layers above and below.
Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on.
During training, the output of the top layer is correlated with some classification criterion – for instance correctly determining whether a given image depicts a particular person.
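To make that layered picture concrete, here is a minimal sketch of such a network in Python with NumPy. The layer sizes, activation functions and untrained random weights are assumptions made for this sketch, not details taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal feedforward network of the kind described above: simple
# units arranged in layers, each layer densely connected to the next.
layer_sizes = [64 * 64, 256, 64, 1]          # flattened image -> ... -> score

weights = [rng.normal(0.0, 0.01, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Feed data into the bottom layer and pass it upward, layer by layer."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)           # each hidden layer transforms its input
    # Top layer: a score trained against a classification criterion,
    # e.g. "does this image depict person X?"
    return 1.0 / (1.0 + np.exp(-(x @ weights[-1])))

# Example: one 64x64 image, flattened into a 4096-element vector.
image = rng.random((1, 64 * 64))
print(forward(image))                        # untrained score near 0.5
```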
In earlier work, Poggio’s group had trained neural networks to produce invariant representations by, essentially, memorising a representative set of orientations for just a handful of faces, which Poggio calls “templates.”
When the network was presented with a new face, it would measure its difference from these templates.
That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer.
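The following Python sketch gives one schematic reading of that matching step; it is not the group's code, and the dictionary of (person, angle) templates and the Euclidean distance measure are assumptions made for illustration.

```python
import numpy as np

def template_signature(face, templates):
    """Compare a new face with stored template faces at several orientations.

    `templates` maps a (person, angle) pair to an image (a hypothetical
    layout chosen for this sketch). The difference is smallest for
    templates whose orientation matches that of the new face, and the
    whole vector of differences serves as the face's signature.
    """
    f = face.ravel()
    f = f / np.linalg.norm(f)
    signature = {}
    for (person, angle), tmpl in templates.items():
        t = tmpl.ravel()
        t = t / np.linalg.norm(t)
        signature[(person, angle)] = float(np.linalg.norm(f - t))
    return signature
```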
The measured difference between the new face and the stored faces gives the new face a kind of identifying signature.

The study appears in the journal Computational Biology. (PTI)
