Washington: A team of neuroscientists from the Massachusetts Institute of Technology (MIT) has found that one of the latest generation of computer networks - called "deep neural networks" - matches the primate brain at recognising objects.
"This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, lead to new ways to repair visual dysfunction," said Charles Cadieu from MIT's McGovern Institute for Brain Research and the paper's lead author.
For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as object recognition, which the human brain performs quickly and accurately.
Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance.
"The new model encapsulates our current best understanding as to what is going on in this previously mysterious portion of the brain," said James DiCarlo, professor of neuroscience and head of MIT's department of brain and cognitive sciences.
For this study, the team of researchers first measured the brain's object recognition ability by recording neural activity in animals as they viewed images of objects.
This allowed them to see the neural representation - the population of neurons that respond - for every object that the animals looked at.
The researchers then compared this with representations created by the "deep neural networks", which consist of a matrix of numbers produced by each computational element in the system.
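The article does not spell out how the two kinds of representation were compared. One common approach in this line of research is representational similarity analysis: build a matrix of pairwise dissimilarities between objects for the neural data and for the network's activations, then correlate the two matrices. The sketch below uses synthetic data, and all variable and function names are hypothetical, not taken from the study:

```python
import numpy as np

# Synthetic stand-ins: rows are objects, columns are measurement channels.
# "neural" mimics recorded neuron responses; "model" mimics the matrix of
# numbers produced by a deep network's units for the same objects.
rng = np.random.default_rng(0)
neural = rng.normal(size=(50, 100))            # 50 objects x 100 neurons
model = neural @ rng.normal(size=(100, 256))   # 50 objects x 256 model units

def dissimilarity_matrix(responses):
    """1 - Pearson correlation between each pair of object representations."""
    return 1.0 - np.corrcoef(responses)

def representation_similarity(a, b):
    """Correlate the upper triangles of the two dissimilarity matrices."""
    iu = np.triu_indices(a.shape[0], k=1)
    return np.corrcoef(dissimilarity_matrix(a)[iu],
                       dissimilarity_matrix(b)[iu])[0, 1]

score = representation_similarity(neural, model)
```

A score near 1 means the network groups and separates objects much the way the neural population does, which is the sense in which a model can be said to "match" the brain's representation.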
"Because these networks are based on neuroscientists' current understanding of how the brain performs object recognition, the success of the latest networks suggest that they have a fairly accurate grasp of how object recognition works," DiCarlo added.
DiCarlo's lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognising three-dimensional forms.
The paper appeared in the journal PLoS Computational Biology.