Deep neural networks 'see' same things, but differently from humans: Study
A team of researchers at the Centre for Neuroscience (CNS), Indian Institute of Science (IISc), recently conducted a study comparing the visual perception of deep neural networks with that of humans.
Highlights
- An IISc study uncovered how deep neural networks 'see' things
- Researchers compared visual perception of deep neural networks to that of humans
- Deep networks exhibited similarities and dissimilarities from human brain
New Delhi: Deep neural networks – a technology that has been in development for over a decade and provides crucial insights into how human beings perceive things – have evolved a bit further as researchers uncovered some fascinating new findings.
The team, based at the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc), compared the visual perception of deep neural networks with that of humans.
They found that deep networks are capable of seeing the very objects humans see – they just see them 'differently'.
What are deep neural networks?
Deep neural networks are machine learning systems inspired by the network of brain cells or neurons in the human brain, which can be trained to perform specific tasks.
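To make the definition above concrete, here is a minimal sketch of such a network: a few layers of artificial "neurons" (weighted sums passed through a nonlinearity) trained by gradient descent on a toy task. This is an illustrative example only, not the networks used in the IISc study; the XOR task and all parameter choices are assumptions for the sketch.

```python
import numpy as np

# A tiny two-layer neural network, trained by gradient descent.
rng = np.random.default_rng(0)

# Toy task: XOR, which a single linear layer cannot solve but a
# network with one hidden layer of "neurons" can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden "neurons"
    return h, sigmoid(h @ W2 + b2)    # network's output

for step in range(5000):
    h, out = forward(X)
    grad_logit = out - y                          # d(cross-entropy)/d(logit)
    grad_W2 = h.T @ grad_logit
    grad_h = (grad_logit @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    lr = 0.1
    W2 -= lr * grad_W2; b2 -= lr * grad_logit.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(axis=0)

_, out = forward(X)
print(np.round(out.ravel(), 2))
```

"Deep" networks simply stack many more such layers, which lets them learn far richer visual features than this two-layer toy.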
These networks have played a pivotal role in helping scientists understand how our brains perceive the things that we see.
Although deep networks have evolved significantly over the past decade, they are still nowhere close to performing as well as the human brain in perceiving visual cues.
How do deep networks act differently from humans?
A team led by SP Arun, Associate Professor at CNS, studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain.
“Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences,” said Arun, who is the senior author of the study.
“Identifying these differences can push us closer to making these networks more brain-like,” he added.
Key findings of the study:
1. Deep networks exhibited the Thatcher effect, just as humans do. The Thatcher effect is a phenomenon in which humans find it easier to recognize local feature changes in an upright image, but find it difficult when the image is flipped upside-down.
2. Mirror confusion: To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks also show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
3. Another phenomenon peculiar to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. For example, when presented with an image of a face, humans first look at the face as a whole, and then focus on finer details like the eyes, nose, mouth, and so on.
“Surprisingly, neural networks showed a local advantage,” said Georgin Jacob, first author and Ph.D. student at CNS. This means that unlike the brain, the networks focus on the finer details of an image first.
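The mirror-confusion comparison in finding 2 can be sketched as a simple measurement: extract a network's features for an image and for its two mirror reflections, then compare the feature-space distances. The `features` function below is a hypothetical stand-in (one layer of random convolutions), not the trained networks from the study, so this sketch only illustrates the measurement, not the result.

```python
import numpy as np

def features(img, filters):
    """Hypothetical stand-in for a deep network's feature extractor:
    one layer of 3x3 convolutions, ReLU, and global average pooling.
    The actual study used trained deep networks."""
    H, W = img.shape
    feats = []
    for f in filters:
        resp = np.zeros((H - 2, W - 2))
        for i in range(H - 2):
            for j in range(W - 2):
                resp[i, j] = np.sum(img[i:i + 3, j:j + 3] * f)
        feats.append(np.maximum(resp, 0.0).mean())  # ReLU + pooling
    return np.array(feats)

rng = np.random.default_rng(1)
filters = rng.normal(size=(16, 3, 3))   # random filters, for illustration
img = rng.random((32, 32))              # stand-in for a natural image

f_orig = features(img, filters)
f_vert = features(np.fliplr(img), filters)  # reflection about the vertical axis
f_horz = features(np.flipud(img), filters)  # reflection about the horizontal axis

d_vert = np.linalg.norm(f_orig - f_vert)
d_horz = np.linalg.norm(f_orig - f_horz)
# Mirror confusion would show up as d_vert < d_horz on average over
# many images; a single random-filter example proves nothing by itself.
print(d_vert, d_horz)
```

The same distance-based recipe extends to the other findings: flip an image upside-down for the Thatcher effect, or compare responses to global versus local structure for the advantage effects.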
Therefore, even though these neural networks and the human brain carry out the same object recognition tasks, the steps the two follow are very different, the study concluded.