How AI and neuroscience move each other forward

The combination of AI and brain science promises benefits for both fields







Chethan Pandarinath wants to give people with paralyzed limbs the ability to operate objects with a robotic arm as naturally as with their own. To achieve this goal, he has collected recordings of brain activity from people with paralysis. His hope, shared by many researchers, is that it is possible to identify the patterns of electrical activity in neurons that correspond to attempted hand movements, so that these instructions can then be passed to a prosthesis. In essence, this is mind-reading.



“It turns out these brain signals are very difficult to understand,” says Pandarinath, a biomedical engineer at the Georgia Institute of Technology in Atlanta. In search of a solution, he turned to AI. He used his recordings of brain activity to train an artificial neural network (a computing architecture loosely inspired by the brain) and set it the task of reproducing the data.



The recordings came from a small group of neurons in the brain, about 200 of the 10 to 100 million neurons that control the movement of the human arm. For such a small sample to be useful, the computer had to find the patterns that researchers call latent (hidden) factors, which govern the overall behavior of the recorded brain activity. The study revealed the temporal dynamics of the neural activity, yielding a more accurate set of instructions for hand movement than previous methods. “Now we can say, with millisecond precision, that the experimental animal is currently trying to move at this exact angle,” Pandarinath explains. “That is exactly what we need to know to control the robotic arm.”
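The article does not say which method was used, but the general idea of recovering a few shared hidden factors from many simultaneously recorded neurons can be sketched with principal component analysis. Everything below (neuron counts, factor counts, noise level) is synthetic data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a recording: 200 neurons over 1000 time bins,
# driven by only 3 shared hidden factors plus independent noise.
n_neurons, n_bins, n_factors = 200, 1000, 3
latents = rng.standard_normal((n_bins, n_factors))     # hidden dynamics
mixing = rng.standard_normal((n_factors, n_neurons))   # factor-to-neuron weights
activity = latents @ mixing + 0.1 * rng.standard_normal((n_bins, n_neurons))

# PCA via SVD: a handful of components capture almost all of the variance,
# revealing that the 200 channels share low-dimensional structure.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print(f"variance explained by 3 components: {explained[:3].sum():.3f}")
```

If the data really are driven by a few shared factors, the first few components dominate; the remaining components carry only noise, which is the signature researchers look for.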



His work is just one example of the growing interaction between AI and neuroscience. AI, with its ability to identify patterns in large, complex data sets, has made remarkable progress over the past decade, in part by emulating how the brain performs certain computations. Artificial neural networks, loosely modeled on the networks of neurons that make up the brain, have given computers the ability to distinguish an image of a cat from other images, to identify pedestrians accurately enough to drive autonomous vehicles, and to recognize and respond to speech. Now neuroscience is beginning to benefit from the power of AI, both as a model for developing and testing ideas about how the brain computes, and as a tool for processing the complex data sets produced in studies like Pandarinath's. “The technology has come full circle and is now being applied to understanding how the brain itself works,” he says [ Chethan Pandarinath Pub. ]. This cycle of mutual reinforcement is likely to continue. As AI gives neuroscientists insight into how the brain computes, their further research could lead to machines that take on more aspects of human intelligence.



It is natural for the two disciplines to come together, says Maneesh Sahani, a theoretical neuroscientist and machine-learning researcher at the Gatsby Computational Neuroscience Unit at University College London. “We are effectively studying the same thing. In one case, we ask how to solve this learning problem mathematically so that it can be implemented efficiently in a machine. In the other, we look at the sole existing proof that the problem can be solved, which is the brain” [ Maneesh Sahani Pub. ].



Working with data



AI methods are useful not only for creating models and generating ideas, but also as tools for processing data. “Neural data is very complex, and so we often use machine learning methods simply to find structure in it,” Sahani says. A key strength of machine learning is its ability to recognize patterns in huge datasets that may be too subtle or too deeply buried for people to detect.



Functional magnetic resonance imaging, for example, produces snapshots of activity across the whole brain at a resolution of 1-2 millimeters, roughly every second, potentially for hours. “The challenge of cognitive neuroscience is how to find the useful signal in images that are very, very large,” says Nicholas Turk-Browne, a cognitive neuroscientist at Yale University in New Haven, Connecticut. Turk-Browne leads one of several projects seeking fresh ideas at the intersection of data science and neuroscience [ Nicholas B Turk-Browne Pub. ].



Using machines to analyze such data speeds up research. “This is a huge change in how neuroscience research is done,” says David Sussillo, a computational neuroscientist on the Google Brain team in San Francisco, California. “Graduate students no longer need to do tedious routine work; they can focus on bigger questions. A lot can be automated, and the results are more accurate.”



Reproducing the senses



Building an artificial system that reproduces brain data is the approach taken by Daniel Yamins, a computational neuroscientist at the Wu Tsai Neurosciences Institute at Stanford University. In 2014, Yamins, who had recently received his doctorate and was then at the Massachusetts Institute of Technology in Cambridge, and his colleagues trained a neural network to predict the brain activity of a monkey as it recognized certain objects [ Yamins, DLK et al., 2014 ]. Object recognition in humans and monkeys is carried out by a brain system called the ventral visual stream, which has two main architectural features. First, it is retinotopic: the visual processing pathways in the brain are organized to mirror the spatial layout of the retina, and thus of the visual field. Second, the system is hierarchical: successive areas of the cortex perform increasingly complex tasks, from a layer that identifies only the contours of objects to higher layers that recognize a whole object, such as a car or a face. The details of how the higher layers work are unknown, but the result is that the brain can recognize an object in different positions, under different lighting conditions, at different apparent sizes due to distance, and even when it is partially hidden. Computers often fail at such challenges.



Yamins and his colleagues built their deep-learning neural network with the same retinotopic, hierarchical architecture as the brain and showed it thousands of images of 64 objects that varied in characteristics such as size and position. As the network learned to recognize the objects, it developed several possible patterns of neural activity. The researchers then compared these computational patterns with those recorded from the monkeys' neurons while they performed a similar task. It turned out that the versions of the network that were best at recognizing objects had activity patterns closest to those of the monkey brain. "We found that the neural structure is mimicked in the structure of the network," says Yamins. The researchers were able to match areas of their network to areas of the brain with nearly 70% accuracy.
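The study's actual comparison was more sophisticated, but the core logic of scoring how closely a model layer's responses match recorded neurons can be sketched with representational similarity analysis. All data below (neuron counts, the "brain-like" and "unrelated" layers) is fabricated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_images = 64  # as in the study: responses to images of 64 objects

# Fabricated data: 100 recorded neurons responding to the 64 images,
# one model layer that echoes the neural code, and one unrelated layer.
neural = rng.standard_normal((n_images, 100))
brainlike = neural + 0.5 * rng.standard_normal((n_images, 100))
unrelated = rng.standard_normal((n_images, 100))

def rdm(responses):
    # Representational dissimilarity matrix: 1 - correlation between images.
    return 1.0 - np.corrcoef(responses)

def rsa_score(a, b):
    # Correlate the upper triangles of the two RDMs.
    iu = np.triu_indices(len(a), k=1)
    return np.corrcoef(rdm(a)[iu], rdm(b)[iu])[0, 1]

print(f"brain-like layer: {rsa_score(neural, brainlike):.2f}")  # high
print(f"unrelated layer:  {rsa_score(neural, unrelated):.2f}")  # near zero
```

A layer whose response geometry resembles the neural recordings scores high; an unrelated layer scores near zero, which is how one can say a network "mimics" the brain's structure.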



The results confirmed that the architecture of the ventral visual stream is critical to the recognition abilities of the brain's visual system. In 2018, Yamins and colleagues achieved a similar feat with the auditory cortex, building a deep-learning neural network that could identify words and music genres in 2-second clips as accurately as humans [ Kell, AJE et al. 2018 ]. This helped the researchers identify which areas of the cerebral cortex perform speech recognition and which recognize music, a new step in understanding the brain's auditory system.



Neuroscientists are still far from understanding how the brain handles a task such as distinguishing jazz from rock, but machine learning gives them a way to build models with which to explore such questions. If researchers can create systems that behave similarly to the brain, says Yamins, their structure can suggest how the brain solves these problems. This matters because scientists often lack a working hypothesis for how the brain operates [ Daniel Yamins Pub. ].



Once researchers have constructed a hypothesis, the next step is to test it. By varying the parameters of an AI model, they can probe its activity and see which factors matter for a specific task. Ethical considerations limit how much researchers can interfere with a healthy human brain. For this reason, many recordings of human neural activity are made in the brains of people with epilepsy who are scheduled to have brain tissue removed: implanting electrodes is permissible in tissue that will be excised anyway. Experimental animals allow more invasive procedures, but some human behaviors, notably speech, cannot be reproduced in other species. AI systems that mimic human behavior and can be perturbed in any way without raising ethical issues give scientists an additional tool for studying how the brain works. Researchers could, for example, train a network to reproduce speech and then disrupt it to examine how the damage affects its performance. (Another interesting example of a neural network modeling a sense, the number sense, is described in this publication on Habr; translator's note.)



Shared questions



Computer science and cognitive science are grappling with some of the same big questions, and understanding how to answer them in either field could bring progress in both. One of these questions is how, exactly, learning happens. Neural networks mostly perform supervised learning. To learn image recognition, for example, they can be trained on images from the ImageNet database, which consists of more than 14 million photographs of objects that have been classified and annotated by people. During training, the network builds a statistical description of what the images sharing a label, such as "cat", have in common. When the network is presented with a new image, it checks it for similar numerical attributes; if there is a match, the image is declared to be a cat.
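The "statistical description" described above can be caricatured as a nearest-centroid classifier: training summarizes each label's examples by their mean, and a new example gets the label of the closest summary. The labels, features, and numbers below are invented purely to make the idea concrete:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for labeled image features: two classes ("cat", "dog"),
# each a cloud of feature vectors around its own mean.
means = {"cat": np.array([2.0, 0.0]), "dog": np.array([-2.0, 0.0])}
train = {label: mu + rng.standard_normal((100, 2)) for label, mu in means.items()}

# "Training": summarize what examples sharing a label have in common.
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def classify(x):
    # A new example gets the label of the closest class summary.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([1.5, 0.3])))  # prints "cat"
```

Real networks learn far richer descriptions than a single mean per class, but the supervised structure is the same: every training example arrives with a human-provided label.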



This is clearly not how children learn, says Tomaso Poggio, a computational neuroscientist at the Center for Brains, Minds and Machines at the Massachusetts Institute of Technology. “A child sees about a billion images in the first two years of life,” he says. But only a few of those images are labeled or named in any way. “We don't yet know how to handle this,” says Poggio, “how to build machines that learn mostly from unlabeled data.”



His laboratory is in the early stages of a project to make a neural network learn without supervision, finding patterns in unlabeled video. “We know that animals and humans can do this,” says Poggio. “The question is how.”



Yamins is tackling unsupervised learning by developing programs that behave like children at play, probing their surroundings through random interactions and gradually building an understanding of how the world works. In essence, he programs curiosity to motivate the computer to explore its environment, in the hope that new patterns of behavior will emerge.
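Yamins's actual systems are far richer, but the core idea of programmed curiosity, using the agent's own prediction error as the motivation to explore, can be sketched in a few lines. Everything below is an invented toy, not his code:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three "parts of the environment" the agent can probe; each returns a noisy
# but learnable outcome. Curiosity = prefer the action it predicts worst.
true_outcomes = np.array([1.0, -2.0, 0.5])

def observe(action):
    return true_outcomes[action] + 0.05 * rng.standard_normal()

predictions = np.zeros(3)        # the agent's world model
surprise = np.full(3, np.inf)    # last prediction error per action
visits = np.zeros(3, dtype=int)

for step in range(300):
    action = int(np.argmax(surprise))  # the most surprising action wins
    outcome = observe(action)
    error = abs(outcome - predictions[action])
    predictions[action] += 0.5 * (outcome - predictions[action])  # learn
    surprise[action] = error
    visits[action] += 1

print(visits)       # exploration spreads across all three actions
print(predictions)  # and the model converges toward the true outcomes
```

The agent needs no external reward: as its predictions improve, each part of the environment becomes boring and attention moves to whatever it still predicts poorly, so exploration and learning emerge together.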



Another unresolved question is whether certain aspects of intelligence are hard-wired by evolution. For example, people seem to be predisposed to recognizing faces; infants do so from the first hours of life. Perhaps, Poggio suggests, our genes encode a mechanism for learning this task quickly and early. Working out whether this idea is correct might let scientists devise ways to help machines learn. Other researchers are studying the neural foundations of morality. “People are afraid of 'evil' machines,” says Poggio. “We probably need to better understand how our own moral behavior arises if we want to build good, ethical machines.” [ Tomaso Poggio Pub. ]



Yamins says it is hard to see how neuroscience alone could work out how unsupervised learning functions. “If you don't have an AI solution, if you don't have anything artificial that works, you can't have a model of the brain,” he says. More likely, he believes, AI researchers will come up with one or more solutions that neuroscientists can then test.



Answering these puzzles will help build more intelligent machines that can learn from their environment, combining the speed and processing power of computers with abilities that are distinctly human. Data processing and computational modeling are already yielding results in the brain sciences, and that will only continue. “AI will have a huge impact on neuroscience,” says Sussillo, “and I want to be part of that.”



Translator's note. Given that the Habr audience needs no explanation of concepts such as neural networks or deep learning, the translation was made with some abridgments that are not essential for understanding the article.


