The brain as a computer: bad at math, good at everything else

We all remember painful arithmetic drills from school. Multiplying numbers like 3,752 and 6,901 with pencil and paper takes at least a minute. Of course, today, with phones at hand, we can quickly verify that the result should be 25,892,552. The processors in modern phones can perform more than 100 billion such operations per second. Moreover, these chips consume only a few watts, which makes them far more efficient than our slow brains, which consume 20 watts and need much more time to achieve the same result.



Of course, the brain did not evolve to do arithmetic, so it does it badly. But it copes superbly with the constant stream of information arriving from our environment, and it reacts to that stream, sometimes faster than we can consciously register it. And no matter how much energy a conventional computer consumes, it will struggle with things that come easily to the brain, such as understanding language or running up a flight of stairs.



If we could create machines with computational abilities and energy efficiency comparable to the brain, everything would change dramatically. Robots would move deftly through the physical world and communicate with us in natural language. Large-scale systems would gather huge amounts of information from business, science, medicine, or government, discovering new patterns, finding causal relationships, and making predictions. Smart mobile assistants like Siri and Cortana could rely less on the cloud. The same technology could let us build low-power devices that augment our senses, deliver medications, and emulate nerve signals to compensate for organ damage or paralysis.



But is it too early to set such bold goals? Is our understanding of the brain too limited to create technologies based on its principles? I believe that emulating even the simplest features of neural circuits could dramatically improve the performance of many commercial applications. How accurately computers must replicate the biological details of the brain's structure to approach its level of performance is still an open question. But today's brain-inspired, or neuromorphic, systems will be important tools in finding the answer.



A key feature of conventional computers is the physical separation of memory, which stores data and instructions, from the logic that processes that information. There is no such separation in the brain. Computation and data storage happen together and locally, in a vast network of roughly 100 billion nerve cells (neurons) and more than 100 trillion connections (synapses). For the most part, the brain is defined by those connections and by the way each neuron responds to the input of other neurons.



When we speak of the exceptional capabilities of the human brain, we usually mean a recent acquisition of its long evolution: the neocortex (the "new cortex"). This thin and heavily folded layer forms the outer shell of the brain and performs very diverse tasks, including processing sensory information, controlling movement, and handling memory and learning. Such a wide range of abilities comes from a fairly homogeneous structure: six horizontal layers and about a million vertical columns, each roughly 500 μm wide, made up of neurons that integrate and distribute information encoded in electrical pulses along the thin extensions growing out of them, the dendrites and axons.



Like all cells in the human body, a neuron maintains an electrical potential of about 70 mV between its interior and the outside. This membrane voltage changes when the neuron receives signals from the neurons connected to it. If the membrane voltage rises to a critical value, the neuron generates a pulse, or spike, lasting a few milliseconds and about 40 mV in amplitude. This spike propagates along the neuron's axon until it reaches a synapse, the complex biochemical structure that connects the axon of one neuron to a dendrite of another. If the spike satisfies certain constraints, the synapse converts it into another pulse that travels down the branching dendrites of the receiving neuron and shifts its membrane voltage in a positive or negative direction.



Connectivity is a defining feature of the brain. The pyramidal neuron, a particularly important cell type in the human neocortex, has about 30,000 synapses, that is, 30,000 input channels from other neurons. And the brain is constantly adapting: the properties of neurons and synapses, and even the structure of the network itself, change continuously, driven mainly by sensory input and feedback from the environment.



Modern general-purpose computers are digital, not analog; the brain is not so easy to classify. Neurons accumulate electric charge, like the capacitors in electronic circuits, and that is clearly an analog process. But the brain uses spikes as its unit of information, and that is essentially a binary scheme: at any given place and time, there either is a spike or there is none. In electronics terms, the brain is a mixed-signal system, with local analog computation and information transmitted by binary spikes. Because a spike carries only the values 0 or 1, it can travel a long distance without losing that basic information; it is also regenerated when it reaches the next neuron in the network.



Another key difference between the brain and a computer is that the brain processes information without a central clock to synchronize its operation. Although we do observe synchronizing events, the brain waves, they organize themselves, emerging from the activity of the neural networks. Interestingly, modern computer systems are beginning to adopt the brain's asynchrony, speeding up computation by performing it in parallel. But the degree and purpose of parallelism in the two systems are radically different.



The idea of using the brain as a model for computing has deep roots. The first attempts were based on a simple threshold neuron, which outputs one value if the weighted sum of its inputs exceeds a threshold and another value if it does not. The biological realism of this approach, conceived by Warren McCulloch and Walter Pitts in the 1940s, is very limited. Nevertheless, it was the first step toward treating the firing neuron as an element of computation.
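To make this concrete, here is a minimal sketch of such a threshold unit in Python (the weights and threshold are chosen by hand purely for illustration):

```python
def threshold_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted
    sum of the inputs reaches the threshold, stay silent (0) otherwise."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Example: a 2-input unit acting as a logical AND gate.
print(threshold_neuron([1, 1], [1.0, 1.0], threshold=2.0))  # -> 1
print(threshold_neuron([1, 0], [1.0, 1.0], threshold=2.0))  # -> 0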



In 1957, Frank Rosenblatt proposed another variant of the threshold neuron, the perceptron. A network of interconnected nodes (artificial neurons) is arranged in layers. The visible layers at the surface of the network interact with the outside world as inputs and outputs, while the hidden layers in between perform all the computation.



Rosenblatt also proposed making use of an essential feature of the brain: inhibition. Instead of simply summing all of their inputs, neurons in a perceptron can also make negative contributions. This allows a neural network with a single hidden layer to solve the XOR problem in logic, in which the output is true only if exactly one of the two binary inputs is true. This simple example shows that adding biological realism can add new computational power. But which brain functions are necessary for it to work, and which are useless traces of evolution? Nobody knows.
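For illustration, here is a hand-wired sketch of such a network (not Rosenblatt's training procedure; the weights are set by hand): two hidden threshold units and one inhibitory connection suffice for XOR.

```python
def unit(inputs, weights, threshold):
    """Threshold unit: fire if the weighted input sum reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor(x1, x2):
    h_or  = unit([x1, x2], [1, 1], threshold=1)   # fires if either input fires
    h_and = unit([x1, x2], [1, 1], threshold=2)   # fires only if both fire
    # The AND unit inhibits the output (weight -1): "either, but not both".
    return unit([h_or, h_and], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 00->0, 01->1, 10->1, 11->0
```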



We know that impressive computational results can be achieved without striving for biological realism. Deep learning researchers have come very far in using computers to analyze large amounts of data and extract specific features from complex images. Although the neural networks they build have more inputs and hidden layers than ever before, they are still based on extremely simple neuron models. Their broad capabilities reflect not biological realism but the sheer scale of the networks and the power of the computers used to train them. Yet deep-learning networks are still very far from the computational speed, energy efficiency, and learning abilities of the biological brain.



The huge gap between the brain and modern computers is perhaps best illustrated by large-scale brain simulations. Several such attempts have been made in recent years, but all of them have been severely limited by two factors: energy and simulation time. Consider, for example, the simulation run a few years ago by Markus Diesmann and his colleagues using 83,000 processors of the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times more energy than the equivalent region of the brain, even though the models were extremely simplified and no learning took place. And such simulations typically run more than 1,000 times slower than the real time of a biological brain.



Why are they so slow? Simulating the brain on a conventional computer requires computing billions of coupled differential equations that describe the dynamics of cells and networks: analog processes such as the movement of charge across a cell membrane. Computers that use Boolean logic, trading energy for precision, and that separate memory from computation turn out to be extremely inefficient at this kind of modeling.
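To give a feel for the kind of equation involved, here is a sketch of the leaky integrate-and-fire model, one of the simplest descriptions of membrane dynamics, stepped forward in time the way a simulator would (all parameter values are illustrative):

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I(t).
# Illustrative parameters, integrated with the explicit Euler method.
tau, v_rest, v_thresh, v_reset = 20e-3, -70e-3, -55e-3, -70e-3  # s, V
dt, drive = 1e-4, 20e-3  # time step (s); constant input drive R*I (V)

v, spikes = v_rest, []
for step in range(int(0.1 / dt)):            # simulate 100 ms
    v += dt * (-(v - v_rest) + drive) / tau  # analog membrane dynamics
    if v >= v_thresh:                        # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset                          # reset after the spike
print(f"{len(spikes)} spikes in 100 ms")
```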



Such simulations can become a tool for understanding the brain: data obtained in the laboratory are fed into a simulation that we can experiment with, and the results are then compared with observations. But if we hope to go in the other direction and use the lessons of neuroscience to build new computing systems, we must rethink how we design and build computers.




Neurons in silicon.



Copying the work of the brain in electronics may be more feasible than it first appears. It turns out that creating the electric potential at a synapse costs about 10 fJ (10⁻¹⁵ J). Charging the gate of a metal-oxide-semiconductor (MOS) transistor requires only 0.5 fJ, and that is for a gate much larger and more power-hungry than those used in CPUs. A synaptic transmission is therefore equivalent to charging about 20 transistor gates. Moreover, at the device level, biological and electronic circuits are not so different. In principle, it should be possible to build synapse- and neuron-like structures out of transistors and connect them to obtain an artificial brain that does not consume such a glaring amount of energy.
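The arithmetic behind that comparison is simple enough to check directly (using only the figures quoted above):

```python
# Energy comparison from the figures quoted above.
e_synapse = 10e-15   # J per synaptic transmission (~10 fJ)
e_gate    = 0.5e-15  # J to charge one large MOS transistor gate (~0.5 fJ)
print(e_synapse / e_gate, "gate charges per synaptic event")  # -> 20.0
```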



The idea of building computers out of transistors that behave like neurons goes back to Professor Carver Mead of Caltech in the 1980s. One of Mead's key arguments for "neuromorphic" computers was that semiconductor devices, operated in a particular regime, follow the same physical laws as neurons, and that this analog behavior can be exploited for computation with high energy efficiency.



Mead's group also invented a neural communication scheme in which spikes are encoded only by their network address and their time of occurrence. This work was groundbreaking because it was the first to make time a necessary feature of artificial neural networks. Time is a key factor for the brain: signals need time to propagate, membranes need time to respond, and it is timing that determines the shape of postsynaptic potentials.
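The idea behind that encoding, now known as address-event representation, can be sketched in a few lines (the event format here is simplified for illustration): each spike on a shared digital bus is just an address plus a timestamp, while the analog details stay local to the sender and receiver.

```python
from typing import NamedTuple

class SpikeEvent(NamedTuple):
    """Address-event representation: a spike carries no payload,
    only which neuron fired (address) and when (timestamp)."""
    address: int  # network address of the firing neuron
    t_us: int     # time of occurrence, in microseconds

# A stream of such events is all that crosses the shared digital bus.
bus = [SpikeEvent(42, 1000), SpikeEvent(7, 1012), SpikeEvent(42, 1090)]
for ev in bus:
    print(f"neuron {ev.address} fired at t = {ev.t_us} us")
```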



Several active research groups, such as those of Giacomo Indiveri at ETH Zurich and Kwabena Boahen at Stanford, have followed in Mead's footsteps and successfully implemented elements of biological cortical networks. The trick is to operate the transistors at low voltages, below their threshold, creating analog circuits that reproduce the behavior of the nervous system while consuming very little energy.



Further research in this direction may find application in systems such as brain-computer interfaces. But there is still a huge gap between these systems and the real scale, connectivity, and learning ability of an animal brain.



So, around 2005, three groups of researchers independently began developing neuromorphic systems that departed significantly from Mead's original approach. They wanted to create large-scale systems with millions of neurons.



The closest to conventional computers is the SpiNNaker project, led by Steve Furber at the University of Manchester. His group developed its own digital chip containing 18 ARM processors running at 200 MHz, about one-tenth the speed of modern CPUs. Although the ARM cores come from the world of classical computing, they simulate spikes, which are sent through special routers designed to transmit information asynchronously, just as in the brain. The current implementation, part of the European Human Brain Project and completed in 2016, contains 500,000 ARM cores. Depending on the complexity of the neuron model, each core can simulate up to 1,000 neurons.



The TrueNorth chip, developed by Dharmendra Modha and his colleagues at IBM's Almaden Research Lab, dispenses with microprocessors as computational units and is truly a neuromorphic system in which computation and memory are intertwined. TrueNorth is still a digital system, but it is built from custom-designed neural circuits that implement a particular neuron model. The chip contains 5.4 billion transistors and is fabricated with Samsung's 28-nm CMOS (complementary metal-oxide-semiconductor) technology. The transistors emulate 1 million neuron circuits and 256 million simple (single-bit) synapses on a single chip.



I would argue that the next project, BrainScaleS, departs the furthest from conventional computers and comes closest to the biological brain. My colleagues at the University of Heidelberg and I developed it as part of the European Human Brain Project. BrainScaleS implements mixed-signal processing: it combines neurons and synapses built from silicon transistors operating as analog devices with digital communication. The full-size system consists of 8-inch silicon wafers and can emulate 4 million neurons and 1 billion synapses.



The system can reproduce nine different firing modes of biological neurons and was designed in close collaboration with neuroscientists. Unlike Mead's analog approach, BrainScaleS runs in accelerated mode: its emulation is 10,000 times faster than real time. This makes it especially useful for studying learning and development.



Learning is likely to be a critical component of neuromorphic systems. Today, chips modeled on the brain, like the neural networks running on conventional computers, are trained offline with the help of more powerful machines. But if we want to use neuromorphic systems in real-world applications, for example in robots that will work side by side with us, they will have to be able to learn and adapt on the fly.



In the second generation of our BrainScaleS system, we added the ability to learn by building on-chip plasticity processors. They are used to modify a wide range of neuron and synapse parameters. This capability also lets us fine-tune parameters to compensate for differences in size and electrical properties from one device to the next, much as the brain itself adjusts to change.



The three large-scale systems I have described complement one another: SpiNNaker can be flexibly configured and used to test different neuron models, TrueNorth has high integration density, and BrainScaleS is designed for continuous learning and development. The search for the right way to measure the effectiveness of such systems is still ongoing, but the early results are promising. IBM's TrueNorth group recently estimated that a synaptic transmission in their system costs 26 pJ. Although this is about 1,000 times the energy required in a biological system, it is nearly 100,000 times less than the energy spent on the same transmission in a simulation on a general-purpose computer.
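For a sense of scale, those ratios can be laid out side by side (the biological and simulation figures below are back-calculated from the stated ratios, not independent measurements):

```python
e_truenorth = 26e-12                 # J per synaptic event (TrueNorth estimate)
e_biology   = e_truenorth / 1000     # ~26 fJ: roughly 1000x less than TrueNorth
e_simulated = e_truenorth * 100_000  # ~2.6 uJ: general-purpose simulation
for name, e in [("biology", e_biology), ("TrueNorth", e_truenorth),
                ("simulation", e_simulated)]:
    print(f"{name:>10}: {e:.2e} J per synaptic event")
```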



We are still at an early stage of understanding what such systems can do and how to apply them to real problems. At the same time, we must find ways to combine many neuromorphic chips into large networks with improved learning abilities while further reducing energy consumption. One problem is connectivity: the brain is three-dimensional, while our circuits are two-dimensional. Three-dimensional circuit integration is now being actively studied, and such technologies could help us.



Devices beyond CMOS, such as memristors or PCRAM (phase-change memory), could be another help. Today, the weights that determine how artificial synapses respond to incoming signals are stored in conventional digital memory, which consumes most of the silicon resources needed to build a network. Other types of memory could help us shrink these cells from micron to nanometer scale. The main difficulty for future systems will be tolerating the variation from one device to another; the calibration principles developed in BrainScaleS could help here.



We have only just set out on the road to practical and useful neuromorphic systems. But the effort is worth it. If we succeed, we will not only build powerful computing systems; we may even gain new insight into the workings of our own brains.


