The main problem of artificial intelligence is that nobody fully understands how it works.





Last year, a strange self-driving car began making trips on the roads of Monmouth County, New Jersey. The experimental vehicle, developed by Nvidia's engineers, does not look very different from the autonomous cars of other manufacturers such as Google, Tesla, or General Motors. But it runs a new kind of control system: one that was not programmed with rules fixed at the factory, but instead taught itself to drive by watching a human driver and learning from his actions.



Teaching a self-driving car this way is somewhat unusual, and it also means that very few people have any real idea how the car makes its decisions. Data from cameras, radar, and lidar flow into a neural network, which turns them into commands for the steering wheel, brakes, and other systems. The result is that the car's behavior usually matches what a human driver would do in the same situation on the road. But the question arises: could it one day do something strange, such as drive into a tree at full speed, or sit at a traffic light after it has turned green?
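
To make the idea concrete, here is a minimal sketch of this kind of "learning by watching a human driver" (often called behavioral cloning): a small convolutional network is trained to map camera frames to the steering angle the human actually chose. This is an illustrative assumption about the general approach, written in PyTorch; it is not Nvidia's actual system, and the model layout, data shapes, and training details are invented for the example.

    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Convolutional feature extractor over the camera frame.
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Small regression head that outputs a single steering angle.
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1),
            )

        def forward(self, frames):                 # frames: (batch, 3, H, W)
            return self.head(self.features(frames))

    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def train_step(frames, human_angles):
        """One imitation step: nudge the predicted angle toward the
        angle the human driver actually used in the same moment."""
        optimizer.zero_grad()
        loss = loss_fn(model(frames), human_angles)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Fake data standing in for real driving logs.
    fake_frames = torch.rand(8, 3, 66, 200)        # 8 camera frames
    fake_angles = torch.rand(8, 1) * 2 - 1         # human steering, in [-1, 1]
    print(train_step(fake_frames, fake_angles))

The point of the example is that nothing in the trained network resembles an explicit driving rule: the "knowledge" lives in thousands of learned weights, which is exactly why its decisions are hard to inspect.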



A neural network processing information can be compared to a black box. Experts do, of course, understand in general terms how such a network handles data. The problem is that self-learning is not a fully predetermined process, so the output can sometimes be unexpected. At the heart of all of this is deep learning, which has already solved a number of important problems, including image processing, speech recognition, and translation. It may well be that neural networks will also come to diagnose diseases at an early stage, make the right calls when trading on the stock exchange, and take over hundreds of other tasks now done by people.



But first we need ways to better understand what happens inside the neural network itself as it processes data. Otherwise it is difficult, if not impossible, to predict the errors that systems with even a weak form of AI will make. And there will certainly be errors. This is one of the reasons the Nvidia car is still in the testing phase.



People already use mathematical models to make choices easier, for example, to identify a reliable borrower or to find an employee with the right experience for a job. On the whole, such models, and the processes built on them, are relatively simple and transparent. But the military, commercial companies, and scientists now rely on far more complex systems whose "decisions" are not based on the results of one or two models. Deep learning works differently from conventional computing, and according to Tommi Jaakkola, a professor at MIT, the problem is becoming more urgent. "Whether you are making an investment decision, a medical diagnosis, or choosing a target on the battlefield, none of it should depend on a black-box method," he says.



Scientists are not the only ones who understand this; so do officials. Starting next summer, the European Union will introduce new rules for developers and providers of automated decision-making systems. Companies will be required to explain to users how a system works and on what basis it makes decisions. The problem is that this may simply not be possible. The basic principles of neural networks are easy enough to explain, but few people can say exactly what happens inside one while it processes complex information. Even the creators of such systems cannot explain everything from start to finish, because the processes taking place inside a neural network are extraordinarily complex.



Never before have people built machines whose working principles are not fully understood even by their creators and are so different from the way humans themselves process information. So can we expect normal interaction with machines whose behavior is unpredictable?





Image created by artist Adam Ferriss using Google's Deep Dream



In 2015, a research team at Mount Sinai Hospital in New York applied deep learning to a database of patient records. The database held information on thousands of patients, with hundreds of variables per person: test results, dates of doctor visits, and so on. The result was a program called Deep Patient, trained on the records of 700,000 people, and its results turned out to be unusually good. For example, it was able to predict the early onset of certain diseases in a number of patients.
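
For a sense of what training on hundreds of thousands of records can look like in code, below is a heavily simplified sketch in PyTorch. Published descriptions of Deep Patient mention denoising autoencoders that learn a compact representation of each patient's record, which then feeds a disease-risk classifier; everything in this snippet (feature counts, layer sizes, noise level) is an illustrative assumption, not the hospital's code.

    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        """Learns a compact representation of a patient record by
        reconstructing the full record from a randomly corrupted copy."""
        def __init__(self, n_features, n_hidden=500):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
            self.decoder = nn.Linear(n_hidden, n_features)

        def forward(self, records):
            # Randomly zero out ~5% of the entries, then try to restore them.
            noisy = records * (torch.rand_like(records) > 0.05).float()
            return self.decoder(self.encoder(noisy))

    model = DenoisingAutoencoder(n_features=1000)
    records = torch.rand(32, 1000)              # a fake batch of patient records
    reconstruction_loss = nn.MSELoss()(model(records), records)
    reconstruction_loss.backward()              # gradients for one training step
    # The encoder's output (the "patient vector") can then feed an ordinary
    # classifier that estimates the risk of a given disease appearing later.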



Yet the results were also a little strange. For example, the system turned out to be good at anticipating schizophrenia, a diagnosis that is difficult even for experienced psychiatrists, and the computer handled it with ease. Why? No one can explain, not even the system's creators.



From the beginning, AI developers were split into two camps. The first held that a machine must be programmed so that every process inside the system can be seen and understood. The second held that a machine should learn on its own, taking in data from as many sources as possible and processing it independently; in effect, each neural network would be "its own master."



For a long time this remained pure theory, until computers became powerful enough for AI and neural-network researchers to put their ideas into practice. Over the past ten years a huge number of those ideas have been realized: there are now excellent services that translate text between languages, recognize speech, process video streams in real time, work with financial data, and optimize production processes.







But the problem is that almost any machine-learning technology is far less transparent to specialists than hand-written code. That is not to say future systems will be incomprehensible to everyone, but by its very nature deep learning is a kind of "black box."



You cannot simply look at how a neural network is built and predict what it will produce for a given batch of data. Inside the "black box" are dozens or hundreds of layers of "neurons," interconnected in an intricate order, and the final result depends not only on whole layers but on individual neurons. In most cases a person cannot predict what will come out of a neural network.
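
A toy example makes the point. Even the tiny fully connected network sketched below in plain NumPy mixes every weight of every layer into its answer; scale the same structure up to hundreds of layers, as in the quote further on, and the contribution of any single neuron becomes practically impossible to trace. The sizes and data here are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [10, 64, 64, 2]              # input, two hidden layers, output
    weights = [rng.standard_normal((a, b)) * 0.1
               for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        for w in weights[:-1]:
            x = np.maximum(x @ w, 0.0)         # each hidden "neuron" fires or stays silent
        return x @ weights[-1]                 # the last layer produces the answer

    x = rng.standard_normal(10)
    print(forward(x))                          # the output depends on every weight above
    print("parameters:", sum(w.size for w in weights))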



One example of how a neural network's "reasoning" differs from that of a human brain is Deep Dream, a Google project in which an ordinary photo is fed into a network with the goal of transforming it around a given theme, for example, making every object in the picture look like a dog. The results impressed everyone. At one point the system was asked to generate an image of a dumbbell. It managed, but in every case a human arm appeared attached to the dumbbell: the network had concluded that a dumbbell and an arm form a single whole and cannot be treated as two separate things.
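
The core trick behind Deep Dream is easy to state: instead of adjusting the network's weights, you adjust the input image itself, by gradient ascent, so that the activations of some chosen layer grow, which amplifies whatever patterns that layer has learned to detect. The sketch below, written against a generic PyTorch model, illustrates that idea rather than Google's actual implementation; the function and parameter names are invented for the example.

    import torch

    def dream(model, layer, image, steps=20, lr=0.05):
        """model: any trained torch CNN; layer: one module inside it;
        image: a tensor of shape (1, 3, H, W). All names are illustrative."""
        captured = {}

        def hook(_module, _inputs, output):
            captured["activation"] = output
        handle = layer.register_forward_hook(hook)

        image = image.clone().requires_grad_(True)
        for _ in range(steps):
            model(image)
            # Objective: make the chosen layer respond as strongly as possible.
            loss = captured["activation"].pow(2).mean()
            loss.backward()
            with torch.no_grad():
                # Gradient *ascent* on the image itself, normalized for stability.
                image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
                image.grad.zero_()
        handle.remove()
        return image.detach()

The dumbbell-with-arm failure follows directly from this setup: the network amplifies whatever co-occurred with dumbbells in its training images, and arms almost always did.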







Experts believe we need a better understanding of how such systems work, for the simple reason that neural networks are being deployed in ever more important areas, where a mistake can end badly (securities trading is one example). "If you have a small neural network, you can understand it perfectly well. But once it grows to hundreds of thousands of neurons per layer and hundreds of layers, it becomes unpredictable," Jaakkola says.



Even so, neural networks should be put to work, in medicine among other fields. Doctors underestimate the importance of much of the available data; a person simply cannot take in hundreds of entries in a patient's medical history at a glance and find the connections between them. A machine can, and that may be the great value of neural networks and deep learning in general.



The US military has invested billions of dollars in systems that pilot drones automatically, detect and identify objects, and analyze data. But the military also insists that the work of such systems be understandable and explainable. Soldiers sitting inside an automated tank will feel very uncomfortable if they do not understand how it works and why the system made one decision rather than another.



Perhaps in the future such systems will explain their own actions. Carlos Guestrin, a professor at the University of Washington, has developed a computer system that explains the intermediate results of its calculations. It can be used, for example, to analyze email: if the computer flags a message as likely sent by a terrorist, it also produces an explanation of why it reached that conclusion.
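
One common way to produce such explanations is perturbation: hide parts of the input, watch how the model's score changes, and report the parts that mattered most. The sketch below applies that idea to a text classifier; it is a generic illustration of the approach, not Guestrin's actual system, and the classifier interface it assumes is invented for the example.

    import random

    def explain(classifier, words, n_samples=500, seed=0):
        """classifier: any function mapping a list of words to a score
        (say, "probability this message is suspicious"); words: the message
        split into tokens. Returns words ranked by how much hiding them
        lowers the score."""
        rng = random.Random(seed)
        base = classifier(words)
        influence = {w: 0.0 for w in set(words)}
        counts = {w: 0 for w in set(words)}
        for _ in range(n_samples):
            mask = [rng.random() < 0.5 for _ in words]   # keep roughly half the words
            score = classifier([w for w, keep in zip(words, mask) if keep])
            for w, keep in zip(words, mask):
                if not keep:
                    influence[w] += base - score         # score drop caused by hiding w
                    counts[w] += 1
        return sorted(((w, influence[w] / max(counts[w], 1)) for w in influence),
                      key=lambda item: item[1], reverse=True)

    # Usage with a toy "classifier" that simply counts suspicious keywords.
    suspicious = {"transfer", "detonator", "midnight"}
    toy_classifier = lambda ws: sum(w in suspicious for w in ws) / 10.0
    message = "meet at midnight and bring the transfer papers".split()
    print(explain(toy_classifier, message)[:3])

The output is a ranked list of words, which is the kind of simple, human-readable justification the next paragraph describes.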



According to Guestrin, such explanations should be fairly simple and understandable. "We have not yet reached the dream where the AI just talks to you and explains itself," he says. "We are only at the beginning of a long journey toward transparent AI."



Understanding the reasons that led an AI to a particular result matters if we want to make AI a useful part of our lives. This applies not only to the military but also to Siri and other digital assistants: if you ask Siri for nearby restaurants, many people would like to know why these restaurants were suggested and not others. A key condition for adopting AI is people's trust in computer systems.



It is probably still impossible to make a computer explain every one of its actions and decisions; after all, it is often impossible to understand the actions and decisions of a human being, which depend on a huge number of factors. And the more complex a neural network is, the more such factors it takes into account and analyzes.



This raises a number of difficult ethical and social questions. Take the military again: when building automated tanks and other killing machines, it matters that their decisions conform to human ethical standards. Killing itself may not meet the ethical standards of many of us, but those decisions should at least be acceptable to the majority. Although who counts as the majority? That is a question in itself.



Daniel Dennett, a philosopher at Tufts University, says: "I believe that if we are going to use these things and depend on them, then we need to understand how and why they act the way they do. If they can't do a better job than we can of explaining what they are doing, then don't trust them."


