The dangerous ease with which military artificial intelligence can be fooled

AI warfare is beginning to dominate the military strategies of the US and China, but is the technology ready for it?







Last March, Chinese researchers revealed an ingenious and potentially devastating attack on one of America's most prized technological assets: the Tesla electric car.



A team from the security lab of Chinese tech giant Tencent demonstrated several ways to trick the AI algorithms in a Tesla. By subtly altering the data fed to the car's sensors, the researchers were able to confuse the AI that drives the vehicle.



In one case, a hidden pattern on a TV screen tricked the windshield wipers into switching on.

In another, lane markings on the road were slightly modified to confuse the autonomous driving system, which then steered the car into the adjacent lane.



Tesla's algorithms are normally excellent at spotting raindrops on the windshield or following lane markings, but they work in a way fundamentally unlike human perception. That makes deep-learning algorithms (which are rapidly gaining ground in areas such as face recognition and cancer diagnosis) surprisingly easy to fool if you know their weaknesses.



Confusing a Tesla may not seem like a strategic threat to the United States. But what if similar techniques were used to deceive combat drones, or software that analyzes satellite imagery, so that it sees objects that are not there, or fails to see objects that are?



The AI buildup



Around the world, AI is already seen as the next step toward a decisive military advantage.



This year the United States announced a strategy for using AI across many areas of the military, including intelligence analysis, decision making, autonomous vehicles, logistics, and weapons. Of the $718 billion the Department of Defense requested for 2020, $927 million is earmarked for AI and machine learning. Existing projects range from the mundane (testing whether AI can predict when tanks and trucks need maintenance) to the cutting edge of weaponry (swarms of drones).



The Pentagon is pushing for AI partly out of fear of how adversaries might use the technology. Last year Jim Mattis, then Secretary of Defense, sent President Donald Trump a memo warning that the US is already falling behind on AI. His concern is understandable.



In July 2017, China unveiled its AI strategy, declaring that "the world's largest developed countries treat AI development as a major strategy for enhancing national competitiveness and protecting national security." A few months later Vladimir Putin, the president of Russia, made the following prediction: "Whoever becomes the leader in AI will rule the world" [translator's note: this is a literal rendering of the quote as given in the original article. As often happens, the quote is distorted; what Putin actually said was: "If someone can secure a monopoly in the field of artificial intelligence, then we all understand the consequences: they will become the ruler of the world."].



The desire to build the smartest and deadliest weapons is understandable, but as the Tesla hack shows, an enemy who understands how an AI algorithm works can neutralize it or even turn it against its owners. The secret to winning the AI wars may lie not in building the most impressive weapons, but in mastering the treacherous art of software deception.



Battle bots



On a bright, sunny day last summer in Washington, DC, Michael Kanaan sat in a Pentagon cafeteria, eating a sandwich and marveling at a powerful new set of machine-learning algorithms.



A few weeks earlier, Kanaan had watched a video game in which five AI algorithms worked together to very nearly outfight and outwit five humans in a contest that involved controlling forces, encampments, and resources spread across a large, complex battlefield. Kanaan's brow furrowed beneath his close-cropped blond hair as he described the action. It was the most impressive demonstration of AI strategy he had ever seen: an unexpected leap for AI, like the ones that had already happened in chess, Atari games, and other contests.



The battles played out in Dota 2, a popular sci-fi video game that remains extraordinarily challenging for computers. Teams must defend their own territory while attacking the enemy camp, in an environment far more complex and deceptive than any board game. Players can see only a small part of the whole map, and it can take about half an hour to tell whether a strategy is winning.



The AI-controlled opponents were developed not by the military but by OpenAI, a company founded by Silicon Valley luminaries including Elon Musk and Sam Altman to conduct fundamental AI research. These algorithmic warriors, known as OpenAI Five, worked out their own winning strategies by playing the game relentlessly and keeping the moves that proved most effective.



This is exactly the kind of software that interests Kanaan, one of the people tasked with using AI to modernize the US military. To him, it shows what the armed forces stand to gain by enlisting the world's leading AI researchers. But whether those researchers are willing to help is an increasingly pointed question.



Kanaan was the Air Force lead on Project Maven, a military initiative aimed at using AI to automate the identification of objects in aerial imagery. Google was a contractor on the project, and when other Google employees found out about it in 2018 and protested, the company decided to walk away. It then issued a set of AI principles stating that Google would not use its AI to develop "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."



Workers at other large technology companies followed suit, demanding that their employers stay away from military contracts. Many prominent AI researchers have backed efforts to establish a global ban on fully autonomous weapons.



For Kanaan, though, it would be a big problem if the military could not work with researchers like those who built OpenAI Five. Even more troubling is the prospect of an adversary gaining access to such cutting-edge technology. "The code is out there for anyone to use," he said. And he added: "War is far more complicated than a video game."







AI surge



Kanaan feels the urgency of AI in part because he knows firsthand how useful it can be to the military. Six years ago, he was a US Air Force intelligence officer in Afghanistan, responsible for deploying a new kind of intelligence-gathering tool: a hyperspectral imager. The instrument can spot objects hidden from ordinary view, such as tanks draped in camouflage netting or emissions from an improvised explosives factory. Kanaan says the system helped the military remove thousands of kilograms of explosives from the battlefield. Even so, analysts often could not keep up with the huge volumes of data the imager collected. "We spent too much time looking at the data and too little time making decisions," he says. "Sometimes it dragged on so long that we started wondering whether we could have saved more lives."



A solution came from a breakthrough in computer vision by a University of Toronto team led by Geoffrey Hinton. The researchers showed that an algorithm built on a many-layered neural network could recognize objects in images with unprecedented accuracy, given enough data and computing power.



Training a neural network means feeding it data, such as the pixels of an image, and continually adjusting the network's internal connections, using mathematical techniques, so that the output gets ever closer to a target result, such as correctly identifying the object in the image. Over time, these deep-learning networks learn to recognize the patterns of pixels that signify, say, people or houses. Breakthroughs in deep learning sparked the current AI boom; the technology underpins Tesla's autonomous driving systems and OpenAI's algorithms.
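To make that description concrete, here is a minimal sketch of such a training loop, written in PyTorch with toy stand-in data; it is purely illustrative and not code from Tesla, OpenAI, or any military system:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for real training data: 64 random "images" and their class labels.
images = torch.rand(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

# A tiny classifier: flatten the pixels, pass them through two layers, output 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    logits = model(images)                  # forward pass: pixels -> class scores
    loss = F.cross_entropy(logits, labels)  # how far the output is from the desired labels
    optimizer.zero_grad()
    loss.backward()                         # how each internal connection contributes to the error
    optimizer.step()                        # adjust the connections to reduce the error
```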



Kanaan immediately saw deep learning's potential for processing the many kinds of imagery and sensor data that military operations depend on. He and his Air Force colleagues soon began lobbying their superiors to invest in the technology. Their efforts helped drive the Pentagon's push to develop AI.



But soon after deep learning burst onto the AI scene, researchers discovered that the very properties that make it such a powerful tool are also an Achilles' heel.



If you can calculate how to adjust a network's parameters so that it classifies an object correctly, you can also calculate the minimal changes to an input image that will make the network get it wrong. In such "adversarial examples," changing just a few pixels leaves the image looking the same to a person but completely different to an AI algorithm. The problem can arise anywhere deep learning is used, for example in controlling robotic vehicles, planning missions, or detecting network intrusions.
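The attack reuses the same gradient machinery that training depends on, only aimed at the input pixels instead of the network's weights. Below is a minimal sketch of one classic method, the fast gradient sign method (FGSM), assuming a generic PyTorch image classifier; the function name and parameters are illustrative, not taken from any real attack code:

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` (a batched pixel tensor), nudged imperceptibly
    in the direction that most increases the model's error on `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                                  # gradient with respect to the pixels, not the weights
    perturbed = image + epsilon * image.grad.sign()  # tiny step that maximally hurts the classifier
    return perturbed.clamp(0, 1).detach()            # keep pixel values in the valid range
```

With epsilon around one percent of the pixel range, the change is invisible to a person, yet it is often enough to flip the prediction of an undefended network.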



Yet amid all this rapid growth in military uses of AI, these mysterious vulnerabilities in the software have attracted little attention.



Moving targets



One remarkable object illustrates just how powerful adversarial machine learning can be: a model turtle.



To you or me it looks perfectly normal, but to a drone or robot running a particular deep-learning vision algorithm, it looks like... a rifle. In fact, the distinctive pattern on the turtle's shell can be reworked so that an AI vision system, of the kind available through Google's cloud service, mistakes it for almost anything (Google has since updated the algorithm, and it is no longer so easy to fool).



The turtle, moreover, was made not by some enemy state but by four guys at MIT. One of them is Anish Athalye, a lanky and exceedingly polite young man who works on computer security at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). In a video playing on his laptop, the turtles are being tested (some of the models, he says, were stolen at a conference), rotated through 360 degrees and flipped upside down. The algorithm sees the same thing every time: "rifle," "rifle," "rifle."



The earliest adversarial examples were fragile and often failed to work, but Athalye and his collaborators believed they could design a version robust enough to work even on a 3D-printed object. That meant modeling the objects in three dimensions and developing an algorithm to create the turtle: an adversarial example that keeps working from different viewing angles and distances. Put simply, they devised an algorithm for creating things that reliably deceive machine learning.
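The technique Athalye and his colleagues published is known as Expectation Over Transformation (EOT): rather than optimizing the perturbation against a single image, it is optimized against many randomly transformed views at once, so the adversarial texture keeps working across rotations, distances, and lighting. The sketch below conveys the idea only; `render_random_view` stands in for a hypothetical differentiable renderer, and the rest is a generic PyTorch optimization loop, not the authors' actual code:

```python
import torch
import torch.nn.functional as F

def eot_adversarial_texture(model, texture, target_class, render_random_view,
                            steps=500, lr=0.01, views_per_step=8):
    """Optimize `texture` so that, averaged over many random viewpoints,
    the classifier labels the rendered object as `target_class`.
    `render_random_view(texture)` is a hypothetical differentiable renderer
    applying a random pose, distance, and lighting."""
    texture = texture.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([texture], lr=lr)
    targets = torch.full((views_per_step,), target_class, dtype=torch.long)

    for _ in range(steps):
        views = torch.stack([render_random_view(texture) for _ in range(views_per_step)])
        loss = F.cross_entropy(model(views), targets)  # expected loss over random transformations
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        texture.data.clamp_(0, 1)                      # keep the texture physically printable
    return texture.detach()
```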



The military applications of this technology are obvious. With adversarial camouflage, tanks and planes could hide from AI-equipped satellites and drones. AI-guided missiles could be blinded with adversarial data, or even steered back toward friendly targets. Information fed to intelligence algorithms could be poisoned to disguise a terrorist threat or to set a trap for troops in the real world.



Athalye is surprised at how little concern there is about adversarial machine learning. "I've spoken with a bunch of people in industry and asked whether they worry about adversarial examples," he says. "Almost all of them said no."



Fortunately, the Pentagon is starting to pay attention. In August, DARPA announced several major AI research projects. Among them is GARD (Guaranteeing AI Robustness against Deception), a program focused on adversarial machine learning. Hava Siegelmann, a professor at the University of Massachusetts and the GARD program manager, says these attacks could be devastating in wartime because people would not be able to detect them. "It's as if we were blind," she says. "And that makes the situation very dangerous."



The challenges of adversarial machine learning also help explain why the Pentagon is so eager to work with companies like Google and Amazon, and with research institutions like MIT. The technology is evolving fast, and the latest breakthroughs are happening in labs run by Silicon Valley companies and top universities, not conventional defense contractors.



Crucially, those breakthroughs are also happening outside the United States, particularly in China. "I think a different world is coming," Kanaan says. "And we have to fight it with AI."



The angry backlash against military uses of AI is understandable, but it may miss the bigger picture. People worry about intelligent killer robots, yet the nearer-term risk may be an algorithmic fog of war, one that even the smartest machines cannot see through.


