Elon Musk opposes autonomous lethal weapons




Businessman Elon Musk, who heads several well-known technology companies, is known for his views on the dangers of artificial intelligence. Of course, that opinion is not the only thing he is famous for. Nevertheless, he has spoken out against the creation of strong AI since 2014, when he first publicly stated that artificial intelligence could be dangerous. Stephen Hawking and many other scientists, futurologists, and IT specialists agree with him: there is a view that all of humanity could fall victim to a technological singularity.



The most dangerous trend, according to Musk, is the push to create autonomous "smart" weapons. In the entrepreneur's view, the risk of "killer robots" appearing is very high, so the issue must be treated with all possible care. Experts are convinced that the "third revolution in warfare" is already close, and autonomous "killer robots" are a kind of Pandora's box. There is very little time left to solve the problem.



About a week ago, Musk, together with 115 prominent specialists from various fields of science and technology, signed an open letter asking the UN to address this problem as soon as possible.



The letter was signed at the International Joint Conference on Artificial Intelligence (IJCAI 2017), held in Melbourne. Among the signatories are DeepMind co-founder Mustafa Suleyman and Jérôme Monceaux, founder of Aldebaran Robotics, the company that developed the Pepper robot. If the leaders of the companies driving technological progress are signing such a letter, the problem is probably worth thinking about.



The letter states, in particular, that AI and robotics are developing at such a rapid pace that waging war with autonomous weapons, including robots, is becoming increasingly likely. And this is a matter of the next few years, not decades, as was previously thought. So the time to think about how the technology could affect the fate of humanity is now, and those who should think about it first are the leaders of the states where these technologies are being developed most actively.



To this one can add the threat of such technology falling into the hands of terrorists and autocrats, who would turn deadly machines against ordinary people without any remorse. Even if that can be avoided, there remains the non-zero threat of these systems being hacked: hackers have repeatedly proved that almost anything can be hacked, no matter how well that "anything" is protected.



Lethal weapons operating in autonomous or semi-autonomous mode are already in development. One such device is pictured below.







The photo is merely an illustration of how robots might wield weapons in the wars of the future. But even now, drones in the sky are capable of operating in automatic mode, and warships are equipped with automated guns that independently track potential threats.



Governments of various countries disagree with the scientists and technologists, and for different reasons. The main one: autonomous weapons are advantageous for states. They can increase the effectiveness of border defense or reduce soldier casualties in local or regional conflicts. Thus, in 2015 the British government opposed a ban on autonomous lethal weapons; moreover, weapons of this type are being developed there very actively.



The point of view of scientists opposing autonomous "killer robots" is well illustrated by a statement from Element AI founder Yoshua Bengio: the creation of such weapons would harm the development of AI as a whole, and the situation should be brought under the control of the international community, just as has been done with other types of weapons (biological, chemical, nuclear).







"The creation of deadly autonomous weapons makes it possible to increase the scale of wars to unprecedented scales, which are even difficult to imagine," the letter says. And, probably, the authors of the letter are really right.










