These days, it seems everyone is talking and writing about artificial intelligence. But today's AI is nothing like Skynet from the "Terminator" films. So far, researchers are working with weak AI: in most cases, relatively complex neural networks designed to solve specific, narrow tasks.
Many prominent figures of our time are concerned about the possible emergence of strong AI. According to some experts, a "thinking computer" could become humanity's implacable opponent, its enemy. Alternatively, the AI of the future might pursue its own goals while remaining indifferent to people, destroying entire cities or countries whenever doing so serves those goals. To illustrate such a scenario of technological development, researchers from Cambridge have developed a mod for the fifth installment of the game Civilization.
In this add-on, you can see what might happen if strong AI emerges without safety measures put in place beforehand. The researchers did not take up this project by chance: experts at the Cambridge University Centre for the Study of Existential Risk (CSER) are trying to understand what in the modern world could become an insurmountable danger to humanity in the near or distant future.
The group was founded in 2012, and its mission has remained the same ever since: studying how global catastrophes of various kinds could affect humanity. It considers only those threats that could potentially destroy civilization in its current form, or wipe out humanity altogether. Of course, artificial intelligence is far from the only possible cause of such catastrophes; a genetically engineered pathogen, climate change, and much more could also be to blame.
As for the game itself, according to the developers, the Civilization add-on is partly an educational and partly a research project. "The main thing we set out to do is show a possible scenario for how real machine intelligence could develop, and demonstrate the dangers such intelligence could pose to us humans," said Shahar Avin, one of the project participants.
The mod also features IBM's Deep Blue computer, which helps players accelerate AI research.
Notably, the add-on replaces one of Civilization's victory conditions: instead of launching a spacecraft to Alpha Centauri, the player must develop strong artificial intelligence. The mod adds new buildings (research laboratories), a new Wonder (the aforementioned Deep Blue), and a new mechanic called "AI danger."
As soon as the player launches an AI research program, a countdown timer called "AI danger" appears. If the timer runs out, the player receives a message that strong artificial intelligence has been created somewhere in the world, and the game ends in defeat. "It is all very much like real life. You can come a long way in developing your civilization, yet lose everything if you handle the technologies you have created without due care," says the developer.
To avoid this losing scenario, the player can conduct additional research in the field of AI safety: the more such projects are underway, the more slowly the timer ticks down. To win, the player needs to develop technologies for controlling AI, and also invest in the safety projects of neighboring civilizations and city-states. "If you choose the AI scenario, you need more safety research. Investing in this area is not always an obvious necessity, but it has to be done," said Avin.
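To make the mechanic concrete, here is a minimal toy model in Python. It is not the mod's actual code; the function name, numbers, and rules are illustrative assumptions. It only shows the described logic: the danger timer advances each turn, safety research slows the tick, and the outcome depends on whether AI control technology is finished before the timer runs out.

```python
# Toy model of the mod's "AI danger" mechanic (illustrative only;
# all names, numbers, and rules here are assumptions, not the mod's code).

def simulate(turns, safety_projects_per_turn, control_tech_turn,
             timer_total=100, base_tick=2.0, slowdown_per_project=0.3):
    """Run the countdown: the timer advances each turn, safety research
    slows the tick, and finishing AI control tech before it runs out wins."""
    elapsed = 0.0
    for turn in range(1, turns + 1):
        # Safety research reduces how fast the danger timer ticks.
        tick = max(base_tick - slowdown_per_project * safety_projects_per_turn, 0.2)
        elapsed += tick
        if turn >= control_tech_turn:
            return f"Victory on turn {turn}: AI control technology completed."
        if elapsed >= timer_total:
            return f"Defeat on turn {turn}: rogue AI created elsewhere."
    return "Game still in progress."

# Heavy safety investment buys enough time to finish the control technology.
print(simulate(turns=80, safety_projects_per_turn=4, control_tech_turn=60))
# With no safety investment, the timer runs out first and the game is lost.
print(simulate(turns=80, safety_projects_per_turn=0, control_tech_turn=60))
```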
Of course, Civilization can hardly be called a serious research tool, but the add-on helps not only ordinary players but also scientists appreciate the need for a cautious approach to AI development. The risks are shown very clearly, and the situation is quite realistic: the player has to deal with aggressive neighbors, states conducting roughly the same research. If one of the parties succeeds without exercising due caution, defeat is inevitable.
Indeed, confronting global dangers requires global cooperation, and on that point the mod gets it right.