Google's Quantum Bet on AI - and What It Means for All of Humanity

Google has more computing power, more data, and more talented AI specialists than any other company on Earth - and it is not slowing down. Which means the rest of us cannot afford to slow down either.







The human brain is a funny thing. Some memories stay with us forever: the birth of a child, a car accident, an election day. But we retain only certain details - the color of the delivery-room door, the smell of the polling station - while others, such as the nurse's face or what we were wearing at the moment of the crash, fade away. Sundar Pichai, Google's CEO, will forever remember the day AI left the lab.



"It was in 2012, and I was in a room with a small team, just a few people," he says. Jeff Dean, the legendary Google programmer who helped write the company's search engine, was working on a new project and wanted Pichai to take a look at it. "Whenever Jeff wants to show you something new, you feel a joyful excitement," Pichai says.



Pichai doesn't remember which building he was in when he watched Dean's demo, though other odd details of that day stuck with him. He remembers that he stood rather than sat, while someone joked about an HR blunder: the department had classified the recently recruited Geoffrey Hinton - the "father of deep learning," an AI researcher of four decades and a future Turing Award winner - as an intern.



Google's future CEO was at that time a senior vice president running Chrome and Apps, and AI was not on his mind. Nor was anyone else at Google looking in AI's direction, at least not seriously. Yes, 12 years earlier, Google's founders, Larry Page and Sergey Brin, had publicly declared that AI would transform the company. Page told Online magazine in May 2000: "The ultimate search engine is smart. It has to understand your query, and it has to understand all the documents, and that is clearly a job for AI." But despite the high-profile promises, machine learning had for decades produced mediocre results at Google and at other companies.







However, at that moment powerful forces were stirring on Google's servers. For a little over a year, Dean, Andrew Ng, and their colleagues had been building a massive computer network whose connections simulated those of the human brain. The team linked 16,000 processors across 1,000 computers, yielding a network with a billion connections. It was an unprecedented computing system, though it still lagged far behind the human brain, with its roughly 100 trillion connections.



To test what this massive neural network could do with data, the engineers ran a deceptively simple experiment. For three days straight, they fed the machine a diet of millions of random images from YouTube videos (Google had bought the video-hosting service in 2006). They gave it no instructions; they just wanted to see what it would do. What they saw was that the computer brain digesting YouTube was not so different from a human one. In a far corner of the computer's memory, Dean and his colleagues found a spontaneously generated, blurry image of the one thing the machine had kept encountering over those 72 hours: cats.



It was a machine learning on its own.
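The technique at work here is unsupervised feature learning: no one tells the network what a cat is; patterns that recur across millions of frames simply crystallize in its hidden layers. As a rough, hypothetical sketch of the idea - a toy autoencoder in modern TensorFlow, nowhere near the scale or exact architecture of the 2012 system:

```python
# Toy unsupervised learner: an autoencoder trained only to reconstruct
# its input. With no labels at all, recurring visual patterns (the
# "cats") end up represented in the hidden layer. Hypothetical sketch;
# the 2012 Google Brain network was vastly larger and more elaborate.
import numpy as np
import tensorflow as tf

# Stand-in for millions of YouTube frames: 1,000 random 64x64 images.
frames = np.random.rand(1000, 64 * 64).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),        # learned features
    tf.keras.layers.Dense(64 * 64, activation="sigmoid"), # reconstruction
])

# The only training signal is the image itself.
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(frames, frames, epochs=3, batch_size=64)
```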



When Pichai first saw intelligence emerging on Google's servers, he felt his perception of the world shift; he felt a kind of premonition. "This thing was bound to grow in scale and perhaps show us how the universe works," he says. "This will be the most important thing that we, humanity, will ever work on."



The story of AI's rise at Google traces the path into the digital future that billions of us are traveling together - a path few truly understand and none of us can turn off of. And for the most part, Google dominates it. Few other companies (never mind governments) on the planet have the will or the ability to advance computer thinking as effectively. Google runs more billion-user products than any other technology company on the planet: Android, Chrome, Drive, Gmail, the Google Play Store, Maps, Photos, Search, and YouTube. Unless you live in China, if you have access to the Internet, it is almost certain that some Google project is augmenting your brain.



Shortly after Pichai took over as CEO in 2015, he set out to make Google an AI-first company. It already had several units devoted to AI research, including Google Brain and DeepMind (the latter acquired in 2014), and Pichai focused on turning all of that intellectual property into new and improved products. Smart Compose for Gmail, introduced in May 2018, already suggests 2 billion characters' worth of email replies each week. Google Translate can recreate your voice speaking a language you don't know. Duplex, a personal assistant, can make appointments and book tables over the phone with a voice so humanlike that many people on the other end never suspected they were talking to a robot. That last capability raises serious ethical questions and has drawn public complaints; the company says it now always informs consumers when a call comes from Google.







The reach of Google's AI extends far beyond the company's own services. Third-party developers - from startups to large corporations - already use its AI tools for everything from training smart satellites to track changes on the Earth's surface to rooting out abusive language on Twitter (well, they're trying). Google's AI runs on millions of devices, and this is just the beginning. Google is on the verge of achieving so-called "quantum supremacy." A new breed of computer will be able to crack certain complex problems millions of times faster than ordinary machines. We are on the threshold of a cosmic computing era.



Used with good intentions, AI can unlock potential that helps society. It could find cures for deadly diseases (Google executives say their smart machines have shown the ability to detect lung cancer as much as five years earlier than doctors), feed the hungry, and even help fix the climate. A paper posted to arXiv, the preprint repository run by Cornell University, written by several leading AI researchers (some of whom work at Google), identified several ways machine learning could be used to fight climate change, from accelerating the development of solar-energy tools to radically optimizing energy use.



Used malevolently, AI could arm tyrants, trample human rights, and destroy democracy, freedom, and privacy. In June, the American Civil Liberties Union released a report entitled "The Dawn of Robot Surveillance," warning how the millions of security cameras already installed across the US (including models sold by Google) could be harnessed by governments, through AI, to surveil citizens and to control them. This is already happening in parts of China. The same month, a lawsuit was filed accusing Google of using AI in hospitals in ways that violate patient privacy.



Every major breakthrough in human history has been used both for good and for ill. The printing press spread Thomas Paine's Common Sense [a pamphlet supporting the American colonists seeking independence from Britain / translator's note] as well as Adolf Hitler's fascist manifesto Mein Kampf ("My Struggle"). With AI, though, the problem takes on a new dimension: a printing press does not choose what to print. AI, at its full potential, will do exactly that.



It is time to ask questions. "Think about what you would have wanted to say to the people who invented fire, started the Industrial Revolution, or developed atomic energy," says Greg Brockman, cofounder of OpenAI, an artificial general intelligence startup that received $1 billion from Microsoft in July.



Politicians on both the left and the right say Google is too big and should be broken up. Would a fragmented company still be able to democratize AI? Or would leadership in the field pass, as company executives warn, to the Chinese government, which has declared its intention to take the lead? President Xi Jinping has committed more than $150 billion to making China the world leader in AI by 2030.



Inside Google, warring factions are fighting over the future of AI. Thousands of employees have rebelled against their leaders, trying to stop the technology they built from being used for surveillance and war. How Google decides to develop and deploy AI may ultimately determine whether the technology helps humanity or harms it. "When these systems are developed, they can be deployed across the whole world," explains Reid Hoffman, cofounder of LinkedIn and a board member of Stanford University's Institute for Human-Centered Artificial Intelligence. "That means everything their creators get right or wrong will affect our lives at enormous scale."






"At the very beginning, the neural network is not trained," Jeff Dean says one fine spring evening in Mountain View, California. He is standing under a palm tree next to the Shoreline Amphitheatre, where Google is hosting a party to mark the opening of I/O, its annual developer conference.



At I/O, Google reveals its plans to developers and the rest of the world. Dean, in a pink-and-gray polo shirt, jeans, sneakers, and a backpack, is one of the main speakers. "It's like meeting Bono," gasps one Korean programmer, rushing over to take a selfie with Dean after one of his talks earlier that day. "Jeff is a god," another attendee tells me, almost surprised that I don't know. At Google, Dean is often compared to Chuck Norris, the action-movie hero famed for his martial-arts prowess and his ability to take on many enemies at once.



"Oh, that looks good, and I'm all alone here," Dean says with a grin as a waiter stops beside him with a tray of vegetarian tapioca puddings. Leaning against a tree, he talks about neural networks the way Laird Hamilton might describe surfing Teahupo'o. His eyes light up and his hands gesture rapidly. "These are the layers of a neural network," he says, grabbing the tree and using its gray trunk to explain the connections between a computer brain's neurons. He peers at the tree as if he sees something hidden inside.



Last year, Pichai made Dean the head of Google AI, responsible for deciding what the company builds and where it invests. Dean earned the role in part by scaling the YouTube neural-network experiment into a new platform that taught machines to think at enormous scale. The system began as an internal project called DistBelief, which many teams, including Android, Maps, and YouTube, began using to make their products smarter.



But by the summer of 2014, as DistBelief spread inside Google, Dean began to notice its flaws. It was poorly adapted to technological shifts such as the rising importance of GPUs (the computer chips originally built for graphics processing) and the treatment of speech as a complex data set. Nor was DistBelief designed to have its code opened up, which limited its growth. So Dean made a bold decision: build a new version that would be open to everyone. In November 2015, Pichai introduced DistBelief's heir, TensorFlow; it would prove to be one of his biggest announcements as CEO.



The importance of opening TensorFlow to developers outside Google is hard to overstate. "People couldn't wait to get their hands on it," says Ian Bratt, director of machine learning at Arm, one of the world's largest computer-chip companies. Today, Twitter uses it to build bots that track conversations, rank tweets, and encourage people to spend more time in the feed. Airbus trains satellites with it so they can study almost any patch of the Earth's surface. Students in New Delhi have adapted mobile devices to monitor air quality. Last spring, Google released early versions of TensorFlow 2.0, which makes its AI accessible to even inexperienced developers. The ultimate goal is to make building an AI application as simple as building a website.
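How simple is that in practice? A hedged illustration: with the Keras API bundled into TensorFlow 2.0, a complete handwritten-digit classifier, along the lines of the framework's own beginner tutorial, fits in about a dozen lines:

```python
import tensorflow as tf

# Load a built-in dataset of 70,000 handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Define, train, and evaluate a small image classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```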



TensorFlow has already been downloaded 41 million times. Millions of devices - cars, drones, satellites, laptops, phones - use it to learn, think, reason, and create. An internal company document contains a graph of TensorFlow use inside Google (and, by proxy, of the number of machine-learning projects there): since 2015, it has grown by 5,000%.



Technology insiders point out that TensorFlow may be not just a gift to developers but a Trojan horse. "I worry that they are trying to control access to AI," says a former Google employee who asked to remain anonymous because his current job depends on access to Google's platform. Today, TensorFlow has only one major competitor, Facebook's PyTorch, which is popular with researchers. That gives Google significant control over the foundational layer of AI. "Look at what Google did with Android," the same person continues. Last year, EU regulators fined the company $5 billion for requiring electronics manufacturers to preinstall Google applications on devices running its mobile OS. Google has appealed, but it faces further investigations into anticompetitive behavior in both Europe and India.



By helping AI grow, Google has created demand for new tools and products that it can sell. One example is the Tensor Processing Unit (TPU), an integrated circuit designed specifically to accelerate TensorFlow applications. When developers need more computing power for their TensorFlow workloads - and they usually do - they can pay Google for time on these chips running in its data centers.
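From the developer's side, renting that power takes only a few lines of setup. A hedged sketch, assuming a recent TensorFlow 2.x release ("my-tpu" is a hypothetical name for a Cloud TPU instance provisioned, and billed, through Google Cloud):

```python
import tensorflow as tf

# Locate and initialize a Cloud TPU. "my-tpu" is a hypothetical
# instance name; the hardware itself lives in a Google data center.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# A model built inside the strategy's scope is replicated across the
# TPU's cores; its training steps then run on Google's chips.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```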



TensorFlow's success converted skeptics among Google's own leadership. "It was clear to everyone that AI didn't work," Sergey Brin recalled in an interview at the 2017 World Economic Forum. "People tried it, they tried neural networks, and nothing worked." Even as Dean and his team began to succeed, Brin remained unconvinced. "Jeff Dean would periodically come up to me and say, 'Look, the computer made a picture of a cat,' and I would say, 'OK, that's very nice, Jeff,'" he said. But even he came to admit that AI was "the most significant development in computing in my lifetime."






Stage 4 at the Shoreline Amphitheatre seats 526 people, and every seat is taken. It is day two of I/O, and Jen Gennai, Google's head of responsible innovation, is leading a session entitled "Writing the Playbook for Fair and Ethical Artificial Intelligence and Machine Learning." She tells the crowd: "We have identified four forbidden areas, technologies we will not develop. We will not build or deploy weapons. We will also not deploy technologies that we believe violate international human rights." The company also promises to refrain from technologies that cause "overall harm" and those that "gather or use information for surveillance, violating internationally accepted norms." She and two other Google directors explain how the company now builds its AI principles into every product under development, and that Google has a comprehensive plan for everything from rooting bias out of its algorithms to predicting the unintended consequences of AI.



After the session, developers from various companies cluster in a small group and vent their displeasure. "It feels like we didn't get enough," says one of them, an employee of a large international corporation that uses TensorFlow and often works with Google. "They tell us: 'Don't worry, we have it all under control.' But we all know that isn't so."



And developers have every reason for skepticism. Google's words quite often diverge from its deeds, and with AI the stakes are higher. In March 2018, Gizmodo was the first to report on Google's Pentagon contract to develop AI for analyzing military drone footage under Project Maven. After three months of employee protests, Pichai announced that the contract would not be renewed. Soon after, another project came to light: Dragonfly, a search engine for Chinese users, meant to be as powerful and ubiquitous as the one handling 94% of search queries in the United States, while complying with Chinese censorship by blocking content on topics such as human rights, democracy, free speech, and civil disobedience. Dragonfly would also have linked users' phone numbers to their search queries. Employees rebelled for another four months, and activists drew Amnesty International and Google shareholders into the fight. Last December, Pichai told the US Congress that Google has no plans to launch a search engine in China.







Amid these upheavals, one Google programmer asked Dean a direct question: would the company keep working with "tyrannical regimes"? "We need to know: where is the line?" this programmer tells me, using the company's own vocabulary. "I pressed for an answer to the question: what would you fundamentally refuse to do? But I never got a clear explanation." In protest, the employee quit.



The affable Dean turns serious when you ask him about AI's dark side. "People in my organization spoke out sincerely about what we should be doing for the Department of Defense," he says, referring to the Maven work. Dean describes a list of AI applications that Google could pursue but that he personally would not work on. "One is autonomous weapons. That is something I don't want to work on, and I don't want anything to do with it," he says, looking me straight in the eye.



As reporters dug into the Maven controversy, The Intercept and The New York Times published internal emails showing how worried company officials were about how the public might perceive the breadth of Google's AI ambitions. "I don't know what would happen if the media picked up a theme that Google is secretly building AI weapons," wrote Fei-Fei Li, chief scientist for AI at Google Cloud and one of the authors of Google's AI principles, in one of them. "Avoid AT ALL COSTS any mention or implication of AI. Weaponized AI is one of the most sensitive topics in AI, if not the most sensitive. This is red meat for the media, which will find every way to damage the company." She also suggested planting some positive PR stories about the company's democratization of AI and what she called "humanistic AI." "I would be super careful to protect these very positive images," she wrote.



The AI protests grew into a prolonged PR crisis. In March, the company announced an external advisory council on advanced technologies, informally known as the "AI ethics board." It did not last a week: thousands of Google employees opposed its composition, which included the CEO of a drone company and the president of the right-wing Heritage Foundation, who had publicly made transphobic statements and denied climate change.



Pichai himself has stepped into the debate several times. Last November, in a letter to employees, he acknowledged Google's mistakes. "We recognize that we have not always gotten everything right in the past, and we are sincerely sorry," he wrote. "It's clear we need to make some changes." Yet controversy over how Google's technology is used keeps tearing at the company. In August, the employee group Googlers for Human Rights published a public petition, with more than 800 signatures, demanding that the company not provide technology to US Customs and Border Protection, Immigration and Customs Enforcement, or the Office of Refugee Resettlement at the Department of Health and Human Services.






When I ask Pichai how Google's approach to AI affects his own work, he connects it to another company priority: easing concerns about what Google does with the user data it collects. "I always push the teams working on AI - even though it sounds somewhat counterintuitive - to increase privacy, because I believe AI gives us a chance to do that." Last spring, he described how Google uses machine learning to protect data stored on smartphones from third-party access.



The fears around AI, he says, are greatly exaggerated. "It's important for people to understand that you don't need to worry just yet - we are at a very early stage, and we have time," he explains. Pichai hopes Google can calm the anxieties surrounding AI by touting its virtues. Through its AI for Social Good initiative, Google applies machine learning to what the company calls "the world's greatest social, humanitarian, and environmental problems." Teams use AI to predict floods, track whale migration, diagnose cancer, and detect illegal mining and deforestation. At I/O, a young Ugandan entrepreneur invited by the company described how he uses TensorFlow to track the spread of fall armyworms across Africa - crop-devouring pests that trigger hunger across the continent. The AI Impact Challenge, launched in 2018, offered $25 million in grants to nonprofits and startups using AI for tasks such as protecting forests and fighting fires.



The AI disputes have also led the company to walk away from two controversial initiatives. Last December, Google shelved its facial-recognition software, even as rival Amazon continues to develop its own version despite protests from its employees and allegations that police use of it is racially discriminatory. An informed source says the decision could cost Google billions in forgone revenue. The company also withdrew, on ethical grounds, from a $10 billion project to provide cloud services to the Pentagon. Amazon and Microsoft are still pursuing it.



Asked how Google decides whether a project will help or harm society, Pichai cites the "lip-reading project." A team of engineers had the idea of using AI to read lips in camera footage; they wanted to give deaf people a new way to communicate. But someone worried about the project's unintended consequences: could bad actors use it for surveillance through street cameras? The engineers tested the system on footage from street cameras, security cameras, and other publicly accessible cameras, and determined that the AI needed close-up, higher-quality video to work. Google published the research, confident that it could not easily be abused.






It is a sunny day in Santa Barbara, California, but a thermometer in a Google lab reads 10 millikelvin - one hundredth of a degree above absolute zero. "This is one of the coldest places in the universe," Erik Lucero, a researcher in the laboratory, tells me. "Inside this thing," he says, pointing to a shiny metal container, "it's colder than deep space." The container, about the size and shape of an oil drum, is made of copper and plated with real gold. Thick, tentacle-like wires of niobium-titanium alloy emerge from the top, carrying control signals down to the machine and sensor data back up to a computer.



That drum holds one of the most fragile, and potentially most powerful, machines on Earth: a quantum computer. If everything goes according to plan, it will supercharge AI's capabilities - perhaps enough to force us to rethink our understanding of the universe, and humanity's place in it.



Dreams of quantum computers date to the 1980s, when Richard Feynman, a veteran of the Manhattan Project that built the atomic bomb, began theorizing about supercharging computation by harnessing quantum mechanics, the physics underlying the nuclear age. Today's computers work with bits of information, each of which holds a one or a zero; to compute results, probabilities, and equations, they must work step by step, eliminating one option after another until they reach a solution. Quantum computers work with qubits, in which ones and zeros can exist simultaneously. This lets qubits process certain kinds of information far faster. How much faster? One widely cited example: a 300-qubit computer could perform as many simultaneous calculations as there are atoms in the universe.
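The arithmetic behind that widely cited example is easy to verify: n qubits hold a superposition over 2^n basis states, and for n = 300,

```latex
% 300 qubits span a state space of size
2^{300} \approx 2 \times 10^{90}
% which dwarfs the estimated number of atoms
% in the observable universe, roughly
10^{80}
```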



"And here are those very qubits," says Lucero, inviting me to look through a microscope, where I see some blurry crosses. There are 22 of them in all - a smaller batch. Elsewhere in the Google lab, a 72-qubit chip has been built. For now, the qubits survive only about 20 microseconds, in conditions colder than deep space.



To build a commercially valuable quantum computer, the company will have to create enough qubits and keep them stable and error-free long enough for serious computational breakthroughs to occur. Other laboratories are racing them, but Google has assembled some of the world's best experts at creating environments where qubits can live and thrive. And it is moving toward the goal faster than almost anyone expected: in December, Google benchmarked its best quantum processor against an ordinary laptop, and the laptop won. A few weeks later, after some tweaks to the processor, it beat the laptop but still lagged behind a desktop machine. By February, the quantum computer had outrun every computer in the laboratory.



Hartmut Neven, who leads Google's quantum team, presented the lab's progress at the Google Quantum Spring Symposium in May, describing the growth in computing power as doubly exponential - a staggering progression that looks something like this:



2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4), …



In computing circles, this growth rate of quantum computing power has come to be known as Neven's law, a nod to Moore's law, which holds that classical computers advance by doubling the number of transistors that fit on a given chip area roughly every 18 months.
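A stylized way to write the two laws side by side (constants, units, and exact doubling periods omitted; this is a sketch of the shape of the curves, not a precise model):

```latex
% Moore's law: classical capacity doubles every fixed period,
% i.e., grows exponentially with time t
C_{\mathrm{classical}}(t) \;\propto\; 2^{t}

% Neven's law: quantum computing power grows doubly exponentially,
% as reported, because the processors improve exponentially while
% each added qubit also doubles the size of the computed state space
C_{\mathrm{quantum}}(t) \;\propto\; 2^{2^{t}}
```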



Now the Google team has its sights set on a milestone: quantum supremacy. It will take many more years for their quantum computer to reach its full potential, but in the laboratory you can feel the anticipation. "There are problems humanity cannot solve without a quantum computer," says Lucero, standing beside the machine that may one day solve them. "That idea of empowering humanity is what creates the excitement."



A rhythmic hum fills the room where the qubits are incubated. What will it mean for humanity when computers can think and calculate exponentially faster? This new science may yet explain the deepest mysteries of the universe: dark matter, black holes, the human brain. "This is the 'Hello, world!' moment," says Lucero, comparing it to the 1984 debut of the Macintosh, the computer that opened a new era for a generation of programmers. As Google opens this door onto a new cosmos, the rest of us need to prepare for what awaits on the other side.


