Scott's Supreme Quantum Supremacy FAQ

From a blog post by Scott Joel Aaronson, Professor, Department of Computer Science, University of Texas at Austin



You may have seen the stories - in the Financial Times, Technology Review, CNET, on Facebook, Reddit, Twitter, or elsewhere - claiming that a group of researchers at Google has achieved quantum computational supremacy with a 53-qubit superconducting device. They are easy to find, but I won't link to them here, for the simple reason that they shouldn't exist yet.



The world now knows that Google is indeed preparing a big quantum supremacy announcement, to be made together with the publication of a research paper in a respected scientific journal. That will likely happen within a month.



Meanwhile, NASA, which collaborated on the work, accidentally posted an old version of the Google paper on a public website. It was online only briefly, but long enough to reach the Financial Times, my inbox, and millions of other places. Fact-free arguments about the work are now predictably spreading everywhere.



Apparently, the world is to be denied a clean "one small step for man" moment, in which the strong Church-Turing thesis gets experimentally refuted at a press conference. It will be more like the Wright brothers' flights, fragmentary rumors of which trickled out between 1903 and 1908, when Wilbur and Orville finally agreed to give demonstration flights (except that in our case, fortunately, it will take far less time to clear things up!)



I've been in the know for a couple of months, and it has been extremely hard for me not to blog about it. I promised to keep quiet, but I couldn't resist dropping hints here and there - for example, at my recent Paul Bernays Lectures, I deliberately built the arc of the talks so that they would lead right up to this point.



This post is not the official announcement or confirmation of these claims. If the lightning is already visible to us, the right to sound the thunder belongs to the group of researchers at Google, at a time and place of their choosing.



Instead, I've decided to do my duty as a blogger and "public intellectual," clear up the misinformation circulating on the Internet, and present you with "Scott's Supreme Quantum Supremacy FAQ." You know, just in case you were suddenly curious about quantum supremacy, or wanted to know what would happen if some search-engine company from Mountain View or wherever were hypothetically to claim that it had achieved quantum supremacy.



Well then, without further ado:



Q1: What is quantum computational supremacy?



Often abbreviated to just quantum supremacy (QS), the term refers to using a quantum computer (QC) to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithm running on existing classical computers - and not for incidental reasons, but because of asymptotic quantum complexity. The emphasis is on being as close to 100% sure as possible that the problem really was solved on the QC and really cannot be matched by classical computers; ideally, the problem should also be one that can be solved in the near term (with the noisy, non-universal QCs that already exist or will exist soon). If the problem is also useful, so much the better, but that is not necessary. The Wright Flyer and the Chicago Pile (the first nuclear reactor) were not useful in themselves.



Q2: If Google's researchers really have achieved quantum supremacy, does that mean, as US Democratic presidential candidate Andrew Yang recently tweeted, that "any code can now be cracked"?







No (though as a candidate, I rather like Yang).



There are two problems. First, the devices that Google, IBM, and others are building today have 50-100 qubits and no error correction. Running Shor's algorithm to break RSA encryption would require several thousand logical qubits, and with the error-correction methods known today, that could easily translate into millions of physical qubits, of higher quality than any that exist now. I don't think anyone is close to that, and we have no idea how long it will take.
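
For orientation, here is a back-of-the-envelope version of that arithmetic, as a sketch only; both input numbers are rough assumptions on my part (a few thousand logical qubits for breaking a realistic RSA key, and a commonly quoted ballpark of roughly a thousand physical qubits per logical qubit under known error-correction schemes), not figures from the Google work.

```python
# Rough, illustrative arithmetic only -- both inputs below are assumptions,
# not numbers from any specific paper or device.
logical_qubits = 4_000          # assumed: "several thousand" logical qubits for Shor/RSA
physical_per_logical = 1_000    # assumed: ballpark error-correction overhead per logical qubit
print(f"~{logical_qubits * physical_per_logical:,} physical qubits")   # ~4,000,000: "millions"
```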



Second, even in a hypothetical future with scalable, error-corrected QCs, they will, as far as we can tell today, be able to break some codes but not all of them. By an unfortunate coincidence, the public-key codes they can break include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic-curve cryptography, and so on. Private-key encryption will be affected only minimally. And there are candidate public-key systems (for example, lattice-based ones) that no one has figured out how to break quantumly after more than 20 years of trying; efforts are already under way to migrate to such systems. For details, see my letter to Rebecca Goldstein.



Q3: What calculation, believed to be classically hard, is Google planning to perform, or has it already performed?



I can tell you, though I feel a bit sheepish doing so. The calculation is this: a "challenger" generates a random quantum circuit C (that is, a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth roughly 20, acting on a 2D lattice of 50-60 qubits). The challenger then sends C to the quantum computer and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back the observed n-bit string, and repeat several thousand or several million times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having really done the computation.
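
To make the setup concrete, here is a minimal sketch in Python (NumPy only) of what random circuit sampling means, simulated classically by brute force for a toy number of qubits. Everything here is my own illustration, not Google's circuits: I use a 1D chain of nearest-neighbor gates rather than a 2D lattice, Haar-random gates, tiny parameters, and made-up helper names.

```python
# Illustrative sketch of random circuit sampling, simulated by brute force.
# Assumes n is small enough that all 2^n amplitudes fit in memory.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_circuit(n, depth):
    """Layers of 1-qubit gates plus alternating nearest-neighbor 2-qubit gates (1D chain)."""
    layers = []
    for d in range(depth):
        layer = [((q,), random_unitary(2)) for q in range(n)]
        layer += [((q, q + 1), random_unitary(4)) for q in range(d % 2, n - 1, 2)]
        layers.append(layer)
    return layers

def apply_gate(state, qubits, u, n):
    """Apply unitary u to the given qubits of an n-qubit state vector."""
    k = len(qubits)
    state = state.reshape([2] * n)
    state = np.moveaxis(state, list(qubits), list(range(k)))
    state = (u @ state.reshape(2 ** k, -1)).reshape([2] * n)
    state = np.moveaxis(state, list(range(k)), list(qubits))
    return state.reshape(-1)

def run(n=8, depth=20, shots=1000):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                # the all-|0> initial state
    for layer in random_circuit(n, depth):
        for qubits, u in layer:
            state = apply_gate(state, qubits, u, n)
    probs = np.abs(state) ** 2                    # the distribution D_C
    samples = rng.choice(2 ** n, size=shots, p=probs)
    return probs, samples

if __name__ == "__main__":
    probs, samples = run()
    print("max probability:", probs.max(), "vs uniform:", 1 / len(probs))
```

Running it shows the key qualitative feature discussed below: the output distribution is noticeably non-uniform, with some strings several times likelier than 1/2^n.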



In other words, this is not a problem, like factoring, with a single right answer. The circuit C gives rise to a probability distribution - call it D_C - over n-bit strings, and the task is to output samples from that distribution. There are 2^n strings in the support of D_C - so many that, if the QC is working properly, it will never output the same string twice. Crucially, D_C is not the uniform distribution. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller ones. And even though we will see only a tiny number of samples compared with 2^n, we can check whether those samples preferentially cluster on the strings predicted to be likelier, thereby building confidence that something classically infeasible is going on.



Put simply, the QC is simply being asked to apply a random (but known) sequence of operations - not because we care about the result for its own sake, but because we are trying to prove that it can beat a classical computer at some well-defined task.



Q4: But if the QC exists only to spit out random garbage, whose only virtue is that it is hard to simulate classically, then who cares? Isn't this just a nothingburger?



No. As I wrote above, it's not a burger with everything on it, but it is most certainly a somethingburger!



Have some respect for the immensity of what we are talking about, and for the terrifying engineering feats needed to make it happen. Before quantum supremacy, skeptics could amuse themselves by pointing out that, after all the billions spent over more than 20 years, not a single QC had ever been used to solve a problem faster than your laptop could, or at least not in any way that depended on the computer's quantum nature. In a world with quantum supremacy, that is no longer true. A superposition of 2^50 or 2^60 complex numbers has been tamed computationally, using time and resources that are tiny compared with 2^50 or 2^60.



I keep coming back to the Wright brothers' plane because I am struck by the gulf between what we are talking about and the dismissiveness toward it all over the Internet. It is like the reaction of someone who firmly believed useful flight through the air to be impossible, and who then watched a plain wooden propeller aircraft hang in the air - a sight that would not, strictly speaking, refute his views, but which certainly should not comfort him in them either.



Was I really wrong to worry, all these years, that the constant hype around far less significant quantum computing milestones would wear out people's patience, so that no one would care by the time something actually worth celebrating happened?



Q5: Years ago, you scolded the masses for being over-excited about D-Wave and its claims of enormous quantum speedups for optimization problems via quantum annealing. Now you scold the masses for not being excited enough about quantum supremacy. Why can't you make up your mind?



Because my goal is not to push the "level of excitement" in any particular direction, but to make sure it is right! With hindsight, wouldn't you say I was broadly right about D-Wave, even when my criticism made me unpopular in certain circles? I am trying to be right about quantum supremacy too.



Q6: If quantum supremacy calculations just involve sampling from a probability distribution, how do you verify that they were done correctly?



Glad you asked! This is the subject of a theory that we and other scientists have been developing over the past decade. I already sketched it in Q3: you check the statistics of the samples returned by the QC and verify that they preferentially cluster on the "peaks" of the probability distribution D_C. One convenient way to do this is what Google calls the "linear cross-entropy benchmark": sum up Pr[C outputs s_i] over all the samples s_1, ..., s_k that the QC returned, and then declare the test passed if that sum exceeds some threshold - say, bk/2^n, for a constant 1 < b < 2.
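
Here is a small illustrative sketch of that kind of check in Python. The b*k/2^n threshold follows the description above; the stand-in distribution, the sample counts, and the constant b = 1.5 are assumptions of mine for the toy example, and linear_xeb_passes is a hypothetical helper, not Google's actual benchmark code.

```python
# A toy "linear cross-entropy"-style check.  `ideal_probs` is the exact output
# distribution D_C (classically computable only for small n); `samples` are the
# bitstrings the device returned, encoded as integers.
import numpy as np

def linear_xeb_passes(ideal_probs, samples, b=1.5):
    """Return (passed, score, threshold), passing iff sum_i Pr[C outputs s_i] > b*k/2^n."""
    n_states = len(ideal_probs)             # 2^n
    k = len(samples)
    score = ideal_probs[samples].sum()      # sum of ideal probabilities of the observed strings
    threshold = b * k / n_states
    return score > threshold, score, threshold

# Toy usage: samples drawn from the ideal distribution should pass,
# while uniformly random strings should (typically) fail.
rng = np.random.default_rng(1)
probs = rng.exponential(size=2 ** 10)       # crude stand-in for a peaked, Porter-Thomas-like D_C
probs /= probs.sum()
good = rng.choice(len(probs), size=5000, p=probs)
junk = rng.integers(len(probs), size=5000)
print(linear_xeb_passes(probs, good))       # expected: passes
print(linear_xeb_passes(probs, junk))       # expected: fails
```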



Of course, to run this test you need to compute the probabilities Pr[C outputs s_i] on a classical computer - and the only known way to do that takes on the order of 2^n time. Is that a problem? Not if n = 50 and you are Google, able to crunch numbers like 2^50 (though not 2^1000, which exceeds a googol - ha, ha). By running a cluster of classical cores for, say, a month, you can eventually confirm that the output your QC produced in a couple of seconds was correct - and, at the same time, that the QC was many orders of magnitude faster. Note, however, that this also means sampling-based QS experiments are tailor-made for the roughly 50-qubit devices being built right now. With even a hundred qubits, we would have no way to verify the QC's results using all the computing power on the planet.
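
Just to put the scaling in perspective, here is a toy order-of-magnitude calculation. The classical throughput (10^15 operations per second), the number of samples (10^6), and the assumption that scoring each sample costs on the order of 2^n operations are all round-number guesses of mine, not measurements.

```python
# Order-of-magnitude guesses only: a large classical cluster at ~1e15 ops/s,
# ~1e6 samples to score, and ~2^n operations to compute each ideal probability.
ops_per_second = 10 ** 15
num_samples = 10 ** 6

for n in (50, 100):
    seconds = num_samples * 2 ** n / ops_per_second
    print(f"n = {n:3d}:  ~{seconds:.2e} seconds  (~{seconds / 86400:.2e} days)")
```

With these assumptions, n = 50 comes out at a couple of weeks of cluster time, while n = 100 comes out at around 10^16 days, which is the gap the paragraph above is pointing to.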



Let me stress that this issue is specific to sampling experiments like the one we are doing now. If Shor's algorithm were used to factor a 2000-digit number, we could easily check the result by multiplying the claimed factors and verifying that they are prime. Likewise, if a QC were used to simulate some complicated biomolecule, we could check its output against experiment.
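
By contrast, here is how trivial the factoring check would be - a minimal sketch with made-up inputs, using sympy's isprime purely for convenience.

```python
# Verifying a (hypothetical) quantum factoring result: multiply the claimed
# prime factors back together and test each for primality.
from math import prod
from sympy import isprime

def verify_factorization(n, claimed_factors):
    """True iff the claimed factors are all prime and multiply to n."""
    return prod(claimed_factors) == n and all(isprime(p) for p in claimed_factors)

print(verify_factorization(15, [3, 5]))   # True
print(verify_factorization(15, [3, 7]))   # False: 3 * 7 != 15
```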



Q7: Hold on. If classical computers can verify the results of a QS experiment only by simulating the experiment (albeit slowly), then how is that "quantum supremacy"?



Come on. With a 53-qubit chip, it is entirely reasonable to expect a speedup of several million times in a regime where you can still directly verify the results, and also to see the speedup growing exponentially with the number of qubits, exactly as the asymptotic analysis predicts.



Q8: Is there a mathematical proof that no fast classical algorithm could keep up with the sampling-based QS experiment?



Not today. But that is not the fault of the quantum supremacy researchers! Theoretical computer scientists cannot even prove basic conjectures like P ≠ NP or P ≠ PSPACE, so you should not hope to rule out a fast classical simulation unconditionally. The best we can hope for is conditional hardness, and we have indeed proven some results of that kind. The biggest theoretical problem in this area is to prove stronger conditional hardness results.



Q9: Does the sampling-based QS experiment itself have any useful applications?



When people first thought about this question, the obvious answer seemed to be "no" (and I was one of them). But the situation has changed recently - for example, because of my certified randomness protocol, which shows how a sampling-based QS experiment can very quickly be turned into a generator of bits whose randomness can be proven to a skeptical third party (under some computational assumptions). That, in turn, could potentially be applied to proof-of-stake cryptocurrencies and other cryptographic protocols. I hope more applications like this will be discovered soon.



Q10: If QS experiments just generate random bits, what is interesting about that? Isn't it trivial to turn qubits into random bits simply by measuring them?



The point is that a QS experiment does not generate uniform random bits. It samples from some complicated, correlated probability distribution over 50- or 60-bit strings. In my certified randomness protocol, the deviations from uniformity play a central role in how the QC convinces a classical skeptic that the bits really are random and not generated by some hidden deterministic scheme (like a pseudorandom number generator).



Q11: But haven't decades of quantum-mechanics experiments - for example, those violating the Bell inequalities - already demonstrated QS?



This is purely a confusion over words. Those experiments demonstrated other kinds of "quantum supremacy": in the case of Bell-inequality violations, you could call it "quantum correlational supremacy." They did not demonstrate computational supremacy - that is, something that cannot be simulated on a classical computer (where the classical simulation is not required to respect spatial locality or anything like that). Today, when people say "quantum supremacy," they usually mean quantum computational supremacy.



Q12: Fine, but there are countless examples of materials and chemical reactions that are hard to simulate classically, as well as special-purpose quantum simulators (like the Lukin group's at Harvard). Why don't those count as examples of QS?



By some definitions, they do count! The key difference with the work of Google's researchers is that their device is fully programmable - it can be programmed with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from a classical computer.



In other words, QS skeptics can no longer grumble that, sure, there are quantum systems that are hard to simulate classically, but only because nature in general is hard to simulate, and you don't get to relabel whatever random chemical you find in the wild as a "computer simulating itself." By any reasonable definition, the superconducting devices that Google, IBM, and other companies are building today really are computers.



Q13: Did you invent the concept of quantum supremacy?



No. I did play some role in developing it, which is why Sabine Hossenfelder and others have credited me with too large a share of the idea. The term "quantum supremacy" was coined by John Preskill in 2012, although in some sense the core concept goes back to the beginnings of quantum computing itself in the early 1980s. In 1994, the use of Shor's algorithm to factor huge numbers became the classic proposed quantum supremacy experiment - although it is still much too hard to perform today.



The key idea of instead demonstrating QS with a sampling problem was, as far as I know, first put forward by Barbara Terhal and David DiVincenzo in a visionary 2002 paper. The "modern" push for sampling-based supremacy experiments began in 2011, when Alex Arkhipov and I published our paper on BosonSampling, and Bremner, Jozsa, and Shepherd published their paper on commuting Hamiltonians. These papers showed not only that "simple," non-universal quantum systems can solve apparently hard sampling problems, but also that an efficient classical algorithm for those problems would imply a collapse of the polynomial hierarchy. Arkhipov and I also took the first steps toward showing that even approximate versions of quantum sampling problems can be classically hard.



As far as I know, the idea of "random circuit sampling" - that is, generating a hard sampling problem by choosing a random sequence of nearest-neighbor 2-qubit gates in a superconducting architecture - originated in an email thread I started in December 2015, whose participants included John Martinis, Hartmut Neven, Sergio Boixo, Ashley Montanaro, Michael Bremner, Richard Jozsa, Aram Harrow, Greg Kuperberg, and others. The thread was titled "a hard 40-qubit sampling problem," and my first message began with "sorry for the spam." In it, I discussed some advantages and disadvantages of three options for demonstrating sampling-based QS: (1) random circuits, (2) commuting Hamiltonians, and (3) BosonSampling. After Kuperberg argued for the first option, a consensus quickly formed among the participants that (1) really was the best choice from an engineering standpoint - and that, if the existing theoretical analysis for (1) was not yet sufficient, we could come up with more.



After that, the Google group carried out an enormous amount of analysis of random circuit sampling, both theoretical and numerical, while Lijie Chen and I, and Bouland and his colleagues, gave evidence of various kinds for the classical hardness of the problem.



Q14: If QS is achieved, what will it mean for the skeptics?



I would not want to be in their shoes! They could retreat to the position that of course QS was always possible (and who ever said otherwise? certainly not them!), and that the real issue all along has been quantum error correction. Many of them have indeed consistently held that position. But others, including my friend Gil Kalai, claimed right here on this blog that QS could never be achieved, for fundamental reasons. And I won't let them wriggle out of it.



Q15: What's next?



Achieving QS means that the Google group already has the hardware needed to demonstrate my protocol for generating certified random bits. And that really is one of the first things we plan to do.



The next obvious milestone will be to use a programmable QC with 50-100 qubits for useful quantum simulations (for example, of condensed-matter systems) faster than any known classical method. A second obvious milestone will be to demonstrate quantum error correction in which an encoded logical qubit stays alive longer than the underlying physical qubits do. There is no doubt that Google, IBM, and the other players will race toward both of these milestones.



After this post went up, Steve Girvin reminded me that the Yale group has already achieved quantum error correction, albeit in a bosonic system rather than with superconducting qubits. So perhaps the next milestone is better stated as: quantum computational supremacy and useful quantum error correction achieved in one and the same system.


