Probability can be imagined in many ways, and quantum mechanics covers them all
An article by Sean Carroll, a professor of theoretical physics at the California Institute of Technology
In the Philosophical Essay on Probabilities, published in 1814, Pierre-Simon Laplace introduced a notorious hypothetical being: a "vast intelligence" that knows the complete physical state of the universe. For such a creature, nicknamed "Laplace's demon" by later commentators, there would be no mysteries about what has happened in the past or what will happen at any moment in the future. In the clockwork universe described by Isaac Newton, the past and the future are exactly determined by the present.
Laplace's demon was never meant as a practical proposal; the imagined intelligence would have to be as vast as the universe itself. In practice, chaotic dynamics can amplify tiny imperfections in our knowledge of a system into complete uncertainty. But in principle, Newtonian mechanics is deterministic.
A hundred years later, quantum mechanics changed everything. Conventional physical theories specify the current state of a system and how it evolves over time. Quantum mechanics does this too, but it also brings an entirely new set of rules governing what happens when a system is observed or measured. In particular, measurement outcomes cannot be predicted with perfect accuracy, even in principle. The best we can do is calculate the probability of obtaining each of the possible outcomes, according to the so-called Born rule: the wave function assigns an "amplitude" to every possible measurement outcome, and the probability of obtaining that outcome equals the square of the amplitude. This feature is what led Einstein to complain that God plays dice with the universe.
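In symbols (a standard textbook statement of the rule, included here for reference): if the quantum state is expanded over the possible measurement outcomes, the squared amplitudes give the probabilities, and they must sum to one:

$$|\psi\rangle = \sum_i a_i\,|i\rangle \quad\Longrightarrow\quad P(i) = |a_i|^2, \qquad \sum_i |a_i|^2 = 1.$$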
Researchers continue to argue about the best way to think about quantum mechanics. There are competing schools of thought, sometimes called "interpretations" of quantum theory, though it would be more accurate to regard them as distinct physical theories that happen to give the same predictions in every experiment performed so far. What they all have in common is that they lean heavily on the idea of probability. Which raises the question: what is "probability," anyway?
Like many subtle concepts, probability starts out seeming straightforward and commonsensical, and becomes more puzzling the harder we look at it. You flip a coin many times; whether it lands heads or tails on any particular flip is completely unknown, but over many flips we expect to get heads about 50 percent of the time and tails about 50 percent of the time. We therefore say that the probability of getting heads (or tails) is 50 percent.
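As a minimal illustration of this frequentist intuition (a sketch added here, not part of the original article), a short simulation shows the observed frequency of heads settling toward 0.5 as the number of flips grows:

```python
import random

def heads_frequency(num_flips: int, seed: int = 0) -> float:
    """Flip a fair coin num_flips times; return the fraction that come up heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_flips))
    return heads / num_flips

for n in (10, 1_000, 100_000):
    print(f"{n:>7} flips: frequency of heads = {heads_frequency(n):.4f}")
```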
Thanks to the Russian mathematician Andrei Nikolaevich Kolmogorov and others, we know how to work with probabilities. Probabilities are real numbers between 0 and 1 inclusive; the probabilities of mutually exclusive events add together, and the probabilities of all possible outcomes sum to one; and so on. But that is not the same as settling what probability, at bottom, actually is.
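Stated compactly, the standard Kolmogorov axioms (summarized here for reference) are nonnegativity, normalization over the full space of outcomes $\Omega$, and additivity over mutually exclusive events:

$$P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i E_i\Big) = \sum_i P(E_i)\ \text{ for mutually exclusive } E_i.$$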
There are many approaches to defining probability, but we can distinguish two broad classes. The "objective" or "physical" view treats probability as a fundamental feature of a system, the best way to characterize its physical behavior. An example of an objective approach is frequentism, in which probability is defined as the relative frequency with which events occur over many repeated trials, as in the coin example.
Then there are "subjective" or "evidential" views, which treat probability as a personal feature: a reflection of an individual's degree of belief about what is true and what is likely to happen. An example is Bayesian probability, which emphasizes Bayes' theorem, a mathematical result that tells us how to update our beliefs when new information arrives. Bayesians imagine that rational agents in states of incomplete information hold degrees of belief about every imaginable proposition, and continually update those beliefs as new data come in. In contrast to frequentism, in Bayesianism it is perfectly legitimate to assign a probability to a one-off event, such as the outcome of the next election, or even to past events about which we are uncertain.
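Bayes' theorem itself is compact (the standard form, included here for reference): for a hypothesis $H$ and newly acquired data $D$,

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},$$

where $P(H)$ is the prior degree of belief, $P(D \mid H)$ is the likelihood of the data if the hypothesis is true, and $P(H \mid D)$ is the updated (posterior) degree of belief.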
Interestingly, different approaches to quantum mechanics imply fundamentally different notions of probability. Thinking about quantum mechanics helps clarify the question of probability, and vice versa. Or, to put it more pessimistically: quantum mechanics as currently understood does not help us choose among the competing conceptions of probability, since each conception has taken root in one quantum formulation or another.
Let's look at the three leading approaches to quantum theory. There are "dynamical collapse" theories, such as the Ghirardi-Rimini-Weber (GRW) theory proposed in 1985. There are "pilot wave" or "hidden variables" approaches, most notably the de Broglie-Bohm theory, constructed by David Bohm in 1952 on the basis of earlier ideas of Louis de Broglie. And there is the "many-worlds interpretation" proposed by Hugh Everett in 1957.
Each represents a way of solving the measurement problem of quantum mechanics. The problem is that conventional quantum theory describes the state of a system by a wave function that evolves smoothly and deterministically according to the Schrödinger equation. At least, that is what happens as long as no one is observing the system; otherwise, the textbooks tell us, the wave function suddenly "collapses" onto some definite observed outcome. The collapse is unpredictable: the wave function assigns an amplitude to each possible outcome, and the probability of observing a given outcome equals the square of its amplitude. The measurement problem is simply stated: What counts as a "measurement"? When exactly does it happen? Why should measurements differ from ordinary evolution?
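The Schrödinger equation referred to here has the standard form (with $\hat H$ the Hamiltonian operator of the system):

$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat H\,|\psi(t)\rangle,$$

a fully deterministic rule: given the wave function now, it fixes the wave function at all later times.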
Dynamical collapse theories offer perhaps the most straightforward approach to the measurement problem. They postulate a truly random component of quantum evolution, under which each individual particle usually obeys the Schrödinger equation, but occasionally its wave function spontaneously localizes at some point in space. Such collapses are so rare that we would never witness one for a single particle, but in a macroscopic object made of many particles, collapses happen all the time. This prevents macroscopic objects, such as the cat of Schrödinger's famous thought experiment, from ever being observed in superposition. All the particles of a large system are entangled with one another, so when one of them localizes in space, the rest follow.
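To get a feel for the numbers (the commonly quoted GRW parameter values, not figures from this article): each particle localizes at a rate of roughly $\lambda \sim 10^{-16}$ per second, about once every few hundred million years, yet an object of $N \sim 10^{23}$ entangled particles undergoes localizations at a rate

$$N\lambda \sim 10^{23} \times 10^{-16}\ \mathrm{s^{-1}} = 10^{7}\ \mathrm{s^{-1}},$$

that is, roughly ten million times per second, which is why a cat is never caught in a superposition.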
Probability in such models is fundamental and objective. Nothing in the present exactly determines the future. Dynamical collapse theories fit neatly with the old-fashioned frequentist view of probability: what happens next is unknowable, and all we can say is what the long-run frequencies of the various outcomes will be. Laplace's demon could not predict the future exactly even if it knew the current state of the entire universe.
Pilot-wave theories tell a very different story. In them nothing is truly random; the quantum state evolves deterministically, just as classical states do for Newton. The new element is a set of hidden variables, such as the actual positions of the particles, in addition to the traditional wave function. The particles are what we actually observe; the wave function merely guides their motion.
In a sense, pilot-wave theories return us to a clockwork universe, with one important caveat: whenever we are not making an observation, we cannot know the exact values of the hidden variables. We can prepare a wave function so that we know it exactly, but we learn the hidden variables only by observing them. The best we can do is admit our ignorance and introduce a probability distribution over their possible values.
Probability in pilot-wave theories, in other words, is entirely subjective. It characterizes our knowledge, not an objective frequency with which anything occurs over time. A fully equipped Laplace's demon, knowing both the wave function and all the hidden variables, could predict the future exactly; a hobbled version knowing only the wave function could make only probabilistic predictions.
And then there is the many-worlds interpretation. It is my favorite approach to quantum mechanics, but it is also the one in which it is hardest to pin down how and why probability enters.
Many-worlds quantum mechanics has the simplest formulation of all the alternatives. There is a wave function, it obeys the Schrödinger equation, and that is all. There are no collapses and no additional variables. Instead, we use the Schrödinger equation to predict what happens when an observer measures a quantum object in a superposition of many possible states. The answer is that the combined system of observer and object evolves into an entangled superposition. In each part of the superposition the object has a definite measurement outcome, and the observer has seen that outcome.
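Schematically, for a two-outcome spin measurement (a textbook illustration of the point, not notation taken from this article):

$$\big(a\,|{\uparrow}\rangle + b\,|{\downarrow}\rangle\big)\,|\text{observer ready}\rangle \;\longrightarrow\; a\,|{\uparrow}\rangle\,|\text{observer saw }{\uparrow}\rangle + b\,|{\downarrow}\rangle\,|\text{observer saw }{\downarrow}\rangle.$$

The Schrödinger equation alone takes the left side to the right side; no collapse is ever invoked.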
Everett's genius was to say: and that is fine. All we need to do is recognize that each part of the superposition subsequently evolves independently of all the others, and so counts as a separate branch of the wave function, or "world." The worlds are not put in by hand; they were hiding in the quantum formalism all along.
The idea of all these worlds may strike you as extravagant or distasteful, but objections of that kind carry no weight in science. The better question concerns the nature of probability in this approach. In many-worlds, we can know the wave function exactly, and it evolves deterministically. Nothing is unknown or unpredictable. Laplace's demon could predict the entire future of the universe with complete confidence. So where does probability come in at all?
The answer lies in "self-locating," or "indexical," uncertainty. Imagine you are about to measure a quantum system, thereby branching the wave function into different worlds (for simplicity, suppose there will be two). It makes no sense to ask, "Which world will I end up in after the measurement?" There will be two people, one on each branch, each descended from you; neither is "more you" than the other.
And yet, even if both of these people know the wave function of the universe, there is now something they do not know: which branch of the wave function they are on. There is inevitably a period of time after the branching but before the observers learn which outcome they obtained, during which they do not know where in the wave function they are. This is self-locating uncertainty, first highlighted in the quantum context by the physicist Lev Vaidman.
You might think you could check the result of the experiment quickly enough to avoid any noticeable period of uncertainty. But in the real world the wave function branches incredibly fast, on timescales of 10⁻²¹ seconds or less. That is far faster than signals travel in your brain. There will always be some stretch of time during which you are on a particular branch of the wave function but do not yet know which one.
Is there a sensible way to deal with this uncertainty? Charles Sebens and I argue that there is, and that it leads directly to the Born rule: your credence that you are on a given branch of the wave function should equal the square of the amplitude of that branch, just as in ordinary quantum mechanics. Sebens and I needed one additional assumption, which we call the "epistemic separability principle": your predictions for the outcomes of an experiment should not change under changes to the wave function of completely separate parts of the system alone.
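As a concrete illustration (the numbers are chosen for this example, not taken from the Sebens-Carroll paper): if the measurement produces two branches with amplitudes $1/\sqrt{3}$ and $\sqrt{2/3}$, the post-branching observers should assign credences

$$P_1 = \Big|\tfrac{1}{\sqrt{3}}\Big|^2 = \tfrac{1}{3}, \qquad P_2 = \Big|\sqrt{\tfrac{2}{3}}\Big|^2 = \tfrac{2}{3},$$

to being on the first and second branch respectively, exactly the probabilities the Born rule assigns to the two outcomes.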
Self-locating uncertainty is different from the epistemic uncertainty of pilot-wave models. You can know everything there is to know about the universe and still be uncertain about something: namely, your own place within it. That uncertainty obeys the ordinary rules of probability, although it takes some work to convince yourself that there is a sensible way to quantify such credences.
You might object that you want to make predictions now, before any branching happens. Then there is no uncertainty: you know exactly how the universe will evolve. But that knowledge includes the knowledge that all of your future selves will be uncertain, and that they should use the Born rule to assign credences to the various branches they might find themselves on. In that case it makes sense for you to act as if you live in a genuinely stochastic universe, with the frequencies of the various outcomes given by the Born rule. David Deutsch and David Wallace have sharpened this argument using decision theory.
In a sense, all of these conceptions of probability can be regarded as versions of self-locating uncertainty. We need only consider the set of all possible worlds: all the different versions of reality we can imagine. Some of those worlds obey the rules of dynamical collapse theories, and each differs in the actual sequence of outcomes of every quantum measurement ever performed. Other worlds are described by pilot-wave theories, and in each of them the hidden variables take different values. And there are many many-worlds realities, in which agents are unsure which branch of the wave function they are on. We can take probability to express our personal credence about which of these possible worlds is the real one.
The study of probability has taken us from coin flips to branching universes. I hope our understanding of this subtle concept will deepen alongside our understanding of quantum mechanics itself.