How We Came to Know the Cosmos: Light & Matter


Chapter 20. Quantum Mechanics and Parallel Worlds

20.1 Problems with quantum mechanics

Quantum mechanics challenges classical notions of space, time, matter, and probability. Proponents of the collapse approach to quantum mechanics (discussed in Chapter 17) state that when we measure a property of a quantum system, the different possibilities given in Erwin Schrödinger’s wave equation collapse into a single result. The probability of any given result can be determined using the Born rule. Yet we have never found these collapse dynamics, and there’s nothing like them within quantum theory itself.

We could put this problem aside and hope that these dynamics will be discovered in time, but then we still have to solve the problem of how quantum states can interact with ordinary matter at all. This is known as the measurement problem.

The measurement problem is similar to the problem faced by mind-body dualists like René Descartes, who argued that there are two fundamentally different substances in the universe (discussed in Chapter 26) that interact, despite exhibiting different properties and obeying different physical laws.[1]

The Bohm interpretation of quantum mechanics (discussed in Chapter 18) solves all of these problems by dropping the idea of a collapse, but it must then add dynamics to explain why all but one of the possibilities given in the Schrödinger equation are suppressed.[2]

These problems could all be solved with antirealism (discussed in Chapter 30), the view that we shouldn’t take physical theories literally when they invoke objects that we can’t see. Antirealists argue that all previous scientific theories have been proven false, and so we should not take the invisible entities postulated by our current theories literally either. This argument is countered by the fact that instruments have been built that rely on these invisible entities to work.[3]

Antirealism and the Bohm interpretation are both still popular, but their main rival claims to solve the problems of quantum mechanics without relying on any extra dynamics or hidden variables. In 1957, Hugh Everett suggested that we should simply take Schrödinger’s wave equation literally, applying the theory to everything, including the universe itself.[4]

20.2 Everett’s many worlds interpretation

Everett showed that if there are no collapse dynamics and no hidden variables that suppress the effects of a superposition, then everything will evolve in accordance with the unitary evolution of Schrödinger’s wave function (discussed in Chapter 17).

This means that an observer isn’t separated from the quantum system they’re measuring. When they measure a property of a quantum system, it doesn’t collapse into a single determinate state, as the collapse approach suggests. Instead, every possibility given by the Schrödinger equation is actualised and, because the observer is also in a superpositional state, they will observe them all.

When someone measures the spin of an electron, for example, it could have a 50% chance of appearing ‘up’ and a 50% chance of appearing ‘down’. Both the Bohm interpretation and the collapse approach predict that an observer will record only one result in accordance with its probability. Everett’s many worlds interpretation predicts that they will record both.[4]
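The link between amplitudes and probabilities in examples like this can be sketched numerically. The following is a minimal illustration, not part of any interpretation’s formalism: the Born rule assigns each outcome a probability equal to the squared magnitude of its amplitude.

```python
import numpy as np

# A spin state written in the up/down basis. Equal amplitudes of
# 1/sqrt(2) describe an even superposition of the two outcomes.
state = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])  # [amp_up, amp_down]

# The Born rule: probability = |amplitude| squared.
probabilities = np.abs(state) ** 2
print(probabilities.round(3))  # [0.5 0.5]

# The squared magnitudes of a normalised state always sum to 1.
assert np.isclose(probabilities.sum(), 1.0)
```

Both the collapse approach and the Bohm interpretation read these numbers as the chance of seeing one result; Everett’s interpretation must explain them differently, as discussed in Section 20.2.4.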

Everett referred to his interpretation as the ‘relative state formulation’ because it shows that everything we experience exists in relative terms. In the example above, the experience of measuring the electron to be ‘up’ is only real relative to the experience of measuring it to be ‘down’. Neither branch is more real than the other.

Everett argued that we do not appear to experience every possible result because we too behave like a quantum object, and the individual elements of a superposition do not affect each other.[4]

Although Everett spoke only of ‘branches’ in 1957, the realism associated with them soon led to the term ‘parallel worlds’ being used instead. The American physicist Bryce DeWitt was the first to use this term when he popularised Everett’s many worlds interpretation in 1970,[5] and by 1977, Everett was defending his theory in these terms.[6]

20.2.1 Energy conservation

DeWitt described how, from our subjective perspective, it seems as if the universe is,

constantly splitting into a stupendous number of branches, all resulting from the measurement like interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.[7]

The many worlds interpretation does not contradict the laws of energy conservation, however, because the universe does not literally split every time a quantum event takes place. This is because the theory applies to the universe as a whole.

There are no collapse dynamics within the many worlds interpretation, and so there’s no distinct time when a measurement is said to have been made. When we become aware of the result of a quantum experiment, the world does not split, we simply realise which world we are already in. This superpositional universe is known as the multiverse.

20.2.2 Action at a distance

The many worlds interpretation does not face the problem of explaining the appearance of instantaneous action at a distance, which is apparent in the collapse approach to quantum mechanics. This is because there’s no need to send information between one entangled quantum system and another.

Two entangled particles may be separated and then observed by two people. When the first measures a property of their quantum particle, QA, they realise which world they are in and know that the results from the second observer’s particle, QB, will be correlated with theirs. However, QA does not need to send a signal to QB to ‘tell’ it the results. This is because all results are actualised, and so QB only needs to ‘know’ that it is in a world which is correlated with QA, and this information was exchanged when the two quantum states became entangled in the first place.

20.2.3 The preferred basis problem

The many worlds interpretation solves the measurement problem by stating that macroscopic objects also obey the laws of quantum mechanics, but it then faces a similar problem known as the preferred basis problem.

The preferred basis problem asks why the universe is split into the ‘separate worlds’ we experience if it is really part of a multiverse described by Erwin Schrödinger’s wave equation.

The American physicist Henry Stapp described how,

if the universe has been evolving since the big bang in accordance with the Schrödinger equation [without a preferred basis], then it must by now be an amorphous structure in which every device is a smeared-out cloud of a continuum of different possibilities. Indeed, the planet earth would not have a well defined location, nor would the rivers and oceans, nor the cities built on their banks.[14]

Schrödinger first considered this problem when he discussed the realist implications of his theory in 1952. He stated that,

nearly every result [a quantum theorist] pronounces is about the probability of this or that...happening - with usually a great many alternatives. The idea that they be not alternatives but all really happen simultaneously seems lunatic to [them], just impossible. [They think] that if the laws of nature took this form for, let me say, a quarter of an hour, we should find our surroundings rapidly turning into a quagmire, or sort of a featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jelly fish.[15]

Schrödinger thought that this idea must be flawed and stated that,

the compulsion to replace the simultaneous happenings, as indicated directly by the theory, [with] alternatives [is] a strange decision.[15]

Proponents of the many worlds interpretation had no way to explain the preferred basis problem until the theory of decoherence was developed by Heinz-Dieter Zeh in 1970[16] (discussed in Chapter 19) and extended by Wojciech Zurek in 1981.[17]

Decoherence shows that a natural basis will form, which prevents us from experiencing branches that involve indeterminate macroscopic objects. Mathematically, these branches are said to decay exponentially, but they do not disappear completely, and so decoherence can only give us an approximate appearance of definiteness.

It is still unclear whether the approximate nature of decoherence shows that the many worlds interpretation is incorrect, or whether it can be accounted for. It may be that our minds have evolved to only comprehend definite objects, or that decoherence is precise enough to explain our observations when combined with a material theory of the mind (discussed in Chapter 28).

Probability and free will

Probability and classical mechanics

Classically, we understand probabilities in terms of decision theory. The decision-theoretic link states that it’s rational for a person to use their objective knowledge of a system to determine how to act.[8,9]

Objectively, we know that regular dice have a 1/6 chance of landing on any particular number, and that coins have a 1/2 chance of landing either heads or tails. A rational person should try to bet on the number that has the highest objective probability.

The problem with this is that we don’t know how to derive probabilities without knowing the symmetry of the system. If we actually throw dice and count how many times each outcome occurs, then a set of objective probabilities is expected to emerge, but no matter how many frequency trials are run, we can never know for certain whether the dice are weighted. There’s always the possibility that we have just been unlucky.
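A simple simulation illustrates the point; this is a hypothetical sketch rather than anything from the chapter’s sources. The observed frequencies of a fair die drift towards 1/6, but any finite run leaves room for doubt about whether the die is weighted.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Roll a fair six-sided die many times and tally the outcomes.
trials = 60_000
counts = {face: 0 for face in range(1, 7)}
for _ in range(trials):
    counts[random.randint(1, 6)] += 1

# Each empirical frequency hovers near 1/6 but never settles on it
# exactly -- no number of trials can prove the die is unweighted.
for face in range(1, 7):
    print(face, round(counts[face] / trials, 3))
```

However long the run, a slightly weighted die could produce the same tallies by chance, which is why frequencies alone cannot ground objective probability.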

This raises the question of why we should be rationally compelled to use our objective knowledge of probabilities when placing bets.

Functionalism suggests that one day we’ll be able to define objective probability as a physical property. This property will be defined independently of the decision-theoretic link but will come to the same conclusions. Primitivism is the view that we should accept the decision-theoretic link as a fundamental law of nature and not look for a deeper explanation. Eliminativism is the view that there’s no such thing as objective probabilities.

Cautious functionalism is the view that we will one day find a functional definition and, in the meantime, we can use the decision-theoretic link as such. This allows scientists to continue to use decision theory when considering objective probabilities.

Free will and classical mechanics

At first glance, we do not appear to have free will. Classical mechanics shows that if we knew all of the natural laws and the current state of the universe, then we should be able to predict the future. The American neurophysiologist Benjamin Libet explored this idea in a series of experiments conducted in 1985,[10] and in 2008, Chun Siong Soon and colleagues at the Max Planck Institute in Germany showed that a person’s free choice can be predicted up to 10 seconds before they’re aware of what they’ll do.[11]

This appears to show that we do not make conscious decisions freely. However, in 1954, the British philosopher Alfred Jules Ayer argued that determinism and free will are not mutually exclusive.[12]

To be held responsible for our actions, Ayer argued that we must be acting consistently with our character and this implies that our behaviour is, to an extent, predictable. Ayer suggested that free will should be contrasted with constraint instead of causality. He stated,

For it is not when my action has any cause at all, but only when it has a special sort of cause, that it is reckoned not to be free.[12]

In 2002, the American psychologist Daniel Wegner suggested that free will can be understood as an illusion generated by the brain.[13] Like the English philosopher Thomas Hobbes (discussed in Chapter 26), Wegner argued that this is due to our inability to obtain a complete knowledge of our own mind.

20.2.4 Probability

The many worlds interpretation does away with the objectively indeterminate universe suggested by the collapse approach. There is no objective uncertainty because every physical possibility actually happens.[4] Despite this, we experience quantum events with well-defined probabilities.

The problem of how we can ascribe probabilities to events at all, given that every physical possibility is certain to happen, is known as the incoherence problem.

Everett attempted to resolve the incoherence problem by arguing that there is a probability associated with the subjective experience. This means that from a subjective point of view, an observer will not be aware of every possibility, and so probabilities represent their chances of observing a specific result.[18]

Given that it makes sense to talk of probabilities within the many worlds interpretation, a more serious problem arises. What good is it to say that an atom has a 1% chance of decaying in the next 24 hours when there are only two possibilities: a world where it decays, and a world where it doesn’t?

The quantitative problem asks why Everett is justified in using the Born rule to assign probabilities, rather than assigning an equal probability to each branch. Everett suggested that we must find some way to measure, or weigh, the outcome of every superposition.[18]

In the example above, the universe can be thought of as branching into 100 copies: the atom decays in one but not in the 99 others. These 99 worlds remain identical until new quantum interactions force them to diverge, and so they can be thought of as one world with a weight of 99.
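The arithmetic of this example can be sketched as follows. The 100-way split is an illustrative simplification, not part of the formalism:

```python
# An atom with a 1% chance of decaying, pictured as 100 equally
# weighted branches: 1 where it decays and 99 where it does not.
branches = {'decays': 1, 'intact': 99}

# Treating each branch's share of the total weight as its probability
# recovers the Born-rule values for the two outcomes.
total = sum(branches.values())
probabilities = {outcome: weight / total for outcome, weight in branches.items()}
print(probabilities)  # {'decays': 0.01, 'intact': 0.99}
```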

The meaning of the word ‘weight’ is still debated, but the quantitative problem is analogous to the problems raised by classical probabilities. When we throw a weighted die, for example, we know that there are only six possible outcomes, and so this raises the question of why we are entitled to give them unequal probabilities.

With classical probabilities, scientists use the decision-theoretic link, which states that it’s rational for a person to use their objective knowledge of a system to determine how to act.[8,9]

There’s no further justification for the use of classical probabilities, and so proponents of the many worlds interpretation can defend their use of the Born rule in the same way that proponents of the collapse approach do.

The British physicist David Deutsch showed that they can go further than this and prove that their concept of ‘weight’ fits the functional definition of objective probability.[19] This is because it defines objective probability as a physical property that is independent of the decision-theoretic link but comes to the same conclusions about how to act when faced with uncertainty.

20.2.5 Free will

One misconception about the many worlds interpretation is that new worlds are created every time we make a decision or toss a coin. This does not happen because these are macroscopic events that can be described classically. We only branch when quantum interactions have macroscopic effects.

In 2006, the physicist Harald Atmanspacher showed that there’s little evidence that quantum events in the brain affect consciousness.[20] This is because our experiences are correlated with neuronal assemblies formed from several thousands of coupled neurons (discussed in Chapter 28). The superpositional qualities of objects this large are suppressed by decoherence, and so our mental representations are described classically.

20.2.6 Ockham’s razor

Some people are reluctant to accept the many worlds interpretation because it relies on the existence of an infinite number of other unobservable worlds to account for our experiences in this one. It’s sometimes claimed that this makes it unnecessarily extravagant, violating Ockham’s razor, the idea that the simplest approach is preferable (discussed in Chapter 30).

DeWitt admitted having these reservations when he first read about Everett’s interpretation. He stated,

I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognisable, is not easy to reconcile with common sense.[7]

Proponents of the many worlds interpretation reject the idea that it contradicts Ockham’s razor and argue that Ockham’s razor favours their approach.

Ockham’s razor states that entities must not be multiplied beyond necessity, but this does not refer to the number of unobservable objects that a theory invokes. It refers to the number of mutually independent assumptions that a theory makes and their individual complexity.

The many worlds interpretation is simpler in Ockhamist terms because it solves all of the problems faced by the collapse approach and the Bohm interpretation without adding extra structure to the theory of quantum mechanics. It’s also mathematically simpler to describe a superpositional universe than to define all of the eccentricities of any particular one.

Everett stated that objections to the many worlds interpretation on the grounds that it contradicts common sense seem to be based on the idea that science should only describe what we already observe. Instead, scientific theories should make novel predictions (discussed in Chapter 30) that can lead to the discovery of completely new phenomena.[18] History has already taught us that if we’re going to ask profound questions, then we shouldn’t expect mundane answers.

A Multitude of Multiverses

The many worlds interpretation is not the only theory that predicts a multiverse. There are at least four types of multiverse predicted by modern physics and they are all compatible. The Swedish-American cosmologist Max Tegmark states,

the key question is not whether the multiverse exists but rather how many levels it has.[21]

Tegmark’s Level #1 Multiverse

Space is extremely large, and matter can only take a limited number of forms before things start to repeat.

Figure 20.1: A diagram showing the 16 different types of object that can be made in a two-dimensional universe with four particles of two different types.

Sixteen types of object can be built in a two-dimensional universe with four particles of two different types. After this, objects start to repeat. There are about 2^(10^118) types of object in our universe, and we are always about 10^(10^118) metres from our nearest duplicate.
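The counting behind Figure 20.1 can be reproduced directly. This is a toy illustration: four particle slots, each holding one of two types, give 2^4 = 16 configurations.

```python
from itertools import product

# Enumerate every arrangement of four particles drawn from two types.
configurations = list(product('AB', repeat=4))
print(len(configurations))  # 16

# The same counting at cosmic scale gives Tegmark's estimate of roughly
# 2**(10**118) configurations for a region the size of our universe.
```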

Tegmark’s Level #2 Multiverse

The theory of eternal inflation predicts that multiverse #1 is one of many ‘pocket universes’ (discussed in Book I).

Tegmark’s Level #3 Multiverse

The many worlds interpretation shows that there are an infinite number of #1 and #2 multiverses, many of which exist in the same spacetime as our own. We are simply unaware of them.

Physicists predict that some of the intelligent life forms in multiverses #1, #2, and #3 will be able to create artificial realities, inside of computers, that will be nearly identical to our own.

Tegmark’s Level #4 Multiverse

Beyond Multiverses #1, #2, and #3, the ultimate multiverse is composed of nothing but mathematics.

Tegmark states that if the universe is mathematical then “complete mathematical symmetry” suggests that multiverses containing universes of every possible shape must exist, and other multiverses might obey different physical laws to our own.

20.2.7 Evidence of parallel worlds

Everett’s many worlds interpretation of quantum mechanics will not be accepted by the scientific community until it has made a novel prediction that can be verified. There are several ideas for experiments that could do this but they are still not practical to implement as they require artificial intelligence (AI) and reversible nanoelectronics.

Deutsch suggested the first experimental test to falsify the collapse approach in 1985.[22,23] In Deutsch’s thought experiment, an atom that has a determinate spin state in one axis, 'left' for example, is passed through a Stern-Gerlach apparatus that has the possibility of measuring it in another axis, as either spin 'up' or spin 'down' in this case. This means that the atom is then in a superposition of 'up' and 'down' states from the perspective of an observer who has not yet become entangled with it.

This superposition travels to an AI’s artificial 'sense organ'. Here, it’s provided with two options: it may be detected as either spin 'up' or spin 'down'. The AI’s conscious mind then records the result.

The collapse approach predicts that this will cause the atom to collapse into one determinate state, with either a determinate 'up' or 'down' (but not 'left' or 'right') spin. The many worlds interpretation predicts that the mind will branch into two, one mind will record up and one down (but neither will record 'left' or 'right').

The whole process is then reversed, so the atom emerges from the entrance to the Stern-Gerlach apparatus and the mind forgets which result it recorded. This process does not erase any of the AI’s other memories, however, including the memory that it did record the atom to be in a definite state.

If a 'left-right' detector were placed at the entrance of the Stern-Gerlach apparatus, then the collapse approach predicts that the atom will be detected in either a 'left' or 'right' state with equal probability. If the many worlds interpretation is correct, then the atom will be in the same state that it was in before the measurement: it will still have a 'left' spin.

The Russian-Israeli physicist Lev Vaidman described a similar experiment in 1998.[24] If a photon passes through a polariser that has the possibility of sending it in two different directions, towards detectors A or B, then experiments show that it will be detected at either one detector or the other, but not both. If we remove detector A then the photon is only detected at B half of the time. Vaidman suggested that we could falsify the collapse approach by reversing the process, as Deutsch suggested, and observing how often the photon is 'recomposed'.

The collapse approach predicts that a photon will only be detected at the source half the time, yet the Everett interpretation predicts that it will be detected every time because the photon arrives from both paths, whether it was detected or not.
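A back-of-the-envelope amplitude calculation illustrates the two predictions, assuming an ideal 50/50 splitter and perfect reversal (a simplification of Vaidman's setup):

```python
import numpy as np

amp = 1 / np.sqrt(2)  # amplitude on each path after the polariser

# No collapse: reversing the evolution recombines both path amplitudes
# coherently, so the photon is recomposed at the source with certainty.
p_unitary = abs(amp * amp + amp * amp) ** 2
print(round(p_unitary, 10))  # 1.0

# Collapse: the photon took one definite path, so reversal returns only
# that path's amplitude, and the photon is recomposed half the time.
p_collapse = abs(amp) ** 2
print(round(p_collapse, 10))  # 0.5
```

The factor-of-two difference between the two predictions is what makes the experiment a test, at least in principle, between collapse and no-collapse interpretations.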

There are also arguments in cosmology that could falsify the collapse approach. In 1970, DeWitt showed that there would be a time before Everett’s universal wave function had decohered and we may one day be able to find evidence of this.[5]

These experiments and observations will not falsify the Bohm interpretation, however, because Bohm also stated that there’s no collapse of the wave function. Real evidence may require communication between worlds. The physicist Rainer Plaga suggested an experiment to do this in 1997.[25]

Plaga argued that it should be possible to communicate with other parallel worlds if we could repeat Deutsch’s experiment and isolate part of the apparatus, so it can be changed before it has completely decohered.

An observer could branch, for example, having set the apparatus to only excite an ion if they record a certain result. If the ion is excited and they do not record that result, then they can assume that the ion was excited by their parallel self. This would not happen if the Bohm interpretation is correct. No interpretation of quantum mechanics has been proven yet, and both the Bohm and many worlds interpretations are currently popular among scientists and philosophers of science.

20.3 References
