In last month’s Insights column, we explored a puzzle that is a simple analogue of one of the most astonishing results of quantum mechanics — Bell’s theorem. Bell showed that if quantum mechanical predictions are correct, then we have to give up one of three reasonable assumptions about the world. In a recent *Quanta* article Natalie Wolchover explains how:

… when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.

As Wolchover further describes in the article, there is a third possible assumption found in Bell’s analysis — “freedom of choice” — the assumption that the experimenters are free to place the polarizers at any angle that they want.

Our puzzle modeled the experiment described above for a pair of anti-correlated particles, and is based on an intuitive formulation of Bell’s theorem described by the physicist David Mermin. If you vary the polarizer orientations, quantum mechanics correctly predicts that the correlation between the photons is given by the formula 1 – cos²(θ/2), where θ is the angle between the two polarizers. Our puzzle explores what would be necessary to create this amount of correlation in an analogous situation from everyday life.

Two students, A and B, who are polar opposites of each other, are gearing up to do a course on quantum mechanics. Thirty-seven days before the course (Day –37) they take a computer test consisting of 100 true/false questions. Every question that A answers as true, B answers as false, and vice versa — their answers are perfectly anti-correlated. At the start of the course (Day 0), the two take the same test again. Some of their answers are now different from what they were the first time, but they are still perfectly anti-correlated. Thirty-seven days later (Day +37), they take the same test for the third time. Again, some of their answers are different, but they are still perfectly anti-correlated.

You and a friend sit at separate computer terminals and compare the tests. You can bring up just one of A’s tests on your computer screen at any given time, while your friend can bring up just one of B’s. First, the two of you pull up the tests the students took on the same day, comparing A’s Day –37 test with B’s Day –37 test, and so on. Sure enough, they are all perfectly anti-correlated, with no matching answers at all. Next, you compare A’s Day 0 test with B’s Day –37 test. In this case, there are exactly 10 answers that match. Similarly, B’s Day 0 test has 10 answers that match those in A’s Day +37 test. Finally, you compare B’s Day –37 test with A’s Day +37 test. And here comes the surprise …

As I explained, these experiments map directly to our puzzle. A and B’s same-day tests are the anti-correlated photons, and you and your friend are the experimenters. The days of the tests represent the angles, in degrees, of your respective polarizers. If the polarizers are at the same angle (same-day tests), the photons are 100 percent anti-correlated, just as the students are. Since the situations are isomorphic, we should be able to replicate the photon correlation results with the test correlation results — the situations should give identical numerical answers for all angles (days) under the same assumptions as Bell’s theorem. These common-sense assumptions are: Completed tests with definite answers exist (realism), they cannot influence one another while the grading is being done (locality), and the examiners are free to compare any of A’s tests with any of B’s (freedom of choice). (In order to simulate the probabilistic nature of quantum mechanics precisely, you would have to imagine that each test had a very large number of questions, and that you and your friend can compare only small fractions of the two tests that are nevertheless large enough to yield consistently reliable probabilistic results. This condition does not change the numerical answers.)

Question 1: What are the minimum and maximum numbers of matching answers you would expect for these two tests (B’s Day –37 test and A’s Day +37 test)?

Answer 1: The minimum number is 0 and the maximum is 20, as Ashish and Michael correctly pointed out.

The Day 0 tests of A and B have exactly opposite answers. If B’s Day –37 test has 10 answers in common with A’s Day 0 test, then B must have chosen the opposite answer for 10 questions compared with his own Day 0 test. Now if A chose those same 10 questions to answer differently in her Day +37 test, then our two target tests would again be perfectly anti-correlated, giving 0 matches. On the other hand, if A chose 10 questions entirely different from the ones B had flipped, then the two tests would agree on 20 answers, but no more.
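The counting argument above is easy to check directly. Here is a short Python sketch (the particular flipped questions are chosen at random; only the counts matter):

```python
import random

def matches(test1, test2):
    """Count questions answered the same way on two tests."""
    return sum(a == b for a, b in zip(test1, test2))

def flip(test, questions):
    """Return a copy of a test with the given questions answered oppositely."""
    return [not x if i in questions else x for i, x in enumerate(test)]

random.seed(1)
n = 100

# A's Day 0 answers; B's Day 0 answers are the exact opposites.
a_day0 = [random.choice([True, False]) for _ in range(n)]
b_day0 = [not x for x in a_day0]

# B flipped 10 answers between Day -37 and Day 0, so B's Day -37 test
# matches A's Day 0 test on exactly those 10 questions.
b_flipped = set(random.sample(range(n), 10))
b_day_minus37 = flip(b_day0, b_flipped)
assert matches(b_day_minus37, a_day0) == 10

# Case 1: A flips the *same* 10 questions for Day +37.
print(matches(b_day_minus37, flip(a_day0, b_flipped)))  # 0

# Case 2: A flips 10 *disjoint* questions for Day +37.
a_flipped = set(random.sample(sorted(set(range(n)) - b_flipped), 10))
print(matches(b_day_minus37, flip(a_day0, a_flipped)))  # 20
```

Any other choice of flipped questions for A lands between these two extremes, so 0 and 20 are indeed the minimum and maximum.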

Question 2: If you found that there were 36 answers that matched, how would you explain it?

Answer 2: Again, as Michael rightly pointed out, the only explanation can be that your computers have been hacked! The only way for 36 answers to match in the comparison between (A Day +37, B Day –37), after you’ve confirmed that the correlations between (A Day 0, B Day –37) and (B Day 0, A Day –37) are both 10, is for the answers on the tests to be changed in real time, depending on which tests you and your friend call up on your computers for comparison. Analogously, the quantum results can be explained (assuming realism and freedom of choice) only by assuming superluminal connections between widely separated particles that enforce the correlations at the instant of measurement itself.

Question 3: Where do all the numbers in the above scenario (–37, 0, +37, 10 and 36) come from?

Answer 3: This question was redundant. The correlation between photons whose polarizers are separated by 37 degrees is given by 1 – cos²[(37 – 0)°/2], which is approximately 10 percent; this reflects the correlation between A’s and B’s tests taken 37 days apart. Similarly, the expression 1 – cos²[(37 – (–37))°/2] = 1 – cos²(37°) ≈ 36 percent gives the correlation between photons with polarizer angles 74 degrees apart, which reflects the correlation between A’s and B’s tests taken 74 days apart.
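The arithmetic is quick to verify with the formula given in the column:

```python
import math

def correlation(theta_deg):
    """Quantum correlation for polarizers theta_deg degrees apart: 1 - cos^2(theta/2)."""
    theta = math.radians(theta_deg)
    return 1 - math.cos(theta / 2) ** 2

print(round(correlation(37) * 100, 1))  # 10.1 -- roughly 10 matching answers out of 100
print(round(correlation(74) * 100, 1))  # 36.2 -- the surprising 36 matches of Question 2
```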

Question 4: Using the above formula, what is the largest possible difference between the actual correlation for an angle 2θ and the maximum value calculated for 2θ from the given correlation for θ, under the three assumptions described above? At what angle between the polarizers does this largest possible difference take place?

Answer 4: The largest difference between the quantum and classical correlations in this case is 0.25 or 25 percent, at an angle of 60 degrees between the polarizers. You can calculate this by maximizing the difference between the classical and quantum expressions for correlations, as explained well by Michael. Alternatively, you can just set up a spreadsheet that calculates the difference with the angle going from 0 to 90 degrees, and plot the two curves to see the difference. The graph for classical correlations is a straight line, while that for the quantum correlations is S-shaped. The differences are maximum at angles of 60 and 120 degrees, and minimum (nil) at 0, 90 and 180 degrees.
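If you’d rather not set up a spreadsheet, the same scan takes a few lines of Python. In this sketch θ is the base angle, so the pair of polarizers being compared is 2θ apart, and the classical (local-realist) bound on the correlation at 2θ is twice the given correlation at θ:

```python
import math

def q(theta_deg):
    """Quantum correlation at theta degrees: 1 - cos^2(theta/2) = sin^2(theta/2)."""
    return math.sin(math.radians(theta_deg) / 2) ** 2

# Under realism, locality and freedom of choice, the correlation at 2*theta
# can be at most twice the correlation at theta (the Bell-type bound).
# The quantum excess over that bound is q(2*theta) - 2*q(theta).
best = max(range(0, 181), key=lambda t: q(2 * t) - 2 * q(t))
print(best, round(q(2 * best) - 2 * q(best), 4))  # 60 0.25
```

The excess peaks at θ = 60 degrees (a pair separation of 2θ = 120 degrees), where the quantum correlation is 0.75 but the classical bound is only 0.5.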

This demonstration of Bell’s inequality dictates that we ditch one or more of our assumptions. Michael very nicely summarized the alternatives we are left with:

The balloon model obviously throws away locality. So does the Bohm model of quantum mechanics.

Superdeterminism obviously throws away freedom of choice. So does the Hall model mentioned in Natalie Wolchover’s article.

Standard quantum mechanics obviously throws away realism. The Copenhagen interpretation, for example, only allows that preparation and measuring devices are real (“classical”), with quantum mechanics being about the correlations between such devices. In this sense the correlations are real, when measured, but there is no underlying reality of, say, “photons” that cause these correlations.

This is a great summary of the received wisdom. But I think quantum mechanics already throws away locality. There are many convincing examples of this, but I’ll cite two that we have gotten so used to that we don’t even see them anymore.

The first is the matter of discrete quantum jumps themselves. In models of the atom, electrons jump from being smeared around the atom in one orbital to being smeared around in another with a completely different configuration, releasing a photon of a certain energy. There is no in-between state, no variation in frequency as would be expected if the electron were initially located at different points in these orbitals. This clearly requires nonlocality. As the Columbia physicist I.I. Rabi, one of the original contributors to quantum mechanics, said: “The atom is in one state and moves to another, and you can’t picture what it is in between, so you call this a quantum jump. In quantum mechanics, you don’t ask what’s the intermediate state because there ain’t no intermediate state. It passes from one to the other in God’s mysterious way.”

The second huge affront to locality occurs in Feynman’s “path integral” version of quantum mechanics. This approach assumes that a particle goes from one point to another by simultaneously following all paths everywhere in the universe. Now here’s nonlocality with a vengeance. And yet it works beautifully.

Yes, nonlocality is literally everywhere in quantum mechanics, and we should willingly embrace it. What this means is that the internal composition of every quantum particle or entangled pair is nonlocal and inherently superluminal. As I said, perhaps every particle has its own wormhole, or something like that, to play in: ER = EPR, even for single states. This nonlocality never leaks out into the open, just as it doesn’t in the case of an electron’s quantum jump, or in the EPR experiment. So relativity is not threatened at all.

Embracing nonlocality and superluminal internals in quantum objects is extremely freeing — it allows us to construct models that are physical and can be visualized, and are not just abstractly mathematical. For me, there are compelling arguments against the latter, as I’ve detailed in my responses to phayes and to Alex Livingston. One key point is that probabilities and interference require ensembles and cannot be generated by single particles (as they are in quantum mechanics), unless you assume that they have parts.

I think a compelling image is that every quantum particle is like a bubble that can split into infinitesimal superluminal bubblets in a wave distribution, which can re-form the bubble at different locations. We can detect the particle only when we force the bubblets to make a choice between two opposite attributes or locations by making a measurement, which causes the bubblets to coalesce again into a full bubble. Imagine two taut and flat plastic sheets with a bubble trapped inside. If you apply a little pressure, the bubble fractionates into millions of tiny bubbles that occupy the whole area inside the sheets. Release the pressure, and the bubblets re-form into the original bubble at a different location. We can postulate that the probability of coalescing at a particular point is proportional to the size of the bubblet at that point, which would correlate neatly with the Born probability rule.

Can such a fantasy really be true? I don’t see why it couldn’t be. As I explained, the bubblet model can explain the double slit experiment (which Feynman once said contains all the essential aspects of quantum mechanics), and it gives an intuitive sense of how measurements and environmental phenomena analogous to them create our reality. Of course, the bubblets could be at a level of physicality so fine that we may never get close to detecting them. But models based on them could help integrate quantum mechanics and relativity, and help elucidate the structure of space-time. Will they? Only time (and space!) will tell. In the meantime, such models can make quantum mechanics vivid again. And since I am not a physicist, I can get away with making them without losing my job! I am glad some of you found this interesting.

The *Quanta* T-shirt for this puzzle goes to Michael, for his contributions to this and several other previous columns. Ashish’s answers and comments were likewise excellent. Thank you to all who contributed. See you next week for new insights!

"The graph for classical correlations is a straight line, while that for the quantum correlations is S-shaped."

The classical correlation curve will only be a straight line in the case that multiple bits of information can be recovered from the observations. It has been demonstrated that when only a single bit of information can be recovered, the classical system will reproduce the quantum correlation curve.

This results from the fact that the limiting (single bit) case of the Shannon Capacity, in Information Theory, reduces to the Heisenberg uncertainty principle.

That is the origin of the EPR Paradox.

When will we give up the Big Bang theory? Our Universe may indeed have come from an ALMOST vanishing point and reinflated to the present time. That would make it a simple oscillator, and that makes sense. Having it vanish completely is kinda hard to swallow, because where did it go and where did it come from? And there are quite a few anomalies in the experimental data that point to “The Big Bang” not being the answer. Yes, we can put the expansion of the Universe in order, backwards in time, but that does not prove that space/time (energy) goes to zero.

Hey thanks for the wisdom. But I still have a question.

How are particles entangled? Do they have to be in contact? Is it that their wave functions overlap? Do they need to be inside each other’s wave functions?

How do we know when they are actually entangled? Can we measure that? Thanks again.

Alan:

People are already starting to "give up the Big Bang theory".

Check out this article in the February 2017 issue of Scientific American:

https://www.scientificamerican.com/article/cosmic-inflation-theory-faces-challenges/

C Gilles Lalancette:

Entanglement simply means that the observations of one entangled entity cannot be independent of the observations of another. Think of two parallel coins floating in space: as soon as you look at one, from any aspect angle, you will immediately know the state of the other – it must be the same.

Since they were created in a parallel state, when viewed from the same angle, they must have the same observed state. The observed state (heads or tails) is dependent on the observer’s choice of aspect angle, but every choice of aspect angle will result in both coins being observed to be in the same state, whichever it is. If the coins were created in an anti-parallel state, then every observation, regardless of aspect angle, would reveal one coin to be heads and the other to be tails.

@Rob McEachern

"Entanglement simply means that the observations of one entangled entity, cannot be independent of the observations of another. Think of two parallel coins floating in space: as soon as you look at one, from any aspect angle, you will immediately know the state of the other – it must be the same."

No, this defines correlation, and the example you give is purely classical correlation – between two coins that have perfectly well defined individual properties. Entanglement is more than this. It corresponds to correlations between quantum objects where the objects themselves cannot be assigned well defined individual properties. That is, the global properties shared by the objects do not reduce to local properties.

@C Gilles Lalancette

"How are particles entangled?"

Usually due to interaction in the past. If independent quantum particles interact then they typically become entangled. And if a decay process produces two particles, then they are also typically entangled.

"Do they have to be in contact? Is it that their functions overlap? Do they need to be inside each other wave function?"

No. Particles that have never been in contact (or close to each other) can become entangled, by a process called entanglement swapping. This is very clever, I think. Suppose A and B are entangled with each other, and C and D are entangled with each other. Now let B and C come into contact with each other. Then, because they share properties with A and D, respectively, it turns out that A and D become entangled – even if particles B and C are destroyed. See

https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.3891

This is a sort of 'cheat' answer I suppose, since there had to be some entanglement in the first place (although not between A and D).
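For the curious, the swap can be illustrated with a toy state-vector calculation. This is my own sketch, not taken from the paper above, and it keeps only the measurement outcome where B and C are found in the same Bell state the pairs started in:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Four qubits A, B, C, D: A-B entangled, C-D entangled; A and D never interact.
psi = np.kron(bell, bell)        # amplitudes indexed in the order (A, B, C, D)
psi4 = psi.reshape(2, 2, 2, 2)   # indices (a, b, c, d)

# Bell-basis measurement on B and C, selecting the outcome |Phi+>:
# contract the (b, c) indices of the state with <Phi+|.
phi_ad = np.einsum('abcd,bc->ad', psi4, bell.reshape(2, 2).conj())
phi_ad = phi_ad / np.linalg.norm(phi_ad)

# A and D are now themselves in the Bell state |Phi+>, though they never met.
print(np.allclose(phi_ad.flatten(), bell))  # True
```

The other three Bell outcomes on B and C leave A and D in one of the other Bell states, so A and D end up entangled whichever result is obtained.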

"How do we know when they are actually entangled? Can we measure that?"

The experiment that led to this Puzzle measures photons that are so strongly entangled there is no explanation in terms of individual properties of the photons (unless the particles are connected by faster than light influences, or we are not free to choose what measurements are made on them). More generally, quantum mechanics predicts entangled correlations, and experiments always confirm the predictions.

Michael:

Unlike the frequently discussed cases of colored balls, Bertlmann’s socks, or gloves, the coins do not have a definite state until they are observed. Think about it. The only wave function that ever “collapses” is the state of the observer’s model of an observed entity’s “state,” not the entity itself – a coin always remains two-sided, even after an observer declares it to be either (but not both) heads or tails. Unlike colored balls etc., such states are not intrinsic to the object; they depend upon the observer-object relationship – just like spin and polarity, the observed state does not even exist until the observer “makes it so.”

It has recently been demonstrated that classical coins will reproduce the so-called "quantum correlations", when they are created to be so noisy and blurry, that they manifest only a single bit of information, as that is defined within Shannon's Information Theory. This is a direct result of the little-known fact, that the Heisenberg Uncertainty Principle turns out to be the very definition of a "single bit of information", in Shannon's theory. That is the ultimate origin of the EPR paradox. Try googling: quantum correlations bit polarity, for more details.

@Rob McEachern

Thanks for the reference – I found a vixra paper by you. The model there appears to take advantage of the ‘fair sampling’ assumption (in a neat way) – if enough results are discarded, then it is possible to simulate violation of a Bell inequality with a classical model. However, experiments have been done which claim to evade this assumption, i.e., not enough results are thrown away to permit explanation of the remaining correlations by such a classical model.

"For me, there are compelling arguments against the latter, as I’ve detailed in my responses to phayes and to Alex Livingston."

But you've detailed only fallacious (circular) arguments there, and again here, and neither you nor anyone else should be compelled by them. It's a widely understood fact that there simply is no nonlocality in QM itself. You have to put it in by interpretation*. As the QBists Fuchs, Mermin and Schack say:

"There is no nonlocality in quantum theory; there are only some nonlocal interpretations of quantum mechanics." https://arxiv.org/abs/1311.5253

There are genuinely compelling reasons to interpret QM as simply (generalised, 'subjective') probability theory applied to mechanics (see e.g. the Banks and Streater articles I linked to earlier and Matt Leifer here http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/ [the "Why be a psi-epistemicist?" section]). Not least because then there is no nonlocality or any other "foolishness" ( https://www.youtube.com/watch?v=gNAw-xXCcM8 ) or "spookiness" in it.

* As Rabi does in that probability theory-naive classical narrative description of "quantum jumps", and as you do in your (mutually incompatible) assertions of what the path integral approach assumes and that "probabilities and interference require ensembles". There is no affront to the strongly empirically confirmed assumption of locality in "quantum jumps". Discreteness is an affront to the strongly empirically disconfirmed classical assumption that there is no (fundamental) minimum action in nature (that ℏ = 0) but there is no affront to locality in that. And the path integral approach does not make the ontic assumption you claim it makes. You make it! Others would – and do – assume only that path integrals are sums of probability amplitudes distributed over possibilities.

Michael:

"experiments have been done which claim to evade this assumption…"

Read their fine print. They have not evaded anything. They seldom even report the "pair detection efficiency", only a single detector efficiency; the former cannot be measured experimentally, and it cannot be estimated theoretically, without making additional assumptions about the nature of the probability distribution of detection events – which is also detector-design and threshold dependent. The single detector efficiency of the vixra model is already greater than the supposed theoretical limits typically claimed, for loophole-free tests, even without trying to maximize it. Thus, according to those claims, the vixra model cannot exist. But it does.

@phayes,

I have been quite clear that what I’ve presented here is a visualization that reproduces some of the qualitative predictions of quantum mechanics, using bubbles and balloons with superluminal internals. There is a great deal of work still required to make it a full-fledged interpretation of QM. I merely stated that I don’t see why it couldn’t be developed into one.

As for the differences in our perspectives, I suggest that we agree to disagree. This is a philosophical battle that has been raging for close to ninety years now between the Einstein-Schrödinger-de Broglie-Bohm-Bell team (Team E) on one side and the Bohr-Heisenberg-von Neumann-Gell-Mann team (Team B) on the other. The perspectives of these two teams are like non-commuting observables. You take on the perspective of one side and you completely lose the perspective of the other. We are unlikely to change each other’s minds.

The basic difference between the two sides, as stated well in the Banks article you referred to, is that Team E does not believe in intrinsic or objective probabilities. They follow Einstein’s credo: God does not play dice. If you believe that intrinsic probabilities can exist in the universe without physical ensembles being the basis for them, then I agree, Team B’s position cannot be logically argued against.

But since you have given your perspective as a supporter of Team B, let me give mine as a supporter of Team E.

First as an overview, just because an argument is logically consistent, it does not mean that it cannot be rejected. Take the example of solipsism. If you assume that you are the only person who exists and is conscious, and everything is a figment of your imagination, your solipsistic point of view is logically impregnable. Yet most people reject solipsism, not for logical reasons, but because they disagree with the assumption. So we could both make watertight arguments to support our case, but the other could reject them. You hate superluminal objects, I hate intrinsic probability — take your pick. I believe physics is physics, you believe physics can be just mathematics.

To expand on that last statement, here’s a story. Team E and Team B come across an unbreakable, hermetically sealed black box with a bunch of knobs on it. They figure out that if they accept intrinsic probabilities, then they can probabilistically predict, with great precision, how the positions of the various knobs are correlated. “That’s all there is to it,” says Team B. “There is no need to know, and perhaps we cannot know, what’s going on inside.” “Wait a minute,” says Team E. “These particular correlations can be achieved in hundreds of different ways. How is it done in this particular box? Let’s try to figure it out.” They come up with some candidates, each of which would break down in different ways in extreme conditions, but the box, so far, resists all their efforts to get inside it, and they cannot apply the extreme conditions.

Now Team B’s approach is eminently practical. It could even be the only approach if the box were unbreakable for all eternity, if the correlations were perfect in all circumstances, and if the black box made music with another blue box we have. But the boxes have thus far resisted all efforts to interface with each other.

You get the allegory of course. QM is a black box, but is it perfect for all time? Does it not break down at small distances in quantum field theory where it gives infinite answers which require ad hoc renormalization? How can it be made to interface with general relativity? How does it relate to space and time? Are quantum correlations the basis of space as Van Raamsdonk thinks? Does ER (Wormholes)=EPR (Quantum Entanglement) as Susskind and Maldacena have suggested? And are there mini black holes inside crystals of samarium hexaboride?

I don’t know, but Team B either dismisses all these things, or thinks the black box will emerge unscathed from them. Team E at least allows us to try out new things and fail. Or, who knows, succeed and open up new vistas?

As for your numerous references, they are all great if you share their assumptions. I do not. And I am not alone. In the end, what approach you take depends on what appeals to you.

Finally, let me give you one stark example of the kind of image that impels my perspective and makes me believe what I do.

An electron is sent towards two slits. It emerges on the other side, interferes with itself and impinges on a phosphor at different places every time you do the experiment, building up a beautiful interference pattern. I know you can predict the pattern. So can I. But can you tell me what that one solitary electron is doing? How does it interfere with itself?

Nothing we can understand? We shouldn’t think or talk about it? We can never know?

I reject those statements. Don’t you have any curiosity, man?

Pradeep:

"An electron is sent towards two slits. It emerges on the other side, interferes with itself and impinges on a phosphor at different places every time you do the experiment, building up a beautiful interference pattern. I know you can predict the pattern."

Why assume it has anything to do with interference? Or even physics? You can predict the pattern by simply computing the Fourier transform of the slit geometry and then computing its power spectrum. It is pure math, not physics. Thus, the experimental apparatus can be interpreted simply as an analog computer that approximately computes the spectrum of the slit geometry.

You need to ask yourself the question, “Where does the information content within the pattern come from?” The point is, it does not come from the particles or waves striking the slits, any more than the visible pattern that you associate with your mother’s face comes from the photons being emitted by the sun. The particles and/or waves are all merely acting as “carriers,” like a radio frequency carrier, that are being spatially modulated by the geometry of the object that they strike – the slits or your mother’s face. You only need to know the geometry of the modulator, not the nature (particle or wave) of the carriers, in order to deduce the pattern. And if you change that geometry, it will change the pattern, even though the carriers have not changed at all.

By "blaming" the carriers, for producing the information content within the pattern, you have made the same bad assumption that rubes at carnival fairs make, when they assume a ventriloquist's dummy is the source of the sounds that they hear. You have misidentified the ultimate source of the information content.

Like your black box example, the fact that the apparatus behaves *as if* interference occurs, is not equivalent to saying that interference *does* occur. It can also be described as a simple property of a Fourier transform. And as you know, in QM, "wavefunction" is just the name applied to a Fourier transform.

The reason the Born rule exists, is precisely because the whole quantum mechanical procedure for computing probabilities, is mathematically identical to the procedure for computing a histogram; and histograms yield probabilities.

So instead of believing you are observing an interference pattern, you can believe you are simply observing a histogram. They are mathematically equivalent. But the latter is much easier to correlate with probability estimates than the former.
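Rob’s claim that the pattern is computable from the slit geometry alone is easy to illustrate numerically. This sketch (the slit widths and positions are arbitrary illustration values) takes the power spectrum of a two-slit aperture and counts the fringes near the center:

```python
import numpy as np

# A 1-D aperture: two open slits (1) in an otherwise opaque screen (0).
n = 4096
aperture = np.zeros(n)
aperture[1900:1940] = 1.0   # slit 1 (positions chosen arbitrarily)
aperture[2156:2196] = 1.0   # slit 2

# Far-field (Fraunhofer) intensity is proportional to the power spectrum
# of the aperture, i.e., |FFT|^2 of the slit geometry.
spectrum = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
center = spectrum[n // 2 - 200 : n // 2 + 200]

# Count local maxima near the center: many fringes, not a single blob.
peaks = sum(
    center[i] > center[i - 1] and center[i] > center[i + 1]
    for i in range(1, len(center) - 1)
)
print(peaks > 5)  # True: the two-slit geometry alone produces fringes
```

Changing the slit separation changes the fringe spacing in the computed spectrum, with no reference to what passes through the slits.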

@Rob,

Sure, that is fine if you believe that the probabilities and the mathematics are all there is to the world.

But mathematics only describes the world, it does not make the world work.

And probabilities and information have no meaning without human minds. Yet the world works even without human minds. Human minds weren't there for most of the world's existence.

Pradeep,

"But mathematics only describes the world…"

QM fails to do even that.

You have missed my point entirely. My point is, contrary to popular belief, quantum theory does not even attempt to describe how objects in the world behave. It only describes how the process of detecting the existence of those objects behaves. That is why it so perfectly matches the observed detection probabilities. And that is why the uncertainty principle exists: it is Shannon’s limit for the minimum detectable amount of information that can ever be recovered from an observation of an object. It says nothing at all about what may be carried within the object. You can only observe that a flying black box exists as a black box. You cannot observe what is inside. Think about it. Physicists have completely misinterpreted what QM is all about. That is why it seems so weird: they think it “describes the world” when in fact it only describes their observations of the world. They are not the same thing. But they sure are correlated – in some rather weird ways.

"The basic difference between the two sides is, as stated well in the Banks article you referred to, is that Team E does not believe in intrinsic or objective probabilities."

I think Banks's use of "intrinsic" there is somewhat unfortunate, and "intrinsic" certainly shouldn't be taken to mean "objective". Surely you can't have failed to notice the explicit rejection of 'objective' probabilities in both ("B" for "Bayes") QBism and in Streater's 'vanilla' neo-Copenhagen interpretation? There's no need for anyone to believe in 'objective' probabilities and no-one should. Anyway, I think the basic difference is really that Team E (still) prefers to understand QM as a modification of the laws of mechanics:

"It took some time before it was understood that quantum theory is a generalisation of probability, rather than a modification of the laws of mechanics. This was not helped by the term quantum mechanics; more, the Copenhagen interpretation is given in terms of probability, meaning as understood at the time." [–Streater]

"You hate superluminal objects, I hate intrinsic probability — take your pick. I believe physics is physics, you believe physics can be just mathematics."

I just don't believe in "superluminal objects". There is no evidence for them and no need for them. In fact the more accurate (relativistic, field) quantum theory seems to have found it necessary to explicitly exclude them. And I'm not sure exactly what is meant by that last sentence but I am sure that I'm not the one seeing or believing in the greater amount of (ontic) physics in the mathematics.

"An electron is sent towards two slits. It emerges on the other side, interferes with itself and impinges on a phosphor at different places every time you do the experiment, building up a beautiful interference pattern. I know you can predict the pattern. So can I. But can you tell me what that one solitary electron is doing? How does it interfere with itself?"

Well that's a question which only someone who's already decided that it does "interfere with itself" needs to answer! That said, it would be good if the general logical constraint needed to ensure that the two slit experiment does exhibit "interference" patterns – that there must be no way to find out which slit any electron does/did go through – could be translated into a clearer explanation of the individual electron's behaviour.
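Both sides do agree that the pattern itself is predictable. For concreteness, here is a minimal single-particle-amplitude sketch of that prediction (the wavelength, slit separation and screen geometry are illustrative values chosen for this sketch, not anything from the thread): summing the two path amplitudes and then squaring reproduces the fringes, whereas squaring each path separately would not.

```python
import numpy as np

# Toy far-field two-slit pattern (idealized point slits, arbitrary units).
wavelength = 500e-9      # assumed wavelength (illustrative)
d = 2e-6                 # assumed slit separation (illustrative)
L = 1.0                  # slit-to-screen distance

x = np.linspace(-0.2, 0.2, 1001)          # positions on the screen
path_diff = d * x / np.sqrt(x**2 + L**2)  # approximate path-length difference
phase = 2 * np.pi * path_diff / wavelength

# Superpose the two path amplitudes, THEN square: |psi1 + psi2|^2
intensity = np.abs(1 + np.exp(1j * phase))**2

print(round(intensity.max(), 6))  # 4.0 at a bright fringe: amplitudes add before squaring
print(intensity.min() < 1e-3)     # True: near-zero intensity at the dark fringes
```

Adding the probabilities instead of the amplitudes (i.e. `1 + 1` everywhere) would give a flat pattern, which is the whole interpretive puzzle in one line of code.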

@phayes and Rob McEachern

Is quantum theory truly a theory of mechanics?

I agree with the following statements made by the two of you (with one caveat):

Rob: “quantum theory does not even attempt to describe how objects in the world behave.”

Phayes: [Quantum theory is not] "a modification of the laws of mechanics".

The caveat is that when you say “quantum theory” what you mean is “The Copenhagen interpretation and its variants” or as I put it, the Team B interpretation.

For Team B, QM is just a way of using mathematical models to predict observable correlations in the real world in a probabilistic manner. If that’s all you want, you don’t need anything more.

But here’s the thing. Observers are not needed for the world to work. Quantum particles have been interacting without observers for billions of years, thank you. What this means is that we should be able to remove the observer, and “observables” from the picture completely, and obtain a real theory of mechanics — a true picture of how change occurs in the quantum world without the need for observers.

This is what Team E is after: a description of what happens inside the black box without reference to "observers" or "observables." And contrary to what many people think, such observer-less theories are not impossible; they do exist. The most well-known among them is Bohmian mechanics. All such theories are probabilistic, of course, but the probabilities are generated by our ignorance of the temporal behaviors of sub-ensembles of the physical objects that are part of the theories. This is similar to the way randomness is generated in classical physical theories.

Team B, on the other hand, insists that the randomness is an inherent property of the world and is not generated by any physical ensembles. This, to me, is simply magical thinking. @Phayes, despite your denials, all versions of Team B's quantum theories explicitly or implicitly accept objective probabilities, including QBism and Streater's version, for the simple reason that they deny that the probability comes from ensembles of physical objects, or simply assume probabilities as primal elements. As for Team B's assignment of pivotal importance to the observer, the version most guilty of this solipsistic tendency is QBism, which from Team E's point of view is absolutely worthless.
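The idea that probabilities can be "generated by our ignorance" of deterministic sub-ensembles can be illustrated with a toy example (my own sketch, not any specific sub-quantum model): a fully deterministic rule, applied to an ensemble of imperfectly known initial conditions, produces 50/50 outcome statistics in exactly the classical-physics way.

```python
import numpy as np

# Deterministic dynamics + ignorance of initial conditions = statistics.
# The doubling map x -> 2x mod 1 is fully deterministic; the only
# "randomness" is our ignorance of each trajectory's starting point.
rng = np.random.default_rng(1)
x0 = rng.uniform(0, 1, 100_000)   # ensemble of unknown initial conditions

x = x0.copy()
for _ in range(20):               # iterate the deterministic rule
    x = (2 * x) % 1.0

outcome = x < 0.5                 # a coarse, two-valued "measurement"
print(round(outcome.mean(), 1))   # ~0.5: ensemble ignorance shows up as probability
```

Nothing here is quantum; it simply shows the classical, ensemble-based mechanism of randomness that Team E wants to generalize.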

Towards a true mechanics

All actual theories of mechanics (i.e. observer-free and objective-probability-free) such as Bohm's inevitably require superluminal influences, as Bell showed. So we have to conclude that superluminal influences are present in the world: the evidence is undeniable, if you are building a theory of mechanics. If that is not your intention, if you are satisfied with just probabilistic correlations and do not want to dig any deeper, as Team B does not, then you can, ostrich-like, deny the superluminal influences. In fact, the superluminal influences in these theories are contained inside quantum objects: they are sub-quantal and therefore do not conflict with relativity at the quantum level and above. But yes, there is a deeper sub-quantal conflict between the two theories that needs to be resolved if they are ever to be reconciled, though we already knew that: time is implicitly absolute in the quantum world and is famously relative in relativity.

What's worse: observer dependence/objective probabilities vs. hidden superluminality?

Let's look at the two propositions that the two teams find distasteful. Team B's distaste for superluminal stuff, I suggest, is just prejudice instilled by relativity. There is no proscription against the world containing superluminal stuff, as long as this stuff is contained in a way that does not conflict with relativistic predictions. This kind of sub-quantal containment itself means that there will be no obvious evidence for superluminality outside the sub-quantal realm, and no conflicts with relativity at the scales and energies we can investigate today. On the other hand, superluminal influences inside entangled objects are logically inevitable if QM predictions are accurate, as Bell showed. So why is it so hard to accept this hidden superluminality? It might be absolutely necessary for the next big unification of space-time, QM and relativity, as foreshadowed in the work of Van Raamsdonk and in ER=EPR.
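Bell's "upper limit" invoked throughout this thread is usually stated as the CHSH inequality: any local-realist model obeys |S| ≤ 2, while the quantum singlet-state correlation E(a, b) = −cos(a − b) reaches 2√2. A minimal numerical check, using the standard textbook angle choices (an illustration, not anything from the thread):

```python
import numpy as np

# CHSH comparison: local realism bounds |S| <= 2, while the quantum
# singlet-state prediction E(a, b) = -cos(a - b) reaches 2*sqrt(2).
def E(a, b):
    return -np.cos(a - b)   # quantum correlation for an entangled spin-1/2 pair

a, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2: violates the local-realist bound
```

Experiments measure |S| close to 2.828, which is exactly the "more correlated than Bell's upper limit" result described in the column's introduction.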

That’s why these hidden superluminal influences are not worrisome to me at all; they are, in fact, potential starting points for future progress. Objective probabilities, on the other hand, fall into the class of logical fallacies: accepting objective probabilities is like saying that two plus two is three. All real-world probabilities, by logical necessity, require ensembles that generate them. Von Neumann did indeed generalize probability to non-commutative objects, but this is a mathematical achievement. Like all mathematical theories, it can use abstract (fictitious) worlds that cannot exist in the physical universe. Hilbert spaces with millions of dimensions and non-commutative operators simply do not exist in the physical world. They have to be mapped to isomorphic but real properties of physical objects existing in our 4-dimensional space-time, just as imaginary numbers are mapped to phase in describing alternating current. This step is missing in the Team B approach, and this is what I meant when I said that “you believe that physics can be just mathematics.” Streater is wrong: quantum mechanics, a physical theory, cannot be, and is not, a generalization of probability, which is mathematical. What is indeed a generalization of probability is Von Neumann’s theory of probability for non-commutative variables; they are both mathematical theories. But mathematical theories and models do not just shade into the physical. They remain forever separate without a mapping from one to the other. At some point, such a mapping from mathematical abstractions to physical properties must be made. Rob talks of QM being pure math, but then describes this mapping step as follows: “Thus, the experimental apparatus can be interpreted simply as an analog computer, that approximately computes the spectrum of the slit's geometry.” Exactly! Analog computers are made of physical things like electrons flowing and wheels turning: without such objects to map to, pure mathematics has no power in the real world.

From mathematics to physics

Thus, the abstract (fictitious) six-dimensional vector of an entangled particle in abstract (fictitious) Hilbert space needs to be mapped to physical properties in four-dimensional space-time to actually achieve anything physical. It could map to, say, sub-quantal ‘bumps’ on a composite physical particle which, on account of physical properties like proximity, field strength, potential or some sub-quantal analog, interact with each other in a way that is isomorphic to the abstract mathematics in Hilbert space. Omitting this step, as Team B does, is a logical blunder if we seek a true mechanics, and is unacceptable to Team E. Only such a mapping would make the model truly physical.

Metaphorically, an all-powerful God can easily design a world with hidden superluminal properties, but even such a God cannot design a world with objective probability: it is a logical impossibility. If it were possible, then God’s thoughts themselves would form the ensemble that generates the random, sub-quantal choice, such as which particular part of the screen a single electron will impinge upon. I’d much prefer a more physical, ensemble-based process to make this random choice.

Pradeep:

It has nothing to do with any interpretation.

"What this means is that we should be able to remove the observer, and “observables” from the picture completely"

That will require a completely different type of theory, not just a different interpretation of the existing theory. (Similar to communications engineers basing the theory of an FM receiver on the concept of "instantaneous frequency", rather than Fourier transforms).

Here is the problem: a(b+c) = ab+ac is a mathematical identity, but not a physical one. The two sides of the equation yield the exact same result and hence cannot be distinguished by comparing only a final experimental result to a corresponding theoretical prediction; yet one side has twice as many multiplications as the other, so it is a physically different structure. In other words, the two sides of the equation represent different physical algorithms for obtaining the same mathematical result. Which does mother nature employ? It makes a huge difference, physically, but none at all, mathematically.
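The distributive-law point can be made concrete by counting the multiplications each side actually performs. This toy counter (names are purely illustrative) shows the two procedures agree on the result while differing as physical processes:

```python
# Same mathematical result, different "physical algorithms":
# a*(b+c) performs one multiplication; a*b + a*c performs two.
mults = {"n": 0}

def mul(x, y):
    mults["n"] += 1      # count each multiplication as a physical operation
    return x * y

a, b, c = 3.0, 4.0, 5.0

mults["n"] = 0
left = mul(a, b + c)           # factored form
left_mults = mults["n"]

mults["n"] = 0
right = mul(a, b) + mul(a, c)  # distributed form
right_mults = mults["n"]

print(left == right, left_mults, right_mults)  # True 1 2
```

With floating-point numbers the two forms can even round differently in general, which underscores that they are distinct procedures, not one mathematical object.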

"All actual theories of mechanics (i.e. observer-free and objective probability-free) such as Bohm’s inevitably require superluminal influences"

That is a total misinterpretation of reality. See my viXra paper, "A Classical System for Producing 'Quantum Correlations'", which elucidates the fundamental misinterpretation existing within both the so-called "loophole-free" Bell-type experiments and all Bell-type theorems.

An entity that only exhibits two states (like spin up or down) is COMPLETELY describable by a single bit of information. Single bits of information DO NOT HAVE MULTIPLE, OBSERVABLE COMPONENTS. All Bell-type theorems and experiments have mistakenly assumed that they DO HAVE MULTIPLE components, and then attempt to determine their NON-EXISTENT statistics. That is the problem: there is no "hidden variable," precisely because there is no variable at all. There is nothing else to measure that can possibly be independent of the first measurement.

All attempts to measure MULTIPLE COMPONENTS, of an entity that only has one, MUST result in “strange” correlations. If you bother to construct classical objects which manifest only a single, recoverable bit of information, it will be observed that these “strange” correlations obey the exact same statistics as the so-called “quantum correlations”.
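As a benchmark for what any classical construction is measured against: Bell's bound applies to every model in which each outcome depends only on the local setting and a shared hidden variable. A deliberately simple model of that kind (my own toy sketch, not the construction in the viXra paper) sits at, but never beyond, |S| = 2:

```python
import numpy as np

# A simple local hidden-variable model: each pair carries one shared
# angle lam; each side's +/-1 outcome depends only on its own setting
# and lam. Bell's theorem says any such model obeys |S| <= 2.
rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2 * np.pi, 200_000)  # shared "hidden variable"

def outcome(setting, lam):
    return np.sign(np.cos(lam - setting))   # locally determined result

def E(a, b):
    return np.mean(outcome(a, lam) * outcome(b, lam))

a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # ~2 (up to sampling noise): at, but not beyond, the local bound
```

At these settings this model saturates the bound exactly (analytically S = 2), which is as far as locality plus shared variables can go; the quantum prediction of 2√2 is what exceeds it.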