There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described today in *Physical Review Letters* — a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna — some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

“Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.

According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another — at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.

Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula — one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators — devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent — if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs — this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.
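The upper limit described above is usually analyzed in the CHSH form of Bell's inequality (a standard variant the article doesn't name). The sketch below is a minimal illustration, not the Vienna team's analysis: it compares the quantum correlation for polarization-entangled photons against one simple local hidden-variable model, using CHSH-optimal settings.

```python
import math
import random

random.seed(0)  # reproducible Monte Carlo

# CHSH form of Bell's inequality: for settings a0, a1 (side A) and
# b0, b1 (side B), any local-realistic model obeys
#   S = |E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1)| <= 2,
# while quantum mechanics predicts up to 2*sqrt(2) ~ 2.83 for
# polarization-entangled photons.

def quantum_E(a, b):
    """QM correlation for polarization-entangled photons."""
    return math.cos(2 * (a - b))

def local_E(a, b, trials=200_000):
    """Correlation for one simple local hidden-variable model: each pair
    shares a random polarization angle lam, and each side outputs +/-1
    deterministically from its own setting and lam alone."""
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, math.pi)  # shared hidden variable
        A = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        B = 1 if math.cos(2 * (b - lam)) >= 0 else -1
        total += A * B
    return total / trials

def chsh(E):
    a0, a1 = 0.0, math.pi / 4
    b0, b1 = math.pi / 8, -math.pi / 8
    return abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))

print(round(chsh(quantum_E), 3))  # 2.828: violates the local limit of 2
print(chsh(local_E))              # hovers near 2, never significantly above
```

The toy local model can be made more elaborate, but Bell's theorem guarantees that no local-realistic assignment of outcomes can push S past 2, which is exactly what the experiments exploit.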

The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.

This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent — if only one bit specifies them once in every 22 runs — this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.
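Hall's actual result bounds the mutual information between the settings and the hidden variables; the accounting below is my own much cruder toy sketch of the same idea, not Hall's construction. It only shows the mechanism: a local source that occasionally learns the settings in advance can fake a Bell violation.

```python
import math

# Toy accounting (not Hall's construction): suppose that in a fraction
# p of runs a local hidden-variable source knows both detector settings
# in advance. On those runs it can score the algebraic maximum S = 4;
# on the remaining runs it is capped at the local limit S = 2. Overall:
#   S(p) = 2 * (1 - p) + 4 * p = 2 + 2 * p
def chsh_with_setting_leak(p):
    return 2.0 + 2.0 * p

quantum_S = 2 * math.sqrt(2)

# Fraction of "leaky" runs this crude model needs to fake the quantum value:
p_needed = (quantum_S - 2) / 2
print(round(p_needed, 3))  # 0.414

# Hall's bound is far more economical than this toy: a tiny amount of
# settings/hidden-variable mutual information already suffices.
```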

The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit. Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

In the first of a planned series of “cosmic Bell test” experiments, the team sent pairs of photons from the roof of Zeilinger’s lab in Vienna through the open windows of two other buildings and into optical modulators, tallying coincident detections as usual. But this time, they attempted to lower the chance that the modulator settings might somehow become correlated with the states of the photons in the moments before each measurement. They pointed a telescope out of each window, trained each telescope on a bright and conveniently located (but otherwise random) star, and, before each measurement, used the color of an incoming photon from each star to set the angle of the associated modulator. The colors of these photons were decided hundreds of years ago, when they left their stars, increasing the chance that they (and therefore the measurement settings) were independent of the states of the photons being measured.
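Schematically, the setting choice might look like the sketch below; the 700 nm threshold and all names here are illustrative assumptions, not values from the paper.

```python
# Schematic of the cosmic Bell test's setting choice. The threshold and
# the function names are illustrative assumptions, not the paper's
# actual values: each incoming stellar photon's color selects one of
# the two modulator settings, so the "choice" was fixed, in effect,
# when the photon left its star centuries ago.
RED_BLUE_THRESHOLD_NM = 700.0  # hypothetical red/blue dividing line

def modulator_setting(wavelength_nm: float) -> int:
    """Map a star photon's color to a measurement setting (0 or 1)."""
    return 0 if wavelength_nm < RED_BLUE_THRESHOLD_NM else 1

print(modulator_setting(450.0))  # 0: a "blue" photon
print(modulator_setting(800.0))  # 1: a "red" photon
```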

And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window to cook up correlations between future device settings and restrict freedoms. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear — which would prove that Einstein was right to doubt spooky action.

“For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning — that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement — and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

*This article was reprinted on TheAtlantic.com.*

According to this article, is a probabilistic multiverse favoured over a deterministic universe?

“If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

In a superdeterministic universe, he'd have no choice but to hold that belief.

“Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.”

And a long time ago it was discovered how probability theory works when applied to noncommuting observables, and yet many physicists and philosophers are still misunderstanding and misinterpreting it, and even devising bizarre [psi-]ontologies to 'explain' or 'fix' QM. Spooking themselves with their own shadows.

Something being "boring" has no bearing at all on whether or not it's true.

Still, we have to labor under a certain set of assumptions (Solipsism! Yay!), and if everything is preordained it doesn't matter what assumptions we happen to hold, right?

So, yeah, out the window with superdeterministic universes, and let's assume there is at least some "true chaos" in the mix.

Quoting “The Comedy of Errors” is a serious deep cut, Mr. Hall. I wonder: fan of the FKB production, or just very familiar with The Bard?

Space unfolds faster than light can travel.

To quote the article… "The colors of these photons were decided hundreds of years ago, when they left their stars". If I understand correctly, photons move at the speed of light, and that means time doesn't pass for them. So my question is: does it really matter how long ago the stars emitted the photons if absolutely no time has elapsed from the photons' perspective? Has their colour already been determined, or is it determined when they are used in the experiment? Being a layman, I am sure this is a silly question with an easy answer, so please be nice =)

If the universe is a simulation and we are all part of someone's video game (which I don't believe), then the "time" a photon was emitted ages ago is irrelevant, because it isn't truly real anyway.

Or if the underlying mechanism of entanglement correlations is retrocausality, it does not matter how far away the star that emitted the light is. The photon in question is always traveling at both + and – c.

Would the imposition of a magnetic field, which changes the polarity of the photons, in one of the paths be a way to resolve the hidden variable/action at a distance question?

So look, if the universe came into being at the Big Bang, and everything we experience and manifest is simply the unfolding of that tiny point in space/time… So it means that there was a point in time (in the beginning of time) where we were all completely interrelated. Kind of like the opposite of 'the parts are reflected in the whole' – it's like 'the whole reflects itself in all of the parts'. So since we are all an expression of that infinitesimal moment in space/time, we are currently all still infinitely related to and reflective of, one another. Hence the butterfly effect – but in this case it's not just Tokyo to Texas, but Tokyo circa whenever, to Texas, multiple years in the future. We are infinitely interrelated. But it's an infinity that goes backwards – not (just) infinite in all directions, but infinite backwards to an infinitely tiny vanishing point in the past.

So – where does that leave us in terms of determined versus free?

Two things. First of all. There is a saying: 'My thoughts are not like your thoughts'. This is from mystical writings about the divine thought(s) that created the world. Which is another way of saying – the big bang.

This means that the level of complexity of that interrelatedness is unlikely to be experienced on a simple level. We may be all infinitely interconnected, and expressions of one another (or all of us are expressions of the infinitesimal point that gave rise to us), but the level of complexity that we are expressed at makes the idea of predetermination (or present interconnection) theoretical. We may be influenced both by today's butterfly on a planet several universes away, as well as by a particle that existed several aeons before us, but the level of complexity in space/time means that this is more of a concept than an actual (perceived) limit on our multitude of 'choices' – if you can call them that. It's like someone giving you a billion choices. But only a billion – and not more. Does one perceive oneself as being constrained?

Okay – and now on a different level. There was a point before the big bang. When things were truly random – before they were compressed into the tiny center which then began expanding. A point where true freedom did exist, because no choices had yet been expressed (the big bang being a 'choice', and therefore a limiter – in other words 'something happened'). At this point there was no 'there' there. Because all that was, on some level still is, that time of truly unconstrained choices still exists. And still exists within us as well, since we are expressions of all that ever was. So on some level, deep within us, there is a point where we are connected to that 'pre-bang' level of unconstrained freedom.

Where can we see that?

Some people say it expresses itself in our moral impulse. This is a nice fairy tale (and that term is not meant dismissively – more like 'we choose our paradigm, why not choose a nice one?').

Ez example: A bum asks for a dime. We do (or don't) reach into our pocket. We begin to extend the dime out in our hand, but someone bumps into us and the dime (our last one) rolls down the sidewalk and into the storm drain. Oops. Too bad, so sad. It was our last dime, so the bum ain't getting any.

So where did the freedom occur? Somewhere in the initial impulse – give – or don't give? But it would have to be deeper than what our parents or our environment 'programmed' us to think or feel, or choose. Some level of choice that harks back to that infinite place where all was truly possible, because nothing had yet been expressed.

So the term 'moral level' is really a minimization of what is meant. But somewhere deep inside, that level of freedom exists.

Anywayz, enough musing. Which brings me to an excuse to state my 3 immutable Laws of Physics:

1) We don't know shit

2) Nobody knows shit

3) Shit's weird

Even the proponents of superdeterminism look left and right for oncoming cars before they cross a busy street. Makes me wonder how seriously they take their own ideas.

> The idea that nature might restrict freedom while maintaining local realism has become more attractive…

Oh great. Now NATURE is fascist. Bloody awesome.

(Mods: that is a joke.)

No matter how far away the stars, the physicists that choose them and record their colours cannot themselves be considered free or independent. Who knows what subtle and insidious quantum phenomena caused them to choose those particular stars….

So. Clinamen or no clinamen? It does seem like that's the question here.

As long as only ONE shared quantum property (a-temporal, a-spatial) is being measured, I don't see any superluminal signalling, or any 'signal' sent at all. You're not 'changing' anything on the other particle; you're not disturbing charge, mass, momentum, energy, etc.

Question: how do we know there *is* this spooky action at a distance? Just because some particle's state is unknown doesn't preclude it from being definite. You have to measure it to know what it is. So why couldn't the correlation have been baked in when the particles interacted? Then both have a definite but unknown state. In this picture it makes no sense to say that the distant particle "snapped into" a correlated state when the correlation was determined from the beginning; we just didn't measure it till later.

Neo Conderson

"Just because some particle's state is unknown doesn't preclude it from being definite."

This is the very doctrine of classical 'states' that quantum mechanics was forced to abandon. Otherwise quantum computers would not work. A qubit's state is neither 'definitely' a '1' nor a '0'. It is in a superposition of both. The only way around this is a complete abandonment of the Copenhagen interpretation and, say, adoption of the de Broglie/Bohm pilot-wave model.

A great deal of confusion is compounded by the rather abstruse and argumentative field of the semantics of quantum 'measurement'. If you don't believe me, examine the Wikipedia article "Measurement in Quantum Mechanics" (and its talk-page) for an example. It's one of those grand Wikipedia quagmires for which there is no solution but the entire deletion of the article.

"The colors of these photons were decided hundreds of years ago…"

Photons have a definite color only when measured; they collapse into their state only when we measure them.

—

The loophole this article is talking about is essentially 'superdeterminism', where there is no freedom. These experiments do not contradict superdeterminism.

https://en.wikipedia.org/wiki/Superdeterminism

Both quantum theory and relativity are verified to extreme precision by countless experiments and observations. But one says the future cannot be predicted, and the other says the opposite (that we live in a 4D static Block Universe).

I think all the verification for both implies both must be correct in their own domains/scales.

If we accept that, what does that say about the nature of time?

http://fb36blog.blogspot.com/2017/02/quantum-vs-relativity.html

@JohnNixon: This is exactly what I said in the interview, I'd have no choice. Boring. @FredrikVold: Agreed. True or not, since it is boring I choose not to believe in it. (Wait, how does that go again…)

Further confirmation that the base granularity of reality is facts, not particles. And a fact might be about one particle, or several. Can we stop calling this "weirdness" yet, or is that going to take another century?

I will comment before reading this article twice.

First off, I don't know if this article is vouching for spooky action at a distance. I would think so.

Second, the experiment of pointing a telescope at space and expecting it to have some impact on the result: I mean, if it was done for peace of mind, well, OK, but really? What? A telescope learning what? That would affect an outcome?…

So what loophole did we close? Can someone tell me?

If energy before the Big Bang was in some state in which it was all, or 99.9999999999%, pure, then somehow, to keep that purity and stability, everything in that energy must have been interconnected, which explains the spooky action at a distance… So please, next time you set out for an experiment, call me and I will tell you if it makes sense. 🙂

I've always considered this style of argument against superdeterminism to be very biased and bad science. The idea of dismissing a hypothesis because it makes you uncomfortable reeks of the kind of thing we in the scientific community often criticize religious zealots for doing. To paraphrase both Sagan and Dawkins, the Universe does not owe you anything: not even free will, nor good feelings about your ability to "freely investigate" as a scientist.

There are also very respected physicists like 't Hooft and Hossenfelder who do good work in these areas.

This is an area where I suspect advancements in neuroscience, continuing to demonstrate a lack of evidence for free will, and continued theoretical computer science, offering insights into correlations between Turing completeness and a simulated Universe, will support the work of those in the physics community who do support superdeterminism.

When drawing conclusions about the nature of nature's laws from Bell's purely mathematical reasoning, perhaps one ought to first establish the basis for a plausible link between the two which would admit the assertion that:

“The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom”.

The only implicit mathematical assumption that Bell's argument makes is that all laws of nature, if at all representable mathematically, can only be expressed in terms of classical functions that are algorithmically computable, i.e., functions that, for any values assigned to their variables, yield a value that can be computed by some algorithm (Turing machine).

Such functions can, indeed, be viewed as entailing a lack of freedom for the physical entities that they purport to represent, since their 'past-related' values are algorithmically determinate (by a Turing machine); and their 'future-related' values are algorithmically predictable (by a Turing machine).

However, actual observations suggest that the laws of quantum behaviour cannot be represented by such functions; thus bringing into question whether or not there are, in fact, any such laws that can be expressed mathematically, and whether or not quantum behaviour is subject to any deterministic law (since all we are able to observe are determinate probabilities which, too, are expressed mathematically by algorithmically computable functions).

Now, it is a far cry from observing that our classical, algorithmically computable, functions are inadequate for expressing quantum phenomena mathematically, to denying determinism; since it raises, for instance, the following two questions:

(i) What direct grounds do we have for claiming—or denying—that every point-particle (such as a photon) has a determinate position in the classical sense?

(ii) Can our mathematical languages admit a position function, say P(x), whose probability density amplitude function is the wave function ψ(x)?

For example:

(a) If we assume that such a function can be defined mathematically in a first-order theory, whence the values of P(x) can be treated in the usual way as classically computable real numbers—which are essentially defined by algorithmically computable functions (definable in principle as below)—then it would follow that, in principle, P(x) too is an algorithmically computable function that is both deterministic and predictable.

ALGORITHMIC COMPUTABILITY: A number theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL(F) that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), . . .}.

Such an assumption is, as noted, at variance with actual observations, which compellingly suggest that any such P(x) cannot be an algorithmically computable—hence predictable—function.

It is not unreasonable, then, to argue that it is perhaps from an implicit—perhaps unconscious—assumption of an inability to represent such functions mathematically, in a first-order theory, that current interpretations of Quantum Mechanics—following Bohr's perspective, and over-riding Einstein's uneasiness—implicitly conclude by default that such a P(x) cannot be posited; and so explicitly argue that quantum phenomena cannot be treated as subject to any deterministic law.

(b) If so, then the Achilles' heel of such conventional wisdom would seem to lie in the—I would suspect unsuspected—unreasonable assumption (which Einstein sought to identify) that the values of any putative quantum functions must necessarily be algorithmically computable real numbers.

'Unreasonable' since, as Cantor and Turing have shown, the algorithmically computable reals are countable, and so almost all reals are not algorithmically computable; hence one would reasonably expect most values of any putative position function such as P(x) to be algorithmically uncomputable.

If we assume, then, that the values of P(x) are most likely algorithmically uncomputable real numbers, which are essentially defined by algorithmically verifiable functions (definable in principle as below), then it would follow that P(x) too must be treated as an algorithmically verifiable—hence deterministic—function which is algorithmically uncomputable—hence algorithmically unpredictable (but with an associated, algorithmically computable, probability density amplitude function ψ(x)).

ALGORITHMIC VERIFIABILITY: A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL(F, n) which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), . . . , F(n)}.

(Note that algorithmic computability implies the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions. It is fairly straightforward to show that there are number theoretic functions which are algorithmically verifiable but not algorithmically computable. See Theorem VII of Goedel's seminal 1931 paper on formally undecidable arithmetical propositions.)
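As a toy illustration of the gap between the two notions (my own sketch, not from the comment above): any finite prefix of a 0/1 sequence is decidable by a lookup table, even when no single algorithm decides the whole sequence. That is exactly why algorithmic verifiability, which demands only an algorithm AL(F, n) per finite prefix, is strictly weaker than algorithmic computability, which demands one algorithm AL(F) for the entire sequence.

```python
# Toy illustration: every finite initial segment of a 0/1 sequence is
# decidable by a finite lookup table, even if the full sequence is
# algorithmically uncomputable (e.g., a halting-problem indicator).
def make_prefix_verifier(prefix):
    """Return AL(F, n) for n = len(prefix): a finite lookup table
    deciding F(1), ..., F(n)."""
    table = dict(enumerate(prefix, start=1))
    return lambda k: table[k]

# Hypothetical stand-in values for F(1)..F(5) of some uncomputable F:
prefix = [1, 0, 1, 1, 0]
AL_F_5 = make_prefix_verifier(prefix)
print([AL_F_5(k) for k in range(1, 6)])  # [1, 0, 1, 1, 0]
```

No uniform procedure builds these tables for all n at once; that uniformity is precisely what algorithmic computability adds.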

Since we can show that the algorithmically uncomputable values of such a P(x) would be representable in a first-order Peano Arithmetic as algorithmically verifiable functions by means of Goedel's β-function, P(x) too would in principle be definable mathematically in a first-order theory. Hence we would no longer need to deny determinism by default, but could treat P(x) as a 'hidden' variable in a deterministic Quantum Mechanics such as that, perhaps, proposed by David Bohm.

Such an interpretation—of algorithmically verifiable hidden 'variables' with algorithmically uncomputable values—would be supported by the observation that some fundamental dimensionless physical constants appear to be uncomputable real numbers; in which case they would be definable only by algorithmically verifiable, but not algorithmically computable, functions.

Moreover, since two functions that are both algorithmically verifiable, but not algorithmically computable, need not be independent, they need not be subject to Bell's inequality; nor entail non-locality, thus dissolving the 'weirdness' associated with the EPR paradox.

Whether or not one interprets such a putative lack of independence (freedom) to imply some kind of 'superdeterminism' ascribable to a classical Laplacian intellect would, surely, be merely a matter of taste or belief; since it is admittedly mathematically unrepresentable and, in that sense, humanly 'unknowable' if we accept that both human and mechanistic reasoning—ipso facto knowledge—ought to be circumscribed by evidence-based assignments of truth values to the formal propositions of a mathematical language under some well-defined interpretation over the domain of the natural laws that such reasoning seeks to represent unambiguously, and communicate effectively, in the language.

"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

— Pierre Simon Laplace, A Philosophical Essay on Probabilities

Again quantum entanglement is in the news; unfortunately, it is old news as well.

In fact, quantum mechanics seems more and more to enter a "multi-universe-theory-like realm" which, while elegant and able to explain certain curiosities noted by physicists in its basic assumptions and notions, will always remain unverifiable, as the authors admit.

While they may not want to admit it, Einstein was right to call quantum entanglement something like spooky action at a distance: a phenomenon bordering on, or outside, the realm of fundamental physics, and more at home in a magical universe.

Here is some background on quantum entanglement:

Erwin Schrödinger, responding to Einstein's claim that QM is incomplete, first introduced the concept of quantum entanglement while discussing the famous Einstein-Podolsky-Rosen (EPR) paradox. Quantum entanglement, according to academic textbooks, is an outcome of the quantum process whereby two microparticles in coherent quantum states are represented by one wavefunction corresponding to the entangled pair's quantum states.

The most commonly cited example is a boson with spin 1 (a photon) split into two particles (fermions) with spins oriented 180 degrees opposite to each other: +½, -½.

They are supposedly separated, say with one sent to the Moon, while preserving the entangled quantum state…

The probability of the Earth-located particle having spin +½ is 50%, and -½ is 50%. In order to learn which of the two particles is located on Earth, we measure the one we have with us, and when the result is +½, the other, located on the Moon, supposedly instantly sets to -½.

But if the result on Earth is -½, the particle on the Moon supposedly instantly sets to +½ via what Einstein named a spooky (meaning instant) action at a distance.

How could this happen? How does the act of measuring one particle set the quantum state of the other entangled particle? How can we truly understand quantum entanglement?

More explanations are here:

https://questfornoumenon.wordpress.com/2015/02/07/a-note-on-science-surrealism-of-quanta/

@Toni Westbrook

"This is an area where I suspect advancements in neuroscience, continuing to demonstrate a lack of evidence for free will, and continued theoretical computer science, offering insights into correlations between Turing completeness and a simulated Universe, will support the work of those in the physics community who do support superdeterminism."

In this respect, the December 2016 issue of 'Cognitive Systems Research' carries an article 'The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Goedelian thesis' which incidentally makes the following two points in conclusion:

(i) a consistent model of a Universe simulated by a mechanical (Turing complete) intelligence is subject to Bell's inequalities and must, therefore, admit non-locality, since it cannot recognise functions that are algorithmically verifiable, but not algorithmically computable;

(ii) a consistent model of a Universe conceivable by a human (Turing-incomplete) intelligence is not subject to Bell's inequalities, and need not admit non-locality, since it can recognise functions that are algorithmically verifiable, but not algorithmically computable.

In the first case, a mechanical intelligence is constrained by determinism, and must treat EPR as an essentially unresolvable conflict between special relativity and quantum theory.

In the second, a human intelligence does not need to appeal to superdeterminism in order to treat EPR as only an apparent, but resolvable, conflict between special relativity and quantum theory that merely reflects the limitations on how nature's laws can be unambiguously expressed and effectively communicated in a first-order mathematical language.

Though not a physicist, I enjoyed both the article and the comments immensely. I have been suffering from insomnia lately, but before reaching the last comment I fell in to a deep and most refreshing sleep. Thanks to all concerned.

As John Nixon put it excellently, if superdeterminism is true, then I had no choice but to make this article the basis of this month's Quanta Insights column.

Check it out and contribute your thoughts and expertise (perhaps you have no choice either):

https://www.quantamagazine.org/20170216-quantum-entanglement-puzzle/

If you want to take the selection decision back in time as far as the Big Bang, you could hardly do better than select a random photon from the cosmic microwave background spectrum.

I will never understand the fascination with the need to have a non-deterministic universe. Even if it is non-deterministic, we should replace all instances of the words "free" and "freedom" with "random", since that is pretty much the same thing.

Also, saying that superdeterminism is "boring" is not an argument; it is an emotion, which is either determined at the Big Bang (in case Mr. Larsson is wrong) or is a product of pure chance (in case he is right). Calling it free isn't good science, nor is it good philosophy.

Since we're all living in a virtual reality, anything can be proved or disproved. The particles that are supposedly light-years apart may actually be superimposed; thus, no real surprise in so-called spooky action at a distance (of zero)!

Could seemingly spooky action at a distance really be action over a very short range in one of those 26 dimensions of string theory?