I.

Few facts of experience are as obvious and pervasive as the distinction between past and future. We remember one, but anticipate the other. If you run a movie backwards, it doesn’t look realistic. We say there is an arrow of time, which points from past to future.

One might expect that a fact as basic as the existence of time’s arrow would be embedded in the fundamental laws of physics. But the opposite is true. If you could take a movie of subatomic events, you’d find that the backward-in-time version looks perfectly reasonable. Or, put more precisely: The fundamental laws of physics — up to some tiny, esoteric exceptions, as we’ll soon discuss — will look to be obeyed, whether we follow the flow of time forward or backward. In the fundamental laws, time’s arrow is reversible.
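The reversibility of the micro-laws is easy to demonstrate numerically. Here is a minimal sketch (my own illustration, not part of the article): integrate a particle falling under Newtonian gravity with a time-reversible scheme, flip its velocity, and integrate again. The particle retraces its path back to its initial state, the numerical counterpart of the backward movie looking lawful.

```python
# Microscopic reversibility, sketched numerically: run the dynamics forward,
# flip the velocity ("play the movie backward"), and the system returns to
# where it started.

def step(x, v, dt, g=-9.8):
    """One kick-drift-kick (leapfrog) step; time-reversible by construction."""
    v_half = v + 0.5 * g * dt
    x_new = x + v_half * dt
    v_new = v_half + 0.5 * g * dt
    return x_new, v_new

def run(x, v, n, dt):
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

x0, v0 = 100.0, 0.0                 # drop from 100 m, at rest
xf, vf = run(x0, v0, 1000, 0.01)    # fall for 10 seconds
xr, vr = run(xf, -vf, 1000, 0.01)   # reverse velocity, run the movie backward
print(f"start: ({x0}, {v0})   after reversal: ({xr:.6f}, {vr:.6f})")
```

Adding a friction term to the acceleration breaks the trick: the reversed run no longer retraces the forward one. That is the everyday irreversibility the article contrasts with the fundamental laws.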

Quantized

A monthly column in which top researchers explore the process of discovery. This month’s columnist, Frank Wilczek, is a Nobel Prize-winning physicist at the Massachusetts Institute of Technology.

Logically speaking, the transformation that reverses the direction of time might have changed the fundamental laws. Common sense would suggest that it should. But it does not. Physicists use convenient shorthand — also called jargon — to describe that fact. They call the transformation that reverses the arrow of time “time reversal,” or simply T. And they refer to the (approximate) fact that T does not change the fundamental laws as “T invariance,” or “T symmetry.”

Everyday experience violates T invariance, while the fundamental laws respect it. That blatant mismatch raises challenging questions. *How* does the actual world, whose fundamental laws respect T symmetry, manage to look so asymmetric? Is it possible that someday we’ll encounter beings with the opposite flow — beings who grow younger as we grow older? Might we, through some physical process, turn around our own body’s arrow of time?

Those are great questions, and I hope to write about them in a future posting. Here, however, I want to consider a complementary question. It arises when we start from the other end, in the facts of common experience. From that perspective, the puzzle is this:

*Why should the fundamental laws have that bizarre and problem-posing property, T invariance?*

The answer we can offer today is incomparably deeper and more sophisticated than the one we could offer 50 years ago. Today’s understanding emerged from a brilliant interplay of experimental discovery and theoretical analysis, which yielded several Nobel Prizes. Yet our answer still contains a serious loophole. As I’ll explain, closing that loophole may well lead us, as an unexpected bonus, to identify the cosmological “dark matter.”

II.

The modern history of T invariance begins in 1956. In that year, T. D. Lee and C. N. Yang questioned a different but related feature of physical law, which until then had been taken for granted. Lee and Yang were not concerned with T itself, but with its spatial analogue, the parity transformation, “P.” Whereas T involves looking at movies run backward in time, P involves looking at movies reflected in a mirror. Parity invariance is the hypothesis that the events you see in the reflected movies follow the same laws as the originals. Lee and Yang identified circumstantial evidence against that hypothesis and suggested critical experiments to test it. Within a few months, experiments proved that P invariance fails in many circumstances. (P invariance holds for gravitational, electromagnetic and strong interactions, but generally fails in the so-called weak interactions.)

Those dramatic developments around P (non)invariance stimulated physicists to question T invariance, a kindred assumption they had also once taken for granted. But the hypothesis of T invariance survived close scrutiny for several years. It was only in 1964 that a group led by James Cronin and Valentine Fitch discovered a peculiar, tiny effect in the decays of K mesons that violates T invariance.

III.

The wisdom of Joni Mitchell’s insight — that “you don’t know what you’ve got ‘til it’s gone” — was proven in the aftermath.

If, like small children, we keep asking, “Why?” we may get deeper answers for a while, but eventually we will hit bottom, when we arrive at a truth that we can’t explain in terms of anything simpler. At that point we must call a halt, in effect declaring victory: “That’s just the way it is.” But if we later find exceptions to our supposed truth, that answer will no longer do. We will have to keep going.

As long as T invariance appeared to be a universal truth, it wasn’t clear that our italicized question was a useful one. Why was the universe T invariant? It just was. But after Cronin and Fitch, the mystery of T invariance could not be avoided.

Many theoretical physicists struggled with the vexing challenge of understanding how T invariance could be extremely accurate, yet not quite exact. Here the work of Makoto Kobayashi and Toshihide Maskawa proved decisive. In 1973, they proposed that approximate T invariance is an accidental consequence of other, more-profound principles.

The time was ripe. Not long before, the outlines of the modern Standard Model of particle physics had emerged and with it a new level of clarity about fundamental interactions. By 1973 there was a powerful — and empirically successful! — theoretical framework, based on a few “sacred principles.” Those principles are relativity, quantum mechanics and a mathematical rule of uniformity called “gauge symmetry.”

It turns out to be quite challenging to get all those ideas to cooperate. Together, they greatly constrain the possibilities for basic interactions.

Kobayashi and Maskawa, in a few brief paragraphs, did two things. First they showed that if physics were restricted to the particles then known (for experts: if there were just two families of quarks and leptons), then all the interactions allowed by the sacred principles also respect T invariance. If Cronin and Fitch had never made their discovery, that result would have been an unalloyed triumph. But they had, so Kobayashi and Maskawa went a crucial step further. They showed that if one introduces a very specific set of new particles (a third family), then those particles bring in new interactions that lead to a tiny violation of T invariance. It looked, on the face of it, to be just what the doctor ordered.

In subsequent years, their brilliant piece of theoretical detective work was fully vindicated. The new particles whose existence Kobayashi and Maskawa inferred have all been observed, and their interactions are just what Kobayashi and Maskawa proposed they should be.

Before ending this section, I’d like to add a philosophical coda. Are the sacred principles really sacred? Of course not. If experiments force scientists to modify those principles, they will do so. But at the moment, the sacred principles look awfully good. And evidently it’s been fruitful to take them very seriously indeed.

IV.

So far I’ve told a story of triumph. Our italicized question, one of the most striking puzzles about how the world works, has received an answer that is deep, beautiful and fruitful.

But there’s a worm in the rose.

A few years after Kobayashi and Maskawa’s work, Gerard ’t Hooft discovered a loophole in their explanation of T invariance. The sacred principles allow an additional kind of interaction. The possible new interaction is quite subtle, and ’t Hooft’s discovery was a big surprise to most theoretical physicists.

The new interaction, were it present with substantial strength, would violate T invariance in ways that are much more obvious than the effect that Cronin, Fitch and their colleagues discovered. Specifically, it would allow the spin of a neutron to generate an electric field, in addition to the magnetic field it is observed to cause. (The magnetic field of a spinning neutron is broadly analogous to that of our rotating Earth, though of course on an entirely different scale.) Experimenters have looked hard for such electric fields, but so far they’ve come up empty.

Nature does not choose to exploit ’t Hooft’s loophole. That is her prerogative, of course, but it raises our italicized question anew: *Why does Nature enforce T invariance so accurately?*
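To put a number on “so accurately” (a standard order-of-magnitude estimate, added here for orientation; it is not spelled out in the article): ’t Hooft’s interaction enters the equations through a single angle, conventionally written $\bar\theta$, and the neutron electric dipole moment it would induce scales roughly as

```latex
d_n \;\sim\; \bar{\theta}\times 10^{-16}\ e\cdot\mathrm{cm},
\qquad
|d_n|_{\mathrm{exp}} \;\lesssim\; 10^{-26}\ e\cdot\mathrm{cm}
\;\;\Longrightarrow\;\;
\bar{\theta} \;\lesssim\; 10^{-10}.
```

So the loophole interaction, if present at all, is suppressed by some ten orders of magnitude relative to its natural strength.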

Several explanations have been put forward, but only one has stood the test of time. The central idea is due to Roberto Peccei and Helen Quinn. Their proposal, like that of Kobayashi and Maskawa, involves expanding the standard model in a fairly specific way. One introduces a neutralizing field, whose behavior is especially sensitive to ’t Hooft’s new interaction. Indeed if that new interaction is present, then the neutralizing field will adjust its own value, so as to cancel that interaction’s influence. (This adjustment process is broadly similar to how negatively charged electrons in a solid will congregate around a positively charged impurity and thereby screen its influence.) The neutralizing field thereby closes our loophole.
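In rough formulas (a textbook sketch of the Peccei-Quinn mechanism, not drawn from the article itself): the neutralizing field $a$ acquires a potential from the loophole interaction and rolls to the minimum that cancels it,

```latex
V(a) \;\approx\; \Lambda_{\mathrm{QCD}}^{4}
\left[\,1-\cos\!\left(\bar{\theta}+\frac{a}{f_a}\right)\right],
\qquad
\langle a \rangle = -\,\bar{\theta}\,f_a ,
```

so the effective T-violating angle $\bar\theta + \langle a\rangle/f_a$ relaxes to zero, much as the congregating electrons screen the charged impurity.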

Peccei and Quinn overlooked an important, testable consequence of their idea. The particles produced by their neutralizing field — its quanta — are predicted to have remarkable properties. Since they didn’t take note of these particles, they also didn’t name them. That gave me an opportunity to fulfill a dream of my adolescence.

A few years before, a supermarket display of brightly colored boxes of a laundry detergent named Axion had caught my eye. It occurred to me that “axion” sounded like the name of a particle and really ought to be one. So when I noticed a new particle that “cleaned up” a problem with an “axial” current, I saw my chance. (I soon learned that Steven Weinberg had also noticed this particle, independently. He had been calling it the “Higglet.” He graciously, and I think wisely, agreed to abandon that name.) Thus began a saga whose conclusion remains to be written.

In the chronicles of the Particle Data Group you will find several pages, covering dozens of experiments, describing unsuccessful axion searches.

Yet there are grounds for optimism.

The theory of axions predicts, in a general way, that axions should be very light, very long-lived particles whose interactions with ordinary matter are very feeble. But to compare theory and experiment we need to be quantitative. And here we meet ambiguity, because existing theory does not fix the value of the axion’s mass. If we know the axion’s mass we can predict all its other properties. But the mass itself can vary over a wide range. (The same basic problem arose for the charmed quark, the Higgs particle, the top quark and several others. Before each of those particles was discovered, theory predicted all of its properties *except* for the value of its mass.) It turns out that the strength of the axion’s interactions is proportional to its mass. So as the assumed value of the axion mass decreases, the axion becomes more elusive.
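The proportionality between interaction strength and mass can be made explicit. A commonly used benchmark relation (my addition; the precise coefficient depends on the model) trades the unknown mass for the symmetry-breaking scale $f_a$:

```latex
m_a \;\simeq\; 5.7\,\mu\mathrm{eV}\times\frac{10^{12}\ \mathrm{GeV}}{f_a},
\qquad
g_{a\gamma\gamma} \;\propto\; \frac{1}{f_a} \;\propto\; m_a ,
```

so pushing $f_a$ up makes the axion simultaneously lighter and more weakly coupled, which is why lighter axions are harder to find.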

In the early days physicists focused on models in which the axion is closely related to the Higgs particle. Those ideas suggested that the axion mass should be about 10 keV — that is, about one-fiftieth of an electron’s mass. Most of the experiments I alluded to earlier searched for axions of that character. By now we can be confident such axions don’t exist.

Attention turned, therefore, toward much smaller values of the axion mass (and in consequence feebler couplings), which are not excluded by experiment. Axions of this sort arise very naturally in models that unify the interactions of the standard model. They also arise in string theory.

Axions, we calculate, should have been abundantly produced during the earliest moments of the Big Bang. If axions exist at all, then an axion fluid will pervade the universe. The origin of the axion fluid is very roughly similar to the origin of the famous cosmic microwave background (CMB) radiation, but there are three major differences between those two entities. First: The microwave background has been observed, while the axion fluid is still hypothetical. Second: Because axions have mass, their fluid contributes significantly to the overall mass density of the universe. In fact, we calculate that they contribute roughly the amount of mass astronomers have identified as dark matter! Third: Because axions interact so feebly, they are much more difficult to observe than photons from the CMB.

The experimental search for axions continues on several fronts. Two of the most promising experiments are aimed at detecting the axion fluid. One of them, ADMX (Axion Dark Matter eXperiment) uses specially crafted, ultrasensitive antennas to convert background axions into electromagnetic pulses. The other, CASPEr (Cosmic Axion Spin Precession Experiment) looks for tiny wiggles in the motion of nuclear spins, which would be induced by the axion fluid. Between them, these difficult experiments promise to cover almost the entire range of possible axion masses.
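For a sense of scale: the photon an ADMX-style cavity hunts for carries essentially the axion’s rest energy, so the cavity must resonate at a frequency $\nu = m_a c^2/h$. A quick back-of-the-envelope conversion (my sketch; the masses below are illustrative, not values quoted in the article):

```python
# Convert an assumed axion mass to the microwave frequency a resonant-cavity
# search would need to tune to: nu = (rest energy) / (Planck constant).

H = 6.62607015e-34      # Planck constant, J*s
EV = 1.602176634e-19    # one electron-volt, in joules

def axion_photon_freq_ghz(mass_ev):
    """Resonant photon frequency (GHz) for an axion of the given mass (eV)."""
    return mass_ev * EV / H / 1e9

for m in (2e-6, 1e-5, 1e-4):   # a few masses in the micro-eV range
    print(f"m_a = {m:.0e} eV  ->  {axion_photon_freq_ghz(m):.2f} GHz")
```

Micro-eV axions correspond to frequencies of a few GHz, which is why a tunable, ultrasensitive microwave cavity is the natural antenna for the axion fluid.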

Do axions exist? We still don’t know for sure. Their existence would bring the story of time’s reversible arrow to a dramatic, satisfying conclusion, and very possibly solve the riddle of the dark matter, to boot. The game is afoot.

*This article was reprinted on Wired.com.*

Very interesting and possibly dark matter as axions may turn out to be the best explanation in the end. However, there are alternative explanations for the creative tension between micro-reversibility and macro-irreversibility as well as the nature of dark matter.

Naive free special relativistic quantum field theory with Einstein's equivalence principle shows that virtual fermion-antifermion pairs generate attractive gravity that would behave as dark matter. Similarly, virtual bosons have negative zero-point pressure and generate antigravity, i.e., dark energy.

Wheeler-Feynman and later Hoyle-Narlikar showed that macro-irreversibility (time's arrow) could have a cosmological origin in terms of a future total absorber. The future de Sitter horizon acts like the Wheeler-Feynman total absorber.

There was a 2014 conference on Retrocausality and Free Will at Trinity College, Cambridge, organized by Huw Price and attended by Yakir Aharonov et al., showing that all the mysteries of quantum entanglement could be explained in terms of retrocausality. Actually, this is an old idea, published independently in the early 1950s by Olivier Costa de Beauregard and I.J. Good.

Finally, recent papers by Roderick I. Sutherland at the University of Sydney, Australia, show how Aharonov's weak measurements reveal the beable trajectories of David Bohm's 1952 pilot wave theory. This was also demonstrated in the lab by Aephraim Steinberg. Sutherland shows how the retrocausal pilot wave-beable theory is Lorentz invariant and how configuration space for many-particle entanglement is not needed. He shows how quantum theory is not fundamentally statistical, consistent with Einstein's "God does not play dice": the Born rule of orthodox quantum theory is closely tied to de Broglie's guidance constraint, e.g. v = (ℏ/m)∇S in the simplest case, where v is the beable velocity and S is the phase of the pilot wave. Sutherland further shows, using a Lagrangian coupling between the quantum pilot wave and the classical beable, that there is a post-quantum two-way action-reaction term. Orthodox quantum theory corresponds to the limit in which the reaction of the beable back on its pilot wave is zero. The statistical Born rule emerges in this zero back-reaction limit once the future final boundary condition is integrated out. However, what is most important is that we now have a rigorous formalism to investigate the post-quantum case for open subsystems. Here, we find that the no-signaling theorems, among others, of orthodox quantum theory are violated when the back-reaction can no longer be neglected. (1) This is equivalent to the super post-quantum computing using CTCs investigated by David Deutsch and Seth Lloyd. (2)

Ref.

(1) "Lagrangian Description for Particle Interpretations of Quantum Mechanics – Entangled Many-Particle Case

Roderick I. Sutherland

Centre for Time, University of Sydney, NSW 2006 Australia

A Lagrangian formulation is constructed for particle interpretations of quantum mechanics, a well-known example of such an interpretation being the Bohm model. The advantages of such a description are that the equations for particle motion, field evolution and conservation laws can all be deduced from a single Lagrangian density expression. The formalism presented is Lorentz invariant. This paper follows on from a previous one which was limited to the single-particle case. The present paper treats the more general case of many particles in an entangled state. It is found that describing more than one particle while maintaining a relativistic description requires the introduction of final boundary conditions as well as initial, thereby entailing retrocausality.

1. Introduction

This paper focuses on interpretations of QM in which the underlying reality is taken to consist of particles having definite trajectories at all times. It then enriches the associated formalism of such interpretations by providing a Lagrangian description of the unfolding events. The convenience and utility of a Lagrangian formulation is well-known from classical mechanics. The particle equation of motion, the field equation, the conserved current, action-reaction, the energy-momentum tensor, etc., are all easily derivable in a self-consistent way from a single expression. These advantages continue in the present context. Since a Lagrangian description is available in all other areas of physics and continues to be useful in modern domains such as quantum field theory and the standard model, it is appropriate to expect such a description to be relevant and applicable here as well.

In addition to the advantages already listed, the Lagrangian approach pursued here to describe particle trajectories also entails the natural introduction of an accompanying field to influence the particle’s motion away from classical mechanics and reproduce the correct quantum predictions. In so doing, it is in fact providing a physical explanation for why quantum phenomena exist at all – the particle is seen to be the source of a field which alters the particle’s trajectory via self-interaction"

"Naïve Quantum Gravity

Roderick I. Sutherland

Centre for Time, University of Sydney, NSW 2006 Australia

A possible alternative route to a quantum theory of gravity is presented. The usual path is to quantize the gravitational field in order to introduce the statistical structure characteristic of quantum mechanics. The procedure followed here instead is to remove the statistical element of quantum theory by introducing final boundary conditions as well as initial. The relevant quantum formalism then becomes compatible with the non-statistical nature of general relativity.

1. Introduction

This paper outlines a simple theory of quantum gravity. Indeed, some would say too simple. The point is, though, that this model appears to be consistent with all existing experimental evidence. This raises questions about the main contenders for a quantum theory of gravity, such as superstring theory and loop quantum gravity. First, is the present amount of effort on these more sophisticated models justified when there is no more evidence supporting them than for the model introduced here? Second, are there other simple approaches such as the present one which could be pursued if the preconceived conventions usually imposed are relaxed somewhat?

The model discussed here adopts the traditional picture of general relativity wherein all of spacetime exists together in the form of a block universe laid out like a map through time. Gravity is then explained geometrically in the usual way via curvature in the time dimension as well as curvature in the spatial dimensions, i.e., curvature of the XT plane as well as curvature of the XY plane. In such a picture, imposing final boundary conditions as well as initial ones is seen to be a natural and indeed more symmetric possibility. This extra restriction allows an alternative approach to quantizing gravity. Instead of starting with the statistical nature of quantum mechanics and therefore attempting to make the gravitational field (or curvature) statistical as well, extra information is introduced to make the quantum mechanical description become non-statistical so that it is compatible with the original, classical form of general relativity.

An obvious additional problem which exists in the many-particle case of quantum mechanics is that the wavefunction is defined in configuration space, whereas a description in four-dimensional spacetime is needed here. This issue is found to be easily resolvable once the retrocausality associated with final boundary conditions is taken into account."

(2) "Quantum no-go theorems in causality respecting systems in presence of closed timelike curves: Tweaking the Deutsch condition"

Asutosh Kumar, Indranil Chakrabarty, Arun Kumar Pati, Aditi Sen(De), Ujjwal Sen

Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India; Center for Security, Theory and Algorithmic Research, International Institute of Information Technology-Hyderabad, Gachibowli, Hyderabad, India

"We consider causality respecting (CR) quantum systems interacting with closed timelike curves (CTCs), within the Deutsch model. We introduce the concepts of popping up and elimination of quantum information and use them to show that no-cloning and no-deleting, which are true in CR quantum systems, are no more valid in the same that are interacting with CTCs. We also find limits on the possibility of creation of entanglement between CR systems and CTCs. We prove that teleportation of quantum information, even in its approximate version, from a CR region to a CTC is disallowed. We subsequently show that tweaking the Deutsch model, by allowing the input and output to be not the same, leads to a nontrivial approximate teleportation beyond the classical limit. … No-go theorems play an important role in quantum information science [1]. They are crucial both in nonclassical applications of quantum states and operations as resources, and towards a better understanding of quantum concepts. A good example is the no-cloning theorem [2], which states that nonorthogonal quantum states cannot be cloned perfectly, and can be seen as an underlying feature of the security of quantum cryptography [3]. Other examples include the no-broadcasting theorem [4, 5], the no-deleting theorem [6], the no-information splitting theorem [7], etc. …

Investigations have been made to see how the presence of closed timelike curves can affect the computational power and other abilities to perform information processing tasks of a system [21–23]. From the perspective of quantum computation, it has, e.g., been shown that access to a CTC would allow factorization of composite numbers efficiently with the help of a classical computer [21], a CTC-assisted quantum computer would be able to solve NP-complete problems [22], both classical and quantum computers under CTC belong to the same complexity class [23], etc. On the other hand, Brun et al. [24] have shown that if one has access to CTCs, then one can perfectly distinguish nonorthogonal quantum states, having possible implications for the security of quantum cryptography [3]."

There appears to be a rather subtle yet serious flaw in taking Peccei-Quinn symmetry breaking to generate the axion mass. The chiral impedance (like quantum Hall, centrifugal, Coriolis, three-body, vector Lorentz of the Aharonov-Bohm effect, …) is scale invariant. Scale invariant impedances cannot communicate energy/information, but rather only quantum phase, which is not a single-measurement observable.

Time IS invariant – our biology is NOT.

We are unable to perceive time invariance because we are unable to perceive the real dimension of time. Our biological clocks do not sense time; they just sense the ever-decreasing chemical cycles of our cells.

Wonderful article. It's exciting to see your writing on Quanta!!!

It's very refreshing to hear about an alternative dark matter candidate (alternative to WIMPs, which everyone seems to be searching for so unsuccessfully). It's especially compelling that axions are so well-motivated theoretically. We should be looking hard for them even if there were no dark matter to account for!

A few questions from an enthusiastic bystander:

1. Given special relativity, it seems puzzling to talk about time or parity reversal at all. In what reference frame are we reversing time or space?

2. Are axions really the only viable candidate to explain nature's near T-symmetry? If so, why aren't there more experiments looking for them?

3. How soon can we expect ADMX or CASPEr to detect or rule out the existence of axions? Or at least to detect or rule them out as a dark matter candidate? Reading a little about them, apparently they search for axions of different masses, by very different mechanisms. Moreover ADMX is an already-running experiment, whereas CASPEr seems to still be in its infancy.

The concept of time is an integral part of dynamic equilibrium involving non-linear systems. Is there something to learn from these well-known complex systems that we can apply to particle physics? Perhaps it is necessary to rethink our incremental approach to answering the fundamental inconsistencies of the standard model.

It's not just biology, EJV. Think about finding a crater on the surface of the moon. It is a sound basis for concluding that a meteor crashed into the moon IN THE PAST.

The history of a piece of matter is written into its form, but its future is not.

Can the axion naming anecdote be attributed to the power of intuition? Or to serendipity, which so often helps discovery in science?

I may be missing something, but some things I've never understood about physicists' views of time are as follows. If we run a movie of subatomic events backwards, and the events in the reverse order seem to still obey the laws of physics:

A. This doesn't mean nature can run the actual events backwards in real life. That is, just because the math works out if humans put in a minus sign for time doesn't mean that this actually happens in the real world. Just because something is possible in our minds doesn't mean it actually happens outside our minds.

B. If the subatomic events happen in reverse order in real life, does that really mean nature is going back in time? Or, are the physical things just happening in reverse order while time is still going forward? The reason I ask this is that to me, it seems like time is just a function of physical things happening (e.g., physical change). If there were absolutely no physical change in the universe, there would be no time. This would explain why time is moving irreversibly from past to future: because things keep happening. To go from future to past, there would have to be a reduction in the number of things that have already happened in the universe. This doesn't occur. Even if the events of a process look like they're happening in reverse, like if a broken cup spontaneously reassembles, this doesn't mean that time is going backwards; it just means that physical events have happened that reassemble the cup and that they happen in the opposite order of the previous events. But, because physical change is still happening as the cup is reassembled, time is still moving forward.

Thanks.

Roger

Thanks for an always interesting reading!

An interesting paper was recently published on Irreversibility and the Arrow of Time in a Quenched Quantum System:

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.190601

@roger:

If I remember correctly, in The Black Hole War Leonard Susskind explains the arrow of time quite broadly, and the thing is that when reversing time, things generally DON'T happen reversed. There was an example with an ice cube: it simply melts. But when you take a half-molten ice cube and reverse the arrow of time, it STILL melts, not the other way around. I can't remember the argument that led to this conclusion, though.

I am just a half-vast mathematician so I'm rather slow on the uptake.

I'd like to read and study the definition of time that the geniuses are using.

Are there TWO times? One for everyday use and one for quantum mechanics use?

What IS time?

'time is what keeps everything from happening at once.' is what Roger seems to be saying.

Very interesting topic. Can you explain how a high entropy state changes into a lower one at the quantum level, if time is to be reversed?

I am a cardiac surgeon, but interested to know. Thanks

@Daniel:

Thanks for the reference to "The Black Hole War". I haven't read it, but looked online, and there was an excerpt about making the events of a process go backwards, but random noise and observing the reverse process don't allow it to go exactly backwards to the exact situation it was in at the start. Was that it? But, I'd still say that having the events go in the reverse order to whatever situation they end up in is still events happening, and thus time moving forward. But, that's just my uneducated guess. As another commenter pointed out, it does keep stuff from happening all at once!

Can you please describe a little more how the quanta of the neutralizing field (the axions) would adjust its own value so as to cancel that interaction's influence, when the interaction itself almost doesn't exist in the first place (given that their interactions with ordinary matter are so feeble)?

In a way these reflections send us back to Kant. It may be the case that the universe, insofar as we can know about it, obeys the law of T-invariance. It may also simply be that our structures of cognition are imposing a sense of one-directional movement on time. The discoveries of T-invariance may simply be highlighting a radical dissonance between our cognition of the universe and the universe-in-itself. Hence, incidentally, the frequently stated metaphysical ideas that time is an illusion or that our sense of reality itself is, in some fundamental sense, illusory.

So if we watch a movie of particle interactions we cannot tell if the movie played forwards or backwards. I think that may not be true if the movie is about objects with mass interacting under gravity. So maybe time invariance only applies to Quantum Mechanics; not gravity.

So maybe gravity is the cause of arrow of time.

Thanks for your points, Roger; I've made most of those points myself and I'm interested in seeing how someone answers. Glad somebody else is of the same view.

"Time is what keeps everything from happening at once" – or not happening at all.

@Roger:

You are right, because time moving "forward" basically means that "entropy increases". And Mr. Susskind's argument showed that when reversing the arrow of time, entropy still increases. Thus you cannot go from "chaos" to "order" while reversing the time arrow. I think there is a misunderstanding about what exactly "reversing the arrow of time" means. In layman's terms it may mean "going back in time", which is a great simplification, while I think that in GTR/QED terms it's not that simple. Way above my brain capacity, though.

Is time a "local" event made up of several (or an infinite number) of more or less independent domains, or is it a universal monolithic force where the reversal of one small segment would throw the entire system into reversal?

If the last second disappeared just now, I'd say that time would not progress, since the next second has to build on something. It must be that the last second lasts forever, so time does not pass; it just progresses. I know that this conclusion is on a simple plane, but it does reveal something about the nature of time and it must be considered… So we live forever in our own time… I don't know if or what any number-crunchers would make of that.

The course of time varies between infinitely fast and infinitely slow (infinitely fast and infinitely slow excluded). That is why entropy always increases. Entropy cannot be zero. This is a big problem for QM. String theory pretends to offer a solution for this problem, but ST pretends to offer solutions for everything.

How would you know if time reversed itself anyway? Let's say every day at 12:00 time decided to run backwards for 10 minutes. Every memory you had of 11:50-12:00 would be erased and you would just pick up from 11:50 without ever knowing the time reversal took place. For all we know, time could be doing this already.

Very glad I stumbled on Quanta Magazine, as I've been greatly interested in understanding the general principles that scientists have discovered or theorized about our underlying universe. Too bad I didn't understand much from this article… In fact, other than specialists and researchers, I don't understand who these articles are geared toward. The only thing I got from this is that time runs backward, or rather in a reverse symmetry, at the fundamental level of physics, but sometimes there are things… about axions… that we don't… Anyway, unless this blog is meant to cover very advanced quantum mechanics, I do not find these articles to be anywhere near understandable, although I understand the basic principles. Maybe too advanced for me yet.

Dear AYS – I work in Non-Equilibrium systems and would like to speak with you.

Ron Kobler

435 901 2003

When TIME started with the big bang from a singularity of energy, then time should be energy too.

If you consider the universe to be a machine, then this machine got all its energy at the start (the big bang). When a machine is running, the level of energy in it decreases. The level of energy in the universe will decrease because it expands. Therefore the amount of energy per volume will decrease permanently, and therefore the time arrow has a fixed direction.

Additionally, every atom has moving parts; therefore every atom is also a tiny machine, which will lose energy. When this energy is the repulsive 'dark energy', then this will produce a continuous movement (expansion) of the universe. When you have a continuous movement/acceleration, then the speed of the expansion will increase permanently: this effect might explain the accelerated expansion of the universe.

When we add both movements, A) the expansion of the universe after the big bang, and B) the accelerated movement of the 'dark energy', then we will have the sum of both effects.

To discuss the nature of time as energy will explain why the time arrow has a fixed direction; why the universe expands with accelerating speed (dark energy); and why reversing time is not possible (it would be a perpetual motion machine).

As a model, you can study a mechanical clock: when you wind up the spring, you add potential energy to it. The clock transfers this potential energy into kinetic energy and moves the hands of the watch. For physicists, the display of the clock is the time which has passed. But in reality the level of potential energy in the spring decreases as time passes, and physicists do not measure this difference in energy.

AND: Buddhist philosophy 2,500 years ago, and Bishop Augustine (in the Confessions, 1,600 years ago), said that future and past do not exist, and the present is only an imaginary state of transition (without extension).

Therefore 'time travel' is not possible (there is no past/future to which we might travel), and this idea (and the idea of wormholes) is doubtful/wrong! Or, in other words, if mathematics can calculate time travel, then the quality of those calculations must be doubtful too.

Could this be related to singular dimension string theory as causal to universal creation?