# Beyond the Second Law

## Introduction

Since the steam engine began modernizing the world, the second law of thermodynamics has reigned over physics, chemistry, engineering and biology. Now, an upgrade is underway.

Thermodynamics — the study of energy — originated during the 1800s, as steam engines drove the Industrial Revolution. To understand its second law, imagine a sponge cake, fresh from the oven, cooling on a countertop. Scent molecules carrying heat drift away from the cake. A physicist might wonder: In how many ways can these molecules be arranged throughout the volume of space they currently occupy? We call this number of arrangements the molecules’ entropy. If the volume just encloses the cake (as it does when the cake is freshest), the entropy is relatively small. If the volume encompasses the whole kitchen (after the molecules have had time to travel farther), the entropy is exponentially larger. The second law of thermodynamics decrees that the entropy of every closed, isolated system (such as our kitchen, assuming the windows and doors are shut) grows or remains constant. Accordingly, the scent of sponge cake wafts across the entire kitchen and never recedes.

We sum up this behavior in an inequality: $S_f \ge S_i$, where $S_i$ is the molecules’ initial entropy and $S_f$ their final entropy. The inequality is useful but vague, because it doesn’t tell us how much the entropy will grow, except in a special case: when the molecules are at equilibrium. That happens when large-scale properties — such as temperature and volume — remain constant, and no net flows of anything — such as energy or particles — enter or leave the system. (For example, our cake’s scent molecules reach equilibrium after they’ve fully filled the kitchen.) At equilibrium, the second law strengthens to an equality: $S_f = S_i$. This simple, general equality provides precise information about many different types of thermodynamic systems at equilibrium.
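To see why spreading out makes the entropy so much larger, consider a standard ideal-gas-style estimate (the scaling below is illustrative, not a calculation from the article): for $N$ independent molecules, the number of arrangements grows with the available volume $V$ roughly as $V^N$.

```latex
% Number of arrangements for N independent molecules in a volume V
\Omega(V) \propto V^N
\quad\Longrightarrow\quad
\frac{\Omega_{\text{kitchen}}}{\Omega_{\text{cake}}}
  = \left(\frac{V_{\text{kitchen}}}{V_{\text{cake}}}\right)^{N}
```

Even a modest volume ratio, raised to the power of $N \sim 10^{20}$ molecules, is astronomically large — which is why the entropy of the spread-out molecules is exponentially greater.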

But you and I and most of the world are far from equilibrium. And “far from equilibrium” is the wild west to theoretical physicists and chemists: unpredictable and untidy. Imposing laws on the wild west — meaning, for us, proving equalities about physics far from equilibrium — is quite difficult.

But it’s not impossible. For decades, physicists have worked with equalities that strengthen the second law. These equalities are known as fluctuation relations. They connect properties of systems far from equilibrium (which are difficult to reason about theoretically) with equilibrium properties (which are easy to reason about).

To see fluctuation relations in action, imagine a microscopic strand of DNA floating in water. Floating quietly, the DNA is at equilibrium, sharing the water’s temperature. Using lasers, we can hold one end of the strand steady and pull the other end. Stretching the strand jolts it out of equilibrium and requires work in the physics sense of the word: structured energy harnessed to accomplish a useful task. The amount of work required fluctuates from one pulling of the strand to the next, since a water molecule sometimes kicks the strand here, sometimes there. That means every possible amount of work has some probability of being needed during the next pull.

It turns out that these probabilities — which describe the DNA when it’s far from equilibrium — are directly related to properties that the DNA has at equilibrium. And that relation can be captured by an equality.

This is the core of fluctuation relations: Properties of a system far from equilibrium participate in an equality with equilibrium properties. My colleague Chris Jarzynski at the University of Maryland discovered this in 1997. (He’s so modest, he calls the equality the nonequilibrium fluctuation relation, while the rest of us call it Jarzynski’s equality.) Although the DNA experiment provided one of the most famous tests of this principle, the equation governs loads of systems, including those involving electrons, beads the size of bacteria and brass oscillators that resemble centimeter-long tire swings.
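For the mathematically curious, Jarzynski’s equality can be written in one line:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \,\Delta F},
\qquad \beta = \frac{1}{k_B T}
```

Here $W$ is the work measured in one nonequilibrium trial (such as one pull of the DNA strand), the angle brackets denote an average over many trials, $T$ is the temperature, $k_B$ is Boltzmann’s constant, and $\Delta F$ is the free-energy difference between the initial and final equilibrium states. The left-hand side is built from far-from-equilibrium measurements; the right-hand side is a purely equilibrium quantity.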

Fluctuation relations have implications fundamental and practical. For starters, from these equalities we can derive an expression of the second law of thermodynamics. So fluctuation relations not only extend our knowledge far from equilibrium, as we saw with the DNA strand, but also recapitulate information we know about equilibrium.

But the true power of fluctuation relations lies in an ironic fact: While equilibrium properties are easier to reason about theoretically, they are harder to measure experimentally than far-from-equilibrium properties. For instance, to measure the work needed to stretch the DNA far out of equilibrium, we can simply pull the strand quickly — for a short time. In contrast, to measure the work needed to stretch it while it remains at equilibrium, we’d have to stretch so slowly that the DNA would always remain practically at rest — so our experiment would take an infinitely long time.

Chemists, biologists and pharmacologists are interested in the equilibrium properties of proteins and other molecules, so using fluctuation relations gives them an experimental foothold. They can perform many short nonequilibrium trials and measure the work required in each. From this data, they can infer the probability of needing any given amount of work in the next nonequilibrium trial. Then they can plug those probabilities into the far-from-equilibrium side of the fluctuation relation to determine the equilibrium side. This method still requires oodles of trials, but researchers have leveraged mathematical tools to mitigate the difficulty.
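As a sketch of that workflow, the snippet below estimates an equilibrium free-energy difference from simulated work measurements via Jarzynski’s equality, $\Delta F = -(1/\beta)\ln\langle e^{-\beta W}\rangle$. The function name and the Gaussian “data” are illustrative stand-ins for a real experiment, not part of any standard library:

```python
import math
import random

def jarzynski_free_energy(work_samples, beta):
    """Estimate the equilibrium free-energy difference from
    nonequilibrium work measurements, using Jarzynski's equality:
        <exp(-beta*W)> = exp(-beta*dF)  =>  dF = -(1/beta) * ln <exp(-beta*W)>
    A shift by the smallest work value keeps the exponentials stable."""
    m = min(work_samples)
    s = sum(math.exp(-beta * (w - m)) for w in work_samples)
    return m - (1.0 / beta) * math.log(s / len(work_samples))

# Hypothetical "experiment": work values drawn from a Gaussian.
# For Gaussian work with mean mu and standard deviation sigma,
# Jarzynski's equality predicts dF = mu - beta * sigma**2 / 2 exactly,
# which lets us sanity-check the estimator.
random.seed(0)
beta, mu, sigma = 1.0, 5.0, 1.0
work = [random.gauss(mu, sigma) for _ in range(200_000)]
dF = jarzynski_free_energy(work, beta)
print(round(dF, 2))  # close to mu - sigma**2/2 = 4.5
```

The rare trials with unusually small work dominate the exponential average — the mathematical reason that, as noted above, oodles of trials are needed for a reliable estimate.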

In this way, fluctuation relations have revolutionized thermodynamics, galvanizing experiments and providing detailed predictions about the world far from equilibrium. But their usefulness doesn’t stop there.

During the 2000s, quantum thermodynamicists — those of us who study how quantum physics changes classical concepts like work, heat and efficiency — wanted in on the fun, even though our discipline introduces extra puzzles. How to define and measure quantum work is unclear thanks to quantum uncertainty; for instance, measuring a quantum system’s energy changes that energy.

As a result, different researchers have proposed different definitions for quantum work. I imagine the various definitions as species in a Victorian menagerie. The “hummingbird” definition requires us to measure the quantum system gently, to disturb the energy only a little — as the fluttering of a hummingbird’s wings by your ear for an instant would disturb you. A “wildebeest” definition keeps to the middle of the pack, focusing our attention on average energy exchanges. Other definitions flutter, twitter and trumpet across the quantum-thermodynamics literature.
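One concrete example — a standard formulation in the quantum-thermodynamics literature, not a definition drawn from this article’s menagerie — is the “two-point measurement” scheme: measure the system’s energy before the process, measure it again afterward, and call the difference the work done in that trial.

```latex
W = E_m^{\text{final}} - E_n^{\text{initial}}
```

Here $E_n^{\text{initial}}$ and $E_m^{\text{final}}$ are the outcomes of projective energy measurements at the start and end of the process. The scheme’s strong measurements are anything but hummingbird-gentle, yet for a closed, driven quantum system this definition of work satisfies a quantum version of Jarzynski’s equality.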

As you might expect, different definitions lead to different quantum fluctuation relations. The same is true for similar definitions adapted to different physical settings. Some relations are easier to test experimentally, while some are abstract and mathematical. Some describe high-energy particles, like those smashed together at CERN; one describes chaos in black holes; and one describes the universe’s expansion. Experimentalists have tested some quantum fluctuation relations — with trapped ions, quantum dots and more.

Will one equality rise to the top of the pile, like a monarch who’s bested all their relatives for the throne? I expect not. In my opinion, which definitions and equations are useful depends on which system you’re interested in, how you poke it and how you can measure it.

The plurality of quantum fluctuation relations contrasts with the unity stereotypically prized by physicists, such as the long-sought Theory of Everything expected to unify all the fundamental forces. Perhaps some principle will unify the quantum fluctuation relations, revealing them to be different sides of a multidimensional coin. Or perhaps quantum thermodynamics is simply richer than other fields of physics.