Olena Shmahalo / Quanta Magazine

The theory of eternal inflation casts our universe as one of countless bubbles in an eternally frothing sea.

Chapter 1: The Measure Problem

In a Multiverse, What Are the Odds?

Testing the multiverse hypothesis requires measuring whether our universe is statistically typical among the infinite variety of universes. But infinity does a number on statistics.

If modern physics is to be believed, we shouldn’t be here. The meager dose of energy infusing empty space, which at higher levels would rip the cosmos apart, is a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times tinier than theory predicts. And the minuscule mass of the Higgs boson, whose relative smallness allows big structures such as galaxies and humans to form, falls roughly 100 quadrillion times short of expectations. Dialing up either of these constants even a little would render the universe unlivable.

To account for our incredible luck, leading cosmologists like Alan Guth and Stephen Hawking envision our universe as one of countless bubbles in an eternally frothing sea. This infinite “multiverse” would contain universes with constants tuned to any and all possible values, including some outliers, like ours, that have just the right properties to support life. In this scenario, our good luck is inevitable: A peculiar, life-friendly bubble is all we could expect to observe.

Many physicists loathe the multiverse hypothesis, deeming it a cop-out of infinite proportions. But as attempts to paint our universe as an inevitable, self-contained structure falter, the multiverse camp is growing.

The problem remains how to test the hypothesis. Proponents of the multiverse idea must show that, among the rare universes that support life, ours is statistically typical. The exact dose of vacuum energy, the precise mass of our underweight Higgs boson, and other anomalies must have high odds within the subset of habitable universes. If the properties of this universe still seem atypical even in the habitable subset, then the multiverse explanation fails.

But infinity sabotages statistical analysis. In an eternally inflating multiverse, where any bubble that can form does so infinitely many times, how do you measure “typical”?

Guth, a professor of physics at the Massachusetts Institute of Technology, resorts to freaks of nature to pose this “measure problem.” “In a single universe, cows born with two heads are rarer than cows born with one head,” he said. But in an infinitely branching multiverse, “there are an infinite number of one-headed cows and an infinite number of two-headed cows. What happens to the ratio?”

For years, the inability to calculate ratios of infinite quantities has prevented the multiverse hypothesis from making testable predictions about the properties of this universe. For the hypothesis to mature into a full-fledged theory of physics, the two-headed-cow question demands an answer.
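A toy tally (my construction, not Guth's) makes the trouble concrete: both orderings below contain infinitely many cows of each kind, yet the fraction of two-headed cows you measure depends entirely on the arbitrary order in which the cows are counted.

```python
# Toy illustration of the measure problem: the "fraction" of two-headed
# cows in an infinite population depends on how you enumerate it.

def two_headed_fraction(ordering, cutoff):
    """Fraction of two-headed cows among the first `cutoff` cows."""
    sample = [ordering(i) for i in range(cutoff)]
    return sample.count("two-headed") / cutoff

# Ordering A: alternate one-headed and two-headed cows.
order_a = lambda i: "two-headed" if i % 2 == 0 else "one-headed"
# Ordering B: one two-headed cow for every nine one-headed cows.
order_b = lambda i: "two-headed" if i % 10 == 0 else "one-headed"

print(two_headed_fraction(order_a, 1_000_000))  # -> 0.5
print(two_headed_fraction(order_b, 1_000_000))  # -> 0.1
```

Any ratio between 0 and 1 can be manufactured this way, which is why a measure, a preferred counting rule, must be imposed before "typical" means anything at all.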

Eternal Inflation

As a junior researcher trying to explain the smoothness and flatness of the universe, Guth proposed in 1980 that a split second of exponential growth may have occurred at the start of the Big Bang. This would have ironed out any spatial variations as if they were wrinkles on the surface of an inflating balloon. The inflation hypothesis, though it is still being tested, gels with all available astrophysical data and is widely accepted by physicists.

Katherine Taylor for Quanta Magazine

Video: MIT cosmologist Alan Guth, 67, discusses why two-headed cows are an important problem in an infinite multiverse.

In the years that followed, Andrei Linde, now of Stanford University, Guth and other cosmologists reasoned that inflation would almost inevitably beget an infinite number of universes. “Once inflation starts, it never stops completely,” Guth explained. In a region where it does stop — through a kind of decay that settles it into a stable state — space and time gently swell into a universe like ours. Everywhere else, space-time continues to expand exponentially, bubbling forever.

Each disconnected space-time bubble grows under the influence of different initial conditions tied to decays of varying amounts of energy. Some bubbles expand and then contract, while others spawn endless streams of daughter universes. The scientists presumed that the eternally inflating multiverse would everywhere obey the conservation of energy, the speed of light, thermodynamics, general relativity and quantum mechanics. But the values of the constants appearing in these laws were likely to vary randomly from bubble to bubble.

Paul Steinhardt, a theoretical physicist at Princeton University and one of the early contributors to the theory of eternal inflation, saw the multiverse as a “fatal flaw” in the reasoning he had helped advance, and he remains stridently anti-multiverse today. “Our universe has a simple, natural structure,” he said in September. “The multiverse idea is baroque, unnatural, untestable and, in the end, dangerous to science and society.”

Steinhardt and other critics believe the multiverse hypothesis leads science away from uniquely explaining the properties of nature. When deep questions about matter, space and time have been elegantly answered over the past century through ever more powerful theories, deeming the universe’s remaining unexplained properties “random” feels, to them, like giving up. On the other hand, randomness has sometimes been the answer to scientific questions, as when early astronomers searched in vain for order in the solar system’s haphazard planetary orbits. As inflationary cosmology gains acceptance, more physicists are conceding that a multiverse of random universes might exist, just as there is a cosmos full of star systems arranged by chance and chaos.

“When I heard about eternal inflation in 1986, it made me sick to my stomach,” said John Donoghue, a physicist at the University of Massachusetts, Amherst. “But when I thought about it more, it made sense.”

One for the Multiverse

The multiverse hypothesis gained considerable traction in 1987, when the Nobel laureate Steven Weinberg used it to predict the infinitesimal amount of energy infusing the vacuum of empty space, a number known as the cosmological constant, denoted by the Greek letter Λ (lambda). Vacuum energy is gravitationally repulsive, meaning it causes space-time to stretch apart. Consequently, a universe with a positive value for Λ expands — faster and faster, in fact, as the amount of empty space grows — toward a future as a matter-free void. Universes with negative Λ eventually contract in a “big crunch.”
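A crude numerical sketch shows the two fates. Everything here is in arbitrary units with made-up parameter values; it is a cartoon of the Friedmann acceleration equation, not a calibrated cosmological model.

```python
# Toy scale-factor evolution in arbitrary units (illustrative only).
# Acceleration equation: a'' = (lam/3)*a - rho0/(2*a**2), where matter
# density dilutes as a^-3 (giving the a^-2 force term) and `lam` plays
# the role of the cosmological constant.

def evolve(lam, a0=1.0, v0=1.0, rho0=1.0, dt=1e-3, t_max=20.0):
    """Euler-integrate the scale factor; stop early if it recollapses."""
    a, v, t = a0, v0, 0.0
    while t < t_max and a > 0.05:
        acc = a * (lam / 3.0) - rho0 / (2.0 * a * a)
        a += v * dt
        v += acc * dt
        t += dt
    return a

print(evolve(+1.0))  # positive lam: runaway accelerated expansion
print(evolve(-1.0))  # negative lam: turnaround and a "big crunch"
```

With positive `lam` the vacuum term eventually dominates and the scale factor grows without bound; with negative `lam` the expansion turns around and the integration halts as the toy universe collapses.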

Physicists had not yet measured the value of Λ in our universe in 1987, but the relatively sedate rate of cosmic expansion indicated that its value was close to zero. This flew in the face of quantum mechanical calculations suggesting Λ should be enormous, implying a density of vacuum energy so large it would tear atoms apart. Somehow, it seemed our universe was greatly diluted.

Weinberg turned to a concept called anthropic selection in response to “the continued failure to find a microscopic explanation of the smallness of the cosmological constant,” as he wrote in Physical Review Letters (PRL). He posited that life forms, from which observers of universes are drawn, require the existence of galaxies. The only values of Λ that can be observed are therefore those that allow the universe to expand slowly enough for matter to clump together into galaxies. In his PRL paper, Weinberg reported the maximum possible value of Λ in a universe that has galaxies. It was a multiverse-generated prediction of the most likely density of vacuum energy to be observed, given that observers must exist to observe it.

A decade later, astronomers discovered that the expansion of the cosmos was accelerating at a rate that pegged Λ at 10⁻¹²³ (in units of “Planck energy density”). A value of exactly zero might have implied an unknown symmetry in the laws of quantum mechanics — an explanation without a multiverse. But this absurdly tiny value of the cosmological constant appeared random. And it fell strikingly close to Weinberg’s prediction.

“It was a tremendous success, and very influential,” said Matthew Kleban, a multiverse theorist at New York University. The prediction seemed to show that the multiverse could have explanatory power after all.

Close on the heels of Weinberg’s success, Donoghue and colleagues used the same anthropic approach to calculate the range of possible values for the mass of the Higgs boson. The Higgs doles out mass to other elementary particles, and these interactions dial its mass up or down in a feedback effect. This feedback would be expected to yield a mass for the Higgs that is far larger than its observed value, making its mass appear to have been reduced by accidental cancellations between the effects of all the individual particles. Donoghue’s group argued that this accidentally tiny Higgs was to be expected, given anthropic selection: If the Higgs boson were just five times heavier, then complex, life-engendering elements like carbon could not arise. Thus, a universe with much heavier Higgs particles could never be observed.
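The flavor of that “accidental cancellation” can be seen in a toy integer calculation (my numbers, chosen only to match the rough factor-of-10^17 shortfall quoted earlier):

```python
# Toy fine-tuning arithmetic (illustrative numbers only).
observed_mass = 125                      # Higgs mass, in GeV
expected_mass = 10**17 * observed_mass   # rough scale the feedback suggests

# The observed mass-squared looks like a near-total cancellation between
# a huge bare term and huge quantum corrections of the opposite sign:
bare_m2 = expected_mass**2
corrections_m2 = -(expected_mass**2 - observed_mass**2)

print(bare_m2 + corrections_m2)               # -> 15625, i.e. (125 GeV)^2
print(bare_m2 // (bare_m2 + corrections_m2))  # cancellation to 1 part in 10^34
```

Nothing in the laws as written forces the two large terms to agree to 34 decimal places, which is why the observed value looks like luck, and why anthropic selection was invoked to explain it.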

Until recently, the leading explanation for the smallness of the Higgs mass was a theory called supersymmetry, but the simplest versions of the theory have failed extensive tests at the Large Hadron Collider near Geneva. Although new alternatives have been proposed, many particle physicists who considered the multiverse unscientific just a few years ago are now grudgingly opening up to the idea. “I wish it would go away,” said Nathan Seiberg, a professor of physics at the Institute for Advanced Study in Princeton, N.J., who contributed to supersymmetry in the 1980s. “But you have to face the facts.”

However, even as the impetus for a predictive multiverse theory has increased, researchers have realized that the predictions by Weinberg and others were too naive. Weinberg estimated the largest Λ compatible with the formation of galaxies, but that was before astronomers discovered mini “dwarf galaxies” that could form in universes in which Λ is 1,000 times larger. These more prevalent universes can also contain observers, making our universe seem atypical among observable universes. On the other hand, dwarf galaxies presumably contain fewer observers than full-size ones, and universes with only dwarf galaxies would therefore have lower odds of being observed.

Researchers realized it wasn’t enough to differentiate between observable and unobservable bubbles. To accurately predict the expected properties of our universe, they needed to weight the likelihood of observing certain bubbles according to the number of observers they contained. Enter the measure problem.

Measuring the Multiverse

Guth and other scientists sought a measure to gauge the odds of observing different kinds of universes. This would allow them to make predictions about the assortment of fundamental constants in this universe, all of which should have reasonably high odds of being observed. The scientists’ early attempts involved constructing mathematical models of eternal inflation and calculating the statistical distribution of observable bubbles based on how many of each type arose in a given time interval. But with time serving as the measure, the final tally of universes at the end depended on how the scientists defined time in the first place.

Courtesy of Raphael Bousso

Berkeley physicist Raphael Bousso, 43, extrapolated from the physics of black holes to devise a novel way of measuring the multiverse, one that successfully explains many of our universe’s features.

“People were getting wildly different answers depending on which random cutoff rule they chose,” said Raphael Bousso, a theoretical physicist at the University of California, Berkeley.

Alex Vilenkin, director of the Institute of Cosmology at Tufts University in Medford, Mass., has proposed and discarded several multiverse measures during the last two decades, looking for one that would transcend his arbitrary assumptions. Two years ago, he and Jaume Garriga of the University of Barcelona in Spain proposed a measure in the form of an immortal “watcher” who soars through the multiverse counting events, such as the number of observers. The frequencies of events are then converted to probabilities, thus solving the measure problem. But the proposal assumes the impossible up front: The watcher miraculously survives crunching bubbles, like an avatar in a video game dying and bouncing back to life.

In 2011, Guth and Vitaly Vanchurin, now of the University of Minnesota Duluth, imagined a finite “sample space,” a randomly selected slice of space-time within the infinite multiverse. As the sample space expands, approaching but never reaching infinite size, it cuts through bubble universes encountering events, such as proton formations, star formations or intergalactic wars. The events are logged in a hypothetical databank until the sampling ends. The relative frequency of different events translates into probabilities and thus provides a predictive power. “Anything that can happen will happen, but not with equal probability,” Guth said.
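As a sketch of the counting idea (event names and rates are invented here for illustration; this is not Guth and Vanchurin’s actual model), a finite but growing event log turns raw tallies into stable relative frequencies:

```python
# Illustrative sample-space tally: log events in a finite sample, then
# read off probabilities as relative frequencies. Rates are made up.
import random

random.seed(0)
RATES = {"proton forms": 0.9, "star forms": 0.099, "intergalactic war": 0.001}

def sample_frequencies(n):
    """Log n events and return each event's relative frequency."""
    log = random.choices(list(RATES), weights=list(RATES.values()), k=n)
    return {event: log.count(event) / n for event in RATES}

print(sample_frequencies(100))        # noisy: the sample is small
print(sample_frequencies(1_000_000))  # close to the underlying rates
```

Anything that can happen does happen in the log, but not with equal probability: rare events stay rare as the sample grows, which is what restores predictive power.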

Still, beyond the strangeness of immortal watchers and imaginary databanks, both of these approaches necessitate arbitrary choices about which events should serve as proxies for life, and thus for observations of universes to be counted and converted into probabilities. Protons seem necessary for life; space wars do not — but do observers require stars, or is this too limited a concept of life? With either measure, choices can be made so that the odds stack in favor of our inhabiting a universe like ours. The degree of speculation raises doubts.

The Causal Diamond

Bousso first encountered the measure problem in the 1990s as a graduate student working with Stephen Hawking, the doyen of black hole physics. Black holes prove there is no such thing as an omniscient measurer, because someone inside a black hole’s “event horizon,” beyond which no light can escape, has access to different information and events from someone outside, and vice versa. Bousso and other black hole specialists came to think such a rule “must be more general,” he said, precluding solutions to the measure problem along the lines of the immortal watcher. “Physics is universal, so we’ve got to formulate what an observer can, in principle, measure.”

This insight led Bousso to develop a multiverse measure that removes infinity from the equation altogether. Instead of looking at all of space-time, he homes in on a finite patch of the multiverse called a “causal diamond,” representing the largest swath accessible to a single observer traveling from the beginning of time to the end of time. The finite boundaries of a causal diamond are formed by the intersection of two cones of light, like the dispersing rays from a pair of flashlights pointed toward each other in the dark. One cone points outward from the moment matter was created after a Big Bang — the earliest conceivable birth of an observer — and the other aims backward from the farthest reach of our future horizon, the moment when the causal diamond becomes an empty, timeless void and the observer can no longer access information linking cause to effect.
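In 1+1-dimensional Minkowski space (a simplification of mine, with c = 1), the geometry is easy to state: an event belongs to the causal diamond exactly when it sits inside both cones.

```python
# Causal-diamond membership in 1+1 Minkowski space, units with c = 1.
# Events are (t, x) pairs; the diamond is the overlap of the future
# light cone of `birth` and the past light cone of `death`.

def in_future_cone(apex, event):
    t0, x0 = apex
    t, x = event
    return t >= t0 and abs(x - x0) <= t - t0

def in_past_cone(apex, event):
    t0, x0 = apex
    t, x = event
    return t <= t0 and abs(x - x0) <= t0 - t

def in_causal_diamond(birth, death, event):
    return in_future_cone(birth, event) and in_past_cone(death, event)

birth, death = (0.0, 0.0), (10.0, 0.0)
print(in_causal_diamond(birth, death, (5.0, 3.0)))  # True: inside both cones
print(in_causal_diamond(birth, death, (5.0, 6.0)))  # False: outside the diamond
```

Events outside the diamond can neither have influenced the observer nor be influenced by it, which is why Bousso treats them as physically meaningless for the measure.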

Olena Shmahalo / Quanta Magazine, source: Raphael Bousso, Roni Harnik, Graham Kribs and Gilad Perez

The infinite multiverse can be divided into finite regions called causal diamonds that range from large and rare with many observers (left) to small and common with few observers (right). In this scenario, causal diamonds like ours should be large enough to give rise to many observers but small enough to be relatively common.

Bousso is not interested in what goes on outside the causal diamond, where infinitely variable, endlessly recursive events are unknowable, in the same way that information about what goes on outside a black hole cannot be accessed by the poor soul trapped inside. If one accepts that the finite diamond, “being all anyone can ever measure, is also all there is,” Bousso said, “then there is indeed no longer a measure problem.”

In 2006, Bousso realized that his causal-diamond measure lent itself to an evenhanded way of predicting the expected value of the cosmological constant. Causal diamonds with smaller values of Λ would produce more entropy — a quantity related to disorder, or degradation of energy — and Bousso postulated that entropy could serve as a proxy for complexity and thus for the presence of observers. Unlike other ways of counting observers, entropy can be calculated using trusted thermodynamic equations. With this approach, Bousso said, “comparing universes is no more exotic than comparing pools of water to roomfuls of air.”

Using astrophysical data, Bousso and his collaborators Roni Harnik, Graham Kribs and Gilad Perez calculated the overall rate of entropy production in our universe, which primarily comes from light scattering off cosmic dust. The calculation predicted a statistical range of expected values of Λ. The known value, 10⁻¹²³, rests just left of the median. “We honestly didn’t see it coming,” Bousso said. “It’s really nice, because the prediction is very robust.”

Making Predictions

Bousso and his collaborators’ causal-diamond measure has now racked up a number of successes. It offers a solution to a mystery of cosmology called the “why now?” problem, which asks why we happen to live at a time when the effects of matter and vacuum energy are comparable, so that the expansion of the universe recently switched from slowing down (signifying a matter-dominated epoch) to speeding up (a vacuum energy-dominated epoch). Bousso’s theory suggests it is only natural that we find ourselves at this juncture. The most entropy is produced, and therefore the most observers exist, when universes contain equal parts vacuum energy and matter.

In 2010 Harnik and Bousso used their idea to explain the flatness of the universe and the amount of infrared radiation emitted by cosmic dust. Last year, Bousso and his Berkeley colleague Lawrence Hall reported that observers made of protons and neutrons, like us, will live in universes where the amount of ordinary matter and dark matter are comparable, as is the case here.

“Right now the causal patch looks really good,” Bousso said. “A lot of things work out unexpectedly well, and I do not know of other measures that come anywhere close to reproducing these successes or featuring comparable successes.”

The causal-diamond measure falls short in a few ways, however. It does not gauge the probabilities of universes with negative values of the cosmological constant. And its predictions depend sensitively on assumptions about the early universe, at the inception of the future-pointing light cone. But researchers in the field recognize its promise. By sidestepping the infinities underlying the measure problem, the causal diamond “is an oasis of finitude into which we can sink our teeth,” said Andreas Albrecht, a theoretical physicist at the University of California, Davis, and one of the early architects of inflation.

Kleban, who like Bousso began his career as a black hole specialist, said the idea of a causal patch such as an entropy-producing diamond is “bound to be an ingredient of the final solution to the measure problem.” He, Guth, Vilenkin and many other physicists consider it a powerful and compelling approach, but they continue to work on their own measures of the multiverse. Few consider the problem to be solved.

Every measure involves many assumptions, beyond merely that the multiverse exists. For example, predictions of the expected range of constants like Λ and the Higgs mass always speculate that bubbles tend to have larger constants. Clearly, this is a work in progress.

“The multiverse is regarded either as an open question or off the wall,” Guth said. “But ultimately, if the multiverse does become a standard part of science, it will be on the basis that it’s the most plausible explanation of the fine-tunings that we see in nature.”

Perhaps these multiverse theorists have chosen a Sisyphean task. Perhaps they will never settle the two-headed-cow question. Some researchers are taking a different route to testing the multiverse. Rather than rifle through the infinite possibilities of the equations, they are scanning the finite sky for the ultimate Hail Mary pass — the faint tremor from an ancient bubble collision.

Part two of this series, exploring efforts to detect colliding bubble universes, will appear on Monday, Nov. 10.

Correction: This article was revised on November 4, 2014, removing a sentence that did not fully account for recent progress in multiple-parameter calculations of fundamental constants. It was further revised on January 23, 2015, to credit Andrei Linde as the pioneer of the theory of eternal inflation.

This article was reprinted on Wired.com.

Reader Comments

  • What differentiates fantasy from science? Let us try to answer this question by setting out rules for scientific explanation, especially in cases where experimental proof is difficult, as it is in astronomical research. How should we examine theories in an environment where experimental proof is hard? The simple answer is: by checking that the proposed picture of the universe is logically consistent with the physical laws. It means we cannot use any category whose definition contradicts well-established and repeatedly proven physical laws (“big bang,” “black holes,” “negative energy,” absolute or empty space) if we cannot prove its existence, and still less if we cannot define exactly what we are searching for. That is fantasy, not science! With such categories it is obviously impossible to build a plausible picture of the universe, so nobody can take the explanation seriously, not even writers of novels. One more mathematical rule must be observed: the explanation must be the only possible one. G.Kanev

  • If entropy correlates with the density of observers, could it be that the observer itself is some kind of reflective property of matter? Could physicists arrive at a solution that posits a self-aware observer, one that might be a universal constant tied to the local physical constants in such a way that what appears very unstable might yet be observable?

  • This article definitely made me think! However, it, and the theory behind it, most definitely contain unspoken assumptions that require examination.

    For example, the authors state that, “Proponents of the multiverse idea must show that, among the rare universes that support life, ours is statistically typical…If the properties of this universe still seem atypical even in the habitable subset, then the multiverse explanation fails.” This seems odd: first…couldn’t we just be inhabiting one of the more rare, atypical universes? Why would this refute the theory? And second: we see an immediate assumption of what must be proved, namely that the multiverses exist in the first place. The burden of proof requirement is elided, shifted from the larger claim—that the multiverse exists at all—to a secondary matter: demonstrating the nature of statistical typicality in the (assumed) multiverses.

    Having dodged the burden-of-proof requirement and having simply assumed the multiverse, as a theoretical extrapolation from cosmic inflation—itself highly theoretical, but at least coherent with other evidence, i.e., the Big Bang evidence— physicists and cosmologists can then play math games around the “infinite universes” required by the theory. Note that these universes are not anywhere observed; they are merely theoretical requirements of the initial assumption/un-empirical multiverse postulate. The remainder of the article is largely a report on those math games—by scholastic theologians, in effect, arguing on the variant interpretations of statistical infinities. I suppose in some universes the rules of evidence are utterly reversed and the multiverse is just accepted as an article of religious faith. We could even imagine a sort of Ontological Proof of the multiverse theory: the theory must be true because in some alternate universe—it is true!

    The phlogiston theory comes to mind; i.e., multiverse cosmology advances an unobserved theoretical postulate to explain a phenomenon, namely the fine-tuning problem. Like phlogiston, multiverse theory is perhaps a start in the right direction, but better theories must be found, and a better theory is, indeed, already at hand. Multiplying hypothetical entities to explain phenomena is inelegant and violates Occam’s Razor. If the evidence points strongly to a specific explanation, it is sheer intellectual perversity and stubbornness to reject it. The alternative explanation for the fine tuning, namely theism, is far simpler and economical and seems much better suited to this explanatory task; however it offends our pride and is highly unfashionable in today’s intellectual climate. For those who claim the theism conclusion is “unscientific” one must ask: compared to what? Your slapdash, ad hoc, unempirical mishmash of angels-on-a-pin theory?

    Science deals with the observable physical universe. Philosophy, theology and metaphysics offer higher-order interpretive explanations of the world. Multiverse theory is not science, it is metaphysical speculation masquerading as science. Philosophical theology gives way better metaphysical, ummm, head, so to speak. (Along these lines I’d recommend “God and the Cosmos” by Harry Lee Poe and Jimmy H. Davis.)

    John Doba
    Houston TX

  • Hello,
    Nice article! As a medically retired chemical engineer with 1991 Gulf War illnesses and schizoaffective disorder, I am partial to the concept of entropy! What is also interesting is that other theorists believe thermodynamics is likely the same in any universe.

    In truth, I have never taken a course in relativity, so I am ignorant. As such, the idea of a light cone, something that can be complicated for the uneducated, is daunting to me. For example, I know that time is the fourth dimension, so I assume that cause and effect will be unobservable because time will be zero. As such, our physics doesn’t work, and it would be like dividing by zero in mathematics.

    Is the idea of maximum entropy feasible in light of Dr. Hawking’s recent suggestion that energy might escape from black holes, such that thermal equilibrium is not necessarily reached? Also, would this require a universe (bubble) of a maximum size? If so, would that be a highly unstable universe, given that disorder is at a maximum, causing a potential deflation?


    Have a nice day!

  • It is hard to believe that ‘evolution’ would produce such varietal life forms (in our whimsical imaginations), i.e., ‘two-headed cows’ and so on, as a winner species (due to the chaotic rules of natural selection) wherever vacuum conditions allow for life in ad infinitum universes. Wow, I might be a genius physicist in one of those universes (I am not a physicist in this one, however). However, let’s say that the math does not change at all in any of these universes, because if it did (a structure like the real number line), there could be no connection (the symmetry rules change) and a universe would hence be pinched off from the other universes. The only real difference between these universes would be the vacuum conditions. It looks then to be a marvelous coincidence (our universe having life) that supersymmetry allows for a perfect meeting of the coupling constants (the running beta values of the coupling constants) at the GUT energy level of about 10^16 GeV. What if the vacuum conditions being different in the other universes allows for a pure math computation (as a supercoincidence?) of a dimensionless constant such that it is so effective that it could be considered ‘correct’?

  • Bousso’s entropy is not very anthropic, and a clear scientific definition of an “observer” is not likely to be very anthropic either.

  • Having followed the subject since the days Alan Guth first proposed his particular patch to one of the many problems in cosmology, I would like to refer back to one of the original patches, as an example of the sort of logic that is allowed.
    When it was first realized that all those distant galaxies were redshifted as though they were moving directly away from us, it made us appear to be at the center of the universe and since this didn’t seem appropriate, the premise of expanding relativistic spacetime was proposed to explain that since space itself can be expanding, every point appears as the center. Voila! Problem solved. What they conveniently overlooked is that in order for this expanding space to be relativistic, the speed of light would have to increase, in order for it to remain Constant to this dimension of space. Unfortunately that would effectively negate explaining redshift, since the light would be presumably “energized.”
    Then the argument becomes that light is just being carried along by this expansion and the speed of light is only measured in local frames. Yet the very proof of this expansion is the redshift of that very light!!! So if those galaxies are moving away, such that it will take light longer to cross this distance, that presupposes a stable dimension of space, as measured by the speed of light, against which to measure this expansion, based on the redshift of that very same light. Now all this makes perfect sense on a black board. Much as M.C. Escher’s drawings make perfect sense in two dimensions.
    The fact is that we are at the center of our view of the universe and so an optical explanation for redshift would be a simple solution, but physics does not appear to approve of simple solutions.
    Consider that gravity is “equivalent” to acceleration, but the surface of the planet is not apparently rushing out in all directions to keep us stuck to it. Could it be there is some cosmic effect that is equivalent to recession, as the source of redshift, without those distant galaxies actually flying away?
    The assumption is that after the Big Bang, the rate of redshift would drop off evenly, but what they found is that it drops off quickly, then flattens out as it gets closer to us, hence the need for dark energy to explain this steady rate of expansion/redshift. Yet if we look at it from the other direction, as an optical effect outward from our point of view, which compounds on itself, this curve upward from the relatively stable increase to ever increasing redshift is the hockey stick effect of it going parabolic.
    According to Einstein’s original calculations, gravity would cause space to eventually collapse to a point and so he added the cosmological constant to balance this. Now gravity is the prevalent force in galaxies and the space between galaxies appears to expand. What seems to be overlooked is that if these two effects are in balance, then what is expanding between galaxies, is collapsing into them at an equal rate, resulting in overall flat space. Which would make Einstein’s original fudge extremely prescient and what we have would appear to be a galactic cycling of expanding radiation and contracting mass. Otherwise known as a convection cycle.
    But this would mean the last hundred years have been a bit of a wild goose chase and since no one is going to admit such, it appears we are to be treated to ever more extravagant extrapolations. In economics, this is also known as a bubble.

  • Why is anyone even trying to postulate where existence came from using the scientific method? Nature can be studied this way. What came before or why nature exists are questions for philosophers and theologians to tackle, not physicists. If the universe appears to be fine-tuned, then perhaps that’s exactly what it is. At that point you have to reach outside of observable reality to find an answer, and infinite universes become about as plausible an explanation as does a creator or a computer simulation. Why is there an assumption (because that is all it is) that nature itself cannot be consciously designed? There is nothing scientific about such an assumption, any more than assuming thunder is created by a god with a big hammer.

    I wonder, how would one prove whether or not something is artificial? How do you prove that, say, a birdhouse was a constructed object and not simply random? If we assume it has to be random because there cannot be a creator of it, then would we come up with an infinite birdhouse theory where the vast majority of birdhouses are just piles of junk and a useful, structured one is a rarity given form by pure chance? No…you see the apparent designed structure of the birdhouse and conclude that it is plausible that someone or something designed it so. You still cannot prove it, but it isn’t wrong to consider that possibility.

    The multiverse hypothesis just seems like lazy science to me, pushing off a question into an unobservable realm and letting that particular answer satisfy your assumptions (there is no god, no computer simulation, no supernatural anything because I don’t want there to be), instead of asking the real tough question: why assume that what is observed requires a scientific explanation in the first place? Why is a fine-tuned universe a problem for science?

  • There is one observation and one calculated number which strongly suggest that there must be many separate universes like ours. The observation is the baryon asymmetry of our universe. The calculated number is the derived age of our universe. I leave it to the readers of this site to find the logic which goes into my statements.
    You will end up with something infinitely simpler than this gobbledygook, namely a Continuous Formation of Universes, half of which contain only matter while the other half contain only antimatter. It does not matter where these universes are, nor whether they are organized in a multiverse. They must have formed in the past, which is all that is required. A prediction is that universes will continue to form in the future.
    I tip my hat to Fred Hoyle because this idea is a variant of his continuous creation which shipwrecked on the reef of the Big Bang.

  • With regards to ‘life’ I believe that it is correct that we would not exist if the value of the fine structure constant was slightly larger or smaller.

  • A look at the tools used to delve into the topic of the Universe’s nuts and bolts may lead us to acknowledge that the mathematics is more advanced than the colliders. Such infinitesimally small measured energies for the lambda and the Higgs mass are probably telling us more about our machinery than about the theory.
    10^−123 is an exceedingly fine measure: surely we don’t believe we operate with finesse at such energies?!

    For example, stellar nucleosynthesis was believed to be understood and predictive. Solar neutrinos, however, seem to be implicated in effects that have not yet been explained, like oscillations in radioactive decay rates on Earth that track the Earth’s orbital position relative to the Sun.
    How has that not lit the world of physics on fire? Decay rates were thought to be statistically unvarying: how would neutrinos be affecting decay rates? Do neutrinos penetrate all matter without interacting with it, or not?

    It seems we shouldn’t be too concerned with “anthropic” relations until we’ve explained all aspects of immediately observable effects. Not to dismiss anthropic concepts offhandedly, and hopefully not to have those concepts relied upon when other efforts are called for. Related to the observation that a sufficiently advanced technology will seem like magic to those less capable, something about anthropic pit stops suggests pretension when perhaps humility would be more appropriate: our machines are still crude in comparison to the finesse that will be required to work at these scales.


  • There may well be lots of good reasons for positing a multiverse. But the initial premise (outlined in the first two paragraphs) of this article isn’t one of them.

    The premise that “our incredible luck” in finding ourselves in a universe with the settings to enable intelligent life requires explanation rests on the assumption that if something is unlikely it needs explaining. But not all unlikely events need explaining: all sorts of extremely improbable things happen every day and nobody bats an eyelid.

    It’s only when we have reason to think the unlikely outcome is suspicious or special in some way that we want an explanation. So, for example, let’s say Joe Bloggs wins the lottery; this is unlikely but not something we immediately seek an explanation for. However, if we discover that his wife is head of the Lottery Commission, we might become suspicious and enquire further.

    So, returning to the case of the (our?) universe, its improbable capacity to support intelligent life is just as unlikely as any other particular set of basic parameters it might have had. It only stands in need of explanation if we think that the outcome of intelligent life (i.e. us) is suspicious. But what reason, other than good old-fashioned anthropocentric hubris, do we have for believing this?

  • How about we live in one of the multiverses, and it just so happens it’s the one where multiverses don’t exist?

  • Instead of multiverses, what if it’s all just a big simulation, like a computer game (to use a metaphor that we can understand), where every pixel potentially could be illuminated, but only some of them are stimulated? Sort of like everything exists at once, but only selected parts are illuminated in a constantly but coherently changing illumination/flow of energy available to the perception of the observer? Everything would exist at the same time, but be experienced as separate moments.

  • To follow up on Alice’s comment, a detailed and careful (and for me somewhat surprisingly compelling) version of that argument is:

  • No. There is not a multiverse. There are segments to the universe… before the CMB and after the Great Attractor. In between is us and the rest of our “segment”. The problem we have with perceiving it correctly is the divide between quantum math and relativity math. The reason is that neither is exactly right. We have invented forces, and we have invented dark energy and dark matter. Instead we should envision a Euclidean space filled with particles that are almost spherical but have indents on opposing sides. These spin AND wobble and bounce off each other, transferring kinetic energy and universal “pressure”. Gaps in this “ball bath” are what we perceive as matter. Meaning the universe is inverse to what we imagine it to be. What we’ve come to believe is matter is void. What we think of as dark matter is the physical substance, dark energy is kinetic energy, and forces are the transfer of kinetic energy. The universe is simpler than the one we have imagined. And now we struggle to break the mold of past visionaries to get to the actual truth. Einstein forgot to imagine the impact if the flashlight shining from one train to the other was spun on its axis. Ooops.

  • Weinberg used multiverse theory to predict the cosmological constant?

    Yeah right, not in this universe. That kind of revisionist history reminds me of calling Einstein’s blunder to avoid the Big Bang (and thus a Creation event) actually correct. What Weinberg did was exactly what Hoyle did with nuclear physics… you simply select values in which the universe’s purpose was to create Man.

    Calling that multiverse theory is beyond backwards. Remember, the multiverse hypothesis (not theory) also creates Bruce Wayne as Batman, and any other story you want to make up that is logically consistent. You can literally write anything down right this second and it exists somewhere. This goes down as the moment when the atheists who flocked to Origins to prove their worldview found it designed and created, but instead denied their own results and were laughed out of town by humanity. Embarrassing.
