Standard geometric objects can be described by simple rules — every straight line, for example, is just *y* = *ax* + *b* — and they stand in neat relation to each other: Connect two points to make a line, connect four line segments to make a square, connect six squares to make a cube.

These are not the kinds of objects that concern Scott Sheffield. Sheffield, a professor of mathematics at the Massachusetts Institute of Technology, studies shapes that are constructed by random processes. No two of them are ever exactly alike. Consider the most familiar random shape, the random walk, which shows up everywhere from the movement of financial asset prices to the path of particles in quantum physics. These walks are described as random because no knowledge of the path up to a given point can allow you to predict where it will go next.

Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects had in common.

Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.

“You take the most natural objects — trees, paths, surfaces — and you show they’re all related to each other,” Sheffield said. “And once you have these relationships, you can prove all sorts of new theorems you couldn’t prove before.”

In the coming months, Sheffield and Miller will publish the final part of a three-paper series that for the first time provides a comprehensive view of random two-dimensional surfaces — an achievement not unlike the Euclidean mapping of the plane.

“Scott and Jason have been able to implement natural ideas and not be rolled over by technical details,” said Wendelin Werner, a professor at ETH Zurich and winner of the Fields Medal in 2006 for his work in probability theory and statistical physics. “They have been basically able to push for results that looked out of reach using other approaches.”

**A Random Walk on a Quantum String**

In standard Euclidean geometry, objects of interest include lines, rays, and smooth curves like circles and parabolas. The coordinate values of the points in these shapes follow clear, ordered patterns that can be described by functions. If you know the value of two points on a line, for instance, you know the values of all other points on the line. The same is true for the values of the points on each of the rays in this first image, which begin at a point and radiate outward.
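That determinism is easy to make concrete in a few lines of code — a minimal sketch (the function name is illustrative, not from the article):

```python
def line_through(p, q):
    """Return the function x -> a*x + b for the unique (non-vertical)
    line through points p and q: two points pin down every other point."""
    (x0, y0), (x1, y1) = p, q
    a = (y1 - y0) / (x1 - x0)  # slope
    b = y0 - a * x0            # intercept
    return lambda x: a * x + b

# the line y = 2x + 1, determined entirely by two of its points
f = line_through((0.0, 1.0), (2.0, 5.0))
```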

One way to begin to picture what random two-dimensional geometries look like is to think about airplanes. When an airplane flies a long-distance route, like the route from Tokyo to New York, the pilot flies in a straight line from one city to the other. Yet if you plot the route on a map, the line appears to be curved. The curve is a consequence of mapping a straight line on a sphere (Earth) onto a flat piece of paper.

If Earth were not round, but were instead a more complicated shape, possibly curved in wild and random ways, then an airplane’s trajectory (as shown on a flat two-dimensional map) would appear even more irregular, like the rays in the following images.

Each ray represents the trajectory an airplane would take if it started from the origin and tried to fly as straight as possible over a randomly fluctuating geometric surface. The amount of randomness that characterizes the surface is dialed up in the next images — as the randomness increases, the straight rays wobble and distort, turn into increasingly jagged bolts of lightning, and become nearly incoherent.

Yet incoherent is not the same as incomprehensible. In a random geometry, if you know the location of some points, you can (at best) assign probabilities to the location of subsequent points. And just like a loaded set of dice is still random, but random in a different way than a fair set of dice, it’s possible to have different probability measures for generating the coordinate values of points on random surfaces.

What mathematicians have found — and hope to continue to find — is that certain probability measures on random geometries are special, and tend to arise in many different contexts. It is as though nature has an inclination to generate its random surfaces using a very particular kind of die (one with an uncountably infinite number of sides). Mathematicians like Sheffield and Miller work to understand the properties of these dice (and the “typical” properties of the shapes they produce) just as precisely as mathematicians understand the ordinary sphere.

The first kind of random shape to be understood in this way was the random walk. Conceptually, a one-dimensional random walk is the kind of path you’d get if you repeatedly flipped a coin and walked one way for heads and the other way for tails. In the real world, this type of movement first came to attention in 1827 when the Scottish botanist Robert Brown observed the random movements of pollen grains suspended in water. The seemingly random motion was caused by individual water molecules bumping into each pollen grain. Later, in the 1920s, Norbert Wiener of MIT gave a precise mathematical description of this process, which is called Brownian motion.

Brownian motion is the “scaling limit” of random walks — if you consider a random walk where each step size is very small, and the amount of time between steps is also very small, these random paths look more and more like Brownian motion. It’s the shape to which almost all random walks converge in this limit.
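The diffusive rescaling described above can be seen in a short simulation — a minimal sketch (the function names are mine, not from the article):

```python
import math
import random

def random_walk(n_steps, rng):
    """Simple one-dimensional random walk: each coin flip moves the
    walker +1 (heads) or -1 (tails)."""
    path = [0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

def rescaled_walk(n_steps, rng):
    """Diffusive rescaling: time shrinks by 1/n and space by 1/sqrt(n).
    As n_steps grows, these rescaled paths converge in law to Brownian
    motion on the unit time interval."""
    scale = 1.0 / math.sqrt(n_steps)
    return [x * scale for x in random_walk(n_steps, rng)]

walk = random_walk(1000, random.Random(0))
```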

Two-dimensional random spaces, in contrast, first preoccupied physicists as they tried to understand the structure of the universe.

In string theory, one considers tiny strings that wiggle and evolve in time. Just as the time trajectory of a point can be plotted as a one-dimensional curve, the time trajectory of a string can be understood as a two-dimensional surface. This surface, called a worldsheet, encodes the history of the one-dimensional string as it wriggles through time.

“To make sense of quantum physics for strings,” said Sheffield, “you want to have something like Brownian motion for surfaces.”

For years, physicists have had something like that, at least in part. In the 1980s, physicist Alexander Polyakov, who’s now at Princeton University, came up with a way of describing these surfaces that came to be called Liouville quantum gravity (LQG). It provided an incomplete but still useful view of random two-dimensional surfaces. In particular, it gave physicists a way of defining a surface’s angles so that they could calculate the surface area.

In parallel, another model, called the Brownian map, provided a different way to study random two-dimensional surfaces. Where LQG facilitates calculations about area, the Brownian map has a structure that allows researchers to calculate distances between points. Together, the Brownian map and LQG gave physicists and mathematicians two complementary perspectives on what they hoped were fundamentally the same object. But they couldn’t prove that LQG and the Brownian map were in fact compatible with each other.

“It was this weird situation where there were two models for what you’d call the most canonical random surface, two competing random surface models, that came with different information associated with them,” said Sheffield.

Beginning in 2013, Sheffield and Miller set out to prove that these two models described fundamentally the same thing.

**The Problem With Random Growth**

Sheffield and Miller began collaborating thanks to a kind of dare. As a graduate student at Stanford in the early 2000s, Sheffield worked under Amir Dembo, a probability theorist. In his dissertation, Sheffield formulated a problem having to do with finding order in a complicated set of surfaces. He posed the question as a thought exercise as much as anything else.

“I thought this would be a problem that would be very hard and take 200 pages to solve and probably nobody would ever do it,” Sheffield said.

But along came Miller. In 2006, a few years after Sheffield had graduated, Miller enrolled at Stanford and also started studying under Dembo, who assigned him to work on Sheffield’s problem as a way of getting to know random processes. “Jason managed to solve this, I was impressed, we started working on some things together, and eventually we had a chance to hire him at MIT as a postdoc,” Sheffield said.

In order to show that LQG and the Brownian map were equivalent models of a random two-dimensional surface, Sheffield and Miller adopted an approach that was simple enough conceptually. They decided to see if they could invent a way to measure distance on LQG surfaces and then show that this new distance measurement was the same as the distance measurement that came packaged with the Brownian map.

To do this, Sheffield and Miller thought about devising a mathematical ruler that could be used to measure distance on LQG surfaces. Yet they immediately realized that ordinary rulers would not fit nicely into these random surfaces — the space is so wild that one cannot move a straight object around without the object getting torn apart.

The duo forgot about rulers. Instead, they tried to reinterpret the distance question as a question about growth. To see how this works, imagine a bacterial colony growing on some surface. At first it occupies a single point, but as time goes on it expands in all directions. If you wanted to measure the distance between two points, one (seemingly roundabout) way of doing that would be to start a bacterial colony at one point and measure how much time it took the colony to encompass the other point. Sheffield said that the trick is to somehow “describe this process of gradually growing a ball.”

It’s easy to describe how a ball grows in the ordinary plane, where all points are known and fixed and growth is deterministic. Random growth is far harder to describe and has long vexed mathematicians. Yet as Sheffield and Miller were soon to learn, “[random growth] becomes easier to understand on a random surface than on a smooth surface,” said Sheffield. The randomness in the growth model speaks, in a sense, the same language as the randomness on the surface on which the growth model proceeds. “You add a crazy growth model on a crazy surface, but somehow in some ways it actually makes your life better,” he said.

The following images show a specific random growth model, the Eden model, which describes the random growth of bacterial colonies. The colonies grow through the addition of randomly placed clusters along their boundaries. At any given point in time, it’s impossible to know for sure where on the boundary the next cluster will appear. In these images, Miller and Sheffield show how Eden growth proceeds over a random two-dimensional surface.
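A minimal sketch of one standard variant of the Eden model, in which each new cell is chosen uniformly at random among the empty cells touching the cluster (all names here are mine, and this is an illustration rather than Miller and Sheffield's code):

```python
import random

def eden_growth(n_cells, rng):
    """Eden model on the square lattice: starting from one occupied cell,
    repeatedly occupy a uniformly random empty cell adjacent to the
    cluster.  birth_time records when each cell joined the colony, i.e.
    the information behind the color-coded growth rings."""
    occupied = {(0, 0)}
    birth_time = {(0, 0): 0}
    boundary = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    for t in range(1, n_cells + 1):
        cell = rng.choice(sorted(boundary))  # sorted only for reproducibility
        boundary.discard(cell)
        occupied.add(cell)
        birth_time[cell] = t
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in occupied:
                boundary.add(nb)
    return occupied, birth_time

cluster, times = eden_growth(200, random.Random(42))
```

Because each new cell must touch the existing colony, the cluster stays connected at every step, mirroring how the colony "encompasses" points as it spreads.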

The first image shows Eden growth on a fairly flat — that is, not especially random — LQG surface. The growth proceeds in an orderly way, forming nearly concentric circles that have been color-coded to indicate the time at which growth occurs at different points on the surface.

In subsequent images, Sheffield and Miller illustrate growth on surfaces of increasingly greater randomness. The amount of randomness in the function that produces the surfaces is controlled by a constant, gamma. As gamma increases, the surface gets rougher — with higher peaks and lower valleys — and random growth on that surface similarly takes on a less orderly form. In the previous image, gamma is 0.25. In the next image, gamma is set to 1.25, introducing five times as much randomness into the construction of the surface. Eden growth across this uncertain surface is similarly distorted.
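The role of gamma can be sketched numerically. The snippet below uses a crude truncated random Fourier series as a stand-in for the Gaussian free field that underlies LQG surfaces — an illustration of how gamma exaggerates peaks and valleys, not the construction Sheffield and Miller actually use:

```python
import math
import random

def rough_surface(n, gamma, rng, modes=6):
    """Sample an n-by-n height field h from a truncated random Fourier
    series with 1/|k| mode amplitudes (a rough stand-in for the Gaussian
    free field), then form the density exp(gamma * h) that weights area
    on the surface.  Larger gamma -> wilder peaks and deeper valleys."""
    coeffs = {(j, k): (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
              for j in range(1, modes + 1) for k in range(1, modes + 1)}
    h = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for (j, k), (a, b) in coeffs.items():
                amp = 1.0 / math.sqrt(j * j + k * k)
                s += amp * a * math.sin(math.pi * j * (x + 0.5) / n) \
                           * math.sin(math.pi * k * (y + 0.5) / n)
                s += amp * b * math.cos(math.pi * j * (x + 0.5) / n) \
                           * math.cos(math.pi * k * (y + 0.5) / n)
            h[x][y] = s
    density = [[math.exp(gamma * v) for v in row] for row in h]
    return h, density
```

With the same underlying field, raising gamma from 0.25 to 1.25 multiplies the log of the peak-to-valley ratio of the density by five, which is the sense in which the surface becomes rougher.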

When gamma is set to the square root of eight-thirds (approximately 1.63), LQG surfaces fluctuate even more dramatically. They also take on a roughness that matches the roughness of the Brownian map, which allows for more direct comparisons between these two models of a random geometric surface.

Random growth on such a rough surface proceeds in a very irregular way. Describing it mathematically is like trying to anticipate minute pressure fluctuations in a hurricane. Yet Sheffield and Miller realized that they needed to figure out how to model Eden growth on very random LQG surfaces in order to establish a distance structure equivalent to the one on the (very random) Brownian map.

“Figuring out how to mathematically make [random growth] rigorous is a huge stumbling block,” said Sheffield, noting that Martin Hairer of the University of Warwick won the Fields Medal in 2014 for work that overcame just these kinds of obstacles. “You always need some kind of amazing clever trick to do it.”

**Random Exploration**

Sheffield and Miller’s clever trick is based on a special type of random one-dimensional curve that is similar to the random walk except that it never crosses itself. Physicists had encountered these kinds of curves for a long time in situations where, for instance, they were studying the boundary between clusters of particles with positive and negative spin (the boundary line between the clusters of particles is a one-dimensional path that never crosses itself and takes shape randomly). They knew these kinds of random, noncrossing paths occurred in nature, just as Robert Brown had observed that random crossing paths occurred in nature, but they didn’t know how to think about them in any kind of precise way. In 1999 Oded Schramm, who at the time was at Microsoft Research in Redmond, Washington, introduced the SLE curve (for Schramm-Loewner evolution) as the canonical noncrossing random curve.

Schramm’s work on SLE curves was a landmark in the study of random objects. It’s widely acknowledged that Schramm, who died in a hiking accident in 2008, would have won the Fields Medal had he been a few weeks younger at the time he’d published his results. (The Fields Medal can be given only to mathematicians who are not yet 40.) As it was, two people who worked with him built on his work and went on to win the prize: Wendelin Werner in 2006 and Stanislav Smirnov in 2010. More fundamentally, the discovery of SLE curves made it possible to prove many other things about random objects.

“As a result of Schramm’s work, there were a lot of things in physics they’d known to be true in their physics way that suddenly entered the realm of things we could prove mathematically,” said Sheffield, who was a friend and collaborator of Schramm’s.

For Miller and Sheffield, SLE curves turned out to be valuable in an unexpected way. In order to measure distance on LQG surfaces, and thus show that LQG surfaces and the Brownian map were the same, they needed to find some way to model random growth on a random surface. SLE proved to be the way.

“The ‘aha’ moment was [when we realized] you can construct [random growth] using SLEs and that there is a connection between SLEs and LQG,” said Miller.

SLE curves come with a constant, kappa, which plays a similar role to the one gamma plays for LQG surfaces. Where gamma describes the roughness of an LQG surface, kappa describes the “windiness” of SLE curves. When kappa is low, the curves look like straight lines. As kappa increases, more randomness is introduced into the function that constructs the curves and the curves turn more unruly, while obeying the rule that they can bounce off of, but never cross, themselves. Here is an SLE curve with kappa equal to 0.5, followed by an SLE curve with kappa equal to 3.
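A rough numerical sketch of how such curves are generated, using the standard discretization of the Loewner evolution (this is illustrative code with simplified square-root branch handling, not code from the papers):

```python
import cmath
import math
import random

def sle_trace(kappa, n_steps, dt, rng):
    """Approximate an SLE(kappa) curve in the upper half-plane.
    The Loewner equation dg_t/dt = 2 / (g_t - W_t) is driven by
    W_t = sqrt(kappa) * (Brownian motion); each time step contributes a
    conformal slit map, and the curve point gamma(t_n) is recovered by
    composing the inverse maps and evaluating at the driving value."""
    # sample the driving function on the time grid
    W = [0.0]
    for _ in range(n_steps):
        W.append(W[-1] + math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0))

    def inverse_slit(z, w):
        # inverse of the incremental map g(z) = w + sqrt((z-w)**2 + 4*dt);
        # pick the branch that keeps the image in the upper half-plane
        s = cmath.sqrt((z - w) ** 2 - 4 * dt)
        if s.imag < 0 or (s.imag == 0 and (z - w).real < 0):
            s = -s
        return w + s

    trace = [0j]
    for n in range(1, n_steps + 1):
        z = complex(W[n], 0.0)
        for k in range(n, 0, -1):
            z = inverse_slit(z, W[k - 1])
        trace.append(z)
    return trace
```

A sanity check: with kappa = 0 the driving function is constant, so the "curve" is a straight vertical segment, gamma(t) = 2i*sqrt(t) — the low-kappa, nearly straight regime described above.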

Sheffield and Miller noticed that when they dialed the value of kappa to 6 and gamma up to the square root of eight-thirds, an SLE curve drawn on the random surface followed a kind of exploration process. Thanks to work by Schramm and by Smirnov, Sheffield and Miller knew that when kappa equals 6, SLE curves follow the trajectory of a kind of “blind explorer” who marks her path by constructing a trail as she goes. She moves as randomly as possible except that whenever she bumps into a piece of the path she has already followed, she turns away from that piece to avoid crossing her own path or getting stuck in a dead end.

“[The explorer] finds that each time her path hits itself, it cuts off a little piece of land that is completely surrounded by the path and can never be visited again,” said Sheffield.

Sheffield and Miller then considered a bacterial growth model, the Eden model, that had a similar effect as it advanced across a random surface: It grew in a way that “pinched off” a plot of terrain that, afterward, it never visited again. The plots of terrain cut off by the growing bacteria colony looked exactly the same as the plots of terrain cut off by the blind explorer. Moreover, the information possessed by a blind explorer at any time about the outer unexplored region of the random surface was exactly the same as the information possessed by a bacterial colony. The only difference between the two was that while the bacterial colony grew from all points on its outer boundary at once, the blind explorer’s SLE path could grow only from the tip.

In a paper posted online in 2013, Sheffield and Miller imagined what would happen if, every few minutes, the blind explorer were magically transported to a random new location on the boundary of the territory she had already visited. By moving all around the boundary, she would be effectively growing her path from all boundary points at once, much like the bacterial colony. Thus they were able to take something they could understand — how an SLE curve proceeds on a random surface — and show that with some special configuring, the curve’s evolution exactly described a process they hadn’t been able to understand, random growth. “There’s something special about the relationship between SLE and growth,” said Sheffield. “That was kind of the miracle that made everything possible.”

The distance structure imposed on LQG surfaces through the precise understanding of how random growth behaves on those surfaces exactly matched the distance structure on the Brownian map. As a result, Sheffield and Miller merged two distinct models of random two-dimensional shapes into one coherent, mathematically understood fundamental object.

**Turning Randomness Into a Tool**

Sheffield and Miller have already posted the first two papers in their proof of the equivalence between LQG and the Brownian map on the scientific preprint site arxiv.org; they intend to post the third and final paper later this summer. The work turned on the ability to reason across different random shapes and processes — to see how random noncrossing curves, random growth, and random two-dimensional surfaces relate to one another. It’s an example of the increasingly sophisticated results that are possible in the study of random geometry.

“It’s like you’re in a mountain with three different caves. One has iron, one has gold, one has copper — suddenly you find a way to link all three of these caves together,” said Sheffield. “Now you have all these different elements you can build things with and can combine them to produce all sorts of things you couldn’t build before.”

Many open questions remain, including determining whether the relationship between SLE curves, random growth models, and distance measurements holds up in less-rough versions of LQG surfaces than the one used in the current paper. In practical terms, the results by Sheffield and Miller can be used to describe the random growth of real phenomena like snowflakes, mineral deposits, and dendrites in caves, but only when that growth takes place in the imagined world of random surfaces. It remains to be seen whether their methods can be applied to ordinary Euclidean space, like the space we live in.

*This article was reprinted on Wired.com.*

If the two-dimensional surfaces had a property like mass, how would mass grow as the surface did?

So… 1.63 huh?

The sqrt(8/3), or 1.63, is quite close to 1.618, the golden ratio, which emerges in many fractal structures as it is the most irrational number, with all 1s in its continued fraction. Can there be a link?

What about if instead of imagining the blind explorer being magically transported to a random new location on the boundary of the territory she had already visited, you imagine at each decision point virtual "copy" explorers branching off in different directions (kind of like virtual particles in quantum physics)?

I cannot help but feel that there must be some fundamental connection between the growth processes discussed in this article, and those associated with the Tracy-Widom distribution.

Wait. How is this not just fractals?

It might be interesting to somehow introduce the "ant" algorithm into the blind explorer SLE trail so that instead of avoiding a prior path the explorer selects according to a prior "odorant" deposited on the path.

Can this discovery be applied to the stock market?

The variation of pricing over time is like a drunk man walking over a random surface.

If so, we are all going to be millionaires.

"…She moves as randomly as possible except that whenever she bumps into a piece of the path she has already followed, she turns away from that piece to avoid crossing her own path or getting stuck in a dead end."

As stated, as she encounters her old path, she cuts off a piece of the plane, but she cannot know whether to turn one way, "into" the cut-off zone, or the other way and remain outside of it. If she turns into the zone she has become confined to that space and can never escape. I cannot see that the angle between the paths at the encounter point cannot be used to make the decision. This must impose a statistically calculable maximum size to the area she can blindly explore, no?

What are the implications of this, if any?

Obviously, I meant to write above that "I cannot see that the angle between the paths at the encounter point CAN be used to make the decision." Sorry for any confusion.

Ray writes that "she cannot know whether to turn one way, into the cut-off zone, or the other way and remain outside of it".

This is an excellent point. There are two ways to deal with this: first, you can just ASSUME that she has some magical way to know which of the two choices will keep her on the outside. (On a finite surface, you can say you have some fixed target point, and the "outside" is defined to be the piece that contains the target.) This is the most straightforward approach, and is what is tacitly assumed in this article.

Second, you can imagine that the explorer divides into two copies of herself, and one copy explores the inside piece (from which it can't escape) and the other explores the outside piece. If you take the second approach and allow repeated branching, you obtain a type of randomized depth first search tree, which is actually important for reasons of its own. If you really want a lot of technical details, here is an example of a paper on "exploration trees" that makes use of the second approach.

http://arxiv.org/abs/math/0609167

Almost sounds like they took the logistic difference equation, turned it into a plane, and adjusted the constant to see how that not only changed the surface but how it facilitated random growth.

Ray's problem solved:

Imagine that the boundary has left and right sides colored. If you encounter a left side, keep it on your left (turn right). The edges facing into a closed region must all be right or left.

Or equivalently, that the path has a direction arrow. When encountering the path, turn in a way opposite the direction arrow. If you follow the direction arrow, you're trapped in a spiral.

A physical example would be a marker that lays down a solid line with one fuzzy fringe.

This isn't fractals, because it's non-deterministic. Fractals have complex paths through space because of repeating processes that proceed at higher and higher levels of accuracy. Chaotic processes are one kind of process that can do that, and they produce fractal patterns, by amplification of initial conditions or amplification of their own feedback, but in such a way as to preserve the information from the last amplification. You give them a starting point and they work on that in greater and greater detail.

The processes talked about in this article are creating complexity through new information. They aren't starting with a seed and elaborating on it from there, but like a computer sampling atmospheric noise, they are continuously producing unexpected, or at least, unpredictable events.

They could end up being approximately fractal, or not end up being fractal at all; you can have a system that looks "rough", in the sense of having complicated curves and squiggly lines, but it might not have the "self-similarity", the regular patterns between different parts of the shape or different ways of looking at it, that distinguish a tree with its branches and twigs from a random paint splatter. In the case of an unknown shape, I'm not sure how easy it would be by eye to determine if it was randomly generated or fractal, but there are mathematical measures people use to make some estimate. (Also, chaotic processes can create patterns practically indistinguishable from randomness, so this kind of maths can be useful for categorising their behaviour.)

You can think about it as if there are a few different guesses you can make about the kinds of maths something follows; you might look at it in terms of its frequencies, as if it was a set of layered repeating functions like a complex steady sound, or you could look at it in terms of stable points, if it tends to return to certain places, like orbiting planets. You could look at the ways it repeats itself, or how difficult it is to determine its previous behaviour from its current behaviour, and the distribution of where it usually appears etc.

Each of these is a way of simplifying a complex rough shape down to something comprehensible, and each of them might miss out more or less of its details. A tree is not actually a fractal, just like it is not actually a linear pole or a series of random paths towards the sky, but depending on whether you're dealing with a birch tree or a conifer or an oak, each of those descriptions might be more or less accurate.

So if someone tells you everything is fractals, treat them with the same suspicion you would if they said that everything in the universe goes in circles, there's more to rough things than just that.

Anyway, here's what I find surprising about this result; I'm not a mathematician, so this is going to be quite a superficial thing. But what surprises me is that it's like they started with a random process, created a generalised picture of it, then went back to randomness in order to define it. You'd think that that would be going backwards in terms of getting closer to a solution, so it's cool. It's also quite nice to think about how that random growth model would tend eventually towards circles on a flat plane; each area in the vicinity of the shape will be more likely to be filled the more concave the perimeter is in its vicinity, with the extreme example being a little loop of perimeter around a single square, and so there will be a tendency for the flatter sections or dipping-in sections to fill out before the extreme edges. I have a feeling if you kept shrinking the squares of the growth model infinitesimally the effect could end up pretty similar to a uniformly growing circle.

Can this be applied to prime numbers? Their growth is somewhat random, and is similar to bacterial growth. Also the non-crossing path seems like a good tool if you were to look at previous primes/prime factors as a path already crossed.

Maybe they can tie this with http://phys.org/news/2016-07-quantum-bounds.html

It's interesting that it seems to lead right into complexity as dark matter for the expansion, yet in an aversion of loops as ever growing completeness it does not become isomorphic to a holonic boundary, unless it is infinitely larger than the surrealistic. And yet math is still there.

Without the randomness the strings seem equivalent to branes, yet the indiscernible orthogonal is as frustrating as undecidability. Perhaps closed time like curves are only useful when time is dichotomized and so classical systems emulations are axiomatically limited, but economically feasible. How strange that the objective is vague.

I can't even say if causal triangulation would help here. The method does not seem to need consistency as an internally reflected parameter. Absolutely stunning.

Even without self return I wonder if it self compacts. As though nesting occurs. It probably depends on if it is continuously versus discrete random.

I'm an idiot.