As Supersymmetry Fails Tests, Physicists Seek New Ideas

No hints of “new physics” beyond the predictions of the Standard Model have turned up in experiments at the Large Hadron Collider, a 17-mile circular tunnel at CERN Laboratory in Switzerland that slams protons together at high energies. 


As a young theorist in Moscow in 1982, Mikhail Shifman became enthralled with an elegant new theory called supersymmetry that attempted to incorporate the known elementary particles into a more complete inventory of the universe.

“My papers from that time really radiate enthusiasm,” said Shifman, now a 63-year-old professor at the University of Minnesota. Over the decades, he and thousands of other physicists developed the supersymmetry hypothesis, confident that experiments would confirm it. “But nature apparently doesn’t want it,” he said. “At least not in its original simple form.”

With the world’s largest supercollider unable to find any of the particles the theory says must exist, Shifman is joining a growing chorus of researchers urging their peers to change course.


As one of the early developers of a popular theory called supersymmetry, Mikhail Shifman has been disappointed to see it fail experimental tests. 

In an essay posted last month on a physics website, Shifman called on his colleagues to abandon the path of “developing contrived baroque-like aesthetically unappealing modifications” of supersymmetry to get around the fact that more straightforward versions of the theory have failed experimental tests. The time has come, he wrote, to “start thinking and developing new ideas.”

But there is little to build on. So far, no hints of “new physics” beyond the Standard Model — the accepted set of equations describing the known elementary particles — have shown up in experiments at the Large Hadron Collider, operated by the European research laboratory CERN outside Geneva, or anywhere else. (The recently discovered Higgs boson was predicted by the Standard Model.) The latest round of proton-smashing experiments, presented last week at the Hadron Collider Physics conference in Kyoto, Japan, ruled out another broad class of supersymmetry models, as well as other theories of “new physics,” by finding nothing unexpected in the rates of several particle decays.

“Of course, it is disappointing,” Shifman said. “We’re not gods. We’re not prophets. In the absence of some guidance from experimental data, how do you guess something about nature?”

Younger particle physicists now face a tough choice: follow the decades-long trail their mentors blazed, adopting ever more contrived versions of supersymmetry, or strike out on their own, without guidance from any intriguing new data.

“It’s a difficult question that most of us are trying not to answer yet,” said Adam Falkowski, a theoretical particle physicist from the University of Paris-Sud in Orsay, France, who is currently working at CERN. In a blog post about last week’s results, Falkowski joked that it was time to start applying for jobs in neuroscience.

“There’s no way you can really call it encouraging,” said Stephen Martin, a high-energy particle physicist at Northern Illinois University who works on supersymmetry, or SUSY for short. “I’m certainly not someone who believes SUSY has to be right; I just can’t think of anything better.”

Supersymmetry has dominated the particle physics landscape for decades, to the exclusion of all but a few alternative theories of physics beyond the Standard Model.

“It’s hard to overstate just how much particle physicists of the past 20 to 30 years have invested in SUSY as a hypothesis, so the failure of the idea is going to have major implications for the field,” said Peter Woit, a particle theorist and mathematician at Columbia University.

The theory is alluring for three primary reasons: It predicts the existence of particles that could constitute “dark matter,” an invisible substance that permeates the outskirts of galaxies. It unifies three of the fundamental forces at high energies. And — by far the biggest motivation for studying supersymmetry — it solves a conundrum in physics known as the hierarchy problem.

The problem arises from the disparity between gravity and the weak nuclear force, which is about 100 million trillion trillion (10^32) times stronger and acts at much smaller scales to mediate interactions inside atomic nuclei. The particles that carry the weak force, called W and Z bosons, derive their masses from the Higgs field, a field of energy saturating all space. But it is unclear why the energy of the Higgs field, and therefore the masses of the W and Z bosons, isn’t far greater. Because other particles are intertwined with the Higgs field, their energies should spill into it during events known as quantum fluctuations. This should quickly drive up the energy of the Higgs field, making the W and Z bosons much more massive and rendering the weak nuclear force about as weak as gravity.
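In schematic terms – an illustrative textbook-style formula, not a result quoted by the article’s sources – a fermion coupling to the Higgs with strength $\lambda_f$ shifts the squared Higgs mass by an amount that grows with the cutoff energy $\Lambda$ up to which the Standard Model is trusted:

```latex
\Delta m_H^2 \;\sim\; -\,\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2
```

With $\Lambda$ near the Planck scale, this correction dwarfs the observed electroweak scale of roughly 100 GeV unless something cancels it.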


Supersymmetry proposes that every particle in the Standard Model, shown at left, has a “superpartner” particle still awaiting discovery. (Illustration: CERN & IES de SAR)

Supersymmetry solves the hierarchy problem by theorizing the existence of a “superpartner” twin for every elementary particle. According to the theory, fermions, which constitute matter, have superpartners that are bosons, which convey forces, and existing bosons have fermion superpartners. Because particles and their superpartners are of opposite types, their energy contributions to the Higgs field have opposite signs: One dials its energy up, the other dials it down. The pair’s contributions cancel out, resulting in no catastrophic effect on the Higgs field. As a bonus, one of the undiscovered superpartners could make up dark matter.
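Schematically – again an illustrative textbook-style sketch with assumed couplings, not a formula from the article – each fermion loop is offset by a pair of scalar superpartner loops of the opposite sign, and supersymmetry fixes the scalar coupling $\lambda_S$ so that the dangerous $\Lambda^2$ pieces cancel:

```latex
\Delta m_H^2 \;\sim\; \frac{2\,\lambda_S}{16\pi^2}\,\Lambda^2 \;-\; \frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2 \;=\; 0
\qquad \text{when } \lambda_S = |\lambda_f|^2 .
```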

“Supersymmetry is such a beautiful structure, and in physics, we allow that kind of beauty and aesthetic quality to guide where we think the truth may be,” said Brian Greene, a theoretical physicist at Columbia University.

Over time, as the superpartners failed to materialize, supersymmetry has grown less beautiful. According to mainstream models, to evade detection, superpartner particles would have to be much heavier than their twins, replacing an exact symmetry with something like a carnival mirror. Physicists have put forward a vast range of ideas for how the symmetry might have broken, spawning myriad versions of supersymmetry.


According to mainstream supersymmetry models, because the superpartners have yet to be detected, they must be much heavier than the known particles, turning what was an exact symmetry into more of a carnival mirror. 

But the breaking of supersymmetry can pose a new problem. “The heavier you have to make some of the superpartners compared to the existing particles, the more that cancellation of their effects doesn’t quite work,” Martin explained.

Most particle physicists in the 1980s thought they would detect superpartners that are only slightly heavier than the known particles. But the Tevatron, the now-retired particle accelerator at Fermilab in Batavia, Ill., found no such evidence. As the Large Hadron Collider probes increasingly higher energies without any sign of supersymmetry particles, some physicists are saying the theory is dead. “I think the LHC was a last gasp,” Woit said.

Today, most of the remaining viable versions of supersymmetry predict superpartners so heavy that they would overpower the effects of their much lighter twins if not for fine-tuned cancellations between the various superpartners. But introducing fine-tuning in order to scale back the damage and solve the hierarchy problem makes some physicists uncomfortable. “This, perhaps, shows that we should take a step back and start thinking anew on the problems for which SUSY-based phenomenology was introduced,” Shifman said.

But some theorists are forging ahead, arguing that, in contrast to the beauty of the original theory, nature could just be an ugly combination of superpartner particles with a soupçon of fine-tuning. “I think it is a mistake to focus on popular versions of supersymmetry,” said Matt Strassler, a particle physicist at Rutgers University. “Popularity contests are not reliable measures of truth.”


Adam Falkowski, a theorist currently working at CERN, said the lack of intriguing data emerging at the LHC will trigger a gradual decline in the number of jobs in particle physics. 

In some of the less popular supersymmetry models, the lightest superpartners are not the ones the Large Hadron Collider experiments have looked for. In others, the superpartners are not heavier than existing particles but merely less stable, making them more difficult to detect. These theories will continue to be tested at the Large Hadron Collider after it is upgraded to full operational power in about two years.

If nothing new turns up — an outcome casually referred to as the “nightmare scenario” — physicists will be left with the same holes that riddled their picture of the universe three decades ago, before supersymmetry neatly plugged them. And, without an even higher-energy collider to test alternative ideas, Falkowski says, the field will undergo a slow decay: “The number of jobs in particle physics will steadily decrease, and particle physicists will die out naturally.”

Greene offers a brighter outlook. “Science is this wonderfully self-correcting enterprise,” he said. “Ideas that are wrong get weeded out in time because they are not fruitful or because they are leading us to dead ends. That happens in a wonderfully internal way. People continue to work on what they find fascinating, and science meanders toward truth.”

Note: This article was updated on Nov. 26, 2012, to clarify the role of the weak nuclear force inside atomic nuclei.



Comments for this entry

  • To suggest that particle physics is on the verge of dying just because an entire generation of theorists has pursued a failed path is grossly inaccurate. Max Planck was credited with saying that progress in physics happens by burying one dead theory at a time. The greatest opportunities for paradigm changes develop from crises similar to the one created by the SUSY fiasco.

  • Prior to the start-up of the LHC, the possibility of finding nothing appreciable beyond the standard model of particle physics was called “The Nightmare Scenario” because that meant most of the theoretical attempts over the last 40 years to explain the shortcomings of the standard model (e.g., string theory and supersymmetry, etc.) were misguided.

    The main problems with the standard model of particle physics are:

    1. The Standard Model is primarily a heuristic model with 26-30 fundamental parameters that have to be “put in by hand”.

    2. The Standard Model did not and cannot predict the masses of the fundamental particles that make up all of the luminous matter that we can observe.

    3. The Standard Model did not and cannot predict the existence of the dark matter that constitutes the overwhelming majority of matter in the cosmos. The Standard Model describes heuristically the “foam on top of the ocean”.

    4. The vacuum energy density crisis clearly suggests a fundamental flaw at the very heart of particle physics. The VED crisis involves the fact that the vacuum energy densities predicted by particle physicists (microcosm) and measured by cosmologists (macrocosm) differ by up to 120 orders of magnitude (roughly 10^70 to 10^120, depending on how one ‘guess-timates’ the particle physics VED).

    5. The conventional Planck mass is highly unnatural, i.e., it bears no relation to any particle observed in nature, and calls into question the foundations of the quantum chromodynamics sector of the Standard Model.

    6. Many of the key particles of the Standard Model have never been directly observed. Rather, their existence is inferred from secondary, or more likely, tertiary decay products. Quantum chromodynamics is entirely built on inference, conjecture and speculation. It is too complex for simple definitive predictions and testing.

    7. The Standard Model cannot include gravitation, which is the most fundamental and well-tested interaction in the cosmos.
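    A rough back-of-envelope version of the vacuum-energy comparison in point 4 can be sketched in a few lines (an illustrative order-of-magnitude calculation; the input scales are common assumptions, not measured constants):

```python
import math

# Assumed order-of-magnitude energy scales, in GeV:
planck_scale = 1.2e19        # Planck scale, a common choice of QFT cutoff
dark_energy_scale = 2.3e-12  # ~2 meV, the observed dark-energy scale

# Naive particle-physics estimate: vacuum energy density ~ (cutoff)^4
rho_qft = planck_scale ** 4            # GeV^4
# Observed cosmological vacuum energy density ~ (dark-energy scale)^4
rho_observed = dark_energy_scale ** 4  # GeV^4

orders = math.log10(rho_qft / rho_observed)
print(f"discrepancy: about 10^{orders:.0f}")  # on the order of 120
```

    Choosing a lower cutoff shrinks the mismatch toward the smaller end of the 10^70–10^120 range cited above.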

    Clearly it is time for a new approach to understanding nature. Almost certainly this will involve expanding the set of fundamental geometrical symmetries of nature. It is now crucial to seriously question the old and poorly tested assumptions of the past like strict reductionism, absolute scale and an absolute value of G for all scales of the discrete hierarchical cosmos.

    Time to study nature, not Platonic abstractions.

    Robert L. Oldershaw
    Discrete Scale Relativity
    Fractal Cosmology

  • Addendum

    “How can physics live up to its true greatness except by a new revolution in outlook which dwarfs all its past revolutions? And when it comes, will we not say to each other, ‘Oh, how beautiful and simple it all is! How could we ever have missed it for so long!’.” John Archibald Wheeler

  • I think that part of the incompatibility that gravity causes is because it crosses dimensional properties. We don’t yet know how to include that in our standard model mathematics. Hopefully, someone will find a way in the near future.

  • Ervin Goldfain (and Max Planck) said it.
    THIS is new and important knowledge and not a “failure”.

  • I would like to make reference to my paper published last year in Physica Scripta 85 (2012), entitled “Particle masses without the Higgs mechanism and supersymmetry”, and to the publications cited there. In popular science talks, even prominent physicists quite often speak about sources of energy; in one talk, someone said black holes are the largest source of energy in the universe. But any talk about sources of energy implies a GOD who violated his own law of mass-energy conservation at the big bang, thereafter in black holes, or otherwise.

    The energy of the universe has to do with the divergent zero-point vacuum energy, and the hope was that supersymmetry could solve this problem with the likewise divergent negative zero-point energy of the supersymmetric Fermi sector in the MSSM; but supersymmetry was not found at the LHC in Geneva. The only reasonable alternative explanation is that there are negative and positive masses in equal amounts, making the total energy in the universe exactly equal to zero.

    Now, suppose that there are large positive and negative mass particles which are gravitationally interacting with each other. Their combined mass is not zero, because of their non-zero gravitational field mass, which for a mass dipole is positive. Assuming that these positive and negative masses are of the order ±10^13 GeV, one finds by a simple calculation that the gravitational field mass of this mass dipole is of the order 100 GeV, about the electroweak energy scale, that is, 11 orders of magnitude smaller. But 10^13 GeV is only a factor of 10^6 smaller than the Planck mass, comparable to the mass ratio of the nuclear to the atomic mass scale. This explains the smallness of the typical elementary particle mass as the small gravitational field mass of a mass dipole made up of a large positive and a large negative mass. And it explains the Higgs mass as the gravitational interaction energy coming from static spin-zero gravitons, much as the electrostatic interaction energy in atomic physics can be explained to come from the static longitudinal photons of QED.

  • Instead of extra particles that support the supersymmetry model, why can’t the standard model particles also have the properties that are required? Answering this question would help me better understand the concept, if I can call it such. Thank you.

  • The reason why this won’t work is that the divergent zero-point vacuum energy – positive for bosons, negative for fermions – can only largely cancel in supersymmetric theories, where for each boson there is a supersymmetric fermion and vice versa. In fact there is one supersymmetric theory, although not realized in nature, which for this reason is completely finite. The problem is that physicists have been carried away by nice mathematics and not guided by experimental facts: no supersymmetric particle has ever been observed in the cosmic radiation or at the LHC. One good example where this rule – stick to experimental facts – was ignored is string theory. The publicity-hungry Lubos Motl, a fanatic believer in string theory, once placed in all his e-mails the motto: “String/M Theory is the language in which GOD wrote the world”. When I wrote him, “Lubos, that is not science but religion in the guise of science”, he seems to have stopped writing this nonsense.

  • There was a time when invention meant to trim the sharp edges off of a square, and voila, the wheel was invented. Of course, you had to think outside the box to come up with new ideas, but it took far less effort to come up with something new. Inventions and new ideas could be created by individuals. As we humans advanced more and more, new ideas got harder and harder to come by. It takes more than one individual to come up with new ideas; more than one person’s effort is required to polish a new idea. Just look at a humble desktop computer: there is no human being on the planet who can make such a thing alone. I think the same argument applies to theoretical physics. New ideas take longer to prove right or wrong, and that’s the nature of progress: we discover more, and as a result we face tougher problems.

  • As Supersymmetry Fails Tests – Physicists Seek New Ideas

    About “Physicists Seek New Ideas” – here is a way to better utilize the data from previous experiments to facilitate theoretical physicists making more and better predictions.

    First stop : Address Oldershaw’s complaints by pushing forward to make new ways for software to analyze and model all collider data from the LHC , Tevatron , and ACME .

    Oldershaw’s complaints below push us forward to find a better way and, spoiler alert, I believe you physicists have already planted the seeds of the next quantum leap forward :-)

    But first, address Oldershaw – The main problems with the standard model of particle physics are:

    1. The Standard Model is primarily a heuristic model with 26-30 fundamental parameters that have to be “put in by hand”.
    ^^ computer simulation models based on actual experimental data should fix this !! See below ^^

    2. The Standard Model did not and cannot predict the masses of the fundamental particles that make up all of the luminous matter that we can observe. ..
    ^^ wow – so IS Oldershaw correct here ?! ^^

    Anyhow, I am so glad you caught yourselves and recognized that precise theoretical predictions should come BEFORE increasing experimental parameters like eV density to arbitrarily high levels just because you can – as that would be the equivalent of Enrico Fermi “not bothering” to understand atomic theory enough to predict a chain reaction, and so not building control rods into his design for that first pile in Chicago .. And the results would have been disastrous: Hello ? Anyone ? Anyone ? Bueller ? Finally – at the risk of being redundant and preachy – these are fundamental forces of nature, a la atomic chain reactions (are we in a local minimum of stability, etc.?), so be sure you can explain the data from prior experiments with confidence before forcing mother nature ahead; in other words, predictions should ALWAYS precede experiments. And when enough predictions fail – do NOT proceed until you have some fresh predictions based on, or at least not directly contradicted by, prior experimental data.

    In other words, when nature experimentally shoots down your theories faster than you can advance predictions from them – pause, reflect, and use computer software models to find a new path to disciplined experiments before moving ahead methodically.

    ^^ I hope Mikhail Shifman reads, modifies and responds to the above ^^

    While Shifman, and especially Greene are inspiring here >>

    I am so glad to hear scientists “fessing up” and moving forward here – as any good physics-psych would say, “the first step to fixing SUSY is SUSY has to admit she has a problem, and SUSY has to really want to change ;-)”

    And, ok, sorry for the cheap shot here, but I think I may have sensed the issue when I told one of my physics profs a year ago that my last thought about string theory some decades ago was that “String theory never predicted anything but the need for another dimension to fix itself .. So not much of a theory” .. In contrast to Einstein’s general relativity predicting the precession of Mercury’s perihelion, etc. … My point here is good theories make predictions, take their licks if they are wrong, and don’t make arbitrarily complex modifications to obscure the facts when experimental data does not support the theory’s predictions (Occam’s razor).

    Ok, so I really like you theoretical physicists … And I must admit I really want that GUT field theory just like you do too but, alas, I’ve been wondering for months if higher energy eV atom smashing is just too uncontrolled to be a controlled scientific experiment … Smashing a watch to pieces and trying to reassemble the parts to figure out the number of jewels in the movement … Is it just me folks or … Errrrm do you see the problem here ?

    But don’t be hatin’ when you should be innovatin’ ; and don’t despair I have at least 2 major ideas about how to revive your search for GUT again here …

    #1) is about more intelligently gathering and analyzing data from the existing LHC collider .

    #2) is about using high precision lasers for more controlled orthogonal/vector controlled experiments
    … And a way to monetize the whole darn thing to boot ( but that comes later ) …

    #3) And, on 3rd thought, I know the Higgs was predicted decades ago and recently confirmed, so why the concern here that “the well has run dry”?

    And I’ve also started to plan out the framework of the software for the mathematical modeling language that most compactly represents models and captures the actual data from ALL of the following DATA sets and to make them all comparable:

    #1) LHC experiment…

    #2) Harvard ACME super PRECISE, super LOW ENERGY LASER experiment data.

    #3) Tevatron historical data ( just realized this is already a great working example, and added this to this post for you :-) )

    So before you place a fie on me :-) .. Please open your mind up and think about this new approach #2) from Harvard, as it seems an excellent way forward to decide the N of the N-dimensional fabric of physics, i.e. whether string theory / M-theory can even POSSIBLY apply to actual experimental data.

    Electron EDM calculation race for precision determines fate of theoretical physics models.

    Specifically ,

    Super precise low cost ACME laser experiments
    Large Hadron Collider (LHC) experiments


    Simultaneously, the ACME experiment, run by a team of fewer than 50 people, built for a few million dollars (and much, much smaller), has created a more precise test of these advanced theories. This experiment hinges on an extremely painstaking and precise method to picture the shape and size of electrons.

    … The electron EDM is a very strong probe of physics BEYOND the standard model.

    … The electric field in ThO has been calculated to be 84 GV/cm, one of the largest known.

    … Therefore, we can perform our field reversal without reversing any external fields, which will give us very powerful rejection of systematic errors. …

    ACME Electron EDM

    One of my major points is that ACME Electron EDM at Harvard has clearly, completely and absolutely nailed the design of particle physics experiments on the critical aspect of Orthogonality, so expect more from them. In other words, Harvard ACME lasers seem MUCH more likely to decide the future of SUSY, String Theory, and quantum coupling than the LHC – that is, UNLESS the LHC gets wise and retrofits/transforms its experiments and/or data based on the principles below; otherwise I feel kinda sorry for them.

    And hey C V I suspect this is also a path to not just theoretical fame but several practical fortunes, but probably not in the manner You think I think ;-) .

    —- Definitions and details

    Define Orthogonality and why it is a good goal for all scientific experiments :

    Example of orthogonal factorial design

    Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information to the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.

    Factorial experiments

    Use of factorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.

    Wikipedia, the free encyclopedia
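    As a concrete illustration of the quoted definition (a minimal sketch with made-up contrast vectors for T = 3 treatments, so T − 1 = 2 orthogonal contrasts):

```python
# Two contrasts across three treatment means:
c1 = [1, -1, 0]   # treatment 1 vs. treatment 2
c2 = [1, 1, -2]   # mean of treatments 1 and 2 vs. treatment 3

def dot(u, v):
    """Dot product of two contrast vectors; zero means orthogonal."""
    return sum(a * b for a, b in zip(u, v))

# A valid contrast has coefficients summing to zero...
assert sum(c1) == 0 and sum(c2) == 0
# ...and orthogonal contrasts have zero dot product, so each one
# extracts independent information from the experiment.
print("orthogonal:", dot(c1, c2) == 0)  # prints: orthogonal: True
```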


    And Winterberg et al are inspiring here.

    Ok, so in a nutshell, my idea is to be certain the mathematical software models constantly evaluate the compactness/efficiency with which they represent the ACTUAL experimental data being created by
    #1) LHC atom smasher
    #2) Harvard ACME lasers
    #3) Tevatron historical data

    Essentially, think of scientific predictions as competing genetic algorithm models and you will agree that the following predictions and data analysis should always be automatically occurring:

    For example, the Tevatron DATA ANALYSIS predicted Higgs Boson Properties — 4 Years AFTER Shutting Down

    Read more :

    The 4 year old Tevatron experimental DATA can exclude Higgs masses above 147 GeV at 95 % confidence level, a region which has already been wiped out last year by the data resulting from more powerful collisions analyzed by the LHC experiments. Per >>

    So, the post experiment computer analysis and human discussion should be facilitated and automated by custom built software to run on CERN computers ; I have plans for that in my file here : v2015-03-17 LHC Data processing via N-dimensional faceted search engine use SOLR to match LHC Data to theory – GUT Prospects.doc

    And I have plenty more but that’s all for now folks until you let me know what you think, and are kind about it :-) .

    Marc Cox
