The one reason that physicists won’t give up on supersymmetry


One of the greatest ideas in all of physics, regardless of whether it turns out to be a true idea that reflects reality or not, is that of supersymmetry, or SUSY for short. The Standard Model of elementary particles was cobbled together over the course of the 20th century, growing from initial ideas and observations about the quantum nature of light and matter. The experimental and observational discovery of subatomic particles — not just protons, neutrons, and electrons but also quarks, neutrinos, muons, plus their antimatter counterparts and more — came alongside developments in quantum field theory, profoundly revolutionizing our conception of existence.

Although it’s now almost 100 years in the past, the positron — the antimatter counterpart of the electron — wasn’t discovered experimentally first; it was instead predicted as a theoretical necessity: a particle required to prevent a pathology that would have given the electron an infinite amount of self-energy. The positron’s discovery was a vindication of that theoretical idea, and it launched the era of quantum field theory in particle physics. In order to avoid a similar pathology with the masses of the Standard Model particles, a new type of symmetry can “protect” them from blowing up to unrealistically large values, and that symmetry is precisely what SUSY, or supersymmetry, is. Here’s why, despite the lack of evidence for its existence, physicists are having a hard time leaving this theoretical idea behind.

If you have two conductors with equal and opposite charges on them, it’s an exercise in classical physics alone to calculate the electric field and its strength at every point in space. In regular (Schrödinger-like) quantum mechanics, we discuss how particles respond to that electric field, but the field itself is not quantized. This was perhaps the biggest flaw in the original formulation of quantum mechanics.

Credit: Geek3/Wikimedia Commons

There’s a joke among physics teachers when we talk about the concept of electric potential energy: that it’s like the neighborhood crack dealer. Why? Because the first one’s free.

It’s true: if you have one and only one point charge — where the electric charge of an object isn’t distributed in three-dimensional space but is rather confined to a single point — you can bring it in, from even an infinite distance away, to absolutely any location you choose, and it doesn’t cost you any energy. However, once you’ve placed that charge down, if you want to bring a second charge in, whether it’s a point or not, whether it’s the same species (positive or negative) as the first charge, whether it comes from a finite distance away or an infinite distance away, etc., it must experience the electric field generated by that first charge and do work against it. In other words, although the first one was free, the second charge (and all subsequent charges) cost energy.

If you assumed that the electron weren’t a point particle, but rather were a sphere-like particle whose electric charge was distributed throughout it, you could calculate how large it would be if the energy from the electron’s electric charge (E) were responsible for the electron’s mass (m) from Einstein’s most famous equation: E = mc². If you were to perform this calculation, you’d find that the electron was about 2.8 femtometers in radius, or more than three times larger than the actual size of a proton. Clearly, this doesn’t match reality, as modern collider experiments have constrained the electron’s size to be more than 10,000 times smaller than this value.
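That back-of-the-envelope estimate is easy to sketch. Here’s a rough version in Python, assuming the electron’s entire rest-mass energy comes from its electrostatic self-energy, with order-one factors that depend on how the charge is distributed ignored:

```python
# Rough estimate: the "classical electron radius," i.e., the size an electron
# would need if its rest-mass energy (E = m*c^2) came entirely from its own
# electrostatic energy, U ~ e^2 / (4*pi*eps0*r). Order-one factors that
# depend on the exact charge distribution are ignored here.
import math

e = 1.602176634e-19      # elementary charge, in coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, in F/m
m_e = 9.1093837015e-31   # electron mass, in kg
c = 2.99792458e8         # speed of light, in m/s

# Set m_e * c^2 = e^2 / (4*pi*eps0*r) and solve for r:
r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius: {r_classical:.2e} m")  # ~2.8e-15 m
```

That’s a few femtometers: larger than a proton, and enormously larger than the experimental upper limit on the electron’s size.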

From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known, but we do understand the Universe from large, cosmic scales down to tiny, subatomic ones. The scale of electrons, quarks, and gluons is the limit to how far we’ve ever probed nature: down to scales of ~10^-19 meters, where these structures remain point-like.

Credit: Magdalena Kowalska/CERN/ISOLDE team

If we move on from this classical picture of reality to a quantum one, where electrons are both point-like particles (when they’re observed to interact) and also wave-like probability cloud distributions (when they’re simply propagating in space), we have to accept that not only are particles like electrons quantum in nature, but that the fields they generate — electric and magnetic fields, for example — must be quantum as well, and also must simultaneously obey the laws of relativity. The first attempt at writing down an equation that treated both particles and fields as quantum and relativistic was the Klein-Gordon equation, derived in 1926, but the equation that also got particle spin correct was the Dirac equation, which came two years later in 1928.

The problem with the Dirac equation, as simple and straightforward as it is for describing the electron, is that there are negative energy solutions that are allowed, mathematically, by the theory. That means, in theory, there is no “lowest-energy state” for the electron, and it can keep transitioning to progressively more negative energy states, emitting energy with each step. In a leap of faith, Dirac hypothesized that some kind of “anti-electron” particle exists to fill those negative energy states: a particle that Dirac originally called a “hole,” which would have a positive, rather than a negative, electric charge. In one fell swoop, the positron was born. Four years later, in 1932, Carl Anderson detected the positron, confirming its existence.

Just as an atom is a positively charged, massive nucleus orbited by one or more electrons, anti-atoms simply flip all of the constituent matter particles for their antimatter counterparts, with positron(s) orbiting the negatively-charged antimatter nucleus. The same energetic possibilities exist for antimatter as matter. First hypothesized in 1928 by Dirac, antimatter (in the form of positrons) was first detected only a few years later in the lab: in 1932.

Credit: Katie Bertsche/Lawrence Berkeley Lab

But now, we were compelled to revisit the idea of the electron’s self-energy. Remember, classically, you’d expect the electron to have a finite size; if it were smaller, then the totality of its electric charge must be compressed into a smaller volume, which implies a greater self-energy and a mass that’s too large to be consistent with what’s been observed. Quantum mechanically, however, the electron must be point-like: the electric charge is concentrated into one location and is zero everywhere else. This would imply that the electron’s total electrostatic energy diverges: it goes to infinity as we take the radius of the electron down towards zero.
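The divergence is easy to see numerically: the electrostatic self-energy scales inversely with the radius, so every factor of 1,000 by which you shrink the charge costs a factor of 1,000 in energy. A rough sketch, again ignoring order-one geometric factors:

```python
# A sketch of why a point-like charge is a problem: the electrostatic
# self-energy of a charge confined to a scale R goes as
# U ~ e^2 / (4*pi*eps0*R), so taking R -> 0 sends the energy to infinity.
import math

e = 1.602176634e-19      # elementary charge, in coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, in F/m
MeV = 1.602176634e-13    # joules per MeV

def self_energy_mev(radius_m):
    """Electrostatic self-energy of charge e confined to radius_m, in MeV."""
    return e**2 / (4 * math.pi * eps0 * radius_m) / MeV

# Shrink the radius by factors of 1,000 and watch the energy blow up:
for r in (1e-15, 1e-18, 1e-21):
    print(f"R = {r:.0e} m  ->  U ~ {self_energy_mev(r):.3g} MeV")
```

At femtometer scales the self-energy is comparable to the electron’s actual ~0.5 MeV rest-mass energy; at smaller scales it wildly overshoots it, and in the point-like limit it diverges.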

Furthermore, because electrons have an intrinsic angular momentum (or spin) to them, they generate magnetic fields as well. Because total energy, in electromagnetism, is a sum of both electric energy and magnetic energy, this means there’s an additional contribution to the electron’s self-energy in the form of magnetic energy. And finally, if the electromagnetic field is real (and quantum), then there are field fluctuations in free space: even where there are no electrons present. This contribution also diverges, and it diverges more severely than the other two forms of energy: electrostatic and magnetic. Making sense of the electron’s mass seemed farther away than ever.

In this diagram, two atoms are brought close together, and (i) they’re initially unpolarized. If one of the atoms (ii) becomes polarized, the adjacent atom will experience the electrostatic forces from the positive and negative components of the near atom (iii), causing it to polarize as well, which results in an attractive Van der Waals force. This polarizing effect can even occur within the quantum vacuum itself: in the absence of actual charged particles.

Credit: Christopher Rowley/Wikimedia Commons

However, the same “fix” that Dirac imposed for his unwanted negative energy states — the existence of an antimatter, opposite-charged counterpart to the electron: a positron — would help “screen” the electron from these unwanted divergences in its self-energy. Many of us view the vacuum of empty space in the context of quantum physics not as completely empty, but as being filled with virtual quantum states: with fluctuations similar to particle-antiparticle pairs that briefly pop in-and-out of existence.

That might be a fine way to view completely empty space under certain conditions, but if you’re talking about the space near a particle like an electron, electrons and positrons will respond in a different manner to the electron’s presence: they will become polarized, with the positive charges preferentially appearing “nearer” to the electron and the negative charges preferentially appearing “farther” from the electron. Just like the medium surrounding an electric charge becomes polarized in classical electromagnetism, the quantum vacuum itself becomes polarized in quantum field theory.

These polarized surroundings effectively “shield” the electron from such divergences, allowing its mass to remain finite and small, without pathologies. The positron, as an equal-and-opposite counterpart to the electron, protects its low mass and prevents its self-energy from blowing up to too-large values.

A visualization of QCD illustrates how particle-antiparticle pairs pop out of the quantum vacuum for very small amounts of time as a consequence of Heisenberg uncertainty. The quantum vacuum is interesting because it demands that empty space itself isn’t so empty, but is filled with all the particles, antiparticles, and fields in various states that are demanded by the quantum field theory that describes our Universe, even though this tool is a visualization only. If the vacuum becomes polarized, such as by having a charged particle nearby, then positive and negative charges will respond differently, effectively “screening” the space nearest to the charge from the charge itself.

Credit: Derek B. Leinweber

By adding antimatter to the Universe — by noting that every quantum particle of matter has, in theory, an equal-mass but opposite-charged quantum counterpart: antimatter — physicists were able to eliminate the pathology of the electron’s self-energy, allowing them to wind up with a coherent description of matter that enabled the electron to have its relatively small, observed mass.

Fast forward to the present, however, and we have a similar puzzle about the masses of the fundamental particles. In modern physics (i.e., physics according to the Standard Model), the way that particles acquire a rest mass is through the Higgs mechanism. The breaking of the Higgs symmetry gives rise to Goldstone bosons, and those bosons mix with (or “get eaten by”) the electroweak bosons: giving the W and Z bosons their masses, leaving a massless photon, and producing a single, massive Higgs boson as a result.

However, the Higgs field also couples to all particles with mass: the quarks, the leptons, and even the Higgs boson itself, via a self-coupling. If we ask the simple question of “what do our theories predict the masses of these Standard Model particles to be?” the answer we get back is shocking: something around the Planck mass, or around ~10²² MeV each. Yet we’ve measured the masses of the particles of the Standard Model. The lowest-mass ones are the neutrinos (at perhaps a millionth of an MeV, or less), electrons come in at about half an MeV, while the highest-mass ones — the W and Z bosons, the Higgs boson, and the top quark — come in at around ~100,000 MeV.
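For concreteness, here’s a quick sketch in Python of how the Planck scale compares to the measured masses, using standard values of the physical constants (the particle masses below are approximate):

```python
# A rough comparison (a sketch, not a derivation): the Planck mass,
# m_P = sqrt(hbar * c / G), expressed as an energy (m_P * c^2) in MeV,
# versus the measured rest-mass energies of some Standard Model particles.
import math

hbar = 1.054571817e-34  # reduced Planck constant, in J*s
c = 2.99792458e8        # speed of light, in m/s
G = 6.67430e-11         # Newton's gravitational constant, in m^3/(kg*s^2)
MeV = 1.602176634e-13   # joules per MeV

planck_energy_mev = math.sqrt(hbar * c**5 / G) / MeV
print(f"Planck scale: ~{planck_energy_mev:.2e} MeV")  # ~1.2e22 MeV

# Approximate measured rest-mass energies, in MeV:
measured = {"electron": 0.511, "Higgs boson": 125_000, "top quark": 173_000}
for name, mass in measured.items():
    print(f"{name}: {mass} MeV -> factor of {planck_energy_mev / mass:.1e} below the Planck scale")
```

Even the heaviest known particle, the top quark, sits some seventeen orders of magnitude below the naive Planck-scale expectation.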

This to-scale diagram shows the relative masses of the quarks and leptons, with neutrinos being the lightest particles and the top quark being the heaviest. No explanation, within the Standard Model alone, can account for these mass values.

Credit: Luis Álvarez-Gaumé/CERN Latin American School of HEP, 2019

What explains this massive discrepancy? Why are the measured masses of the fundamental particles so much lower than our naive expectations would lead us to believe?

This puzzle is commonly known as the hierarchy problem in physics: the observed fact that the rest masses of the fundamental particles are all in a relatively narrow range that’s much, much lower than the value of the Planck mass. If all of the Standard Model particles with mass couple to the Higgs, and the Higgs has a self-coupling (i.e., it couples to itself), then why are the masses of all the particles, including the Higgs boson itself, so low in value, as opposed to “blowing up” to some large, Planck mass-like value?

It’s no secret that this problem is unsolved. But it’s the promise of solving this problem in one fell swoop that makes supersymmetry (or SUSY) so attractive. Just as, generations ago, the proposal of the positron saved us from a pathological self-energy (and a far-too-great mass) for the electron, the proposal of a new kind of symmetry — SUSY — could save us from the idea of a pathologically large mass for the Higgs boson and all Standard Model particles. Just as the contributions from the positron, visualizable as fluctuations in the vacuum of space (along with virtual electrons), could “cancel out” the pathological parts of the electron’s self-energy, the hypothesized SUSY partner particles could cancel out the pathological contribution of the Standard Model particles to the Higgs mass.

In the Standard Model, heavy particles like the top quark contribute to the Higgs mass through loop diagrams like the one shown at the top. If there’s a comparably-massed superpartner particle, as shown in the lower image, it could cancel out that coupling, preventing the mass of the Higgs (and other Standard Model particles) from becoming too large.

Credit: VermillionBird/Wikimedia Commons

For example, in the above diagram, you can see the top quark correction to the Higgs boson mass coming in the upper loop diagram. The Higgs boson and top quark are both heavy particles, so this virtual process should help add to the mass of the Higgs boson. Even worse: the more particles and the more loops you allow your diagrams to have, the greater and greater the expected mass of the Higgs becomes. In the Standard Model, alone, this is truly a pathology.
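To get a feel for the scale of the problem, here’s a toy numerical sketch — the couplings and coefficients are illustrative, not a real loop calculation — of a fermion loop’s quadratically divergent contribution to the Higgs mass-squared, and how an opposite-sign scalar contribution with a SUSY-related coupling would cancel it:

```python
# A toy illustration (not a real QFT computation) of how SUSY tames
# quadratic divergences. At one loop, a fermion with Yukawa coupling y
# shifts the Higgs mass-squared by roughly -(y^2 / (8*pi^2)) * Lambda^2,
# where Lambda is the high-energy cutoff. A scalar superpartner, whose
# coupling SUSY ties to y, contributes the same magnitude, opposite sign.
import math

y_top = 0.94        # rough top-quark Yukawa coupling
Lambda = 1.22e22    # cutoff taken at the Planck scale, in MeV

fermion_loop = -(y_top**2 / (8 * math.pi**2)) * Lambda**2
scalar_loop = +(y_top**2 / (8 * math.pi**2)) * Lambda**2  # SUSY fixes this coupling

print(f"top loop alone:     {fermion_loop:.2e} MeV^2")  # enormous
print(f"with superpartner:  {fermion_loop + scalar_loop:.2e} MeV^2")
```

In reality, supersymmetry must be broken (the superpartners, if they exist, are heavier than their Standard Model counterparts), so the cancellation is only partial, and the leftover piece grows with the superpartner masses — which is precisely why the partners were expected to show up near the TeV scale.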

But now consider this: what if, for every contribution that any of the Standard Model particles makes to the Higgs boson’s mass — or to any mass at all — there were an equal-and-opposite contribution that canceled out that contribution? That’s the big idea behind SUSY: that for every normal particle, there’s a supersymmetric partner particle, one with the same electric charge, color charge, weak isospin, and weak hypercharge, but with a spin that’s off by ½ from its Standard Model counterpart. This means that for every Standard Model fermion, there’s a supersymmetric boson, and for every Standard Model boson, there’s a supersymmetric fermion counterpart.
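That pairing rule can be sketched as a simple mapping; the names below follow the usual SUSY conventions (an “s-” prefix for the scalar partners of fermions, an “-ino” suffix for the fermionic partners of bosons), and every partner’s spin differs from its counterpart’s by exactly ½:

```python
# A sketch of the superpartner rule: each Standard Model particle gets a
# hypothetical partner whose spin differs by 1/2, so fermions (spin 1/2)
# get scalar partners (spin 0) and bosons get fermionic partners (spin 1/2).
from fractions import Fraction

# (particle, spin) -> (superpartner, spin)
susy_partners = {
    ("electron", Fraction(1, 2)): ("selectron", Fraction(0)),
    ("top quark", Fraction(1, 2)): ("stop squark", Fraction(0)),
    ("photon", Fraction(1)): ("photino", Fraction(1, 2)),
    ("gluon", Fraction(1)): ("gluino", Fraction(1, 2)),
    ("Higgs boson", Fraction(0)): ("Higgsino", Fraction(1, 2)),
}

for (particle, spin), (partner, partner_spin) in susy_partners.items():
    assert abs(spin - partner_spin) == Fraction(1, 2)  # always differs by 1/2
    print(f"{particle} (spin {spin}) <-> {partner} (spin {partner_spin})")
```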

The Standard Model particles and their supersymmetric counterparts. Slightly under 50% of these particles have been discovered, and just over 50% have never shown a trace that they exist. Supersymmetry is an idea that hopes to improve on the Standard Model, but it has yet to achieve the all-important step for supplanting the prevailing scientific theory: having its new predictions borne out by experiment.

Credit: Claire David

As long as the masses of these SUSY partners are low enough and in the right mass range, this new ingredient of supersymmetry could effectively “cancel out” the Standard Model pathologies, protecting the masses of these particles in the same way that the positron’s existence protects the electron from having a pathologically large self-energy. The contributions from the top quark, for instance, could be canceled out by its supersymmetric partner particle, known as the stop squark: a boson-like SUSY counterpart of the top quark. Similarly, the self-coupling of the Higgs boson would be canceled by its SUSY partner: a fermion-like SUSY particle known as a Higgsino.

This remains compelling for a simple reason: all other attempts to solve the hierarchy problem in theoretical physics have failed even more miserably than this SUSY-inspired solution. However, the failure of the Large Hadron Collider to turn up even a shred of evidence for any SUSY partner particles — with the entirety of the hierarchy-problem-solving mass range having already been probed — means that we may be compelled to look for more complex solutions to the hierarchy problem. While many contend that there may be no solution to such a problem at all, in that these masses may come to us without any underlying physical explanation, the goal of science is to explain the properties of the Universe, and most physicists are not willing to give that up just yet.

The running of the three fundamental coupling constants (electromagnetic, weak, and strong) with energy, in the Standard Model (left) and with a new set of supersymmetric particles (right) included. The fact that the three lines almost meet is a suggestion that they might meet if new particles or interactions are found beyond the Standard Model, but the running of these constants is perfectly within expectations of the Standard Model alone. The fact that the coupling constants may all meet at a point in supersymmetric (SUSY) scenarios may not mean very much for reality.

Credit: W.-M. Yao et al. (Particle Data Group), J. Phys. (2006)

It’s truly for its power in solving the hierarchy problem that SUSY remains of great interest to physicists. The fact that it leads to a potential dark matter candidate (if you impose R-parity symmetry and the lightest supersymmetric particle is indeed chargeless) is nice, but not a compelling motivation, as there are hundreds of known ways to theoretically generate dark matter particles. The fact that the addition of SUSY particles supports the coupling constants unifying near the hypothetical Grand Unification scale is also nice, but not necessarily reflective of nature nor a sufficient motivation for pursuing this theory.

These three points have traditionally represented the underlying impetus for suspecting that SUSY matters for our Universe, but only the hierarchy problem is truly compelling for eliminating a pathology that otherwise must be reckoned with. The lack of observed superpartner particles at the LHC, all the way up to a few TeV of energy (tens of times greater than the heaviest Standard Model particle), suggests that even if SUSY exists at some higher energy scale, it may not solve the hierarchy problem after all.

Particle physicists have been slow to give up on the idea of SUSY, largely because there are no better alternatives. But with no evidence for these particles existing at the relevant energies for which the theory was first proposed, it may truly be time for theorists to move on. As Feynman once so pointedly stated, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

This article The one reason that physicists won’t give up on supersymmetry is featured on Big Think.
