Look, there is a crack in the Standard Model! April 7, 2021
Posted by apetrov in Uncategorized. Tags: muon g-2
2 comments
This post is about the result of the Muon g-2 experiment announced today at Fermilab. It has an admittedly bad title: the Standard Model has not cracked in any way; it is still a correct theory that decently describes the interactions of the known elementary particles at the energies we have checked. Yet maybe we have finally found an observable that is not completely described by the Standard Model. Its theoretically computed value needs an additional component to agree with the newly reported experimental value, and this component might well be New Physics! What is this observable?
This observable is the value of the anomalous magnetic moment of the muon. The muon, an elementary particle (a lepton), is a close cousin of the electron. It has very similar properties to the electron, but is about 200 times heavier and is unstable — it lives for only about two microseconds. We don’t yet know why Nature chose to create two similar copies of the electron, the muon and the tau lepton. But we can study their properties to find out.
Just like the electron, the muon has spin, which makes it susceptible to a magnetic field; the strength of that response is characterized by its magnetic moment. The magnetic moment tells us how the muon reacts to the field’s presence: think of a compass needle as a classical analogy. Over a century ago, the brilliant physicist Paul Dirac predicted the value of the electron’s magnetic moment, and his prediction applies directly to the muon as well. It involves a parameter he called g, from the gyromagnetic ratio, or the g-factor. Dirac’s prediction was that, for the electron (and the muon), it should be exactly g=2. This was one of the predictions that allowed experimentalists to test the validity of Dirac’s theory, which eventually led to its triumph.
With the further development of quantum field theory, it was realized that g is not exactly two. Because of virtual particles, the photon of the magnetic field probing the muon can hit those virtual particles instead of the muon itself, changing the value of the g-factor. Now, dealing with virtual particles can be tricky in theoretical computations, as their effects lead to unphysical infinities that need to be absorbed into the definitions of the muon’s mass, charge, and wave function. But the leading effect of such particles — assuming only the Standard Model particles — turns out to be finite! Julian Schwinger showed that in his 1948 paper. The result was so influential at the time that it is literally engraved on his tombstone! It paved the way to computing the quantum radiative corrections to the muon’s magnetic moment. Since such radiative corrections change the magnetic moment, they lead to a deviation from Dirac’s prediction and to a non-zero value of a = (g-2)/2, which is conventionally referred to as the anomalous magnetic moment. This is the quantity that the Muon g-2 collaboration measured so precisely.
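If you want to see how big Schwinger’s leading correction is, it is just a = α/(2π). Here is a minimal sketch in Python, with the value of the fine-structure constant typed in by hand (any reasonably precise value will do for this estimate):

```python
import math

# Fine-structure constant (CODATA value; the exact digits do not matter for this estimate)
alpha = 1 / 137.035999084

# Schwinger's 1948 one-loop result: a = (g - 2) / 2 = alpha / (2 * pi)
a_one_loop = alpha / (2 * math.pi)

print(f"a (one loop) = {a_one_loop:.8f}")   # ~0.00116141
```

Already this single term accounts for the bulk of the full value quoted below; all the hard work (and the possible New Physics) hides in the remaining digits.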
Why is it interesting? The thing is that among the known virtual particles there could also be new, unknown particles. If those particles interact with the photons, they could also affect the numerical value of the anomalous magnetic moment. So the idea is simple: compute it with as much precision as possible and then compare it to the measurement that is done with the best precision possible. This is precisely what was done.
Easily said but not so easily done. Precise predictions of the anomalous magnetic moment involve computations of thousands of Feynman diagrams and the evaluation of contributions that cannot be computed by expanding in some small quantity (aka non-perturbative effects). There are many theoretical methods used to compute those, including numerical computations in lattice QCD. But there is now an agreement among theorists on the anomalous magnetic moment of the muon: a = 0.00116591810(43) (see here for the paper). This number is known with astonishing precision; the bracketed digits give the uncertainty in the last digits.
The experimental analysis is incredibly hard. Since muons decay, measuring their properties is not trivial. Muons are produced in the decays of other particles, called pions, which are created at Fermilab by smashing accelerated protons into targets. Once created, the muons are directed into a storage ring, where they circulate in a magnetic field and eventually decay, giving out their spin information. The storage ring contains about 10,000 muons at a time going around the ring. To make the measurement, it is important to know the magnetic field in which those muons are moving with incredible precision. There are also electric fields that keep the muons focused in the ring; their effect is carefully removed by choosing how fast the muons fly. If all those (and other) effects were not accounted for, they would affect the result of the measurement! Their combined effect is usually referred to as systematic uncertainties. Most of the work done by the Muon g-2 collaboration at Fermilab went into reducing such effects, which eventually brought the systematic uncertainties down to an acceptable level.
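To get a feeling for what the ring actually tracks, here is a back-of-the-envelope estimate of the rate at which the muon spin turns relative to its momentum, ω_a ≈ a (e/m_μ) B. The storage-ring field value below (about 1.45 tesla) is a typical published number assumed here for illustration, not something taken from this post:

```python
import math

e    = 1.602176634e-19   # elementary charge, C
m_mu = 1.883531627e-28   # muon mass, kg
a    = 0.00116592        # anomalous magnetic moment (rounded)
B    = 1.45              # assumed storage-ring magnetic field, tesla (illustrative)

# Anomalous precession: the spin rotates relative to the momentum at
# omega_a = a * (e / m_mu) * B   (ignoring the electric-field correction mentioned above)
omega_a = a * (e / m_mu) * B       # rad/s
f_a = omega_a / (2 * math.pi)      # Hz

print(f"f_a ~ {f_a / 1e3:.0f} kHz")   # a few hundred kHz
```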
And here is the result (drum roll): the anomalous magnetic moment measured by the Muon g-2 collaboration is a=0.00116592061(41).
OK, what does it all mean? First of all, the new result is seemingly only ever so slightly different from the theoretical prediction. But that tiny difference matters. What is more interesting is that if one combines this new result with the older result from Brookhaven National Lab, one gets a very significant difference between the theoretical prediction and the combined result of the two measurements: about 4.2 sigma. Sigmas measure the statistical significance of a result; 4.2 sigma means that the chance that the difference between the theoretical and experimental values is just a statistical fluctuation is about 1 in 40,000! This is incredible!
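You can redo this arithmetic yourself with the two values quoted above (the bracketed digits are the uncertainties on the last digits); a minimal sketch:

```python
import math

# Values quoted above; the bracketed uncertainties apply to the last two digits
a_exp, err_exp = 0.00116592061, 41e-11   # experimental value
a_th,  err_th  = 0.00116591810, 43e-11   # Standard Model prediction

diff  = a_exp - a_th
sigma = math.hypot(err_exp, err_th)      # combine the uncertainties in quadrature

z = diff / sigma                         # ~4.2
p = math.erfc(z / math.sqrt(2))          # two-sided probability of a fluctuation this large

print(f"difference = {diff:.2e}, significance = {z:.1f} sigma, chance ~ 1 in {1 / p:,.0f}")
```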
The result might mean that there are particles that are not described by the Standard Model and the New Physics could be just around the corner! Come back here for more discoveries!
David vs. Goliath: What a tiny electron can tell us about the structure of the universe December 22, 2018
Posted by apetrov in Blogroll, Particle Physics, Physics, Science, Uncategorized.

Alexey Petrov, Wayne State University
What is the shape of an electron? If you recall pictures from your high school science books, the answer seems quite clear: an electron is a small ball of negative charge that is smaller than an atom. This, however, is quite far from the truth.

The electron is commonly known as one of the main components of the atoms that make up the world around us. It is the electrons surrounding the nucleus of every atom that determine how chemical reactions proceed. Their uses in industry are abundant: from electronics and welding to imaging and advanced particle accelerators. Recently, however, a physics experiment called Advanced Cold Molecule Electron EDM (ACME) put the electron at the center stage of scientific inquiry. The question that the ACME collaboration tried to address was deceptively simple: What is the shape of an electron?
Classical and quantum shapes?
As far as physicists currently know, electrons have no internal structure – and thus no shape in the classical meaning of the word. In the modern language of particle physics, which tackles the behavior of objects smaller than an atomic nucleus, the fundamental building blocks of matter are continuous, fluid-like substances known as “quantum fields” that permeate the whole of space around us. In this language, an electron is perceived as a quantum, or a particle, of the “electron field.” Knowing this, does it even make sense to talk about an electron’s shape if we cannot see it directly in a microscope – or any other optical device, for that matter?
To answer this question we must adapt our definition of shape so it can be used at incredibly small distances, or in other words, in the realm of quantum physics. Seeing different shapes in our macroscopic world really means detecting, with our eyes, the rays of light bouncing off different objects around us.
Simply put, we define shapes by seeing how objects react when we shine light onto them. While this might be a weird way to think about the shapes, it becomes very useful in the subatomic world of quantum particles. It gives us a way to define an electron’s properties such that they mimic how we describe shapes in the classical world.
What replaces the concept of shape in the micro world? Since light is nothing but a combination of oscillating electric and magnetic fields, it would be useful to define quantum properties of an electron that carry information about how it responds to applied electric and magnetic fields. Let’s do that.

Electrons in electric and magnetic fields
As an example, consider the simplest property of an electron: its electric charge. It describes the force – and ultimately, the acceleration – the electron would experience if placed in some external electric field. A similar reaction would be expected from a negatively charged marble – hence the “charged ball” picture of the electron found in elementary physics books. This property of an electron – its charge – survives in the quantum world.
Likewise, another “surviving” property of an electron is called the magnetic dipole moment. It tells us how an electron would react to a magnetic field. In this respect, an electron behaves just like a tiny bar magnet, trying to orient itself along the direction of the magnetic field. While it is important to remember not to take those analogies too far, they do help us see why physicists are interested in measuring those quantum properties as accurately as possible.
What quantum property describes the electron’s shape? There are, in fact, several of them. The simplest – and the most useful for physicists – is the one called the electric dipole moment, or EDM.
In classical physics, an EDM arises when there is a spatial separation of charges. An electrically charged sphere, which has no separation of charges, has an EDM of zero. But imagine a dumbbell whose weights are oppositely charged, with one side positive and the other negative. In the macroscopic world, this dumbbell would have a non-zero electric dipole moment. If the shape of an object reflects the distribution of its electric charge, a non-zero EDM would also imply a shape that is different from spherical. Thus, naively, the EDM quantifies the “dumbbellness” of a macroscopic object.
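In formulas, the classical dipole moment is just the charge-weighted sum of positions, p = Σ qᵢ rᵢ. Here is a toy one-dimensional calculation in Python; all of the numbers are invented purely for illustration:

```python
# Toy classical electric dipole moment in one dimension: p = sum(q_i * x_i)
# Charges in coulombs, positions in meters; all numbers are made up for illustration.

def dipole_moment(charges_and_positions):
    return sum(q * x for q, x in charges_and_positions)

q, d = 1.0e-19, 1.0e-10   # an arbitrary charge and separation

# A "dumbbell": +q at x = +d/2 and -q at x = -d/2  ->  p = q * d, non-zero
print(dipole_moment([(+q, +d / 2), (-q, -d / 2)]))   # 1e-29 C*m

# A symmetric charge distribution (same sign on both sides) -> p = 0
print(dipole_moment([(+q / 2, +d / 2), (+q / 2, -d / 2)]))   # 0.0
```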
Electric dipole moment in the quantum world
The story of the EDM, however, is very different in the quantum world. There the vacuum around an electron is not empty and still. Rather, it is populated by various subatomic particles popping in and out of virtual existence for short periods of time.

These virtual particles form a “cloud” around an electron. If we shine light onto the electron, some of the light could bounce off the virtual particles in the cloud instead of the electron itself.
This would change the numerical values of the electron’s charge and magnetic and electric dipole moments. Performing very accurate measurements of those quantum properties would tell us how these elusive virtual particles behave when they interact with the electron and if they alter the electron’s EDM.
Most intriguingly, among those virtual particles there could be new, unknown species of particles that we have not yet encountered. To see their effect on the electron’s electric dipole moment, we need to compare the result of the measurement to theoretical predictions of the size of the EDM calculated in the currently accepted theory of the Universe, the Standard Model.
So far, the Standard Model has accurately described all laboratory measurements that have ever been performed. Yet it is unable to address many of the most fundamental questions, such as why matter dominates over antimatter throughout the universe. The Standard Model makes a prediction for the electron’s EDM too: it predicts it to be so small that ACME would have had no chance of measuring it. But what would have happened if ACME had actually detected a non-zero value for the electric dipole moment of the electron?

Patching the holes in the Standard Model
Theoretical models have been proposed that fix the shortcomings of the Standard Model by predicting the existence of new heavy particles. These models may fill in the gaps in our understanding of the universe. To verify such models we need to prove the existence of those new heavy particles. This could be done through large experiments, such as those at the international Large Hadron Collider (LHC), by directly producing new particles in high-energy collisions.
Alternatively, we could see how those new particles alter the charge distribution in the “cloud” and thereby the electron’s EDM. Thus, an unambiguous observation of the electron’s dipole moment in the ACME experiment would prove that new particles are in fact present. That was the goal of the ACME experiment.
This is the reason why a recent article in Nature about the electron caught my attention. Theorists like myself use the results of measurements of the electron’s EDM – along with other measurements of the properties of elementary particles – to help identify the new particles and make predictions of how they can be better studied. This is done to clarify the role of such particles in our current understanding of the universe.
What should be done to measure the electric dipole moment? We need a source of a very strong electric field to test the electron’s reaction. One possible source of such fields can be found inside molecules such as thorium monoxide. This is the molecule that ACME used in its experiment. By shining carefully tuned lasers at these molecules, a reading of the electron’s electric dipole moment could be obtained, provided it is not too small.
However, as it turned out, it is. Physicists of the ACME collaboration did not observe the electric dipole moment of an electron – which suggests that its value is too small for their experimental apparatus to detect. This fact has important implications for our understanding of what we could expect from the Large Hadron Collider experiments in the future.
Interestingly, the fact that the ACME collaboration did not observe an EDM actually rules out the existence of heavy new particles that could have been easiest to detect at the LHC. This is a remarkable result for a tabletop-sized experiment that affects both how we would plan direct searches for new particles at the giant Large Hadron Collider, and how we construct theories that describe nature. It is quite amazing that studying something as small as an electron could tell us a lot about the universe.
Alexey Petrov, Professor of Physics, Wayne State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
30 years of Chernobyl disaster April 26, 2016
Posted by apetrov in Uncategorized. 2 comments
30 years ago, on 26 April 1986, the biggest nuclear accident happened at the Chernobyl nuclear power station.
The picture above is of my 8th grade class (I am in the front row) on a trip from Leningrad to Kiev. We wanted to make sure that we’d spend May 1st (Labor Day in the Soviet Union) in Kiev! We took that picture in Gomel, which is about 80 miles away from Chernobyl, where our train made a regular stop. We were instructed to bury some pieces of clothing and shoes after coming back to Leningrad because of the excess radioactive dust on them…
“Ladies and gentlemen, we have detected gravitational waves.” February 11, 2016
Posted by apetrov in Uncategorized. 4 comments
The title says it all. Today, the Laser Interferometer Gravitational-Wave Observatory (or simply LIGO) collaboration announced the detection of gravitational waves coming from the merger of two black holes located somewhere in the southern sky, in the direction of the Magellanic Clouds. In the presentation, organized by the National Science Foundation, David Reitze (Caltech), Gabriela González (Louisiana State), Rainer Weiss (MIT), and Kip Thorne (Caltech) announced to a room full of reporters — and thousands of scientists worldwide via the video feeds — that they have seen a gravitational-wave event. Their paper, along with a nice explanation of the result, can be seen here.
The data that they have are rather remarkable. The event, which occurred on 14 September 2015, was seen by both sites (Livingston and Hanford) of the experiment, as can be seen in the picture taken from their presentation. It happened over a billion years ago (about 1.3 billion light years away) and is consistent with the merger of two black holes of 29 and 36 solar masses. The resulting larger black hole’s mass is about 62 solar masses, which means that about 3 solar masses of energy (29+36-62=3) were radiated in the form of gravitational waves. This is a huge amount of energy! The shape of the signal is exactly what one should expect from the merging of two black holes, with 5.1 sigma significance.
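How huge? A quick back-of-the-envelope estimate with E = mc², using the three solar masses quoted above (the solar mass and the speed of light are standard values typed in by hand):

```python
# Energy radiated in gravitational waves, estimated as E = m * c^2
M_sun = 1.989e30          # solar mass, kg
c = 2.998e8               # speed of light, m/s

m_radiated = 3 * M_sun    # 29 + 36 - 62 = 3 solar masses, as quoted above
E = m_radiated * c**2     # joules

print(f"E ~ {E:.1e} J")   # ~5e47 J
```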
It is interesting to note that the information presented today totally confirms the rumors that have been floating around for a couple of months. Physicists like to spread rumors, it seems.
Since gravitational waves are quadrupolar, the most straightforward way to see them is to measure the relative stretches of the detector’s two arms, which are perpendicular to each other (see another picture from the MIT LIGO site, as well as the animation of a gravitational wave from two black holes falling onto each other and then merging). The LIGO device is a marvel of engineering — one needs to detect a signal that is incredibly small, a change in arm length far smaller than an atomic nucleus. This is done with the help of interferometry: laser beams bounce along the arms of the experiment and are then compared to each other. A small change in the relative phase of the beams can be related to a change in the relative distance traveled by each beam. This difference is induced by the passing gravitational wave, which contracts one of the arms and extends the other. The way noise that can mimic a gravitational-wave signal is eliminated should be the subject of another blog post.
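To appreciate just how small that change is, here is a sketch assuming a strain of order 10⁻²¹ and 4-kilometer arms; both are typical published LIGO figures, assumed here for illustration rather than taken from this post:

```python
# Order-of-magnitude arm-length change for a LIGO-like interferometer.
# Both numbers below are typical published figures, assumed for illustration.
h = 1e-21    # dimensionless strain of the passing gravitational wave
L = 4e3      # arm length in meters (the LIGO arms are 4 km long)

delta_L = h * L   # change in arm length
print(f"delta L ~ {delta_L:.0e} m")   # ~4e-18 m, far smaller than an atomic nucleus (~1e-15 m)
```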
This is really a remarkable result, even though it has been widely expected ever since Hulse and Taylor’s (indirect) discovery of gravitational waves via the binary pulsar in 1974! It seems that now we have another way to study the Universe.
Nobel Prize in Physics 2015 October 6, 2015
Posted by apetrov in Uncategorized.
So, the Nobel Prize in Physics 2015 has been announced. To the surprise of many (including the author), it was awarded jointly to Takaaki Kajita and Arthur B. McDonald “for the discovery of neutrino oscillations, which shows that neutrinos have mass.” A well-deserved Nobel Prize for a fantastic discovery.
What is this Nobel prize all about? Some years ago (circa 1997) there were a couple of “deficit” problems in physics. First, the detected number of (electron) neutrinos coming from the Sun was measured to be less than expected. This could be explained in a number of ways. First, neutrinos could oscillate — that is, neutrinos produced as electron neutrinos in nuclear reactions in the Sun could turn into muon or tau neutrinos and thus not be detected by existing experiments, which were sensitive to electron neutrinos. This was the most exciting possibility, and it ultimately turned out to be correct! But it was by far not the only one! For example, one could say that the Standard Solar Model (SSM) predicted the fluxes incorrectly — after all, the flux of solar neutrinos is proportional to the core temperature raised to a very high power (roughly T^25 for the ⁸B neutrinos, for example). So it would be reasonable to say that the neutrino flux is not so well known because the temperature is not well measured (this might be disputed by solar physicists). Or something more exotic could happen — for instance, neutrinos could have a large magnetic moment and flip their helicity while propagating in the Sun, turning into right-handed neutrinos that are sterile.
The solution to this was rather ingenious — measure the neutrino flux in two ways: one sensitive to neutrino flavor (using “charged current” (CC) interactions) and one insensitive to neutrino flavor (using “neutral current” (NC) interactions)! Heavy water — which contains deuterium — is ideal for this kind of detection. This is exactly what the SNO collaboration, led by A. McDonald, did.
As it turned out, the NC flux was exactly what the SSM predicted, while the CC flux was smaller. Hence the conclusion that electron neutrinos oscillate into other types of neutrinos!
Another “deficit problem” was associated with the ratio of “atmospheric” muon and electron neutrinos. Cosmic rays hit Earth’s atmosphere and create pions that subsequently decay into muons and muon neutrinos; the muons then eventually decay as well, each into an electron, a muon (anti)neutrino and an electron (anti)neutrino: π⁻ → μ⁻ + ν̄_μ, followed by μ⁻ → e⁻ + ν̄_e + ν_μ (and the charge-conjugate chain for π⁺). Counting the neutrinos in this decay chain, one would expect two muon-flavored neutrinos per one electron-flavored one.
This is not what the Super-Kamiokande experiment (T. Kajita) saw — the ratio really changed with angle. That is, the ratio of neutrino fluxes coming from above differed substantially from the ratio coming from below (the latter describing neutrinos that went through the Earth before reaching the detector). The solution was again neutrino oscillations – this time, muon neutrinos oscillating into tau neutrinos.
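In the two-flavor approximation, both deficit problems boil down to one simple formula: the probability that a neutrino keeps its flavor after traveling a distance L with energy E is P = 1 − sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A minimal sketch, with parameter values chosen to be in the ballpark of the measured atmospheric ones (purely illustrative):

```python
import math

def survival_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor survival probability, e.g. P(nu_mu -> nu_mu)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative atmospheric-sector parameters (roughly the measured ballpark)
sin2_2theta = 1.0    # near-maximal mixing
dm2 = 2.5e-3         # eV^2
E = 1.0              # a typical atmospheric-neutrino energy, GeV

# Down-going neutrinos travel ~20 km of atmosphere; up-going ones cross the Earth (~12,700 km)
for L in (20.0, 12_700.0):
    P = survival_probability(L, E, sin2_2theta, dm2)
    print(f"L = {L:8.0f} km : P(nu_mu survives) ~ {P:.2f}")
```

Down-going muon neutrinos barely oscillate, while up-going ones, which cross the whole Earth, are strongly depleted; that is exactly the kind of angular dependence Super-Kamiokande observed.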
The presence of neutrino oscillations implies that neutrinos have (tiny) masses — something that is not predicted by the minimal Standard Model. So one can say that this is the first indication of physics beyond the Standard Model. And this is very exciting.
I think it is interesting to note that this Nobel prize might help the situation with funding of US particle physics research (if anything can help…). It shows that physics has not ended with the discovery of the Higgs boson — and Fermilab might be on the right track to uncover other secrets of the Universe.
Harvard University is to change its name April 1, 2015
Posted by apetrov in Uncategorized.
A phrase from William Shakespeare’s Romeo and Juliet states: “What’s in a name? That which we call a rose by any other name would smell as sweet.” This could not be any further from the truth in the corporate world. The name of a corporation is its face, so establishing a brand requires a lot of work and money. But what happens when something goes wrong? The way to deal with corporate problems often involves re-branding: changing the name and the face of the corporation. It works because customers usually do not check the history of a company before buying its products or using its services. It simply works.
With Universities today run according to the corporate model, it was only a matter of time until re-branding came to the academic world. And leading Universities, like Harvard, seem to be embracing the model. Since a 2013 article in the Harvard Crimson, big Universities have become a focus of investigations by many leading newspapers and politicians. Harvard, in particular, has been at the center of a brewing controversy. The University, which has the largest endowment of any university in the world, has its name associated with a person who was not, in fact, the founder of Harvard University. As reported in a very recent internal investigation by the Harvard Crimson, John Harvard cannot be the founder of the school, because the Massachusetts Colony’s vote had come two years prior to Harvard’s bequest (compare this to Ezra Cornell’s founding of Cornell University). This has led several prominent Massachusetts politicians to suggest that the University be returned to the ownership of the Commonwealth, with its name changed to the University of Massachusetts, Cambridge. “We have a fantastic University system here in Massachusetts, with the flagship campus in Amherst,” said one prominent politician who preferred not to be named. “Any University in the World would be proud to be a part of it.”
Returning a prominent private University to the ownership of the State is highly unusual nowadays and is probably highly specific to New England. With tightening budgets, many states seek to privatize their Universities to remove them from the state budget. For instance, there is talk that a large public Midwestern school, Wayne State University, will soon change its owners and its name. Two prominent figures, W. Rooney and W. Gretzky, are rumored to be working on acquiring the University and re-branding it as simply Wayne’s University. And the changes are rumored to go even further. An external company, Haleburton, has already completed an assessment of the University’s strengths. The company noted WSU’s worldwide reputation in chemistry, physics and medicine and its Carnegie I research status, and recommended that the school concentrate its efforts on graduating hockey, football, basketball and baseball players. “We are preparing our graduates to have highly successful careers. What job in the United States brings more money than being an NFL or NHL player?” a member of WSU’s Academic Senate was quoted as saying. “We are all excited about the change and are looking forward to what else the future will bring us.”
Data recall at the LHC? April 1, 2014
Posted by apetrov in Uncategorized. Tags: check the date, particle physics, physics
1 comment so far
In a stunning turn of events, Large Hadron Collider (LHC) management announced a recall and review of thousands of results that came from its four main detectors, ATLAS, CMS, LHCb and ALICE, over the course of the past several years, when it learned that the ignition switches used to start the LHC accelerator (see the image) might have been produced by GM.
GM’s CEO, A. Ibarra, who is better known in the scientific world for the famous Davidson-Ibarra bound in leptogenesis, will be testifying on Capitol Hill today. This new revelation will definitely add new questions to the already long list of queries to be addressed by the embattled CEO. In particular, the infamous LHC disaster that happened on 10 September 2008, which cost taxpayers over 21 million dollars to fix and has long been suspected to have been caused by a magnet quench, might instead have been caused by too much paper accidentally placed on a switch by a graduate student who was on duty that day.
“We want to know why it took LHC management five years to issue that recall”, an unidentified US Government official said in the interview, “We want to know what is being done to correct the problem. From our side, we do everything humanly possible to accommodate US high energy particle physics researchers and help them to avoid such problems in the future. For example, we included a 6.6% cut in US HEP funding in the President’s 2015 budget request.” He added, “We suspected that something might be going on at the LHC after it was convincingly proven to us at our weekly seminar that the detected Higgs boson is ‘simply one Xenon atom of the 1 trillion 167 billion 20 million Xenon atoms which there are in the LHC!’”
This is not the first time accelerators have caused physicists to rethink their results and designs. For example, last year Japanese scientists had to overcome the problem of unintended acceleration of positrons at their flagship facility, KEK.
At this point, it is not clear how GM’s ignition switches problems would affect funding of operations at the National Ignition Facility in Livermore, CA.
And the 2013 Nobel Prize in Physics goes to… October 8, 2013
Posted by apetrov in Particle Physics, Physics, Science, Uncategorized. 1 comment so far
Today the 2013 Nobel Prize in Physics was awarded to François Englert (Université Libre de Bruxelles, Belgium) and Peter W. Higgs (University of Edinburgh, UK). The official citation is “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider.” What did they do almost 50 years ago that warranted their Nobel Prize today? Let’s see (for the simple analogy see my previous post from yesterday).
The overriding principle of building a theory of elementary particle interactions is symmetry. A theory must be invariant under a set of space-time symmetries (such as rotations and boosts), as well as under a set of “internal” symmetries, the ones that are specified by the model builder. This set of symmetries restricts how particles interact and also puts constraints on the properties of those particles. In particular, the symmetries of the Standard Model of particle physics require that the W and Z bosons (the particles that mediate weak interactions) be massless. Since we know they must be massive, a new mechanism that generates those masses (i.e. breaks the symmetry) must be put in place. Note that a theory with massive W’s and Z that are put in by hand is not consistent (not renormalizable).
The appropriate mechanism was known by the early 1960s. It goes under the name of spontaneous symmetry breaking. In one variant it involves a spin-zero field whose self-interactions are governed by a “Mexican hat”-shaped potential:
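In one common convention (normalizations differ between textbooks), that potential can be written as:

```latex
V(\phi) = -\mu^2\,|\phi|^2 + \lambda\,|\phi|^4 ,
\qquad
|\phi|_{\mathrm{min}} = \sqrt{\frac{\mu^2}{2\lambda}} \neq 0 .
```

The point is that the state with φ = 0 sits at the top of the hat and is unstable; the energy is minimized at a non-zero value of the field, and the theory has to settle somewhere in the circular valley around it.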
It is postulated that the theory ends up in a vacuum state that “breaks” the original symmetries of the model (like the circular valley of the Mexican hat). One problem with this idea was that a theorem by J. Goldstone required the presence of a massless spin-zero particle, which was not experimentally observed. It was Robert Brout, François Englert, Peter Higgs, and, somewhat later but independently, Gerry Guralnik, C. R. Hagen, and Tom Kibble who showed that there is a loophole in the Goldstone theorem when it is applied to relativistic gauge theories. In the proposed mechanism the massless spin-zero particle does not show up; instead it gets “eaten” by the massless vector bosons, giving them a mass. Precisely as needed for the electroweak bosons W and Z to get their masses! A massive particle, the Higgs boson, is a consequence of this (BEH, or Englert-Brout-Higgs-Guralnik-Hagen-Kibble) mechanism and represents an excitation of the Higgs field about its new vacuum state.
It took about 50 years to experimentally confirm the idea by finding the Higgs boson! Tracking the historic timeline: the first paper, by Englert and Brout, was sent to Physical Review Letters on 26 June 1964 and published in the issue dated 31 August 1964. Higgs’ paper was received by Physical Review Letters on 31 August 1964 (the same day Englert and Brout’s paper was published) and published in the issue dated 19 October 1964. What is interesting is that the original version of Higgs’ paper, submitted to the journal Physics Letters, was rejected (on the grounds that it did not warrant rapid publication). Higgs revised the paper and resubmitted it to Physical Review Letters, where it was published after another revision in which he actually pointed out the possibility of the spin-zero particle — the one that now carries his name. CERN’s announcement of the Higgs boson discovery came on 4 July 2012.
Is this the last Nobel Prize for particle physics? I think not. There are still many unanswered questions — and the answers would warrant Nobel Prizes. The theory of strong interactions (which ARE responsible for the masses of all luminous matter in the Universe) has not yet been solved analytically, the nature of dark matter is not known, and the picture of how the Universe came to have its baryon asymmetry is not clear. Is there new physics beyond what we already know? And if yes, what is it? These are very interesting questions that need answers.
Higgs mechanism for electrical engineers October 7, 2013
Posted by apetrov in Particle Physics, Physics, Science, Uncategorized. Tags: higgs boson
2 comments
Since the Higgs boson’s discovery a little over a year ago at CERN, I have been getting a lot of questions from my friends asking me to explain “what this Higgs thing does.” So I often tried to use the crowd analogy ascribed to Prof. David Miller to describe the Higgs (or Englert-Brout-Higgs-Guralnik-Hagen-Kibble) mechanism. Interestingly enough, it did not work well for most of my old school friends, the majority of whom happen to pursue careers in engineering. So I thought that perhaps another analogy would be more appropriate. Here it is; please let me know what you think!
Imagine the Higgs field as represented by a quantity of slightly magnetized iron filings, i.e. small pieces of iron that look like powder, spread over a table or another surface to represent the Higgs field that permeates the Universe. Iron filings are common not only as dirt in metal shops; they are often used in school experiments and other science demonstrations to visualize magnetic fields. It is important for them to be slightly magnetized, as this represents the self-interaction of the Higgs field. Here they are pictured in a somewhat cartoonish way:
How can the Higgs field generate mass? Moreover, how can one field generate different masses for different types of particles? Let us first make an analogue of fermion mass generation. If we take a small magnet and put it in the filings, the magnet will pick up a bunch of filings, right? How much will it pick up? That depends on the “strength” of the magnet. It could be a little:
…or it could be a lot, depending on what kind of magnet we use — or how strong it is:
If we neglect the masses of our magnets, as we assumed they are small, the mass of the picked-up clump with the magnet inside is totally determined by the mass of the picked-up filings, which in turn is determined by the interaction strength between the magnet and the filings. This is precisely how fermion mass generation works in the Standard Model!
In the Standard Model the massless fermions are coupled to the Higgs field via so-called Yukawa interactions, whose strength is parametrized by a number, the Yukawa coupling constant. For different fermion types (or flavors) the couplings are numerically different, ranging from about one to one part in a million. As a result of the interaction with the Higgs field (NOT the boson!), in the form of its vacuum expectation value, all fermions acquire masses (ok, maybe not all — neutrinos could be different). And those masses depend on the strength of the interaction of the fermions with the Higgs field, just like in our example with magnets and iron filings!
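To make that “one to one part in a million” range concrete: in the Standard Model a fermion mass is m = y·v/√2, where v ≈ 246 GeV is the vacuum expectation value of the Higgs field. Here is a small sketch; the Yukawa couplings below are the approximate, rounded values implied by the measured masses, quoted purely for illustration:

```python
import math

v = 246.0  # Higgs vacuum expectation value, GeV

# Approximate Yukawa couplings implied by the measured fermion masses (illustrative, rounded)
yukawas = {
    "top quark":    1.0,
    "bottom quark": 0.024,
    "tau lepton":   0.010,
    "electron":     2.9e-6,
}

# In the Standard Model a fermion mass is m = y * v / sqrt(2)
for name, y in yukawas.items():
    m = y * v / math.sqrt(2)
    print(f"{name:13s}: y = {y:8.1e}  ->  m ~ {m:.3g} GeV")
```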
Now imagine that we simply kicked the table! No magnets. The filings would clump together to form lumps of filings. Each lump would have a mass, which would only depend on how strongly the filings attract each other (remember that they are slightly magnetized?). If we don’t know how strongly they are magnetized, we cannot tell how massive each lump will be, so we would have to measure their masses.
This gives a good analogy for the fact that the Higgs boson is an excitation of the Higgs field (the fact that was pointed out by Higgs), and for why we cannot predict its mass from first principles but need a direct observation at the LHC!
Notice that this picture (so far) does not provide a direct analogy for how the gauge bosons (the W’s and the Z) receive their masses. The W’s and Z are also initially massless because of the gauge (internal) symmetries required by the construction of the Standard Model. We did know their masses from earlier CERN and SLAC experiments — and even prior to those, we knew that the W’s were massive from the fact that weak interactions have a finite range.
To extend our analogy, let’s clean up the mess — literally! Let’s throw a bucket of water over the table covered with those iron filings and see what happens. Streams of water would pick up iron filings and flow off the table. Assuming that the water’s mass is negligible, the total mass of those water streams (aka dirty water) would be completely determined by the mass of the picked-up iron filings, just like the masses of the W’s and Z are determined by the Higgs field.
This explanation seemed to work better for my engineering friends! What do you think?
Another one bites the dust. Or “Super-B? What Super-B?” November 28, 2012
Posted by apetrov in Uncategorized. Tags: charm physics, particle physics, Super-B experiment
1 comment so far
Studies of New Physics require several independent approaches. In the language of experimental physics this means several different experiments. Better yet, several accelerators with detectors that study similar things but produce results with different systematic and statistical uncertainties. For a number of years that was how things were: physicists searched for New Physics in high-energy experiments where new particles could be produced directly (think Tevatron or LHC experiments), or in low-energy, extremely clean measurements that explored quantum effects of heavy new-physics particles. In other words, New Physics could also be searched for indirectly.
As a prominent example of the latter approach, the BaBar detector at SLAC (USA) and the Belle detector at KEK (Japan) studied decays of copiously produced B-mesons in hopes of finding glimpses of New Physics in quantum loops. These experiments measured many Standard Model-related parameters (in particular, confirming the mechanism of CP violation in the Standard Model) and discovered many unexpected effects (like new mesons containing charmed quarks, as well as oscillations of charm mesons). But they did not see any effects that could not be explained by the Standard Model. The way to go in this case was to significantly increase the luminosity of the machines, thereby allowing very rare processes to be observed. Two super-flavor factories (those machines really are like factories, churning out millions of B-mesons) were proposed: the Belle-II experiment at KEK and a new Super-B factory at the newly created Cabibbo Lab in Frascati, Italy. I have already written about the Cabibbo Lab.
It appears, however, that the Italian government decided today that it cannot fund the Super-B flavor factory. Tommaso Dorigo reported it in his blog this morning. Here is more hard data: there is a press release (in Italian) from the INFN that basically tells you that “economic conditions… were incompatible with the costs of the project evaluated.” Which is another way of saying that the Italian government is not going to fund it. This was followed by news from PhysicsWorld saying the same thing.
Many physicists had been expressing doubts that the original Super-B plan, which was, in my opinion, very bold, could be executed within the proposed time frame. Yet physicists pressed on… that is, until this morning’s announcement. The reality of our world sets in — there is not enough money for basic research…
So what’s left? There is still, of course, Belle-II. Moreover, the excellent performance of the LHCb experiment at CERN (I wrote about that here) leaves us with great hopes. That is, if Nature cooperates…