What are Leptons?

During the 19th and 20th centuries, physicists began to probe deep into the nature of matter and energy. In so doing, they quickly realized that the rules which govern them become increasingly blurry the deeper one goes. Whereas the predominant theory used to be that all matter was made up of indivisible atoms, scientists began to realize that atoms are themselves composed of even smaller particles.

From these investigations, the Standard Model of Particle Physics was born. According to this model, all matter in the Universe is composed of two kinds of particles: hadrons – from which the Large Hadron Collider (LHC) gets its name – and leptons. Whereas hadrons are composite particles made up of smaller constituents (quarks, anti-quarks, etc.), leptons are elementary particles that exist on their own.

Definition:

The word lepton comes from the Greek leptos, which means “small”, “fine”, or “thin”. The first recorded use of the word was by physicist Leon Rosenfeld in his book Nuclear Forces (1948). In the book, he attributed the use of the word to a suggestion made by Danish chemist and physicist Prof. Christian Moller.

The Standard Model of Particle Physics, showing all known elementary particles. Credit: Wikipedia Commons/MissMJ/PBS NOVA/Fermilab/Particle Data Group
The term was chosen to refer to particles of small mass, since the leptons known in Rosenfeld's time were the electron and the muon. The muon is over 200 times more massive than the electron, but still has only about one-ninth the mass of a proton. Along with quarks, leptons are the basic building blocks of matter, and are therefore seen as "elementary particles".

Types of Leptons:

According to the Standard Model, there are six different types of leptons. These include the Electron, the Muon, and the Tau particle, as well as their associated neutrinos (i.e. electron neutrino, muon neutrino, and tau neutrino). The charged leptons each carry a negative charge and a distinct mass, whereas their neutrinos are electrically neutral.

Electrons are the lightest, with a mass of 0.000511 gigaelectronvolts (GeV), while muons have a mass of 0.1057 GeV and tau particles (the heaviest) have a mass of 1.777 GeV. The different varieties of the elementary particles are commonly called "flavors". While each of the three lepton flavors is different and distinct (in terms of its interactions with other particles), they are not immutable.
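As a quick arithmetic check of the figures above, here is a minimal Python sketch (the proton mass is the standard value quoted later in this collection; all numbers are approximate):

```python
# Approximate lepton and proton masses in GeV/c^2, as quoted in the text above.
ELECTRON, MUON, TAU, PROTON = 0.000511, 0.1057, 1.777, 0.938272

print(f"muon / electron : {MUON / ELECTRON:.0f}")   # ~207, i.e. 'over 200 times more massive'
print(f"tau  / electron : {TAU / ELECTRON:.0f}")    # ~3477
print(f"muon / proton   : {MUON / PROTON:.3f}")     # ~0.113, i.e. roughly one-ninth
```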

A neutrino can change its flavor, a process which is known as "neutrino flavor oscillation". This can take a number of forms, which include solar neutrino, atmospheric neutrino, reactor neutrino, and accelerator-beam oscillations. In all observed cases, the oscillations were confirmed by what appeared to be a deficit in the number of neutrinos being detected, relative to the number expected.

Muons, a type of lepton, shown being produced by the Large Hadron Collider. Credit: CERN

One well-studied example is "muon decay", a process in which a muon transforms into an electron while emitting a muon neutrino and an electron antineutrino. In addition, all three leptons and their neutrinos have an associated antiparticle (antilepton).

For each, the antileptons have an identical mass, but all of the other properties are reversed. These pairings consist of the electron/positron, muon/antimuon, tau/antitau, electron neutrino/electron antineutrino, muon neutrino/muon antineutrino, and tau neutrino/tau antineutrino.

The present Standard Model assumes that there are no more than three types (aka. "generations") of leptons and their associated neutrinos in existence. This accords with experimental evidence from models of nucleosynthesis after the Big Bang, where the existence of more than three types of light neutrinos would have affected the abundance of helium in the early Universe.

Properties:

Only the charged leptons possess a negative charge; the neutrinos are electrically neutral. All leptons possess an intrinsic rotation in the form of their spin, which means that leptons with an electric charge – i.e. "charged leptons" – will generate magnetic fields. Leptons are able to interact with other matter only through the weak and electromagnetic forces. Ultimately, their charge determines the strength of these interactions, as well as the strength of their electric field and how they react to external electric or magnetic fields.

None are capable of interacting with matter via strong forces, however. In the Standard Model, each lepton starts out with no intrinsic mass. Charged leptons obtain an effective mass through interactions with the Higgs field, while neutrinos either remain massless or have only very small masses.

History of Study:

The first lepton to be identified was the electron, which was discovered by British physicist J.J. Thomson and his colleagues in 1897 using a series of cathode ray tube experiments. The next discoveries came during the 1930s, which would lead to the creation of a new classification for weakly-interacting particles that were similar to electrons.

The first step was taken by Austrian-Swiss physicist Wolfgang Pauli in 1930, who proposed the existence of the electron neutrino in order to resolve the ways in which beta decay appeared to violate the conservation of energy, momentum, and angular momentum.

The positron and muon were discovered by Carl D. Anderson in 1932 and 1936, respectively. Due to the mass of the muon, it was initially mistaken for a meson. But due to its behavior (which resembled that of an electron) and the fact that it did not undergo strong interaction, the muon was reclassified. Along with the electron and the electron neutrino, it became part of a new group of particles known as "leptons".

In 1962, a team of American physicists – consisting of Leon M. Lederman, Melvin Schwartz, and Jack Steinberger – were able to detect interactions of the muon neutrino, thus showing that more than one type of neutrino existed. At the same time, theoretical physicists postulated the existence of additional flavors of neutrinos, which would eventually be confirmed experimentally.

The tau particle followed in the 1970s, thanks to experiments conducted by Nobel Prize-winning physicist Martin Lewis Perl and his colleagues at the SLAC National Accelerator Laboratory. Evidence of its associated neutrino followed thanks to the study of tau decay, which showed missing energy and momentum analogous to the missing energy and momentum seen in beta decay, which had led to the proposal of the electron neutrino.

In 2000, the tau neutrino was directly observed thanks to the Direct Observation of the NU Tau (DONUT) experiment at Fermilab. This would be the last particle of the Standard Model to be observed until 2012, when CERN announced that it had detected a particle that was likely the long-sought-after Higgs Boson.

Today, there are some particle physicists who believe that there are leptons still waiting to be found. These “fourth generation” particles, if they are indeed real, would exist beyond the Standard Model of particle physics, and would likely interact with matter in even more exotic ways.

We have written many interesting articles about Leptons and subatomic particles here at Universe Today. Here's What are Subatomic Particles?, What are Baryons?, First Collisions of the LHC, Two New Subatomic Particles Found, and Physicists Maybe, Just Maybe, Confirm the Possible Discovery of 5th Force of Nature.

For more information, SLAC’s Virtual Visitor Center has a good introduction to Leptons and be sure to check out the Particle Data Group (PDG) Review of Particle Physics.

Astronomy Cast also has episodes on the topic. Here’s Episode 106: The Search for the Theory of Everything, and Episode 393: The Standard Model – Leptons & Quarks.

What is the CERN Particle Accelerator?

What if it were possible to observe the fundamental building blocks upon which the Universe is based? Not a problem! All you would need is a massive particle accelerator, an underground facility large enough to cross a border between two countries, and the ability to accelerate particles to the point where they smash into each other – releasing energy and showers of particles which you could then observe with a series of special detectors.

Well, as luck would have it, such a facility already exists, and is known as the CERN Large Hadron Collider (LHC), also known as the CERN Particle Accelerator. Measuring roughly 27 kilometers in circumference and located deep beneath the surface near Geneva, Switzerland, it is the largest particle accelerator in the world. And since CERN flipped the switch, the LHC has shed some serious light on some of the deeper mysteries of the Universe.

Purpose:

Colliders, by definition, are a type of particle accelerator that rely on two directed beams of particles. Particles are accelerated in these instruments to very high kinetic energies and then made to collide with each other. The byproducts of these collisions are then analyzed by scientists in order to ascertain the structure of the subatomic world and the laws which govern it.

The Large Hadron Collider is the most powerful particle accelerator in the world. Credit: CERN

The purpose of colliders is to simulate the kind of high-energy collisions that produce particle byproducts which would otherwise not exist in nature. What's more, these sorts of particle byproducts decay after a very short period of time, and are therefore difficult or near-impossible to study under normal conditions.

The term hadron refers to composite particles composed of quarks that are held together by the strong nuclear force, one of the four forces governing particle interaction (the others being the weak nuclear force, electromagnetism and gravity). The best-known hadrons are baryons – such as protons and neutrons – but hadrons also include mesons, unstable particles composed of one quark and one antiquark.

Design:

The LHC operates by accelerating two beams of “hadrons” – either protons or lead ions – in opposite directions around its circular apparatus. The hadrons then collide after they’ve achieved very high levels of energy, and the resulting particles are analyzed and studied. It is the largest high-energy accelerator in the world, measuring 27 km (17 mi) in circumference and at a depth of 50 to 175 m (164 to 574 ft).

The tunnel which houses the collider is 3.8 meters (12 ft) wide, and was previously used to house the Large Electron-Positron Collider (which operated between 1989 and 2000). It contains two adjacent parallel beamlines that intersect at four points, each carrying a beam that travels in the opposite direction to the other around the ring. The beams are controlled by 1,232 dipole magnets, while 392 quadrupole magnets are used to keep them focused.

Superconducting quadrupole electromagnets are used to direct the beams to four intersection points, where interactions between accelerated protons will take place. Credit: Wikipedia Commons/gamsiz

About 10,000 superconducting magnets are used in total, which are kept at an operational temperature of -271.25 °C (-456.25 °F) – which is just shy of absolute zero – by approximately 96 tonnes of liquid helium-4. This also makes the LHC the largest cryogenic facility in the world.

When conducting proton collisions, the process begins with the linear particle accelerator (LINAC 2). After the LINAC 2 increases the energy of the protons, these particles are then injected into the Proton Synchrotron Booster (PSB), which accelerates them to high speeds.

They are then injected into the Proton Synchrotron (PS), and then into the Super Proton Synchrotron (SPS), where they are sped up even further before being injected into the main accelerator. Once there, the proton bunches are accumulated and accelerated to their peak energy over a period of 20 minutes. Lastly, they are circulated for a period of 5 to 24 hours, during which time collisions occur at the four intersection points.

During shorter running periods, heavy-ion collisions (typically lead ions) are included in the program. The lead ions are first accelerated by the linear accelerator LINAC 3, and the Low Energy Ion Ring (LEIR) is used as an ion storage and cooler unit. The ions are then further accelerated by the PS and SPS before being injected into the LHC ring.

While protons and lead ions are being collided, seven detectors are used to scan for their byproducts. These include the A Toroidal LHC ApparatuS (ATLAS) experiment and the Compact Muon Solenoid (CMS), which are both general purpose detectors designed to see many different types of subatomic particles.

Then there are the more specialized A Large Ion Collider Experiment (ALICE) and Large Hadron Collider beauty (LHCb) detectors. Whereas ALICE is a heavy-ion detector that studies strongly-interacting matter at extreme energy densities, the LHCb records the decays of particles containing b and anti-b quarks, filtering them out from the other collision products.

Then there are the three smaller and highly-specialized detectors – the TOTal Elastic and diffractive cross section Measurement (TOTEM) experiment, which measures total cross section, elastic scattering, and diffractive processes; the Monopole & Exotics Detector (MoEDAL), which searches for magnetic monopoles or massive (pseudo-)stable charged particles; and the Large Hadron Collider forward (LHCf) experiment, which monitors particles produced in the very forward direction as a way of studying astroparticles (aka. cosmic rays).

History of Operation:

CERN, which stands for Conseil Européen pour la Recherche Nucléaire (or European Council for Nuclear Research in English), was established on Sept 29th, 1954, by twelve western European signatory nations. The council's main purpose was to oversee the creation of a particle physics laboratory in Geneva where nuclear studies would be conducted.

Illustration showing the byproducts of lead ion collisions, as monitored by the ATLAS detector. Credit: CERN

Soon after its creation, the laboratory went beyond this and began conducting high-energy physics research as well. It has also grown to include twenty member states: France, Switzerland, Germany, Belgium, the Netherlands, Denmark, Norway, Sweden, Finland, Spain, Portugal, Greece, Italy, the UK, Poland, Hungary, the Czech Republic, Slovakia, Bulgaria and Israel.

Construction of the LHC was approved in 1995 and was initially intended to be completed by 2005. However, cost overruns, budget cuts, and various engineering difficulties pushed the completion date to April of 2007. The LHC first went online on September 10th, 2008, but initial testing was delayed for 14 months following an accident that caused extensive damage to many of the collider’s key components (such as the superconducting magnets).

On November 20th, 2009, the LHC was brought back online and its First Run lasted from 2010 to 2013. During this run, it collided two opposing particle beams of protons and lead nuclei at energies of up to 4 teraelectronvolts (4 TeV) per beam and 2.76 TeV per nucleon, respectively. The main purpose of the LHC is to recreate conditions just after the Big Bang, when collisions between high-energy particles were taking place.
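For a sense of how relativistic these beams are, here is a rough back-of-the-envelope Python sketch (using the 4 TeV Run 1 beam energy mentioned above and the standard proton rest energy; an illustration, not an official CERN figure):

```python
import math

PROTON_REST_ENERGY_GEV = 0.938272  # proton rest energy, in GeV
beam_energy_gev = 4000.0           # 4 TeV per beam, as quoted for Run 1 above

gamma = beam_energy_gev / PROTON_REST_ENERGY_GEV   # Lorentz factor, gamma = E / (m c^2)
beta = math.sqrt(1.0 - 1.0 / gamma**2)             # speed as a fraction of the speed of light

print(f"Lorentz factor: {gamma:.0f}")   # ~4263
print(f"speed: {beta:.9f} c")           # ~0.999999972 c
```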

Major Discoveries:

During its First Run, the LHC's discoveries included a particle thought to be the long sought-after Higgs Boson, which was announced on July 4th, 2012. This particle, which gives other particles mass, is a key part of the Standard Model of physics. Due to its high mass and elusive nature, the existence of this particle was based solely on theory, and it had never been previously observed.

The discovery of the Higgs Boson and the ongoing operation of the LHC have also allowed researchers to investigate physics beyond the Standard Model. This has included tests concerning supersymmetry theory. The results show that certain types of particle decay are less common than some forms of supersymmetry predict, but could still match the predictions of other versions of supersymmetry theory.

In May of 2011, it was reported that quark–gluon plasma (theoretically, the densest matter besides black holes) had been created in the LHC. On November 19th, 2014, the LHCb experiment announced the discovery of two new heavy subatomic particles, both of which were baryons composed of one bottom, one down, and one strange quark. The LHCb collaboration also observed multiple exotic hadrons during the first run, possibly pentaquarks or tetraquarks.

Since 2015, the LHC has been conducting its Second Run. In that time, it has been dedicated to confirming the detection of the Higgs Boson, and making further investigations into supersymmetry theory and the existence of exotic particles at higher-energy levels.

The ATLAS detector, one of two general-purpose detectors at the Large Hadron Collider (LHC). Credit: CERN

In the coming years, the LHC is scheduled for a series of upgrades to ensure that it does not suffer from diminished returns. In 2017-18, the LHC is scheduled to undergo an upgrade that will increase its collision energy to 14 TeV. In addition, after 2022, the collider is to receive a major upgrade – known as the High-Luminosity LHC – designed to increase the collision rate and, with it, the likelihood of detecting rare processes, with corresponding upgrades to detectors such as ATLAS.

The collaborative research effort known as the LHC Accelerator Research Program (LARP) is currently conducting research into how to upgrade the LHC further. Foremost among these plans are increases in the beam current and the modification of the two high-luminosity interaction regions, which house the ATLAS and CMS detectors.

Who knows what the LHC will discover between now and the day when they finally turn the power off? With luck, it will shed more light on the deeper mysteries of the Universe, which could include the deep structure of space and time, the intersection of quantum mechanics and general relativity, the relationship between matter and antimatter, and the existence of “Dark Matter”.

We have written many articles about CERN and the LHC for Universe Today. Here’s What is the Higgs Boson?, The Hype Machine Deflates After CERN Data Shows No New Particle, BICEP2 All Over Again? Researchers Place Higgs Boson Discovery in Doubt, Two New Subatomic Particles Found, Is a New Particle about to be Announced?, Physicists Maybe, Just Maybe, Confirm the Possible Discovery of 5th Force of Nature.

If you’d like more info on the Large Hadron Collider, check out the LHC Homepage, and here’s a link to the CERN website.

Astronomy Cast also has some episodes on the subject. Listen here, Episode 69: The Large Hadron Collider and The Search for the Higgs Boson and Episode 392: The Standard Model – Intro.

What Is Bohr’s Atomic Model?

Atomic theory has come a long way over the past few thousand years. Beginning in the 5th century BCE with Democritus' theory of indivisible "corpuscles" that interact with each other mechanically, then moving on to Dalton's atomic model in the 19th century, and then maturing in the 20th century with the discovery of subatomic particles and quantum theory, the journey of discovery has been long and winding.

Arguably, one of the most important milestones along the way has been Bohr's atomic model, which is sometimes referred to as the Rutherford-Bohr atomic model. Proposed by Danish physicist Niels Bohr in 1913, this model depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits (defined by their energy levels) around the center.

Atomic Theory to the 19th Century:

The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.

Various atoms and molecules as depicted in John Dalton’s A New System of Chemical Philosophy (1808). Credit: Public Domain

It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800’s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton’s Atomic Theory.

This theory expanded on the laws of conservation of mass and definite proportions and came down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; and atoms can neither be created nor destroyed in chemical reactions, only the grouping ever changes.

Discovery of the Electron:

By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.

Through a series of experiments using cathode ray tubes (known as Crookes tubes), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles that were about 1,000 times smaller and 1,800 times lighter than hydrogen.

The Plum Pudding model of the atom proposed by J.J. Thomson. Credit: britannica.com

This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged "corpuscles" were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.

These corpuscles would later be named "electrons", based on the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. And from this, the Plum Pudding Model was born, so named because it closely resembled the English dessert that consists of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK's Philosophical Magazine, to wide acclaim.

The Rutherford Model:

Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”

In this experiment, Geiger and Marsden measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.

Diagram detailing the “gold foil experiment” conducted by Hans Geiger and Ernest Marsden. Credit: glogster.com

Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson's model. Since alpha particles are just helium nuclei (which are positively charged), this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that those particles that were not deflected passed through unimpeded meant that these concentrations of positive charge were separated by vast gulfs of empty space.

By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson's model of the atom. Instead, he proposed a model where the atom consisted of mostly empty space, with all its positive charge concentrated in its center in a very tiny volume, which was surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.

The Bohr Model:

Subsequent experiments by Antonius Van den Broek and Niels Bohr refined the model further. While Van den Broek suggested that the atomic number of an element is equal to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where a nucleus contains the atomic number of positive charge and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).

In addition, Bohr’s model refined certain elements of the Rutherford model that were problematic. These included the problems arising from classical mechanics, which predicted that electrons would release electromagnetic radiation while orbiting a nucleus. Because of the loss in energy, the electron should have rapidly spiraled inwards and collapsed into the nucleus. In short, this atomic model implied that all atoms were unstable.

Diagram of an electron dropping from a higher orbital to a lower one and emitting a photon. Image Credit: Wikicommons

The model also predicted that as electrons spiraled inward, their emission would rapidly increase in frequency as the orbit got smaller and faster. However, experiments with electric discharges in the late 19th century showed that atoms only emit electromagnetic energy at certain discrete frequencies.

Bohr resolved this by proposing that electrons orbit the nucleus in ways that are consistent with Planck's quantum theory of radiation. In this model, electrons can occupy only certain allowed orbitals with a specific energy. Furthermore, they can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation in the process.

These orbits were associated with definite energies, which he referred to as energy shells or energy levels. In other words, the energy of an electron inside an atom is not continuous, but "quantized". These levels are thus labeled with the quantum number n (n=1, 2, 3, etc.), and the wavelengths of light emitted in jumps between them could be determined using the Rydberg formula – a rule formulated in 1888 by Swedish physicist Johannes Rydberg to describe the wavelengths of spectral lines of many chemical elements.
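To make the Rydberg formula concrete, here is a minimal Python sketch that reproduces hydrogen's visible (Balmer) lines; the constant is the standard textbook value rather than anything taken from the article itself:

```python
RYDBERG = 1.097e7  # Rydberg constant for hydrogen, in 1/m

def wavelength_nm(n_lower: int, n_upper: int) -> float:
    """Wavelength of the photon emitted when an electron drops from n_upper to n_lower."""
    inverse_wavelength = RYDBERG * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inverse_wavelength  # convert metres to nanometres

# Balmer series: transitions that end on n = 2 give hydrogen's visible spectral lines
for n in range(3, 7):
    print(f"n = {n} -> 2 : {wavelength_nm(2, n):.1f} nm")   # ~656.3, 486.2, 434.1, 410.2 nm
```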

Influence of the Bohr Model:

While Bohr's model did prove to be groundbreaking in some respects – merging the Rydberg constant and Planck's constant (aka. quantum theory) with the Rutherford Model – it did suffer from some flaws which later experiments would illustrate. For starters, it assumed that electrons have both a known radius and orbit, something that Werner Heisenberg would disprove a decade later with his Uncertainty Principle.

In addition, while it was useful for predicting the behavior of electrons in hydrogen atoms, Bohr’s model was not particularly useful in predicting the spectra of larger atoms. In these cases, where atoms have multiple electrons, the energy levels were not consistent with what Bohr predicted. The model also didn’t work with neutral helium atoms.

The Bohr model also could not account for the Zeeman Effect, a phenomenon noted by Dutch physicist Pieter Zeeman in 1896, where spectral lines are split into two or more in the presence of an external, static magnetic field. Because of this, several refinements were attempted with Bohr's atomic model, but these too proved to be problematic.

In the end, this would lead to Bohr’s model being superseded by quantum theory – consistent with the work of Heisenberg and Erwin Schrodinger. Nevertheless, Bohr’s model remains useful as an instructional tool for introducing students to more modern theories – such as quantum mechanics and the valence shell atomic model.

It would also prove to be a major milestone in the development of the Standard Model of particle physics, a model characterized by “electron clouds“, elementary particles, and uncertainty.

We have written many interesting articles about atomic theory here at Universe Today. Here’s John Dalton’s Atomic Model, What is the Plum Pudding Model, What is the Electron Cloud Model?, Who Was Democritus?, and What are the Parts of the Atom?

Astronomy Cast also has some episodes on the subject: Episode 138: Quantum Mechanics, Episode 139: Energy Levels and Spectra, Episode 378: Rutherford and Atoms and Episode 392: The Standard Model – Intro.

Shape-shifting neutrinos earn physicists the 2015 Nobel

What do Albert Einstein, Niels Bohr, Paul Dirac, and Marie Curie have in common? They each won the Nobel prize in physics. And today, Takaaki Kajita and Arthur McDonald have joined their ranks, thanks to a pioneering turn-of-the-century discovery: in defiance of long-held predictions, neutrinos shape-shift between multiple identities, and therefore must have mass.

The neutrino, a slight whiff of a particle that is cast off in certain types of radioactive decay, nuclear reactions, and high-energy cosmic events, could be called… shy. Electrically neutral but enormously abundant, half the time a neutrino could pass through a lightyear of lead without interacting with a single other particle. According to the Standard Model of particle physics, it has a whopping mass of zero.

As you can imagine, neutrinos are notoriously difficult to detect.

But in 1956, scientists did exactly that. And just a few years later, a trio of physicists determined that neutrinos came in not just one, not two, but three different types, or flavors: the electron neutrino, the muon neutrino, and the tau neutrino.

The neutrino was first detected in 1956 by Clyde Cowan and Frederick Reines. In 1970, scientists captured the first image of a neutrino track in a hydrogen bubble chamber. Image: Argonne National Laboratory

But there was a problem. Sure, scientists had figured out how to detect neutrinos—but they weren’t detecting enough of them. In fact, the number of electron neutrinos arriving on Earth due to nuclear reactions in the Sun’s core was only one-third to one-half the number their calculations had predicted. What, scientists wondered, was happening to the rest?

Kajita, working at the Super-Kamiokande detector in Japan in 1998, and McDonald, working at the Sudbury Neutrino Observatory in Canada in 1999, determined that the electron neutrinos were not disappearing at all; rather, these particles were changing identity, spontaneously oscillating between the three flavor-types as they traveled through space.

Moreover, the researchers proclaimed, in order for neutrinos to make such transformations, they must have mass.

This is due to some quantum funny business having to do with the oscillations themselves. Grossly simplified, a massless particle, which always travels at the speed of light, does not experience time – Einstein's theory of special relativity says so. But change takes time. Any particle that oscillates between identities needs to experience time in order for its state to evolve from one flavor to the next.
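To see why oscillation implies mass, it helps to write down the standard two-flavor oscillation probability. The Python sketch below uses illustrative parameter values (assumptions for the example, not measurements discussed in this article), and is only an approximation to the full three-flavor treatment:

```python
import math

def oscillation_probability(theta_rad, dm2_ev2, length_km, energy_gev):
    """Two-flavor vacuum oscillation: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return math.sin(2 * theta_rad) ** 2 * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2

# Illustrative numbers: near-maximal mixing, a mass splitting of ~2.5e-3 eV^2,
# and a 1 GeV neutrino travelling 500 km.
print(oscillation_probability(math.radians(45), 2.5e-3, 500, 1.0))

# If the mass splitting is zero (i.e. the neutrinos are massless or degenerate),
# the probability is identically zero: no mass difference, no oscillation.
print(oscillation_probability(math.radians(45), 0.0, 500, 1.0))
```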

Neutrinos are produced in abundance during fusion reactions at the center of our Sun, and oscillate between three different types, or flavors, on their way to Earth. Image: Wikipedia Commons/kelvinsong

Kajita and McDonald’s work showed that neutrinos must have a mass, albeit a very small one. But neutrinos are abundant in the Universe, and even a small mass has a large effect on all sorts of cosmic phenomena, from solar nuclear physics, where neutrinos are produced en masse, to the large-scale evolution of the cosmos, where neutrinos are ubiquitous.

The neutrino, no longer massless, is now considered to play a much larger role in these processes than scientists had originally believed.

What is more, the very existence of a massive neutrino undermines the theoretical basis of the Standard Model. In fact, Kajita’s and McDonald’s discovery provided some of the first evidence that the Standard Model might not be as airtight as had been previously believed, nudging scientists ever more in the direction of so-called “new physics.”

This is not the first time physicists have been awarded a Nobel prize for research into the nature of neutrinos. In 1988, Leon Lederman, Melvin Schwartz, and Jack Steinberger were awarded the prize for their discovery that neutrinos come in more than one flavor; in 1995, Frederick Reines won a Nobel for his detection of the neutrino along with Clyde Cowan; and in 2002, a Nobel was awarded to Raymond Davis Jr., the oldest person ever to receive the prize in physics, and Masatoshi Koshiba for their detection of cosmic neutrinos.

Kajita, of the University of Tokyo, and McDonald, of Queen’s University in Canada, were awarded the prestigious prize this morning at a news conference in Stockholm.

A Universe of 10 Dimensions

When someone mentions “different dimensions,” we tend to think of things like parallel universes – alternate realities that exist parallel to our own, but where things work or happened differently. However, the reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization.

To break it down, dimensions are simply the different facets of what we perceive to be reality. We are immediately aware of the three dimensions that surround us on a daily basis – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).

Beyond these three visible dimensions, scientists believe that there may be many more. In fact, the theoretical framework of Superstring Theory posits that the universe exists in ten different dimensions. These different aspects are what govern the universe, the fundamental forces of nature, and all the elementary particles contained within.

The first dimension, as already noted, is that which gives an object its length (aka. the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities. Add to it a second dimension, the y-axis (or height), and you get an object that becomes a 2-dimensional shape (like a square).

The third dimension involves depth (the z-axis), and gives all objects a sense of volume and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume. Beyond these three lie the seven dimensions which are not immediately apparent to us, but which can still be perceived as having a direct effect on the universe and reality as we know it.

The timeline of the universe, beginning with the Big Bang. According to String Theory, this is just one of many possible worlds. Credit: NASA

Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object's position in time is essential to plotting its position in the universe. The other dimensions are where the deeper possibilities come into play, and explaining their interaction with the others is where things get particularly tricky for physicists.

According to Superstring Theory, the fifth and sixth dimensions are where the notion of possible worlds arises. If we could see on through to the fifth dimension, we would see a world slightly different from our own that would give us a means of measuring the similarity and differences between our world and other possible ones.

In the sixth, we would see a plane of possible worlds, where we could compare and position all the possible universes that start with the same initial conditions as this one (i.e. the Big Bang). In theory, if you could master the fifth and sixth dimension, you could travel back in time or go to different futures.

In the seventh dimension, you have access to the possible worlds that start with different initial conditions. Whereas in the fifth and sixth, the initial conditions were the same and subsequent actions were different, here, everything is different from the very beginning of time. The eighth dimension again gives us a plane of such possible universe histories, each of which begins with different initial conditions and branches out infinitely (hence why they are called infinities).

In the ninth dimension, we can compare all the possible universe histories, starting with all the different possible laws of physics and initial conditions. In the tenth and final dimension, we arrive at the point in which everything possible and imaginable is covered. Beyond this, nothing can be imagined by us lowly mortals, which makes it the natural limitation of what we can conceive in terms of dimensions.

The existence of extra dimensions is explained using the Calabi-Yau manifold, in which all the intrinsic properties of elementary particles are hidden. Credit: A Hanson.

The existence of these additional six dimensions, which we cannot perceive, is necessary for String Theory in order for there to be consistency in nature. The fact that we can perceive only four dimensions can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, on which all known particles would be restricted while gravity alone propagates into the extra dimensions (aka. brane theory).

If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold (shown above). While imperceptible as far as our senses are concerned, they would have governed the formation of the universe from the very beginning. Hence, scientists believe that by peering back through time, using telescopes to spot light from the early universe (i.e. billions of years ago), they might be able to see how the existence of these additional dimensions could have influenced the evolution of the cosmos.

Much like other candidates for a grand unifying theory – aka the Theory of Everything (TOE) – the belief that the universe is made up of ten dimensions (or more, depending on which model of string theory you use) is an attempt to reconcile the standard model of particle physics with the existence of gravity. In short, it is an attempt to explain how all known forces within our universe interact, and how other possible universes themselves might work.

For additional information, here’s an article on Universe Today about parallel universes, and another on a parallel universe scientists thought they found that doesn’t actually exist.

There are also some other great resources online. There is a great video that explains the ten dimensions in detail. You can also look at the PBS web site for the TV show The Elegant Universe. It has a great page on the ten dimensions.

You can also listen to Astronomy Cast. You might find episode 137 The Large Scale Structure of the Universe pretty interesting.

Source: PBS

 

Macro View Makes Dark Matter Look Even Stranger

We know dark matter exists. We know this because, without it and dark energy, our Universe would be missing 95.4% of its mass-energy content. What's more, scientists would be hard pressed to explain what accounts for the gravitational effects they routinely see at work in the cosmos.

For years, scientists have sought to prove its existence by smashing protons together in the Large Hadron Collider. Unfortunately, these efforts have not provided any concrete evidence.

Hence, it might be time to rethink dark matter. And physicists David M. Jacobs, Glenn D. Starkman, and Bryan Lynn of Case Western Reserve University have a theory that does just that, even if it does sound a bit strange.

In their new study, they argue that instead of dark matter consisting of elementary particles that are invisible and do not emit or absorb light and electromagnetic radiation, it takes the form of chunks of matter that vary widely in terms of mass and size.

As it stands, there are many leading candidates for what dark matter could be, which range from Weakly-Interacting Massive Particles (aka WIMPs) to axions. These candidates are attractive, particularly WIMPs, because the existence of such particles might help confirm supersymmetry theory – which in turn could help lead to a working Theory of Everything (ToE).

According to supersymmetry, dark-matter particles known as neutralinos (aka WIMPs) annihilate each other, creating a cascade of particles and radiation. Credit: Sky & Telescope / Gregg Dinderman.

But so far, no evidence has been obtained that definitively proves the existence of either. Beyond being necessary in order for General Relativity to work, this invisible mass seems content to remain invisible to detection.

According to Jacobs, Starkman, and Lynn, this could indicate that dark matter exists within the realm of normal matter. In particular, they consider the possibility that dark matter consists of macroscopic objects – which they dub "Macros" – whose masses and interaction cross-sections can be characterized in units of grams and square centimeters, respectively.

Macros are not only significantly larger than WIMPs and axions, but could potentially be assembled out of particles in the Standard Model of particle physics – such as quarks and leptons from the early universe – instead of requiring new physics to explain their existence. WIMPs and axions remain possible candidates for dark matter, but Jacobs and Starkman argue that there's a reason to search elsewhere.

“The possibility that dark matter could be macroscopic and even emerge from the Standard Model is an old but exciting one,” Starkman told Universe Today, via email. “It is the most economical possibility, and in the face of our failure so far to find dark matter candidates in our dark matter detectors, or to make them in our accelerators, it is one that deserves our renewed attention.”

After eliminating most ordinary matter – including failed Jupiters, white dwarfs, neutron stars, stellar black holes, the black holes in centers of galaxies, and neutrinos with a lot of mass – as possible candidates, physicists turned their focus on the exotics.

Ongoing experiments at the Large Hadron Collider have so far failed to produce evidence of WIMPs. Credit: CERN/LHC/GridPP

Nevertheless, matter that was somewhere in between ordinary and exotic – relatives of neutron stars or large nuclei – was left on the table, Starkman said. “We say relatives because they probably have a considerable admixture of strange quarks, which are made in accelerators and ordinarily have extremely short lives,” he said.

Although strange quarks are highly unstable, Starkman points out that neutrons are also highly unstable. But in helium, bound with stable protons, neutrons remain stable.

“That opens the possibility that stable strange nuclear matter was made in the early Universe and dark matter is nothing more than chunks of strange nuclear matter or other bound states of quarks, or of baryons, which are themselves made of quarks,” said Starkman.

Such dark matter would fit the Standard Model.

This is perhaps the most appealing aspect of the Macros theory: the notion that dark matter, which our cosmological model of the Universe depends upon, can be proven without the need for additional particles.

Still, the idea that the universe is filled with a chunky, invisible mass rather than countless invisible particles does make the universe seem a bit stranger, doesn’t it?

Further Reading: Case Western

Has the Cosmology Standard Model become a Rube Goldberg Device?

This week at the Royal Astronomical Society's National Astronomy Meeting in the UK, physicists are challenging the evidence for the recent BICEP2 results regarding the inflation period of the Universe, announced just 90 days ago. New research is casting doubt on the inclusion of inflation theory in the Standard Cosmological Model for understanding the forces of nature, the nature of elementary particles and the present state of the known Universe.

Back on March 17, 2014, it seemed the world was offered a glimpse of an ultimate order from eons ago … actually from the beginning of time. BICEP2, the single-purpose machine at the South Pole, delivered an image that, after analysis and subtraction of the estimated foreground signal from the Milky Way, led its researchers to conclude that they had found the earliest remnant from the birth of the Universe, a signature in ancient light that supported the theory of Inflation.

BICEP2 Telescope at twilight at the South Pole, Antarctica (Credit: Steffen Richter, Harvard University)

Thirty years ago, the Inflation theory was conceived by physicists Alan Guth and Andrei Linde. Guth, Linde and others realized that a sudden expansion of the Universe at only 1/1000000000000000000000000000000000th of a second after the Big Bang could solve some puzzling mysteries of the Cosmos. Inflation could explain the uniformity of the cosmic background radiation. While images such as those from the COBE satellite show a blotchy distribution of radiation, in actuality, these images accentuate extremely small variations in the background radiation, remnants from the Big Bang, variations on the order of 1/100,000th of the background level.

Note that the duration of the Universe's proposed Inflationary period immediately after the Big Bang would today permit light to travel only 1/1000000000000000th of the diameter of a hydrogen atom. The Universe during this first moment of expansion was encapsulated in a volume far smaller than a single atom.

Emotions ran very high when the BICEP2 team announced their findings on March 17 of this year. The inflation event that the background radiation data supported is described as a supercooling of the Cosmos; however, there were physicists who simply remained cool and contrarian toward the theory. Noted British physicist Sir Roger Penrose was one who remained underwhelmed, stating that the swirling polarization pattern that remained in the processed data from BICEP2 could be explained by the interaction of dust, light and magnetic fields in our own neighborhood, the Milky Way.

Illustration of the ESA Planck Telescope in Earth orbit (Credit: ESA)

Now, new observations from another detector, one aboard the Planck satellite, are revealing that the contribution of foreground radiation from local sources – the dust in the Milky Way – appears to have been underestimated by the BICEP2 team. Not all of the evidence has been laid out yet, but the researchers are now expressing reservations. At the same time, this does not dismiss the Inflation Theory; it means that more observations are needed, and probably with greater sensitivity.

So why ask the question, are physicists constructing a Rube Goldberg device?

Our present understanding of the Universe stands upon what is called "the Standard Model" of Cosmology. At the Royal Astronomical Society meeting this week, the discussions underway could be revealing a Standard Model possibly in a state of collapse, or simply one needing new gadgets and mechanisms to remain the best theory of everything.

Also this week, new data further supports the discovery of the Higgs Boson by the Large Hadron Collider in 2012, the elementary particle whose existence explains the mass of fundamental particles in nature and that supports the existence of the Higgs Field, vital to the robustness of the Standard Model. However, the Higgs-related data is also revealing that if the inflationary period of the Universe did take place, then, taken together with the Standard Model, one can conclude that the Universe should have collapsed upon itself and our very existence today would not be possible.

A Rube Goldberg toothpaste dispenser, perhaps analogous to the current state of the Standard Model (Credit: R. Goldberg)

Dr. Brian Greene, a researcher in the field of superstring theory and M-theory, and others such as Dr. Stephen Hawking, are quick to state that the Standard Model is an intermediary step towards a Grand Unified Theory of everything, the Universe. The contortion of the Standard Model into a sort of Rube Goldberg device can be explained by the relentless accumulation of ever more acute and diverse observations at cosmic and quantum scales.

Discussions at the Royal Astronomical Society meeting are casting more doubt upon the inflation theory, which just 90 days ago appeared so well supported by BICEP2 – data derived from truly remarkable cutting-edge electronics developed by NASA and researchers at the California Institute of Technology. The trials and tribulations of these great theories to explain everything harken back to the period just prior to Einstein's Miracle Year, 1905. Fragmented theories explaining the forces of nature separately were present, but the accumulation of observational data had also reached a flash point.

Today, observations from BICEP2, NASA and ESA great space observatories, sensitive instruments buried miles underground, and carefully contrived quantum experiments in laboratories are making the Standard Model more stressed in explaining everything – the same model so well supported by the Higgs Boson discovery just two years ago. Cosmologists concede that we may never have a complete, proven theory of everything, one that is elegant; however, the challenges upon the Standard Model and inflation will surely embolden younger theorists to redouble their efforts in other theoretical work.

For further reading:
RAS NAM press release: Should the Higgs Boson Have Caused our Universe To Collapse?
We’ve Discovered Inflation!: Now What?
Cosmologists Cast Doubt on Inflation Evidence
Are the BICEP2 Results Invalid? Probably Not

Proton Mass

The mass of the proton, or proton mass, is 1.672 621 637(83) x 10^-27 kg, or 938.272013(23) MeV/c^2, or 1.007 276 466 77(10) u (that's unified atomic mass units).

The most accurate measurements of the mass of the proton come from experiments involving Penning traps, which are used to study the properties of stable charged particles. Basically, the particle under study is confined by a combination of magnetic and electric fields in an evacuated chamber, and its velocity reduced by a variety of techniques, such as laser cooling. Once trapped, the mass-to-charge ratio of a proton, deuteron (nucleus of a deuterium atom), singly charged hydrogen molecule, etc can be measured to high precision, and from these the mass of the proton estimated.
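Here is a simplified Python sketch of the idea behind a Penning-trap measurement: a charged particle in a magnetic field circles at a cyclotron frequency that depends on its mass-to-charge ratio. The field strength and frequency below are illustrative numbers, not values from any real experiment; actual measurements compare frequency ratios between different ions so that the field itself largely cancels out.

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs
B_FIELD = 5.0                        # tesla (illustrative trap field)

def mass_from_cyclotron_frequency(freq_hz, charge_c=ELEMENTARY_CHARGE, b_tesla=B_FIELD):
    """Infer a particle's mass (kg) from its cyclotron frequency, f_c = qB / (2*pi*m)."""
    return charge_c * b_tesla / (2 * math.pi * freq_hz)

# A proton in a 5 T field circles at roughly 76 MHz; feeding that frequency back in
# recovers a mass of about 1.67e-27 kg, consistent with the value quoted above.
print(mass_from_cyclotron_frequency(76.2e6))
```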

It would be nice if the experimentally observed mass of a proton were the same as that derived from theory. But how to work out what the mass of a proton should be, from theory?

The theory is quantum chromodynamics, or QCD for short, which is the strong-force counterpart to quantum electrodynamics (QED). As the proton is made up of three quarks – two up and one down – its mass is the sum of the masses of those quarks plus the energy that binds them together. This is a very difficult calculation to perform, in part because there are so many ways the quarks and gluons in a proton interact, but published results agree with experiment to within a percent or two.
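To give a sense of scale for that binding-energy contribution, here is a rough illustration (not a QCD calculation) using approximate current-quark masses from the Particle Data Group, which are assumed values not quoted elsewhere in this article:

```python
UP_MEV, DOWN_MEV = 2.2, 4.7   # approximate current-quark masses, in MeV/c^2
PROTON_MEV = 938.272          # proton mass, in MeV/c^2

quark_sum = 2 * UP_MEV + DOWN_MEV   # a proton is two up quarks plus one down quark
print(f"sum of quark masses: {quark_sum:.1f} MeV")               # ~9.1 MeV
print(f"fraction of proton mass: {quark_sum / PROTON_MEV:.1%}")  # ~1%; the rest is interaction energy
```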

More fundamentally, the quarks themselves have mass because of the Higgs boson … at least, they do according to the highly successful Standard Model of particle physics. The only trouble was that, at the time this was written, the Higgs boson had yet to be detected (the Large Hadron Collider was built with finding the Higgs boson as a key objective; a particle consistent with it was finally announced in 2012).

Want to know the “official” value? Check out CODATA. And how does the proton mass compare with the mass of the anti-proton? Click here to find out! And how to determine the proton mass from first (theoretical) principles? This article from CNRS explains how.

More to explore, with Universe Today stories: New Estimate for the Mass of the Higgs Boson, Are the Laws of Nature the Same Everywhere in the Universe?, and Forget Neutron Stars, Quark Stars Might be the Densest Bodies in the Universe are three good ones to get you started.

Astronomy Cast episodes The Strong and Weak Nuclear Forces, The Large Hadron Collider and the Search for the Higgs Boson, and Inside the Atom will give you more insight into proton mass; check them out!

Sources:
Newton Ask a Scientist
Wikipedia