CERN Declares War On The Standard Model

The LHCb collaboration was launched to explore the events that followed the Big Bang. Credit: CERN

Ever since the discovery of the Higgs Boson in 2012, the Large Hadron Collider has been dedicated to searching for evidence of physics that goes beyond the Standard Model. To this end, the Large Hadron Collider beauty experiment (LHCb) was established in 1995, specifically for the purpose of exploring what happened after the Big Bang that allowed matter to survive and create the Universe as we know it.

Since that time, the LHCb has been doing some rather amazing things. This includes discovering five new particles, uncovering evidence of a new manifestation of matter-antimatter asymmetry, and (most recently) obtaining unusual results when monitoring the decay of B mesons. These findings, which CERN announced in a recent press release, could be an indication of new physics that is not part of the Standard Model.

In this latest study, the LHCb collaboration team noted how the decay of B0 mesons resulted in the production of an excited kaon and a pair of electrons or muons. Muons, for the record, are subatomic particles that are 200 times more massive than electrons, but whose interactions are believed to be the same as those of electrons (as far as the Standard Model is concerned).

The LHCb collaboration team. Credit: lhcb-public.web.cern.ch

This is what is known as “lepton universality”, which predicts not only that electrons and muons behave the same way, but also that they should be produced with the same probability – with some constraints arising from their differences in mass. However, in testing the decay of B0 mesons, the team found that the decay process produced muons less frequently. These results were collected during Run 1 of the LHC, which ran from 2009 to 2013.

The results of these decay tests were presented on Tuesday, April 18th, at a CERN seminar, where members of the LHCb collaboration team shared their latest findings. As they indicated during the course of the seminar, these findings are significant in that they appear to confirm results obtained by the LHCb team during previous decay studies.

This is certainly exciting news, as it hints at the possibility that new physics is being observed. With the confirmation of the Standard Model (made possible by the discovery of the Higgs boson in 2012), investigating theories that go beyond it (e.g. Supersymmetry) has been a major goal of the LHC. And with the collider’s upgrades completed in 2015, this search has been one of the chief aims of Run 2 (which will last until 2018).

A typical LHCb event fully reconstructed. Particles identified as pions, kaon, etc. are shown in different colours. Credit: LHCb collaboration

Naturally, the LHCb team indicated that further studies will be needed before any conclusions can be drawn. For one, the discrepancy they noted between the production of muons and electrons has a statistical significance of only 2.2 to 2.5 sigma – i.e. there is still a non-negligible probability (p-value) that it is a statistical fluke. To put that in perspective, the first detection of the Higgs Boson occurred at a level of 5 sigma.
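To get a sense of what those sigma levels mean, here is a minimal sketch in Python (assuming the usual one-tailed Gaussian convention for discovery significance) that converts sigma values into rough probabilities of a chance fluctuation:

```python
import math

def sigma_to_pvalue(sigma):
    """One-tailed p-value for a deviation of `sigma` standard deviations,
    assuming a Gaussian distribution (the usual convention for quoting
    discovery significance in particle physics)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (2.2, 2.5, 5.0):
    print(f"{s} sigma -> p ~ {sigma_to_pvalue(s):.2e}")

# Roughly: 2.2 sigma -> p ~ 1.4e-2, 2.5 sigma -> p ~ 6.2e-3,
# 5 sigma  -> p ~ 2.9e-7 (about 1 chance in 3.5 million of a fluke)
```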

In addition, these results are inconsistent with previous measurements which indicated that there is indeed symmetry between electrons and muons. As a result, more decay tests will have to be conducted and more data collected before the LHCb collaboration team can say definitively whether this was a sign of new particles, or merely a statistical fluctuation in their data.

The results of this study will soon be published in an LHCb research paper. And for more information, check out the PDF version of the seminar.

Further Reading: CERN, LHCb

What are Leptons?

CERN visualization showing two electrons (green) and two muons (red lines) resulting from a collision between two Z bosons. Credit: CERN

During the 19th and 20th centuries, physicists began to probe deep into the nature of matter and energy. In so doing, they quickly realized that the rules which govern them become increasingly blurry the deeper one goes. Whereas the predominant theory used to be that all matter was made up of indivisible atoms, scientists began to realize that atoms are themselves composed of even smaller particles.

From these investigations, the Standard Model of Particle Physics was born. According to this model, all matter in the Universe is composed of two kinds of particles: hadrons – from which the Large Hadron Collider (LHC) gets its name – and leptons. Whereas hadrons are composed of other elementary particles (quarks, anti-quarks, etc.), leptons are elementary particles that exist on their own.

Definition:

The word lepton comes from the Greek leptos, which means “small”, “fine”, or “thin”. The first recorded use of the word was by physicist Leon Rosenfeld in his book Nuclear Forces (1948). In the book, he attributed the use of the word to a suggestion made by Danish chemist and physicist Prof. Christian Moller.

The Standard Model of Particle Physics, showing all known elementary particles. Credit: Wikipedia Commons/MissMJ/PBS NOVA/Fermilab/Particle Data Group
The term was chosen to refer to particles of small mass, since the only other known leptons in Rosenfeld’s time were muons. These elementary particles are over 200 times more massive than electrons, but have only about one-ninth the mass of a proton. Along with quarks, leptons are the basic building blocks of matter, and are therefore seen as “elementary particles”.

Types of Leptons:

According to the Standard Model, there are six different types of leptons. These include the electron, the muon, and the tau particle, as well as their associated neutrinos (i.e. the electron neutrino, muon neutrino, and tau neutrino). The charged leptons have a negative charge and a distinct mass, whereas their neutrinos carry no electric charge.

Electrons are the lightest, with a mass of 0.000511 gigaelectronvolts (GeV), while muons have a mass of 0.1057 GeV and tau particles (the heaviest) have a mass of 1.777 GeV. The different varieties of the elementary particles are commonly called “flavors”. While each of the three lepton flavors is different and distinct (in terms of their interactions with other particles), they are not immutable.
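For a quick sanity check of the mass hierarchy described above, the ratios can be computed directly from the quoted (rounded) values; this is just an illustrative sketch:

```python
# Approximate lepton masses in GeV (rounded, as quoted above)
masses_gev = {
    "electron": 0.000511,
    "muon": 0.1057,
    "tau": 1.777,
}

m_e = masses_gev["electron"]
for name, m in masses_gev.items():
    print(f"{name}: {m} GeV, about {m / m_e:.0f} times the electron mass")

# muon / electron ~ 207  (the "roughly 200 times more massive" figure)
# tau  / electron ~ 3477
```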

A neutrino can change its flavor, a process which is known as “neutrino flavor oscillation”. This can take a number of forms, which include solar neutrino, atmospheric neutrino, nuclear reactor, or beam oscillations. In all observed cases, the oscillations were confirmed by what appeared to be a deficit in the number of neutrinos of the expected flavor being detected.

Muons, a type of lepton, shown being produced by the Large Hadron Collider. Credit: CERN

One observed case involves muon neutrinos changing their flavor to become electron neutrinos or tau neutrinos, depending on the circumstances. In addition, all three leptons and their neutrinos have an associated antiparticle (antilepton).

For each pairing, the antilepton has an identical mass, but all of the other properties are reversed. These pairings consist of the electron/positron, muon/antimuon, tau/antitau, electron neutrino/electron antineutrino, muon neutrino/muon antineutrino, and tau neutrino/tau antineutrino.

The present Standard Model assumes that there are no more than three types (aka. “generations”) of leptons with their associated neutrinos in existence. This accords with experimental evidence that attempts to model the process of nucleosynthesis after the Big Bang, where the existence of more than three leptons would have affected the abundance of helium in the early Universe.

Properties:

All charged leptons possess a negative charge, while their neutrinos are electrically neutral. Leptons also possess an intrinsic angular momentum in the form of their spin, which means that the charged leptons – electrons, muons, and taus – generate magnetic fields. Leptons are able to interact with other matter only through the weak and electromagnetic forces (and gravity). Ultimately, their charge determines the strength of these interactions, as well as the strength of their electric field and how they react to external electrical or magnetic fields.

None are capable of interacting with matter via the strong force, however. In the Standard Model, each lepton starts out with no intrinsic mass. Charged leptons obtain an effective mass through interactions with the Higgs field, while neutrinos either remain massless or have only very small masses.

History of Study:

The first lepton to be identified was the electron, which was discovered by British physicist J.J. Thomson and his colleagues in 1897 using a series of cathode ray tube experiments. The next discoveries came during the 1930s, which would lead to the creation of a new classification for weakly-interacting particles that were similar to electrons.

The first step was taken by Austrian-Swiss physicist Wolfgang Pauli in 1930, who proposed the existence of the electron neutrino in order to resolve the ways in which beta decay appeared to violate the conservation of energy, momentum, and angular momentum.

The positron and muon were discovered by Carl D. Anderson in 1932 and 1936, respectively. Due to its mass, the muon was initially mistaken for a meson. But due to its behavior (which resembled that of an electron) and the fact that it did not undergo strong interactions, the muon was reclassified. Along with the electron and the electron neutrino, it became part of a new group of particles known as “leptons”.

In 1962, a team of American physicists – consisting of Leon M. Lederman, Melvin Schwartz, and Jack Steinberger – were able to detect interactions of the muon neutrino, thus showing that more than one type of neutrino existed. At the same time, theoretical physicists postulated the existence of other flavors of neutrinos, which would eventually be confirmed experimentally.

The tau particle followed in the 1970s, thanks to experiments conducted by Nobel-Prize winning physicist Martin Lewis Perl and his colleagues at the SLAC National Accelerator Laboratory. Evidence of its associated neutrino followed thanks to the study of tau decay, which showed missing energy and momentum analogous to the missing energy and momentum observed in beta decay.

In 2000, the tau neutrino was directly observed thanks to the Direct Observation of the NU Tau (DONUT) experiment at Fermilab. This would be the last particle of the Standard Model to be observed until 2012, when CERN announced that it had detected a particle that was likely the long-sought-after Higgs Boson.

Today, there are some particle physicists who believe that there are leptons still waiting to be found. These “fourth generation” particles, if they are indeed real, would exist beyond the Standard Model of particle physics, and would likely interact with matter in even more exotic ways.

We have written many interesting articles about Leptons and subatomic particles here at Universe Today. Here’s What are Subatomic Particles?, What are Baryons?, First Collisions of the LHC, Two New Subatomic Particles Found, and Physicists Maybe, Just Maybe, Confirm the Possible Discovery of 5th Force of Nature.

For more information, SLAC’s Virtual Visitor Center has a good introduction to Leptons and be sure to check out the Particle Data Group (PDG) Review of Particle Physics.

Astronomy Cast also has episodes on the topic. Here’s Episode 106: The Search for the Theory of Everything, and Episode 393: The Standard Model – Leptons & Quarks.


How Does Light Travel?

Light moves at different wavelengths, represented here by the different colors seen in a prism. Credit: NASA and ESA

Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists went back and forth between the notion that light was a particle and the notion that it was a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.

These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can apparently be without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?

Theory of Light to the 19th Century:

During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.

In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.

The first edition of Newton’s Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light (1704). Credit: Public Domain.

Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, and its principles were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light“. According to Newton, the principles of light could be summed up as follows:

  • Every source of light emits large numbers of tiny particles known as corpuscles in a medium surrounding the source.
  • These corpuscles are perfectly elastic, rigid, and weightless.

This represented a challenge to “wave theory”, which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his Traité de la lumière (“Treatise on Light“). In it, he argued a revised version of Descartes’ views, in which the speed of light is finite and light is propagated by means of spherical waves emitted along the wave front.

Double-Slit Experiment:

By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.

The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.

According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.

The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However, subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead scientists to consider yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.

Electromagnetism and Special Relativity:

By the 19th century, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used timings of eclipses of Jupiter’s moon Io to demonstrate that light travels at a finite speed (rather than instantaneously).

Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the meeting of the American Association for the Advancement of Science on Dec. 28th, 1934. Credit: AP Photo

By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. Using measured electric and magnetic constants, he was able to calculate the speed of light in a vacuum (represented as c).
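The heart of Maxwell's result is that the wave speed follows from two laboratory-measured constants, the vacuum permeability and permittivity, via c = 1/sqrt(mu0 * eps0). A minimal sketch of that calculation using modern values:

```python
import math

mu_0 = 4 * math.pi * 1e-7       # vacuum permeability, N/A^2 (classical defined value)
epsilon_0 = 8.8541878128e-12    # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:,.0f} m/s")      # ~299,792,458 m/s
```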

In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light is the same in all inertial reference frames, regardless of the motion of the light source or of the observer.

Exploring the consequences of this postulate is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light while accounting for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.

For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
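Those effects are quantified by the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2). The short sketch below (with a few assumed example speeds) shows how negligible the factor is at ordinary velocities and how steeply it grows as v approaches c:

```python
import math

c = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """Time-dilation / length-contraction factor for speed v (in m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for fraction in (0.01, 0.5, 0.9, 0.99, 0.999):
    v = fraction * c
    print(f"v = {fraction:>5.3f} c  ->  gamma = {lorentz_factor(v):.4f}")

# At 0.01 c, gamma ~ 1.00005 (clocks barely slow down);
# at 0.999 c, gamma ~ 22.4 (time runs over twenty times slower).
```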

Einstein and the Photon:

In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein based his idea on Planck’s earlier work with “black bodies” – materials that absorb electromagnetic energy rather than reflecting it (as “white bodies” do).

At the time, Einstein’s photoelectric effect was an attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had happened only eight years previously (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).

At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light which he named “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
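In this picture, each photon carries an energy E = hf (equivalently E = hc/lambda), and an electron is ejected only if that energy exceeds the surface's work function. Here is a minimal sketch; the work function value is an assumed, illustrative number, not tied to any particular experiment:

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return h * c / wavelength_m / eV

work_function = 2.3  # eV, an assumed value for a metal surface (illustrative only)

for name, wavelength in [("red light", 700e-9), ("violet light", 400e-9)]:
    energy = photon_energy_ev(wavelength)
    outcome = "ejects an electron" if energy > work_function else "does not eject an electron"
    print(f"{name} ({wavelength * 1e9:.0f} nm): {energy:.2f} eV -> {outcome}")
```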

Wave-Particle Duality:

Subsequent theories on the behavior of light would further refine this idea. These included French physicist Louis-Victor de Broglie proposing that light (and matter) carries a wavelength related to its momentum, Heisenberg’s “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum, and vice versa), and Schrödinger’s proposal that all particles are described by a “wave function”.
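De Broglie's relation, lambda = h/p, assigns a wavelength to anything carrying momentum. A brief sketch, using an assumed (and deliberately non-relativistic) electron speed purely for illustration:

```python
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

v = 2.0e6  # assumed electron speed in m/s (non-relativistic, for illustration)
wavelength = h / (m_e * v)  # de Broglie wavelength, lambda = h / p

print(f"lambda = {wavelength:.3e} m")  # ~3.6e-10 m, roughly the size of an atom
```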

In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. Upon measurement, the wave function randomly “collapses”, or rather “decoheres”, to a sharply peaked function at some location. This was illustrated in Schrödinger’s famous thought experiment involving a closed box, a cat, and a vial of poison (known as the “Schrödinger’s Cat” paradox).

Artist’s impression of two photons travelling at different wavelengths, resulting in different- colored light. Credit: NASA/Sonoma State University/Aurore Simonnet

According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for massless particles, no solution exists. Further experiments involving the Double-Slit Experiment confirmed the dual nature of photons, in which measuring devices were incorporated to observe the photons as they passed through the slits.

When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. In other words, with an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.

Quantum Field Theory (QFT) was developed in the following decades to resolve much of the ambiguity around wave-particle duality. In time, this theory was shown to apply to other particles and fundamental forces of interaction (such as the weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that are force carriers and have no mass.

So how does light travel? Basically, it travels at incredible speed (299,792,458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through media (like air and water) as well as through the vacuum of space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).

What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.

For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!

We have written many articles about light here at Universe Today. For example, here’s How Fast is the Speed of Light?, How Far is a Light Year?, What is Einstein’s Theory of Relativity?

If you’d like more info on light, check out these articles from The Physics Hypertextbook and NASA’s Mission Science page.

We’ve also recorded an entire episode of Astronomy Cast all about Interstellar Travel. Listen here, Episode 145: Interstellar Travel.

What Is The Electron Cloud Model?

3d model of electron orbitals, based on the electron cloud model. Credit: Wikipedia Commons/Patricia.fidi

The early 20th century was a very auspicious time for the sciences. In addition to Ernest Rutherford and Niels Bohr giving birth to the modern model of the atom, it was also a period of breakthroughs in the field of quantum mechanics. Thanks to ongoing studies on the behavior of electrons, scientists began to propose theories whereby these elementary particles behaved in ways that defied classical, Newtonian physics.

One such example is the Electron Cloud Model proposed by Erwin Schrodinger. Thanks to this model, electrons were no longer depicted as particles moving around a central nucleus in a fixed orbit. Instead, Schrodinger proposed a model whereby scientists could only make educated guesses as to the positions of electrons. Hence, their locations could only be described as being part of a ‘cloud’ around the nucleus where the electrons are likely to be found.

Atomic Physics To The 20th Century:

The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.

Various atoms and molecules as depicted in John Dalton’s A New System of Chemical Philosophy (1808). Credit: Public Domain

It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton’s Atomic Theory.

This theory expanded on the laws of conservation of mass and definite proportions and came down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; atoms can neither be created nor destroyed in chemical reactions, only the grouping ever changes.

Discovery Of The Electron:

By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.

Through a series of experiments using cathode ray tubes (known as the Crookes’ Tube), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles that were 1,000 times smaller and 1,800 times lighter than hydrogen.

The Plum Pudding model of the atom proposed by J.J. Thomson. Credit: britannica.com

This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged “corpuscles” were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.

These corpuscles would later be named “electrons”, based on the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. And from this, the Plum Pudding Model was born, so named because it closely resembled the English dessert that consists of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK’s Philosophical Magazine, to wide acclaim.

Development Of The Standard Model:

Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”

In this experiment, Geiger and Marsden measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.

A depiction of the atomic structure of the helium atom. Credit: Creative Commons

Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson’s model. Since alpha particles are just helium nuclei (which are positively charged) this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that those particles that were not deflected passed through unimpeded meant that these positive spaces were separated by vast gulfs of empty space.

By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson’s model of the atom. Instead, he proposed a model where the atom consisted of mostly empty space, with all its positive charge concentrated in its center in a very tiny volume, that was surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.

Subsequent experiments by Antonius van den Broek and Niels Bohr refined the model further. While van den Broek suggested that the atomic number of an element corresponds to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where a nucleus contains the atomic number of positive charge and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).

The Electron Cloud Model:

During the 1920s, Austrian physicist Erwin Schrodinger became fascinated by the theories of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and other physicists. During this time, he also became involved in the fields of atomic theory and spectra, researching at the University of Zurich and then the Friedrich Wilhelm University in Berlin (where he succeeded Planck in 1927).

Artist’s concept of the Electron Cloud model, which described the likely location of electron orbitals over time. Credit: Pearson Prentice Hall

In 1926, Schrödinger tackled the issue of wave functions and electrons in a series of papers. In addition to describing what would come to be known as the Schrodinger equation – a partial differential equation that describes how the quantum state of a quantum system changes with time – he also used mathematical equations to describe the likelihood of finding an electron in a certain position.

This became the basis of what would come to be known as the Electron Cloud (or quantum mechanical) Model, as well as the Schrodinger equation. Based on quantum theory, which states that all matter has properties associated with a wave function, the Electron Cloud Model differs from the Bohr Model in that it does not define the exact path of an electron.

Instead, it predicts the likely position of the electron based on a function of probabilities. The probability function basically describes a cloud-like region where the electron is likely to be found, hence the name. Where the cloud is densest, the probability of finding the electron is greatest; where the cloud is least dense, the electron is least likely to be found.

These dense regions are known as “electron orbitals”, since they are the most likely location where an orbiting electron will be found. Extending this “cloud” model to a 3-dimensional space, we see a barbell or flower-shaped atom (as in the image at the top). Here, the branching-out regions are the ones where we are most likely to find the electrons.
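For the simplest case, the hydrogen atom's ground (1s) state, the cloud can be written down exactly, and its densest shell sits at the Bohr radius. A small sketch that locates that most probable radius numerically:

```python
import math

a0 = 5.29177e-11  # Bohr radius, in meters

def radial_probability(r):
    """Radial probability density of the hydrogen 1s orbital:
    P(r) = 4 * r^2 * exp(-2r/a0) / a0^3."""
    return 4.0 * r**2 * math.exp(-2.0 * r / a0) / a0**3

# Scan radii and find where the electron is most likely to be found
radii = [i * 0.01 * a0 for i in range(1, 500)]
best_r = max(radii, key=radial_probability)
print(f"Most probable radius ~ {best_r / a0:.2f} Bohr radii")  # ~1.00: densest at r = a0
```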

Thanks to Schrodinger’s work, scientists began to understand that in the realm of quantum mechanics, it was impossible to know the exact position and momentum of an electron at the same time. Regardless of what the observer knows initially about a particle, they can only predict its succeeding location or momentum in terms of probabilities.

At no given time will they be able to ascertain both precisely. In fact, the more they know about the momentum of a particle, the less they will know about its location, and vice versa. This is what is known today as the “Uncertainty Principle”.

Note that the orbitals mentioned in the previous paragraph are formed by a hydrogen atom (i.e. with just one electron). When dealing with atoms that have more electrons, the electron orbital regions spread out evenly into a spherical fuzzy ball. This is where the term ‘electron cloud’ is most appropriate.

This contribution was universally recognized as being one of the most important of the 20th century, and one which triggered a revolution in the fields of physics, quantum mechanics and indeed all the sciences. Thenceforth, scientists were no longer working in a universe characterized by absolutes of time and space, but in one of quantum uncertainties and time-space relativity!

We have written many interesting articles about atoms and atomic models here at Universe Today. Here’s What Is John Dalton’s Atomic Model?, What Is The Plum Pudding Model?, What Is Bohr’s Atomic Model?, Who Was Democritus?, and What Are The Parts Of An Atom?

For more information, be sure to check What Is Quantum Mechanics? from Live Science.

Astronomy Cast also has episodes on the topic, like Episode 130: Radio Astronomy, Episode 138: Quantum Mechanics, and Episode 252: Heisenberg Uncertainty Principle.

What Is The Plum Pudding Atomic Model?

Diagram of J.J. Thomson's "Plum Pudding Model" of the atom. Credit: boundless.com

Ever since it was first proposed by Democritus in the 5th century BCE, the atomic model has gone through several refinements over the past few thousand years. From its humble beginnings as an inert, indivisible solid that interacts mechanically with other atoms, ongoing research and improved methods have led scientists to conclude that atoms are actually composed of even smaller particles that interact with each other electromagnetically.

This was the basis of the atomic theory devised by English physicist J.J. Thomson in the late 19th and early 20th centuries. As part of the revolution that was taking place at the time, Thomson proposed a model of the atom that consisted of more than one fundamental unit. Based on its appearance, which consisted of a “sea of uniform positive charge” with electrons distributed throughout, Thomson’s model came to be nicknamed the “Plum Pudding Model”.

Though defunct by modern standards, the Plum Pudding Model represents an important step in the development of atomic theory. Not only did it incorporate new discoveries, such as the existence of the electron, it also introduced the notion of the atom as a non-inert, divisible mass. Henceforth, scientists would understand that atoms were themselves composed of smaller units of matter and that all atoms interacted with each other through many different forces.

Atomic Theory to the 19th century:

The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.

Various atoms and molecules as depicted in John Dalton’s A New System of Chemical Philosophy (1808). Credit: Public Domain

It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways.

Dalton began with the question of why elements reacted in ratios of small whole numbers and concluded that these reactions occurred in whole-number multiples of discrete units – i.e. atoms. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton’s Atomic Theory. This theory expanded on the laws of conservation of mass and definite proportions – formulated by the end of the 18th century – and remains one of the cornerstones of modern physics and chemistry.

The theory comes down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; atoms can neither be created nor destroyed in chemical reactions, only the grouping ever changes.

By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, the situation would change drastically.

Lateral view of a sort of a Crookes tube with a standing cross. Credit: Wikimedia Commons/D-Kuru

Thomson’s Experiments:

Sir Joseph John Thomson (aka. J.J. Thomson) was an English physicist and the Cavendish Professor of Physics at the University of Cambridge from 1884 onwards. During the 1880s and 1890s, his work largely revolved around developing mathematical models for chemical processes, the transformation of energy in mathematical and theoretical terms, and electromagnetism.

However, by the late 1890s, he began conducting experiments using a cathode ray tube known as the Crookes’ Tube. This consists of a sealed glass container with two electrodes that are separated by a vacuum. When voltage is applied across the electrodes, cathode rays are generated (which take the form of a glowing patch of gas that stretches to the far end of the tube).

Through experimentation, Thomson observed that these rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles he called “corpuscles”. Upon measuring the mass-to-charge ratio of these particles, he discovered that they were 1,000 times smaller and 1,800 times lighter than hydrogen.
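Thomson's reasoning rested on comparing the corpuscles' charge-to-mass ratio with that of a hydrogen ion. A rough check with modern constants (a sketch for illustration, not Thomson's actual numbers):

```python
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192e-27     # proton (hydrogen-ion) mass, kg

ratio_electron = e / m_e   # ~1.76e11 C/kg
ratio_hydrogen = e / m_p   # ~9.58e7  C/kg

print(f"e/m (electron) = {ratio_electron:.3e} C/kg")
print(f"e/m (hydrogen) = {ratio_hydrogen:.3e} C/kg")
print(f"electron is ~{m_p / m_e:.0f} times lighter than a hydrogen ion")
# ~1836, consistent with the 'about 1,800 times lighter' figure above
```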

This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged corpuscles were distributed in a uniform sea of positive charge.

A depiction of the atomic structure of the helium atom. Credit: Creative Commons

These corpuscles would later be named “electrons”, based on the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. And from this, the Plum Pudding Model was born, so named because it closely resembled the English dessert that consists of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK’s Philosophical Magazine, to wide acclaim.

Problems With the Plum Pudding Model:

Unfortunately, subsequent experiments revealed a number of scientific problems with the model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil.

In what would come to be known as the “gold foil experiment“, they measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.

Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson’s model. Since alpha particles are just helium nuclei (which are positively charged) this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that those particles that were not deflected passed through unimpeded meant that these positive spaces were separated by vast gulfs of empty space.

The anticipated results of the Geiger-Marsden experiment (left), compared to the actual results (right). Credit: Wikimedia Commons/Kurzon


By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson’s model of the atom. Instead, he proposed a model where the atom consisted of mostly empty space, with all its positive charge concentrated in its center in a very tiny volume, that was surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.

Subsequent experiments by Antonius van den Broek and Niels Bohr refined the model further. While van den Broek suggested that the atomic number of an element corresponds to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where a nucleus contains the atomic number of positive charge and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).

Though it would come to be discredited in just five years’ time, Thomson’s “Plum Pudding Model” would prove to be a crucial step in the development of the Standard Model of particle physics. His work in determining that atoms were divisible, as well as the existence of electromagnetic forces within the atom, would also prove to be a major influence on the field of quantum physics.

We have written many interesting articles on the subject of atomic theory here at Universe Today. For instance, here is How Many Atoms Are There In The Universe?, John Dalton’s Atomic Model, What Are The Parts Of The Atom?, and Bohr’s Atomic Model.

For more information, be sure to check out Physics World’s pages on 100 years of the electron: from discovery to application and Proton and neutron masses calculated from first principles.

Astronomy Cast also has some episodes on the subject: Episode 138: Quantum Mechanics, Episode 139: Energy Levels and Spectra, Episode 378: Rutherford and Atoms and Episode 392: The Standard Model – Intro.

What Are The Parts Of An Atom?

A depiction of the atomic structure of the helium atom. Credit: Creative Commons

Since the beginning of time, human beings have sought to understand what the universe and everything within it is made up of. And while ancient magi and philosophers conceived of a world composed of four or five elements – earth, air, water, fire (and metal, or consciousness) – by classical antiquity, philosophers began to theorize that all matter was actually made up of tiny, invisible, and indivisible atoms.

Since that time, scientists have engaged in a process of ongoing discovery with the atom, hoping to discover its true nature and makeup. By the 20th century, our understanding became refined to the point that we were able to construct an accurate model of it. And within the past decade, our understanding has advanced even further, to the point that we have come to confirm the existence of almost all of its theorized parts.


Cosmologist Thinks a Strange Signal May Be Evidence of a Parallel Universe

A simulation of galaxies during the era of reionization in the early Universe. Credit: M. Alvarez, R. Kaehler, and T. Abel

In the beginning, there was chaos.

Hot, dense, and packed with energetic particles, the early Universe was a turbulent, bustling place. It wasn’t until about 300,000 years after the Big Bang that the nascent cosmic soup had cooled enough for atoms to form and light to travel freely. This landmark event, known as recombination, gave rise to the famous cosmic microwave background (CMB), a signature glow that pervades the entire sky.

Now, a new analysis of this glow suggests the presence of a pronounced bruise in the background — evidence that, sometime around recombination, a parallel universe may have bumped into our own.

Although they are often the stuff of science fiction, parallel universes play a large part in our understanding of the cosmos. According to the theory of eternal inflation, bubble universes apart from our own are theorized to be constantly forming, driven by the energy inherent to space itself.

Like soap bubbles, bubble universes that grow too close to one another can and do stick together, if only for a moment. Such temporary mergers could make it possible for one universe to deposit some of its material into the other, leaving a kind of fingerprint at the point of collision.

Ranga-Ram Chary, a cosmologist at the California Institute of Technology, believes that the CMB is the perfect place to look for such a fingerprint.

The cosmic microwave background (CMB), a pervasive glow made of light from the Universe’s infancy, as seen by the Planck satellite in 2013. Tiny deviations in average temperature are represented by color. Credit: ESA and the Planck Collaboration.

After careful analysis of the spectrum of the CMB, Chary found a signal that was about 4500x brighter than it should have been, based on the number of protons and electrons scientists believe existed in the very early Universe. Indeed, this particular signal — an emission line that arose from the formation of atoms during the era of recombination — is more consistent with a Universe whose ratio of matter particles to photons is about 65x greater than our own.

There is a 30% chance that this mysterious signal is just noise, and not really a signal at all; however, it is also possible that it is real, and exists because a parallel universe dumped some of its matter particles into our own Universe.

After all, if additional protons and electrons had been added to our Universe during recombination, more atoms would have formed. More photons would have been emitted during their formation. And the signature line that arose from all of these emissions would be greatly enhanced.

Chary himself is wisely skeptical.

“Unusual claims like evidence for alternate Universes require a very high burden of proof,” he writes.

Indeed, the signature that Chary has isolated may instead be a consequence of incoming light from distant galaxies, or even from clouds of dust surrounding our own galaxy.

So is this just another case of BICEP2? Only time and further analysis will tell.

Chary has submitted his paper to the Astrophysical Journal. A preprint of the work is available here.

Measuring Fundamental Constants with Methanol

Diagram of the methanol molecule


Key to the astronomical modeling process by which scientists attempt to understand our universe is a comprehensive knowledge of the physical constants that enter these models. These are generally measured to exceptionally high confidence levels in laboratories. Astronomers then assume these constants are just that – constant. This generally seems to be a good assumption, since models often produce mostly accurate pictures of our universe. But just to be sure, astronomers like to make sure these constants haven’t varied across space or time. Making sure, however, is a difficult challenge. Fortunately, a recent paper has suggested that we may be able to explore the fundamental masses of protons and electrons (or at least their ratio) by looking at the relatively common molecule of methanol.

The new report is based on the complex spectra of the methanol molecule. In simple atoms, photons are generated from transitions between atomic orbitals, since atoms have no other way to store and release energy. But with molecules, the chemical bonds between the component atoms can store energy in vibrational modes, in much the same way that masses connected to springs can vibrate. Additionally, molecules lack radial symmetry and can store energy by rotation. For this reason, the spectra of cool stars show far more absorption lines than hot ones, since the cooler temperatures allow molecules to begin forming.

Many of these spectral features are present in the microwave portion of the spectrum, and some are extremely dependent on quantum mechanical effects which in turn depend on the precise masses of the proton and electron. If those masses were to change, the positions of some spectral lines would change as well. By comparing these variations to their expected positions, astronomers can gain valuable insights into how these fundamental values may change.

The primary difficulty is that, in the grand scheme of things, methanol (CH3OH) is rare, since our universe is 98% hydrogen and helium. The last 2% is composed of every other element (with oxygen and carbon being the next most common). Thus, methanol is composed of three of the four most common elements, but those atoms have to find each other to form the molecule in question. On top of that, they must also exist in the right temperature range; too hot and the molecule is broken apart; too cold and there’s not enough energy to cause emission for us to detect it. Due to the rarity of molecules meeting these conditions, you might expect that finding enough of it, especially across the galaxy or universe, would be challenging.

Fortunately, methanol is one of the few molecules which are prone to creating astronomical masers. Masers are the microwave equivalent of lasers, in which a small input of light induces the molecules it strikes to emit more light at specific frequencies, producing a cascade. This can greatly enhance the brightness of a cloud containing methanol, increasing the distance at which it can be readily detected.

By studying methanol masers within the Milky Way using this technique, the authors found that, if the ratio of the mass of an electron to that of a proton does change, it does so by less than three parts in one hundred million. Similar studies have also been conducted using ammonia as the tracer molecule (which can also form masers) and have come to similar conclusions.
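The logic behind such limits is that a fractional change in the proton-electron mass ratio shifts each transition frequency by an amount set by a line-specific sensitivity coefficient, and comparing lines with different coefficients isolates the change. The sketch below illustrates the bookkeeping only; the sensitivity coefficients are invented for the example and are not taken from the paper:

```python
# Illustrative only: the K values below are made up for this sketch; real methanol
# transitions (e.g. the well-known maser lines near 6.7 and 12.2 GHz) have
# sensitivity coefficients computed from molecular theory.
transitions = {
    "line A": {"K": -1.0, "freq_hz": 6.7e9},
    "line B": {"K": -7.0, "freq_hz": 12.2e9},
}

delta_mu_over_mu = 3e-8   # the upper limit quoted above: 3 parts in 100 million

for name, t in transitions.items():
    # Fractional frequency shift: d(nu)/nu = K * d(mu)/mu
    shift_hz = t["K"] * delta_mu_over_mu * t["freq_hz"]
    print(f"{name}: shift of at most ~{abs(shift_hz):.0f} Hz "
          f"on a {t['freq_hz'] / 1e9:.1f} GHz line")
```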

What Is An Electron

Faraday's Constant


What is an electron? Easily put, an electron is a subatomic particle that carries a negative electric charge. It has no known components, so it is believed to be an elementary particle (a basic building block of the universe). The mass of an electron is about 1/1836 that of a proton. Electrons have an antiparticle called a positron. Positrons are identical to electrons except that all of their properties are the exact opposite. When electrons and positrons collide, they can annihilate, producing a pair (or more) of gamma-ray photons. Electrons take part in gravitational, electromagnetic, and weak interactions.

In 1913, Niels Bohr postulated that electrons resided in quantized energy states, with the energy determined by the angular momentum of the electron’s orbit, and that electrons could move between these orbits by the emission or absorption of photons. These orbits explained the spectral lines of the hydrogen atom. The Bohr model failed to account for the relative intensities of the spectral lines, and it was unsuccessful in explaining the spectra of more complex atoms. Gilbert Lewis proposed in 1916 that a ‘covalent bond’ between two atoms is maintained by a pair of shared electrons. In 1919, Irving Langmuir improved on Lewis’ static model and suggested that all electrons were distributed in successive “concentric (nearly) spherical shells, all of equal thickness”. The shells were divided into a number of cells containing one pair of electrons. This model was able to qualitatively explain the chemical properties of all elements in the periodic table.

The invariant mass of an electron is 9.109×10^-31 kg, or 5.489×10^-4 atomic mass units. According to Einstein’s principle of mass-energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. Electrons have an electric charge of -1.602×10^-19 coulombs, which serves as a standard unit of charge for subatomic particles. The electron’s charge is equal and opposite to the charge of a proton. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis, approximately equal to one Bohr magneton. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. Observations of a single electron show that the upper limit of the particle’s radius is 10^-22 meters. Some elementary particles decay into less massive particles, but the electron is thought to be stable on the grounds that it is the least massive particle with non-zero electric charge.
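The 0.511 MeV figure follows directly from mass-energy equivalence, E = mc². A quick check in Python:

```python
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

rest_energy_joules = m_e * c**2
rest_energy_mev = rest_energy_joules / eV / 1e6

print(f"E = m*c^2 = {rest_energy_joules:.3e} J = {rest_energy_mev:.3f} MeV")
# ~0.511 MeV, the electron's rest energy quoted above
```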

Understanding what is an electron is to begin to understand the basic building blocks of the universe. A very elementary understanding, but a building block to great scientific thought.

We have written many articles about the electron for Universe Today. Here’s an article about the Electron Cloud Model, and here’s an article about the charge of electron.

If you’d like more info on the Electron, check out the History of the Electron Page, and here’s a link to the article about Killer Electrons.

We’ve also recorded an entire episode of Astronomy Cast all about the Composition of the Atom. Listen here, Episode 164: Inside the Atom.