Who was Max Planck?

Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would even be attached to a series of units, designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.

If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!

Early Life and Education:

Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians Gymnasium. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.

Illustration of Friedrich Wilhelms University, with the statue of Frederick the Great (ca. 1850). Credit: Wikipedia Commons/A. Carse

He graduated early, at the age of 17, and began studying theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, with whom he became close friends, and eventually Planck decided to adopt thermodynamics as his field of research.

In October 1878, he passed his qualifying exams and defended his dissertation in February of 1879 – titled “On the second law of thermodynamics”. In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to be derived: “It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.”

For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – i.e. black body radiation and special states of equilibrium.

With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community paid little attention to him, he continued his work on heat theory and independently discovered the same formulation of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist who is credited with the discovery.

Professors Michael Bonitz and Frank Hohmann, holding a facsimile of Planck’s Nobel prize certificate, which was given to the University of Kiel in 2013. Credit and Copyright: CAU/Schimmelpfennig

In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, and he continued his studies of physical chemistry and heat systems there. By 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until his retirement in January 1926, when he was succeeded by Erwin Schrodinger.

Black Body Radiation:

It was in 1894, when he was under a commission from the electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body's temperature and the frequency of the radiation (i.e., the color of the light).

In time, he resolved this problem by suggesting that electromagnetic energy did not flow in a continuous form but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.
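Planck's relation is simple enough to evaluate directly. Here is a minimal sketch; the choice of green light at roughly 540 THz is our own illustrative example, not a figure from the article:

```python
# A quick numerical sketch of the Planck postulate E = h*nu.
# The 540 THz frequency (green light) is a hypothetical example.
h = 6.62607015e-34  # Planck constant in J*s (exact by SI definition since 2019)

def photon_energy(frequency_hz):
    """Energy of a single quantum (photon) at the given frequency: E = h*nu."""
    return h * frequency_hz

E = photon_energy(5.4e14)  # ~540 THz, green visible light
print(f"E = {E:.3e} J")  # roughly 3.6e-19 J per photon
```

The tiny result shows why the granularity of light went unnoticed for so long: a single visible photon carries less than a billionth of a billionth of a joule.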

A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck's constant would prove to have a revolutionary impact, causing scientists to break with classical physics, and leading to the creation of Planck units (length, time, mass, etc.).

From left to right: W. Nernst, A. Einstein, M. Planck, R.A. Millikan and von Laue at a dinner given by von Laue in Berlin, 1931. Credit: Wikipedia Commons

Quantum Mechanics:

By the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that supported Planck's quantum theory. The first was his theory of light quanta (photons), put forward in his 1905 paper on the photoelectric effect, which contradicted classical physics and the theory of electrodynamics that held that light was a wave that needed a medium to propagate.

The second was Einstein's study of the anomalous specific heats of solids at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein's special relativity, he initially rejected the idea that light could be made up of discrete quanta of energy (in this case, photons).

However, in 1911, Planck and Walther Nernst (a colleague of Planck's) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and over the course of the proceedings was able to convince Planck of his theories regarding specific heats. The two became friends and colleagues, and in 1914, Planck created a professorship for Einstein at the University of Berlin.

During the 1920s, a new interpretation of quantum mechanics emerged, known as the “Copenhagen interpretation”. This interpretation, which was largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, held that quantum mechanics can only predict probabilities, and that in general, physical systems do not have definite properties prior to being measured.

Photograph of the first Solvay Conference in 1911 at the Hotel Metropole in Brussels, Belgium. Credit: International Solvay Institutes/Benjamin Couprie

This was rejected by Planck, however, who felt that wave mechanics would soon render the Copenhagen interpretation unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the “chaos” of quantum theory. However, time would prove that both interpretations were correct (and mathematically equivalent), giving rise to theories of particle-wave duality.

World War I and World War II:

In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three”, which endorsed the war and justified Germany's participation. However, by 1915 Planck had revoked parts of the Manifesto, and by 1916 he had become an outspoken opponent of Germany's annexation of other territories.

After the war, Planck was considered the foremost German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.

The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.

Entrance of the administrative headquarters of the Max Planck Society in Munich. Credit: Wikipedia Commons/Maximilian Dörrbecker

Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when they were threatened. In 1936, he resigned his position as head of the KWS due to his continued support of Jewish colleagues in the Society. In 1938, he resigned as president of the Prussian Academy of Sciences after the Nazi Party assumed control of it.

Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. In 1944, Planck's son Erwin was arrested for his involvement in the attempted assassination of Hitler in the July 20th plot, and he was executed by the Gestapo in early 1945. This event caused Planck to descend into a depression from which he did not recover before his death.

Death and Legacy:

Planck died on October 4th, 1947 in Gottingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.

In recognition of his fundamental contribution to a new branch of physics, he was awarded the Nobel Prize in Physics in 1918. He was also elected to Foreign Membership of the Royal Society in 1926 and awarded the Society's Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.

The Max Planck Medal, issued by the German Physical Society in recognition of scientific contributions. Credit: dpg-physik.de

He was also greatly respected by his colleagues and contemporaries, and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.

The Max Planck Society was also created in the city of Gottingen in 1948 to honor his life and his achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as being a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.

In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.

Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.

We have written many articles about Max Planck for Universe Today. Here’s What is Planck Time?, Planck’s First Light?, All-Sky Stunner from Planck, What is Schrodinger’s Cat?, What is the Double Slit Experiment?, and here’s a list of stories about the spacecraft that bears his name.

If you’d like more info on Max Planck, check out Max Planck’s biography from Science World and Space and Motion.

We’ve also recorded an entire episode of Astronomy Cast all about Max Planck. Listen here, Episode 218: Max Planck.


What is the CERN Particle Accelerator?


What if it were possible to observe the fundamental building blocks upon which the Universe is based? Not a problem! All you would need is a massive particle accelerator, an underground facility large enough to cross a border between two countries, and the ability to accelerate particles to the point where they annihilate each other – releasing energy and mass which you could then observe with a series of special monitors.

Well, as luck would have it, such a facility already exists, and is known as the CERN Large Hadron Collider (LHC), also known as the CERN Particle Accelerator. Measuring roughly 27 kilometers in circumference and located deep beneath the surface near Geneva, Switzerland, it is the largest particle accelerator in the world. And since CERN flipped the switch, the LHC has shed some serious light on some of the deeper mysteries of the Universe.

Purpose:

Colliders, by definition, are a type of particle accelerator that relies on two directed beams of particles. Particles are accelerated in these instruments to very high kinetic energies and then made to collide with each other. The byproducts of these collisions are then analyzed by scientists in order to ascertain the structure of the subatomic world and the laws which govern it.

The Large Hadron Collider is the most powerful particle accelerator in the world. Credit: CERN

The purpose of colliders is to simulate high-energy collisions that produce particle byproducts which would otherwise not exist in nature. What's more, these byproducts decay after a very short period of time, and are therefore difficult or near-impossible to study under normal conditions.

The term hadron refers to composite particles composed of quarks that are held together by the strong nuclear force, one of the four forces governing particle interaction (the others being the weak nuclear force, electromagnetism, and gravity). The best-known hadrons are baryons – protons and neutrons – but hadrons also include mesons, unstable particles composed of one quark and one antiquark.

Design:

The LHC operates by accelerating two beams of “hadrons” – either protons or lead ions – in opposite directions around its circular apparatus. The hadrons then collide after they’ve achieved very high levels of energy, and the resulting particles are analyzed and studied. It is the largest high-energy accelerator in the world, measuring 27 km (17 mi) in circumference and at a depth of 50 to 175 m (164 to 574 ft).

The tunnel which houses the collider is 3.8 meters (12 ft) wide, and was previously used to house the Large Electron-Positron Collider (which operated between 1989 and 2000). It contains two adjacent parallel beamlines that intersect at four points, each carrying a beam; the two beams travel in opposite directions around the ring. The beams are steered by 1,232 dipole magnets, while 392 quadrupole magnets are used to keep them focused.

Superconducting quadrupole electromagnets are used to direct the beams to four intersection points, where interactions between accelerated protons will take place. Credit: Wikipedia Commons/gamsiz

About 10,000 superconducting magnets are used in total, which are kept at an operational temperature of -271.25 °C (-456.25 °F) – which is just shy of absolute zero – by approximately 96 tonnes of liquid helium-4. This also makes the LHC the largest cryogenic facility in the world.

When conducting proton collisions, the process begins with the linear particle accelerator (LINAC 2). After the LINAC 2 increases the energy of the protons, these particles are then injected into the Proton Synchrotron Booster (PSB), which accelerates them to high speeds.

They are then injected into the Proton Synchrotron (PS), and then into the Super Proton Synchrotron (SPS), where they are sped up even further before being injected into the main accelerator. Once there, the proton bunches are accumulated and accelerated to their peak energy over a period of 20 minutes. Lastly, they are circulated for a period of 5 to 24 hours, during which time collisions occur at the four intersection points.
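The injection chain described above can be summarized stage by stage. As a sketch, the per-stage beam energies below are approximate published values added for illustration; they are not quoted in this article:

```python
# A sketch of the LHC proton injection chain. Stage energies are approximate
# published values, included here only for illustration.
injection_chain = [
    ("LINAC 2", "50 MeV"),    # linear accelerator, first stage
    ("PSB",     "1.4 GeV"),   # Proton Synchrotron Booster
    ("PS",      "25 GeV"),    # Proton Synchrotron
    ("SPS",     "450 GeV"),   # Super Proton Synchrotron
    ("LHC",     "TeV scale"), # main ring, where peak energy is reached
]

for stage, energy in injection_chain:
    print(f"{stage:8s} -> {energy}")
```

Each machine hands the beam off to the next at progressively higher energy, which is why the LHC can reach energies no single-stage accelerator could.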

During shorter running periods, heavy-ion collisions (typically lead ions) are included in the program. The lead ions are first accelerated by the linear accelerator LINAC 3, and the Low Energy Ion Ring (LEIR) is used as an ion storage and cooler unit. The ions are then further accelerated by the PS and SPS before being injected into the LHC ring.

While protons and lead ions are being collided, seven detectors are used to scan for their byproducts. These include the A Toroidal LHC ApparatuS (ATLAS) experiment and the Compact Muon Solenoid (CMS), which are both general purpose detectors designed to see many different types of subatomic particles.

Then there are the more specific A Large Ion Collider Experiment (ALICE) and Large Hadron Collider beauty (LHCb) detectors. Whereas ALICE is a heavy-ion detector that studies strongly-interacting matter at extreme energy densities, the LHCb records the decay of particles and attempts to filter b and anti-b quarks from the products of their decay.

Then there are the three small and highly-specialized detectors: the TOTal Elastic and diffractive cross section Measurement (TOTEM) experiment, which measures total cross section, elastic scattering, and diffractive processes; the Monopole & Exotics Detector (MoEDAL), which searches for magnetic monopoles and massive (pseudo-)stable charged particles; and the Large Hadron Collider forward (LHCf) experiment, which monitors astroparticles (aka. cosmic rays).

History of Operation:

CERN, which stands for Conseil Européen pour la Recherche Nucléaire (or European Council for Nuclear Research in English) was established on Sept 29th, 1954, by twelve western European signatory nations. The council’s main purpose was to oversee the creation of a particle physics laboratory in Geneva where nuclear studies would be conducted.

Illustration showing the byproducts of lead ion collisions, as monitored by the ATLAS detector. Credit: CERN

Soon after its creation, the laboratory went beyond this and began conducting high-energy physics research as well. It has also grown to include twenty European member states: France, Switzerland, Germany, Belgium, the Netherlands, Denmark, Norway, Sweden, Finland, Spain, Portugal, Greece, Italy, the UK, Poland, Hungary, the Czech Republic, Slovakia, Bulgaria and Israel.

Construction of the LHC was approved in 1995 and was initially intended to be completed by 2005. However, cost overruns, budget cuts, and various engineering difficulties pushed the completion date to April of 2007. The LHC first went online on September 10th, 2008, but initial testing was delayed for 14 months following an accident that caused extensive damage to many of the collider’s key components (such as the superconducting magnets).

On November 20th, 2009, the LHC was brought back online, and its First Run lasted from 2010 to 2013. During this run, it collided two opposing beams of protons and lead nuclei at energies of 4 teraelectronvolts (4 TeV) and 2.76 TeV per nucleon, respectively. One of the main purposes of the LHC is to recreate the conditions just after the Big Bang, when collisions between high-energy particles were taking place.

Major Discoveries:

During its First Run, the LHC's discoveries included a particle thought to be the long-sought Higgs Boson, which was announced on July 4th, 2012. This particle, which gives other particles mass, is a key part of the Standard Model of physics. Due to its high mass and elusive nature, the existence of this particle had rested solely on theory, and it had never been previously observed.

The discovery of the Higgs Boson and the ongoing operation of the LHC has also allowed researchers to investigate physics beyond the Standard Model. This has included tests concerning supersymmetry theory. The results show that certain types of particle decay are less common than some forms of supersymmetry predict, but could still match the predictions of other versions of supersymmetry theory.

In May of 2011, it was reported that quark–gluon plasma (theoretically, the densest matter besides black holes) had been created in the LHC. On November 19th, 2014, the LHCb experiment announced the discovery of two new heavy subatomic particles, both of which were baryons composed of one bottom, one down, and one strange quark. The LHCb collaboration also observed multiple exotic hadrons during the first run, possibly pentaquarks or tetraquarks.

Since 2015, the LHC has been conducting its Second Run. In that time, it has been dedicated to confirming the detection of the Higgs Boson, and making further investigations into supersymmetry theory and the existence of exotic particles at higher-energy levels.

The ATLAS detector, one of two general-purpose detectors at the Large Hadron Collider (LHC). Credit: CERN

In the coming years, the LHC is scheduled for a series of upgrades to ensure that it does not suffer from diminished returns. In 2017-18, the LHC is scheduled to undergo an upgrade that will increase its collision energy to 14 TeV. In addition, after 2022, the ATLAS detector is to receive an upgrade designed to increase the likelihood of it detecting rare processes, known as the High Luminosity LHC.

The collaborative research effort known as the LHC Accelerator Research Program (LARP) is currently conducting research into how to upgrade the LHC further. Foremost among these are increases in the beam current and the modification of the two high-luminosity interaction regions, and the ATLAS and CMS detectors.

Who knows what the LHC will discover between now and the day when they finally turn the power off? With luck, it will shed more light on the deeper mysteries of the Universe, which could include the deep structure of space and time, the intersection of quantum mechanics and general relativity, the relationship between matter and antimatter, and the existence of “Dark Matter”.

We have written many articles about CERN and the LHC for Universe Today. Here’s What is the Higgs Boson?, The Hype Machine Deflates After CERN Data Shows No New Particle, BICEP2 All Over Again? Researchers Place Higgs Boson Discovery in Doubt, Two New Subatomic Particles Found, Is a New Particle about to be Announced?, Physicists Maybe, Just Maybe, Confirm the Possible Discovery of 5th Force of Nature.

If you’d like more info on the Large Hadron Collider, check out the LHC Homepage, and here’s a link to the CERN website.

Astronomy Cast also has some episodes on the subject. Listen here, Episode 69: The Large Hadron Collider and The Search for the Higgs Boson and Episode 392: The Standard Model – Intro.


New Theory of Gravity Does Away With Need for Dark Matter



Let’s be honest. Dark matter’s a pain in the butt. Astronomers have gone to great lengths to explain why it must exist, and in huge quantities, yet it remains hidden. Unknown. Emitting no visible energy, yet apparently strong enough to keep galaxies in clusters from busting free like wild horses, it’s everywhere in vast quantities. What is the stuff – axions, WIMPs, gravitinos, Kaluza-Klein particles?

Estimated distribution of matter and energy in the universe. Credit: NASA

It’s estimated that dark matter makes up about 27% of the mass-energy content of the universe, while ordinary matter – everything from PB&J sandwiches to quasars – accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam has found a way to dispense with the pesky stuff.
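A quick bit of arithmetic shows what those two figures leave over for dark energy, a value the article doesn't quote directly:

```python
# The cosmic budget quoted above: dark matter ~27%, ordinary matter ~4.9%
# of the universe's mass-energy. The remainder is attributed to dark energy.
dark_matter = 27.0
ordinary_matter = 4.9
dark_energy = 100.0 - dark_matter - ordinary_matter
print(f"dark energy: {dark_energy:.1f}%")  # ~68.1%
```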

Snowflakes exemplify the concept of emergence with their complex symmetrical and fractal patterns created when much simpler pieces join together. Credit: Bob King

Unlike the traditional view of gravity as a fundamental force of nature, Verlinde sees it as an emergent property of space.  Emergence is a process where nature builds something large using small, simple pieces such that the final creation exhibits properties that the smaller bits don’t. Take a snowflake. The complex symmetry of a snowflake begins when a water droplet freezes onto a tiny dust particle. As the growing flake falls, water vapor freezes onto this original crystal, naturally arranging itself into a hexagonal (six-sided) structure of great beauty. The sensation of temperature is another emergent phenomenon, arising from the motion of molecules and atoms.

So too with gravity, which according to Verlinde, emerges from entropy. We all know about entropy and messy bedrooms, but it’s a bit more subtle than that. Entropy is a measure of disorder in a system or put another way, the number of different microscopic states a system can be in. One of the coolest descriptions of entropy I’ve heard has to do with the heat our bodies radiate. As that energy dissipates in the air, it creates a more disordered state around us while at the same time decreasing our own personal entropy to ensure our survival. If we didn’t get rid of body heat, we would eventually become disorganized (overheat!) and die.

The more massive the object, the more it distorts space-time, shown here as the green mesh. Earth orbits the Sun by rolling around the dip created by the Sun’s mass in the fabric of space-time. It doesn’t fall into the Sun because it also possesses forward momentum. Credit: LIGO/T. Pyle

Emergent or entropic gravity, as the new theory is called, predicts the exact same deviation in the rotation rates of stars in galaxies currently attributed to dark matter. Gravity emerges in Verlinde’s view from changes in fundamental bits of information stored in the structure of space-time, that four-dimensional continuum revealed by Einstein’s general theory of relativity. In a word, gravity is a consequence of entropy and not a fundamental force.
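The rotation-curve deviation that both dark matter and entropic gravity aim to explain can be sketched numerically. The mass and radii below are round, hypothetical numbers, not fitted galaxy data:

```python
# Illustrative sketch of the rotation-curve problem. With only the visible
# mass, Newtonian gravity predicts circular speeds falling off as 1/sqrt(r)
# far from the galactic center; observed speeds instead stay roughly flat.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1e41  # ~5e10 solar masses of visible matter (illustrative)
KPC = 3.086e19    # meters per kiloparsec

def keplerian_speed(r_m):
    """Circular orbital speed if only the visible mass were present."""
    return math.sqrt(G * M_VISIBLE / r_m)

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(r_kpc * KPC) / 1000.0
    print(f"r = {r_kpc:2d} kpc -> predicted {v_kms:5.0f} km/s")
# The predicted speed halves each time r quadruples, yet measured curves stay
# nearly constant; that gap is what dark matter (or entropic gravity) must fill.
```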

Space-time, comprised of the three familiar dimensions in addition to time, is flexible. Mass warps the 4-D fabric into hills and valleys that direct the motion of smaller objects nearby. The Sun doesn’t so much “pull” on the Earth as envisaged by Isaac Newton but creates a great pucker in space-time that Earth rolls around in.

In a 2010 article, Verlinde showed how Newton’s law of gravity, which describes everything from how apples fall from trees to little galaxies orbiting big galaxies, derives from these underlying microscopic building blocks.

His latest paper, titled Emergent Gravity and the Dark Universe, delves into dark energy’s contribution to the mix.  The entropy associated with dark energy, a still-unknown form of energy responsible for the accelerating expansion of the universe, turns the geometry of spacetime into an elastic medium.

“We find that the elastic response of this ‘dark energy’ medium takes the form of an extra ‘dark’ gravitational force that appears to be due to ‘dark matter’,” writes Verlinde. “So the observed dark matter phenomena is a remnant, a memory effect, of the emergence of spacetime together with the ordinary matter in it.”

This diagram shows rotation curves of stars in M33, a typical spiral galaxy. The vertical scale is speed and the horizontal is distance from the galaxy’s nucleus. Normally, we expect stars to slow down the farther they are from galactic center (bottom curve), but in fact they revolve much faster (top curve). The discrepancy between the two curves is accounted for by adding a dark matter halo surrounding the galaxy. Credit: Public domain / Wikipedia

I’ll be the first to say how complex Verlinde’s concept is, wrapped as it is in arcane entanglement entropy, tensor fields and the holographic principle, but the basic idea – that gravity is not a fundamental force – makes for a fascinating new way to look at an old face.

Physicists have tried for decades to reconcile gravity with quantum physics, with little success. And while Verlinde’s theory should rightly be taken with a grain of salt, he may offer a way to combine the two disciplines into a single narrative that describes how everything from falling apples to black holes is connected in one coherent theory.

Detector With Real-time Alert Capability Waits Patiently For Supernova Neutrinos

Under Mount Ikeno, Japan, in an old mine that sits one-thousand meters (3,300 feet) beneath the surface, lies the Super-Kamiokande Observatory (SKO). Since 1996, when it began conducting observations, researchers have been using this facility’s Cherenkov detector to look for signs of proton decay and neutrinos in our galaxy. This is no easy task, since neutrinos are very difficult to detect.

But thanks to a new computer system that will be able to monitor neutrinos in real-time, the researchers at the SKO will be able to study these mysterious particles more closely in the near future. In so doing, they hope to understand how stars form and eventually collapse into black holes, and sneak a peek at how matter was created in the early Universe.

Neutrinos, put simply, are one of the fundamental particles that make up the Universe. Compared to other fundamental particles, they have very little mass, no charge, and only interact with other particles via the weak nuclear force and gravity. They are created in a number of ways, most notably through radioactive decay, the nuclear reactions that power stars, and supernovae.

Timeline of the Big Bang, which unleashed cosmic neutrinos that can still be detected today. Credit: NASA / JPL-Caltech / A. Kashlinsky (GSFC).

In accordance with the standard Big Bang model, the neutrinos left over from the creation of the Universe are the most abundant particles in existence. At any given moment, trillions of these particles are believed to be moving around us and through us. But because of the way they interact with matter (i.e. only weakly) they are extremely difficult to detect.

For this reason, neutrino observatories are built deep underground to avoid interference from cosmic rays. They also rely on Cherenkov detectors, which are essentially massive water tanks that have thousands of sensors lining their walls. These sensors look for the telltale glow – known as Cherenkov radiation – that charged particles emit when they travel through the water faster than the local speed of light (i.e. the speed of light in water).
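The Cherenkov condition is easy to quantify. As a sketch, the refractive index of water (n ≈ 1.33) and the electron rest energy used below are standard textbook values, not figures quoted in this article:

```python
# Cherenkov threshold: a charged particle radiates only when it outpaces
# light in the medium, i.e. v/c > 1/n.
import math

N_WATER = 1.33      # refractive index of water (standard value)
M_E_C2_MEV = 0.511  # electron rest energy in MeV (standard value)

beta_threshold = 1.0 / N_WATER  # minimum speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)
ke_threshold_mev = (gamma - 1.0) * M_E_C2_MEV  # minimum kinetic energy

print(f"threshold speed: {beta_threshold:.3f} c")
print(f"electron threshold kinetic energy: {ke_threshold_mev:.2f} MeV")
```

This works out to roughly a quarter of an MeV for electrons in water, which sets the floor on what energies a water Cherenkov detector can see.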

The detector at the SKO is currently the largest in the world. It consists of a cylindrical stainless steel tank that is 41.4 m (136 ft) tall and 39.3 m (129 ft) in diameter, and holds over 45,000 metric tons (50,000 US tons) of ultra-pure water. In the interior, 11,146 photomultiplier tubes are mounted, which detect light in the ultraviolet, visible, and near-infrared ranges of the electromagnetic spectrum with extreme sensitivity.

For years, researchers at the SKO have used the facility to examine solar neutrinos, atmospheric neutrinos, and man-made neutrinos. However, those created by supernovae are very difficult to detect, since they appear suddenly and are hard to distinguish from other kinds. But with the newly-added computer system, the Super-Kamiokande researchers are hoping that will change.

Cherenkov radiation glowing in the core of the Advanced Test Reactor at the Idaho National Laboratory. Credit: Wikipedia Commons/Argonne National Laboratory

As Luis Labarga, a physicist at the Autonomous University of Madrid (Spain) and a member of the collaboration, explained in a recent statement to the Scientific News Service (SINC):

“Supernova explosions are one of the most energetic phenomena in the universe and most of this energy is released in the form of neutrinos. This is why detecting and analyzing neutrinos emitted in these cases, other than those from the Sun or other sources, is very important for understanding the mechanisms in the formation of neutron stars –a type of stellar remnant– and black holes”.

Basically, the new computer system is designed to analyze the events recorded in the depths of the observatory in real-time. If it detects an abnormally large flow of neutrinos, it will quickly alert the experts manning the controls. They will then be able to assess the significance of the signal within minutes and see if it is actually coming from a nearby supernova.

“During supernova explosions an enormous number of neutrinos is generated in an extremely small space of time – a few seconds – and this why we need to be ready,” Labarga added. “This allows us to research the fundamental properties of these fascinating particles, such as their interactions, their hierarchy and the absolute value of their mass, their half-life, and surely other properties that we still cannot even imagine.”

The Super-Kamiokande experiment is located at the Kamioka Observatory, 1,000 m below ground in a mine near the Japanese city of Kamioka. Credit: Kamioka Observatory/ICRR/University of Tokyo

Equally important is the fact that this system will give the SKO the ability to issue early warnings to research centers around the world. Ground-based observatories, where astronomers are keen to watch the creation of cosmic neutrinos by supernovae, will then be able to point all of their optical instruments towards the source in advance (since the electromagnetic signal will take longer to arrive).

Through this collaborative effort, astrophysicists may be able to better understand some of the most elusive neutrinos of all. Discerning how these fundamental particles interact with others could bring us one step closer to a Grand Unified Theory – one of the major goals of the Super-Kamiokande Observatory.

To date, only a few neutrino detectors exist in the world. These include the Irvine-Michigan-Brookhaven (IMB) detector in Ohio, the Sudbury Neutrino Observatory (SNOLAB) in Ontario, Canada, and the Super-Kamiokande Observatory in Japan.

Further Reading: SINC

What is a Magnetic Field?

Everyone knows just how fun magnets can be. As a child, who among us didn’t love to see if we could make our silverware stick together? And how about those little magnetic rocks that we could arrange to form just about any shape because they stuck together? Well, magnetism is not just an endless source of fun or good for scientific experiments; it’s also one of the basic physical laws upon which the universe is based.

The attraction known as magnetism occurs when a magnetic field is present – a field of force produced by a magnetic object or particle. It can also be produced by a changing electric field, and is detected by the force it exerts on other magnetic materials. This is why the area of study dealing with magnets is known as electromagnetism.

Definition:

Magnetic fields can be defined in a number of ways, depending on the context. In general terms, however, a magnetic field is an invisible field that exerts magnetic force on substances which are sensitive to magnetism. Magnets also exert forces and torques on each other through the magnetic fields they create.

Visualization of the solar wind encountering Earth’s magnetosphere. Like a dipole magnet, it has field lines and a northern and southern pole. Credit: JPL

They can be generated within the vicinity of a magnet, by an electric current, or by a changing electrical field. They are dipolar in nature, which means that they have both a north and a south magnetic pole. The International System of Units (SI) unit used to measure magnetic fields is the Tesla, while smaller magnetic fields are measured in terms of Gauss (1 Tesla = 10,000 Gauss).

Mathematically, a magnetic field is defined in terms of the amount of force it exerts on a moving charge. The measurement of this force is consistent with the Lorentz Force Law, which can be expressed as F = qv × B, where F is the magnetic force, q is the charge, v is the velocity, and B is the magnetic field. This relationship is a vector (cross) product, meaning that F is perpendicular to both v and B.
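The Lorentz force law is easy to verify numerically. A minimal sketch (pure Python, no external libraries) that computes F = qv × B for a proton in a one-tesla field and checks the perpendicularity:

```python
# Lorentz force on a moving charge, F = q (v x B).
# The result is always perpendicular to both v and B.

def cross(a, b):
    """3-D vector cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lorentz_force(q, v, B):
    """Magnetic force (N) on charge q (C) moving at v (m/s) in field B (T)."""
    return tuple(q * component for component in cross(v, B))

q = 1.602e-19               # proton charge, C
v = (1.0e5, 0.0, 0.0)       # 100 km/s along x
B = (0.0, 0.0, 1.0)         # 1 T field along z

F = lorentz_force(q, v, B)
print(F)                     # force points along -y
print(dot(F, v), dot(F, B))  # both zero: F is perpendicular to v and B
```

The dot products come out to zero, confirming the geometric claim: the magnetic force deflects a moving charge sideways but never speeds it up or slows it down.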

Field Lines:

Magnetic fields may be represented by continuous lines of force (or magnetic flux) that emerge from north-seeking magnetic poles and enter south-seeking poles. The density of the lines indicates the magnitude of the field: more concentrated at the poles (where the field is strong), and fanning out and weakening the farther they get from the poles.

A uniform magnetic field is represented by equally-spaced, parallel straight lines. These lines are continuous, forming closed loops that run from north to south, and looping around again. The direction of the magnetic field at any point is parallel to the direction of nearby field lines, and the local density of field lines can be made proportional to its strength.

Magnetic field lines resemble a fluid flow, in that they are streamlined and continuous, and more (or fewer) lines appear depending on how closely a field is observed. Field lines are useful as a representation of magnetic fields, allowing many laws of magnetism (and electromagnetism) to be simplified and expressed in mathematical terms.

A simple way to observe a magnetic field is to place iron filings around an iron magnet. The arrangements of these filings will then correspond to the field lines, forming streaks that connect at the poles. They also appear during polar auroras, in which visible streaks of light line up with the local direction of the Earth’s magnetic field.

History of Study:

The study of magnetic fields began in 1269 when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field of a spherical magnet using iron needles. The places where these lines crossed he named “poles” (in reference to Earth’s poles), and he went on to claim that all magnets possessed them.

During the 16th century, English physicist and natural philosopher William Gilbert of Colchester replicated Peregrinus’ experiment. In 1600, he published his findings in a treatise (De Magnete) in which he stated that the Earth is a magnet. His work was intrinsic to establishing magnetism as a science.

View of the eastern sky during the peak of this morning's aurora. Credit: Bob King

In 1750, English clergyman and philosopher John Michell stated that magnetic poles attract and repel each other. The force with which they do this, he observed, is inversely proportional to the square of the distance, otherwise known as the inverse square law.

In 1785, French physicist Charles-Augustin de Coulomb experimentally verified this inverse square law for magnetic forces. This was followed in the 19th century by French mathematician and geometer Siméon Denis Poisson, who created the first model of the magnetic field, which he presented in 1824.

By the 19th century, further revelations refined and challenged previously-held notions. For example, in 1819, Danish physicist and chemist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it. In 1825, André-Marie Ampère proposed a model of magnetism in which this force was due to perpetually flowing loops of current, rather than dipoles of magnetic charge.

In 1831, English scientist Michael Faraday showed that a changing magnetic field generates an encircling electric field. In effect, he discovered electromagnetic induction, which was characterized by Faraday’s law of induction (aka. Faraday’s Law).

A Faraday cage in a power plant in Heimbach, Germany. Credit: Wikipedia Commons/Frank Vincentz

Between 1861 and 1865, Scottish scientist James Clerk Maxwell published his theories on electricity and magnetism – known as Maxwell’s Equations. These equations not only pointed to the interrelationship between electricity and magnetism, but also showed how light itself is an electromagnetic wave.

The field of electrodynamics was extended further during the late 19th and 20th centuries. For instance, Albert Einstein (who proposed the theory of Special Relativity in 1905) showed that electric and magnetic fields are part of the same phenomenon, viewed from different reference frames. The emergence of quantum mechanics also led to the development of quantum electrodynamics (QED).

Examples:

A classic example of a magnetic field is the field created by an iron magnet. As previously mentioned, this field can be illustrated by surrounding the magnet with iron filings, which are attracted to its field lines and form loops around the poles.

Larger examples of magnetic fields include the Earth’s magnetic field, which resembles the field produced by a simple bar magnet. This field is believed to be the result of movement in the Earth’s core, which is divided between a solid inner core and a molten outer core that rotates in the opposite direction of the Earth. This creates a dynamo effect, which is believed to power Earth’s magnetic field (aka. the magnetosphere).

Computer simulation of the Earth’s field in a period of normal polarity between reversals. The lines represent magnetic field lines: blue when the field points towards the center, yellow when away. Credit: NASA

Such a field is called a dipole field because it has two poles – north and south, located at either end of the magnet – where the strength of the field is at its maximum. At the midpoint between the poles, the strength is half of its polar value. The Earth’s field extends tens of thousands of kilometers into space, forming the Earth’s magnetosphere.

Other celestial bodies have been shown to have magnetic fields of their own. This includes the gas and ice giants of the Solar System – Jupiter, Saturn, Uranus and Neptune. Jupiter’s magnetic field is 14 times as powerful as that of Earth, making it the strongest magnetic field of any planetary body. Jupiter’s moon Ganymede also has a magnetic field, and is the only moon in the Solar System known to have one.

Mars is believed to have once had a magnetic field similar to Earth’s, which was also the result of a dynamo effect in its interior. However, due to either a massive collision, or rapid cooling in its interior, Mars lost its magnetic field billions of years ago. It is because of this that Mars is believed to have lost most of its atmosphere, and the ability to maintain liquid water on its surface.

When it comes down to it, electromagnetism is a fundamental part of our Universe, right up there with nuclear forces and gravity. Understanding how it works, and where magnetic fields occur, is not only key to understanding how the Universe came to be, but may also help us to find life beyond Earth someday.

We have written many articles about the magnetic field for Universe Today. Here’s What is Earth’s Magnetic Field, Is Earth’s Magnetic Field Ready to Flip?, How Do Magnets Work?, Mapping The Milky Way’s Magnetic Fields – The Faraday Sky, Magnetic Fields in Spiral Galaxies – Explained at Last?, Astronomy Without A Telescope – Cosmic Magnetic Fields.

If you’d like more info on Earth’s magnetic field, check out NASA’s Solar System Exploration Guide on Earth. And here’s a link to NASA’s Earth Observatory.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.


What is Binding Energy?

Have you ever taken a look at a piece of firewood and said to yourself, “gee, I wonder how much energy it would take to split that thing apart”? Chances are you haven’t – few people do. But for physicists, asking how much energy is needed to separate something into its component pieces is actually a pretty important question.

In the field of physics, this is what is known as binding energy – the amount of mechanical energy it would take to disassemble an atom into its separate parts. This concept is used by scientists on many different levels, including the atomic level, the nuclear level, and in astrophysics and chemistry.

Nuclear Force:

As anyone who remembers their basic chemistry or physics surely knows, atoms are partly composed of subatomic particles known as nucleons. These consist of positively-charged particles (protons) and neutral particles (neutrons), which are arranged in the center of the atom (in the nucleus). These are surrounded by electrons, which orbit the nucleus and are arranged in different energy levels.

Niels Bohr’s model of a nitrogen atom. Credit: britannica.com

The reason why positively-charged protons are able to exist so close together is the presence of the Strong Nuclear Force – a fundamental force of the universe that attracts subatomic particles to one another at very short distances. It is this force that counteracts the repulsive Coulomb force, which causes like-charged particles to repel each other.

Therefore, any attempt to divide the nucleus into the same number of free, unbound neutrons and protons – so that they are far enough apart that the strong nuclear force can no longer cause the particles to interact – will require enough energy to break these nuclear bonds.

Thus, binding energy is not only the amount of energy required to break strong nuclear force bonds, it is also a measure of the strength of the bonds holding the nucleons together.

Nuclear Fission and Fusion:

In order to separate nucleons, energy must be supplied to the nucleus, which is usually accomplished by bombarding the nucleus with high-energy particles. In the case of heavy atomic nuclei (like uranium or plutonium atoms) being bombarded with neutrons and split apart, this is known as nuclear fission.

Nuclear fission, where an atom of uranium-235 is split by a free neutron to produce barium and krypton. Credit: physics.stackexchange.com

However, binding energy also plays a role in nuclear fusion, where light nuclei (such as hydrogen) are bound together under high-energy states. If the binding energy of the products is higher – when light nuclei fuse, or when heavy nuclei split – either of these processes will result in a release of the “extra” binding energy. This energy is referred to as nuclear energy, or loosely as nuclear power.

It is observed that the mass of any nucleus is always less than the sum of the masses of the individual nucleons which make it up. The “loss” of mass which results when a nucleus is split into smaller nuclei, or when nucleons merge to form a larger nucleus, is attributed to this binding energy. The missing mass may be lost during the process in the form of heat or light.

Once the system cools to normal temperatures and returns to ground states in terms of energy levels, there is less mass remaining in the system. In that case, the removed heat represents exactly the mass “deficit”, and the heat itself retains the mass which was lost (from the point of view of the initial system). This mass appears in any other system which absorbs the heat and gains thermal energy.
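This mass-deficit arithmetic can be illustrated with helium-4, whose nucleus weighs measurably less than its two protons plus two neutrons. A quick sketch, using standard particle masses in atomic mass units (the values here are approximate):

```python
# Binding energy from the mass defect: the helium-4 nucleus weighs less
# than its constituent nucleons; the difference, via E = mc^2, is the
# energy that binds it. Masses in atomic mass units (u).

M_PROTON = 1.007276         # u
M_NEUTRON = 1.008665        # u
M_HE4_NUCLEUS = 4.001506    # u (helium-4 nucleus, electrons excluded)
U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV

def binding_energy_mev(n_protons, n_neutrons, nucleus_mass_u):
    """Total nuclear binding energy in MeV, from the mass defect."""
    defect = n_protons * M_PROTON + n_neutrons * M_NEUTRON - nucleus_mass_u
    return defect * U_TO_MEV

be = binding_energy_mev(2, 2, M_HE4_NUCLEUS)
print(f"He-4 binding energy: {be:.1f} MeV ({be / 4:.2f} MeV per nucleon)")
```

The result, around 28 MeV (about 7 MeV per nucleon), is exactly the energy that fusing hydrogen into helium releases inside stars.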

Types of Binding Energy:

Strictly speaking, there are several different types of binding energy, depending on the particular field of study. When it comes to particle physics, binding energy refers to the energy an atom derives from electromagnetic interaction, and is also the amount of energy required to disassemble an atom into free nucleons.

Diagram showing the process of nuclear fusion. Credit: Lancaster University

In the case of removing electrons from an atom, a molecule, or an ion, the energy required is known as “electron binding energy” (aka. ionization potential). In general, the binding energy of a single proton or neutron in a nucleus is approximately a million times greater than the binding energy of a single electron in an atom.
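That “approximately a million times” figure can be checked with a one-line comparison, taking hydrogen’s 13.6 eV ionization energy and a typical nuclear binding energy of roughly 8 MeV per nucleon (both round figures, for illustration):

```python
# Rough comparison of binding-energy scales: ionizing hydrogen
# (removing its electron) takes 13.6 eV, while removing a nucleon
# from a typical nucleus takes roughly 8 MeV.

ELECTRON_BINDING_EV = 13.6        # hydrogen ionization energy, eV
NUCLEON_BINDING_EV = 8.0e6        # typical binding per nucleon, ~8 MeV in eV

ratio = NUCLEON_BINDING_EV / ELECTRON_BINDING_EV
print(f"nuclear binding is roughly {ratio:.0f}x the electron binding")
# about 6 x 10^5 -- consistent with the "approximately a million" above
```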

In astrophysics, scientists employ the term “gravitational binding energy” to refer to the amount of energy it would take to pull apart (to infinity) an object held together by gravity alone – i.e. any stellar object like a star, a planet, or a comet. It also refers to the amount of energy that is liberated (usually in the form of heat) during the accretion of such an object from material falling from infinity.

Finally, there is what is known as “bond” energy, which is a measure of the bond strength in chemical bonds, and is also the amount of energy (heat) it would take to break a chemical compound down into its constituent atoms. Basically, binding energy is the very thing that binds our Universe together. And when various parts of it are broken apart, it is the amount of energy needed to carry it out.

The study of binding energy has numerous applications, not the least of which are nuclear power, electricity, and chemical manufacture. And in the coming years and decades, it will be intrinsic in the development of nuclear fusion!

We have written many articles about binding energy for Universe Today. Here’s What is Bohr’s Atomic Model?, What is John Dalton’s Atomic Model?, What is the Plum Pudding Atomic Model?, What is Atomic Mass?, and Nuclear Fusion in Stars.

If you’d like more info on binding energy, check out Hyperphysics article on Nuclear Binding Energy.

We’ve also recorded an entire episode of Astronomy Cast all about the Important Numbers in the Universe. Listen here, Episode 45: The Important Numbers in the Universe.


What is the Speed of Light?

Since ancient times, philosophers and scholars have sought to understand light. In addition to trying to discern its basic properties (i.e. what is it made of – particle or wave, etc.) they have also sought to make finite measurements of how fast it travels. Since the late-17th century, scientists have been doing just that, and with increasing accuracy.

In so doing, they have gained a better understanding of light’s mechanics and the important role it plays in physics, astronomy and cosmology. Put simply, light moves at incredible speeds and is the fastest moving thing in the Universe. Its speed is considered a constant and an unbreakable barrier, and is used as a means of measuring distance. But just how fast does it travel?

Speed of Light (c):

Light travels at a constant speed of 1,079,252,848.8 (1.07 billion) km per hour. That works out to 299,792,458 m/s, or about 670,616,629 mph (miles per hour). To put that in perspective, if you could travel at the speed of light, you would be able to circumnavigate the globe approximately seven and a half times in one second. Meanwhile, a person flying at an average speed of about 800 km/h (500 mph), would take over 50 hours to circle the planet just once.

Illustration showing the distance light travels between the Earth and the Sun. Credit: LucasVB/Public Domain

To put that into an astronomical perspective, the average distance from the Earth to the Moon is 384,398.25 km (238,854 miles). So light crosses that distance in just over a second. Meanwhile, the average distance from the Sun to the Earth is ~149,597,886 km (92,955,817 miles), which means that light takes a little over 8 minutes to make that journey.
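These travel times follow directly from dividing distance by the speed of light. A quick sketch using the figures quoted above (the Earth circumference is an assumed round value):

```python
# Light-travel arithmetic: circumnavigations of Earth per second,
# and travel times to the Moon and the Sun.

C_KM_S = 299_792.458             # speed of light, km/s
EARTH_CIRCUMFERENCE_KM = 40_075  # equatorial circumference (approximate)
MOON_DISTANCE_KM = 384_398.25    # average Earth-Moon distance
SUN_DISTANCE_KM = 149_597_886.0  # average Earth-Sun distance (1 AU)

laps_per_second = C_KM_S / EARTH_CIRCUMFERENCE_KM
moon_time_s = MOON_DISTANCE_KM / C_KM_S
sun_time_min = SUN_DISTANCE_KM / C_KM_S / 60.0

print(f"{laps_per_second:.1f} trips around Earth per second")  # ~7.5
print(f"Earth to Moon: {moon_time_s:.2f} s")                   # ~1.28 s
print(f"Sun to Earth: {sun_time_min:.1f} min")                 # ~8.3 min
```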

Little wonder then why the speed of light is the metric used to determine astronomical distances. When we say a star like Proxima Centauri is 4.25 light years away, we are saying that it would take – traveling at a constant speed of 1.07 billion km per hour (670,616,629 mph) – about 4 years and 3 months to get there. But just how did we arrive at this highly specific measurement for “light-speed”?

History of Study:

Until the 17th century, scholars were unsure whether light traveled at a finite speed or instantaneously. From the days of the ancient Greeks to medieval Islamic scholars and the scientists of the early modern period, the debate went back and forth. It was not until the work of Danish astronomer Ole Rømer (1644-1710) that the first quantitative measurement was made.

In 1676, Rømer observed that the periods of Jupiter’s innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when it was receding from it. From this, he concluded that light travels at a finite speed, and estimated that it takes about 22 minutes to cross the diameter of Earth’s orbit.

Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the Carnegie Institute of Technology on Dec. 28th, 1934, where he expounded on his theory of how matter and energy are the same thing in different forms. Credit: AP Photo

Christiaan Huygens combined this estimate with an estimate of the diameter of the Earth’s orbit to arrive at a speed of 220,000 km/s. Isaac Newton also spoke about Rømer’s calculations in his seminal work Opticks (1706). Adjusting for the distance between the Earth and the Sun, he calculated that it would take light seven or eight minutes to travel from one to the other. In both cases, they were off by a relatively small margin.
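Huygens’ result can be reproduced, at least in spirit, by dividing the diameter of Earth’s orbit by Rømer’s 22-minute crossing time. The modern value of the astronomical unit is used here for illustration, so the number differs slightly from his:

```python
# Reproducing the spirit of Huygens' calculation: if light takes about
# 22 minutes to cross the diameter of Earth's orbit (Romer's estimate),
# its speed is that diameter divided by the crossing time.

AU_KM = 149_597_870.7           # modern astronomical unit, km
CROSSING_TIME_S = 22 * 60       # Romer's ~22 minutes, in seconds

speed_estimate = (2 * AU_KM) / CROSSING_TIME_S
print(f"estimated speed of light: {speed_estimate:,.0f} km/s")
# ~227,000 km/s -- in the neighborhood of Huygens' 220,000 km/s
```

The estimate lands within about 25% of the true value; the error comes almost entirely from Rømer’s 22-minute figure (the real crossing time is closer to 17 minutes).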

Later measurements made by French physicists Hippolyte Fizeau (1819 – 1896) and Léon Foucault (1819 – 1868) refined these measurements further – resulting in a value of 315,000 km/s (192,625 mi/s). And by the latter half of the 19th century, scientists became aware of the connection between light and electromagnetism.

This was accomplished by physicists measuring electromagnetic and electrostatic charges, who then found that the numerical value was very close to the speed of light (as measured by Fizeau). Based on his own work, which showed that electromagnetic waves propagate in empty space, German physicist Wilhelm Eduard Weber proposed that light was an electromagnetic wave.

The next great breakthrough came during the early 20th century. In his 1905 paper, titled “On the Electrodynamics of Moving Bodies”, Albert Einstein asserted that the speed of light in a vacuum, measured by a non-accelerating observer, is the same in all inertial reference frames and independent of the motion of the source or observer.

A laser shining through a glass of water demonstrates how many changes in speed (in mph) it undergoes as it passes from air, to glass, to water, and back again. Credit: Bob King

Using this and Galileo’s principle of relativity as a basis, Einstein derived the Theory of Special Relativity, in which the speed of light in vacuum (c) was a fundamental constant. Prior to this, the working consensus among scientists held that space was filled with a “luminiferous aether” that was responsible for its propagation – i.e. that light traveling through a moving medium would be dragged along by the medium.

This in turn meant that the measured speed of light would be a simple sum of its speed through the medium plus the speed of that medium. However, Einstein’s theory effectively made the concept of the stationary aether useless, and revolutionized the concepts of space and time.

Not only did it advance the idea that the speed of light is the same in all inertial reference frames, it also introduced the idea that major changes occur when things move close to the speed of light. These include the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer – i.e. time dilation, where time slows as the speed of light is approached.

His observations also reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations by doing away with extraneous explanations used by other scientists, and accorded with the directly observed speed of light.

During the second half of the 20th century, increasingly accurate measurements using laser interferometry and cavity resonance techniques would further refine estimates of the speed of light. By 1972, a group at the US National Bureau of Standards in Boulder, Colorado, used the laser interferometry technique to arrive at the currently-recognized value of 299,792,458 m/s.

Role in Modern Astrophysics:

Einstein’s theory that the speed of light in vacuum is independent of the motion of the source and the inertial reference frame of the observer has since been consistently confirmed by many experiments. It also sets an upper limit on the speeds at which all massless particles and waves (which includes light) can travel in a vacuum.

One of the outgrowths of this is that cosmologists now treat space and time as a single, unified structure known as spacetime – in which the speed of light can be used to define values for both (i.e. “lightyears”, “light minutes”, and “light seconds”). The measurement of the speed of light has also become a major factor when determining the rate of cosmic expansion.

Beginning in the 1920’s with observations of Lemaitre and Hubble, scientists and astronomers became aware that the Universe is expanding from a point of origin. Hubble also observed that the farther away a galaxy is, the faster it appears to be moving. In what is now referred to as the Hubble Parameter, the speed at which the Universe is expanding is calculated to 68 km/s per megaparsec.

This phenomenon, which has been theorized to mean that some galaxies could actually be receding faster than the speed of light, may place a limit on what is observable in our Universe. Essentially, galaxies receding faster than the speed of light would cross a “cosmological event horizon”, beyond which they are no longer visible to us.
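The distance of that horizon can be roughly estimated by asking where the recession speed implied by the Hubble Parameter reaches the speed of light. This is a simplified sketch that ignores the details of the expansion history:

```python
# Rough estimate of the distance at which the Hubble expansion rate
# reaches the speed of light, using 68 km/s per megaparsec. Beyond
# roughly this distance, recession is superluminal.

C_KM_S = 299_792.458        # speed of light, km/s
H0 = 68.0                   # Hubble parameter, km/s per Mpc
MPC_TO_LY = 3.2616e6        # light-years per megaparsec

d_mpc = C_KM_S / H0                  # distance where recession speed = c
d_gly = d_mpc * MPC_TO_LY / 1e9      # in billions of light-years

print(f"Hubble distance: {d_mpc:,.0f} Mpc (~{d_gly:.1f} billion ly)")
```

The answer, around 14 billion light-years, is comparable to the size of the observable Universe, which is why this horizon matters for what we can and cannot see.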

Also, by the 1990’s, redshift measurements of distant galaxies showed that the expansion of the Universe has been accelerating for the past few billion years. This has led to theories like “Dark Energy“, where an unseen force is driving the expansion of space itself instead of objects moving through it (thus not placing constraints on the speed of light or violating relativity).

Along with special and general relativity, the modern value of the speed of light in a vacuum has gone on to inform cosmology, quantum physics, and the Standard Model of particle physics. It remains a constant when talking about the upper limit at which massless particles can travel, and remains an unachievable barrier for particles that have mass.

Perhaps, someday, we will find a way to exceed the speed of light. While we have no practical ideas for how this might happen, the smart money seems to be on technologies that will allow us to circumvent the laws of spacetime, either by creating warp bubbles (aka. the Alcubierre Warp Drive), or tunneling through it (aka. wormholes).

Until that time, we will just have to be satisfied with the Universe we can see, and to stick to exploring the part of it that is reachable using conventional methods.

We have written many articles about the speed of light for Universe Today. Here’s How Fast is the Speed of Light?, How are Galaxies Moving Away Faster than Light?, How Can Space Travel Faster than the Speed of Light?, and Breaking the Speed of Light.

Here’s a cool calculator that lets you convert many different units for the speed of light, and here’s a relativity calculator, in case you wanted to travel nearly the speed of light.

Astronomy Cast also has an episode that addresses questions about the speed of light – Questions Show: Relativity, Relativity, and more Relativity.


Second Gravitational Wave Source Found By LIGO

Lightning has struck twice – maybe three times – and scientists from the Laser Interferometer Gravitational-wave Observatory, or LIGO, hope this is just the beginning of a new era of understanding our Universe. This “lightning” came in the form of the elusive, hard-to-detect gravitational waves, produced by gigantic events, such as a pair of black holes colliding. The energy released from such an event disturbs the very fabric of space and time, much like ripples in a pond. Today’s announcement is the second set of gravitational wave ripples detected by LIGO, following the historic first detection announced in February of this year.

“This collision happened 1.5 billion years ago,” said Gabriela Gonzalez of Louisiana State University at a press conference to announce the new detection, “and with this we can tell you the era of gravitational wave astronomy has begun.”

LIGO’s first detection of gravitational waves from merging black holes occurred Sept. 14, 2015 and it confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity. The second detection occurred on Dec. 25, 2015, and was recorded by both of the twin LIGO detectors.

While the first detection of the gravitational waves released by the violent black hole merger was just a little “chirp” that lasted only one-fifth of a second, this second detection was more of a “whoop” that was visible for an entire second in the data. Listen in this video:

“This is what we call gravity’s music,” said González as she played the video at today’s press conference.

While gravitational waves are not sound waves, the researchers converted the gravitational wave’s oscillation and frequency to a sound wave with the same frequency. Why were the two events so different?

From the data, the researchers concluded the second set of gravitational waves were produced during the final moments of the merger of two black holes that were 14 and 8 times the mass of the Sun, and the collision produced a single, more massive spinning black hole 21 times the mass of the Sun. In comparison, the black holes detected in September 2015 were 36 and 29 times the Sun’s mass, merging into a black hole of 62 solar masses.
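Since the article gives the component and final masses for both events, the energy carried off by the waves can be estimated from the mass deficit via E = Δmc². A rough back-of-the-envelope sketch (the masses are the rounded figures quoted above, not LIGO’s precise published values):

```python
# Back-of-the-envelope estimate of the energy radiated as gravitational
# waves, from the "missing" mass in each merger (E = delta_m * c^2).
# Masses are the rounded values quoted in the article.

C = 2.99792458e8          # speed of light, m/s
M_SUN = 1.989e30          # solar mass, kg

def radiated_energy(m1, m2, m_final):
    """Energy carried away by gravitational waves, in joules."""
    delta_m = (m1 + m2 - m_final) * M_SUN
    return delta_m * C**2

# GW151226 (this detection): 14 + 8 solar masses -> 21 solar masses
e_dec = radiated_energy(14, 8, 21)    # ~1 solar mass radiated

# GW150914 (first detection): 36 + 29 -> 62 solar masses
e_sep = radiated_energy(36, 29, 62)   # ~3 solar masses radiated

print(f"GW151226: {e_dec:.2e} J")
print(f"GW150914: {e_sep:.2e} J")
```

Roughly one solar mass of material was converted into gravitational-wave energy in seconds, which gives a sense of why the collision "shook spacetime."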

The scientists said the higher-frequency gravitational waves from the lower-mass black holes hit the LIGO detectors’ “sweet spot” of sensitivity.

“It is very significant that these black holes were much less massive than those observed in the first detection,” said Gonzalez. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”

An aerial view of LIGO Hanford. (Credit: Gary White/Mark Coles/California Institute of Technology/LIGO/NSF).

LIGO allows scientists to study the Universe in a new way, using gravity instead of light. LIGO uses lasers to precisely measure the position of mirrors separated from each other by 4 kilometers (about 2.5 miles) at two locations that are over 3,000 km apart, in Livingston, Louisiana, and Hanford, Washington. LIGO doesn’t detect the black hole collision event directly; it detects the stretching and compressing of space itself. The detections so far are the result of LIGO’s ability to measure the perturbation of space with an accuracy of 1 part in a thousand billion billion. The signal from the latest event, named GW151226, was produced by matter being converted into energy, which literally shook spacetime like Jello.
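To get a feel for the accuracy figure quoted above, here is a quick sketch (using the article’s rounded numbers, not LIGO’s published calibration) of the arm-length change that a strain of 1 part in a thousand billion billion implies:

```python
# Illustrative sketch of the measurement scale described above.
# Numbers are the article's rounded figures.

ARM_LENGTH = 4_000        # LIGO arm length in meters (4 km)
STRAIN = 1e-21            # ~1 part in a thousand billion billion

# The change in arm length LIGO must resolve:
delta_l = STRAIN * ARM_LENGTH
print(f"Arm length change: {delta_l:.1e} m")   # ~4e-18 m

# For scale: a proton is roughly 1.7e-15 m across
PROTON_DIAMETER = 1.7e-15
print(f"Fraction of a proton: {delta_l / PROTON_DIAMETER:.4f}")
```

The mirrors move by only a few thousandths of a proton’s width, which is why the instrument needs kilometer-scale arms and exquisite isolation from local noise.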

LIGO team member Fulvio Ricci, a physicist at the University of Rome La Sapienza, said there was a third “candidate” detection of an event in October – which Ricci said he prefers to call a “trigger” – but it was much less significant, and the signal-to-noise ratio was not large enough to officially count as a detection.

But still, the team said, the two confirmed detections point to black holes being much more common in the Universe than previously believed, and they might frequently come in pairs.

The second discovery “has truly put the ‘O’ for Observatory in LIGO,” said Albert Lazzarini, deputy director of the LIGO Laboratory at Caltech. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future. LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”

LIGO is now offline for improvements. Its next data-taking run will begin this fall, and the improvements in detector sensitivity could allow LIGO to probe as much as 1.5 to two times the volume of the Universe covered during the first run. A third site, the Virgo detector located near Pisa, Italy, with a design similar to the twin LIGO detectors, is expected to come online during the latter half of LIGO’s upcoming observation run. Virgo will improve physicists’ ability to locate the source of each new event, by comparing millisecond-scale differences in the arrival time of incoming gravitational wave signals.

In the meantime, you can help the LIGO team with the Gravity Spy citizen science project through Zooniverse.

Sources for further reading:
Press releases:
University of Maryland
Northwestern University
West Virginia University
Pennsylvania State University
Physical Review Letters: GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence
LIGO facts page, Caltech

For an excellent overview of gravitational waves, their sources, and their detection, check out Markus Possel’s excellent series of articles we featured on UT in February:

Gravitational Waves and How They Distort Space

Gravitational Wave Detectors and How They Work

Sources of Gravitational Waves: The Most Violent Events in the Universe

How Does Light Travel?

Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. For centuries, scientists ventured back and forth between the notion that light was a particle or a wave, until the breakthroughs of the 20th century showed that it behaves as both.

These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many fascinating and unanswered questions when it comes to light, many of which arise from its dual nature. For instance, how is it that light can apparently be without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?

Theory of Light to the 19th Century:

During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.

In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.

The first edition of Newton’s Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light (1704). Credit: Public Domain.

Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, and its principles were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light”. According to Newton, the principles of light can be summed up as follows:

  • Every source of light emits large numbers of tiny particles known as corpuscles in a medium surrounding the source.
  • These corpuscles are perfectly elastic, rigid, and weightless.

This represented a challenge to “wave theory”, which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These ideas were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his Traité de la lumière (“Treatise on Light”). In it, he argued a revised version of Descartes’ views, one in which light travels at a finite speed and is propagated by means of spherical waves emitted along the wave front.

Double-Slit Experiment:

By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.

The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.

According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.

The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead to scientists considering yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
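The wave prediction behind Young’s result can be sketched numerically: two coherent sources produce bright bands where their path difference is a whole number of wavelengths, and dark bands in between. The slit separation, wavelength, and screen distance below are illustrative values, not Young’s actual apparatus:

```python
import math

# Two-slit interference: relative intensity on the screen and the
# resulting fringe spacing (small-angle approximation).
# All dimensions are illustrative values.

WAVELENGTH = 550e-9    # green light, m
SLIT_SEP = 0.1e-3      # distance between the slits, m
SCREEN_DIST = 1.0      # slits-to-screen distance, m

def intensity(y):
    """Relative intensity at height y on the screen (0..1)."""
    path_diff = SLIT_SEP * y / SCREEN_DIST       # d*sin(theta) ~ d*y/L
    phase = math.pi * path_diff / WAVELENGTH
    return math.cos(phase) ** 2                  # bright/dark bands

# Fringe spacing: distance between adjacent bright bands
fringe = WAVELENGTH * SCREEN_DIST / SLIT_SEP
print(f"Fringe spacing: {fringe * 1e3:.2f} mm")  # ~5.5 mm

# A bright band at y = 0, a dark band half a fringe away:
print(intensity(0.0))            # 1.0 (maximum)
print(intensity(fringe / 2))     # ~0.0 (minimum)
```

A stream of classical, non-interacting particles would instead pile up in two spots behind the slits, with no alternating bands at all, which is exactly why Young’s pattern was so persuasive.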

Electromagnetism and Special Relativity:

Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used timings of Jupiter’s moon Io to demonstrate that light travels at a finite speed (rather than instantaneously).

Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the meeting of the American Association for the Advancement of Science on Dec. 28th, 1934. Credit: AP Photo

By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. By conducting measurements of different types of radiation (magnetic fields, ultraviolet and infrared radiation), he was able to calculate the speed of light in a vacuum (represented as c).
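That calculation can be reproduced in a few lines: in Maxwell’s theory, c falls out of two measurable constants of the vacuum, the permeability μ0 and the permittivity ε0:

```python
import math

# Maxwell's result: the speed of electromagnetic waves in a vacuum
# follows from two constants, c = 1 / sqrt(mu_0 * epsilon_0).

MU_0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m (pre-2019 defined value)
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(f"c = {c:,.0f} m/s")    # ~299,792,458 m/s
```

That this electromagnetic speed matched the measured speed of light was Maxwell’s strongest evidence that light itself is an electromagnetic wave.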

In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.

Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.

For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
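The size of these effects is governed by the Lorentz factor, γ = 1/√(1 − v²/c²): moving clocks run slow, and moving lengths contract, by exactly this factor. A short sketch of how it grows as speeds approach c:

```python
import math

# The Lorentz factor: the amount by which time dilates and lengths
# contract for a body moving at speed v, as measured by an observer.

C = 299_792_458  # speed of light, m/s

def lorentz_factor(v):
    """Gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.1, 0.5, 0.9, 0.99):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:.2f}c -> gamma = {gamma:.2f}")

# At 10% of c the effect is tiny (gamma ~ 1.005); at 99% of c a
# moving clock runs roughly 7x slow in the observer's frame.
```

This is why relativistic effects went unnoticed for so long: at everyday speeds, γ is indistinguishable from 1.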

Einstein and the Photon:

In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light – a phenomenon known as the photoelectric effect. Einstein based his idea on Planck’s earlier work with “black bodies” – materials that absorb electromagnetic energy instead of reflecting it (i.e. white bodies).

At the time, Einstein’s photoelectric effect was an attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had happened only eight years earlier (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).

At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – quantized packets of light that came to be known as “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
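Einstein’s quantized picture makes a concrete prediction: an electron is ejected only if a single photon’s energy hf exceeds the metal’s work function, regardless of how intense the light is. A sketch, using an illustrative work function (roughly that of cesium, about 2.1 eV):

```python
# Photoelectric effect: each photon carries E = h*f, and an electron is
# ejected only if that exceeds the metal's work function.
# The work function below (~cesium, 2.1 eV) is an illustrative value.

H = 6.62607015e-34    # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electron-volt

def max_kinetic_energy(frequency_hz, work_function_ev):
    """Max kinetic energy of an ejected electron, in eV (negative: no emission)."""
    photon_energy_ev = H * frequency_hz / EV
    return photon_energy_ev - work_function_ev

CESIUM = 2.1  # eV, illustrative

# Blue light (~650 THz) ejects electrons from cesium...
print(max_kinetic_energy(650e12, CESIUM))   # positive -> emission
# ...but lower-frequency red light (~430 THz) does not, however bright:
print(max_kinetic_energy(430e12, CESIUM))   # negative -> no emission
```

The existence of this sharp frequency threshold, independent of intensity, is precisely what the classical wave picture could not explain.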

Wave-Particle Duality:

Subsequent theories on the behavior of light would further refine this idea, including French physicist Louis-Victor de Broglie’s calculation of the wavelength at which light functioned. This was followed by Heisenberg’s “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum, and vice versa) and Schrödinger’s proposal that all particles have a “wave function”.

In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. At some location, the measurement of the wave function will randomly “collapse” (or rather, “decohere”) to a sharply peaked function. This was illustrated in Schrödinger’s famous thought experiment involving a closed box, a cat, and a vial of poison (known as the “Schrödinger’s Cat” paradox).

Artist’s impression of two photons travelling at different wavelengths, resulting in different-colored light. Credit: NASA/Sonoma State University/Aurore Simonnet

According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for massless particles like the photon, no solution existed. Further versions of the Double-Slit Experiment, in which measuring devices were incorporated to observe the photons as they passed through the slits, confirmed the dual nature of photons.

When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.

Quantum Field Theory (QFT) was developed over the following decades to resolve much of the ambiguity around wave-particle duality. In time, this theory was shown to apply to other particles and fundamental forces of interaction (such as the weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that are force carriers and have no mass.

So how does light travel? Basically, it travels at incredible speeds (299,792,458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through mediums (like air and water) as well as the vacuum of space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).

What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.

For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!

We have written many articles about light here at Universe Today. For example, here’s How Fast is the Speed of Light?, How Far is a Light Year?, What is Einstein’s Theory of Relativity?

If you’d like more info on light, check out these articles from The Physics Hypertextbook and NASA’s Mission Science page.

We’ve also recorded an entire episode of Astronomy Cast all about Interstellar Travel. Listen here, Episode 145: Interstellar Travel.

What Is Air Resistance?


Here on Earth, we tend to take air resistance (aka. “drag”) for granted. We just assume that when we throw a ball, launch an aircraft, deorbit a spacecraft, or fire a bullet from a gun, that the act of it traveling through our atmosphere will naturally slow it down. But what is the reason for this? Just how is air able to slow an object down, whether it is in free-fall or in flight?

Because of our reliance on air travel, our enthusiasm for space exploration, and our love of sports and making things airborne (including ourselves), understanding air resistance is key to understanding physics, and an integral part of many scientific disciplines. As part of the subdiscipline known as fluid dynamics, it applies to fields of aerodynamics, hydrodynamics, astrophysics, and nuclear physics (to name a few).

Definition:

By definition, air resistance describes the forces that are in opposition to the relative motion of an object as it passes through the air. These drag forces act opposite to the oncoming flow velocity, thus slowing the object down. Unlike other resistance forces, drag depends directly on velocity, since it is the component of the net aerodynamic force acting opposite to the direction of the movement.

Another way to put it would be to say that air resistance is the result of collisions of the object’s leading surface with air molecules. It can therefore be said that the two most common factors that have a direct effect upon the amount of air resistance are the speed of the object and the cross-sectional area of the object. Ergo, both increased speeds and cross-sectional areas will result in an increased amount of air resistance.

This picture shows a bullet and the air flowing around it, giving visual representation to air resistance. Credits: Andrew Davidhazy/Rochester Institute of Technology

In terms of aerodynamics and flight, drag refers to both the forces acting opposite of thrust, as well as the forces working perpendicular to it (i.e. lift). In astrodynamics, atmospheric drag is both a positive and a negative force depending on the situation. It is both a drain on fuel and efficiency during lift-off and a fuel savings when a spacecraft is returning to Earth from orbit.

Calculating Air Resistance:

Air resistance is usually calculated using the “drag equation”, which determines the force experienced by an object moving through a fluid or gas at relatively large velocity. This can be expressed mathematically as:

FD = ½ ρ v² CD A

In this equation, FD represents the drag force, ρ (rho) is the density of the fluid, v is the speed of the object relative to the fluid, A is the cross-sectional area, and CD is the drag coefficient. The result is what is called “quadratic drag”. Once this is determined, calculating the amount of power needed to overcome the drag involves a similar process, which can be expressed mathematically as:

Pd = Fd · v = ½ ρ v³ A Cd

Here, Pd is the power needed to overcome the force of drag, Fd is the drag force, v is the speed of the object relative to the fluid, ρ is the density of the fluid, A is the cross-sectional area, and Cd is the drag coefficient. As it shows, the power needed scales with the cube of the velocity, so if it takes 10 horsepower to go 80 kph, it will take 80 horsepower to go 160 kph. In short, a doubling of speed requires eight times the power.
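The two formulas above can be sketched directly; the drag coefficient, frontal area, and air density below are illustrative values (roughly a car-sized object in sea-level air), not measurements:

```python
# The drag equation and the power needed to overcome drag.
# All parameter values are illustrative.

RHO_AIR = 1.225   # sea-level air density, kg/m^3

def drag_force(v, cd, area, rho=RHO_AIR):
    """Drag force in newtons: F_D = 1/2 * rho * v^2 * C_D * A."""
    return 0.5 * rho * v**2 * cd * area

def drag_power(v, cd, area, rho=RHO_AIR):
    """Power needed to overcome drag, in watts: P_d = F_d * v."""
    return drag_force(v, cd, area, rho) * v

CD, AREA = 0.3, 2.0   # drag coefficient and frontal area (illustrative)

# Doubling speed quadruples the drag force but multiplies power by eight:
for kph in (80, 160):
    v = kph / 3.6     # convert km/h to m/s
    print(f"{kph} kph: F = {drag_force(v, CD, AREA):.0f} N, "
          f"P = {drag_power(v, CD, AREA) / 1e3:.1f} kW")
```

Running this shows the cubic scaling at work: the force at 160 kph is four times that at 80 kph, while the power is eight times higher.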

An F-22 Raptor reaching a velocity high enough to achieve a sonic boom. Credit: strangesounds.org

Types of Air Resistance:

There are three main types of drag in aerodynamics – Lift Induced, Parasitic, and Wave. Each affects an object’s ability to stay aloft, as well as the power and fuel needed to keep it there. Lift-induced (or just induced) drag occurs as the result of the creation of lift on a three-dimensional lifting body (wing or fuselage). It has two primary components: vortex drag and lift-induced viscous drag.

The vortices derive from the turbulent mixing of air of varying pressure on the upper and lower surfaces of the body. These are needed to create lift. As the lift increases, so does the lift-induced drag. For an aircraft this means that as the angle of attack and the lift coefficient increase to the point of stall, so does the lift-induced drag.

By contrast, parasitic drag is caused by moving a solid object through a fluid. This type of drag is made up of multiple components, including “form drag” and “skin friction drag”. In aviation, induced drag tends to be greater at lower speeds, because a high angle of attack is required to maintain lift. As speed increases, induced drag becomes much less, but parasitic drag grows, because the fluid flows faster around protruding parts of the aircraft, increasing friction. The combined overall drag curve is minimal at some intermediate airspeed, where the aircraft is at or close to its optimal efficiency.

Space Shuttle Columbia launching on its maiden voyage on April 12th, 1981. Credit: NASA

Wave drag (compressibility drag) is created by the presence of a body moving at high speed through a compressible fluid. In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight. In transonic flight – at speeds of Mach 0.5 or greater, but still less than Mach 1.0 (aka. speed of sound) – wave drag is the result of local supersonic flow.

Local supersonic flow can occur on bodies traveling well below the speed of sound, because the local speed of the air increases as it accelerates over the body. In short, aircraft flying at transonic speeds often incur wave drag as a result, and it increases as the speed of the aircraft nears the sound barrier of Mach 1.0, before the aircraft becomes a supersonic object.

In supersonic flight, wave drag is the result of oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows bow waves will form instead. At supersonic speeds, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.

Understanding the role air friction plays in flight, knowing its mechanics, and knowing the kinds of power needed to overcome it are all crucial when it comes to aerospace and space exploration. Knowing all this will also be critical when it comes time to explore other planets in our Solar System, and in other star systems altogether!

We have written many articles about air resistance and flight here at Universe Today. Here’s an article on What Is Terminal Velocity?, How Do Planes Fly?, What is the Coefficient of Friction?, and What is the Force of Gravity?

If you’d like more information on NASA’s aircraft programs, check out the Beginner’s Guide to Aerodynamics, and here’s a link to the Drag Equation.

We’ve also recorded many related episodes of Astronomy Cast. Listen here, Episode 102: Gravity.