Six and a half decades after he passed away, famed theoretical physicist Albert Einstein is still being proven right! In addition to General Relativity (GR) being tested under the most extreme conditions, lesser-known aspects of his theories are still being validated as well. For example, GR predicts that gravity and inertia are often indistinguishable, in what is known as the gravitational Strong Equivalence Principle (SEP).
Thanks to an international team of researchers, it has been proven under the strongest conditions to date. By precisely tracking the motion of a pulsar, the team demonstrated that gravity causes neutron stars and white dwarf stars to fall with equal accelerations. This confirms Einstein’s prediction that freefall accurately simulates zero-gravity conditions in all inertial reference frames.
During the 1930s, venerable theoretical physicist Albert Einstein returned to the field of quantum mechanics, which his theories of relativity helped to create. Hoping to develop a more complete theory of how particles behave, Einstein was instead horrified by the prospect of quantum entanglement – something he described as “spooky action at a distance”.
Despite Einstein’s misgivings, quantum entanglement has gone on to become an accepted part of quantum mechanics. And now, for the first time ever, a team of physicists from the University of Glasgow took an image of a form of quantum entanglement (aka. Bell entanglement) at work. In so doing, they managed to capture the first piece of visual evidence of a phenomenon that baffled even Einstein himself.
Special Relativity. It’s been the bane of space explorers, futurists and science fiction authors since Albert Einstein first proposed it in 1905. For those of us who dream of humans one day becoming an interstellar species, this scientific fact is like a wet blanket. Luckily, a few theoretical concepts have been proposed which indicate that Faster-Than-Light (FTL) travel might still be possible someday.
A popular example is the idea of a wormhole: a speculative structure that links two distant points in spacetime that would enable interstellar space travel. Recently, a team of Ivy League scientists conducted a study that indicated how “traversable wormholes” could actually be a reality. The bad news is that their results indicate that these wormholes aren’t exactly shortcuts, and could be the cosmic equivalent of “taking the long way”!
Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would even be attached to a series of units, designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.
If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!
Early Life and Education:
Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians gymnasium school. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.
He graduated early, at the age of 17, and went on to study theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, with whom he became close friends, and eventually Planck decided to adopt thermodynamics as his field of research.
In October 1878, he passed his qualifying exams and defended his dissertation in February of 1879 – titled “On the second law of thermodynamics”. In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to be derived: “It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.”
For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – i.e. black body radiation and special states of equilibrium.
With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community did not pay much attention to him, he continued his work on heat theory and came to independently discover the same theory of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist who is credited with the discovery.
In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, where he continued his studies in physical chemistry and heat systems. By 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until his retirement in January 1926, when he was succeeded by Erwin Schrodinger.
Black Body Radiation:
It was in 1894, when he was under a commission from the electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body’s temperature and the frequency of the radiation (i.e., the color of the light).
In time, he resolved this problem by suggesting that electromagnetic energy did not flow in a continuous stream but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.
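Planck’s postulate is simple enough to check numerically. Here is a minimal sketch in Python (the frequency chosen below is illustrative):

```python
# Planck's postulate: the energy of a single quantum is E = h * nu.
h = 6.62607015e-34  # Planck constant in joule-seconds (exact under the 2019 SI definition)

def quantum_energy(frequency_hz):
    """Return the energy (in joules) of one quantum of radiation at the given frequency."""
    return h * frequency_hz

# Green light has a frequency of roughly 5.6e14 Hz:
energy = quantum_energy(5.6e14)
print(energy)  # on the order of 3.7e-19 joules per quantum
```

The tiny value of h is why this granularity goes unnoticed at everyday scales: a single quantum of visible light carries less than a billionth of a billionth of a joule.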
A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he only came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck’s constant would prove to have a revolutionary impact, causing scientists to break with classical physics, and leading to the creation of Planck units (length, time, mass, etc.).
By the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that would prove Planck’s quantum theory to be correct. The first was his theory of photons (proposed in his 1905 paper on the photoelectric effect), which contradicted classical physics and the theory of electrodynamics, which held that light was a wave that needed a medium to propagate.
The second was Einstein’s study of the anomalous behavior of specific heats at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein’s special relativity, he initially rejected the idea that light could be made up of discrete quanta (in this case, photons).
However, in 1911, Planck and Walther Nernst (a colleague of Planck’s) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and was able to convince Planck of his theories regarding specific heats during the course of the proceedings. The two became friends and colleagues; and in 1914, Planck created a professorship for Einstein at the University of Berlin.
During the 1920s, a new theory of quantum mechanics emerged, which came to be known as the “Copenhagen interpretation“. This theory, which was largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, stated that quantum mechanics can only predict probabilities; and that in general, physical systems do not have definite properties prior to being measured.
This was rejected by Planck, however, who felt that wave mechanics would soon render quantum theory unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the “chaos” of quantum theory. However, time would prove that both interpretations were correct (and mathematically equivalent), giving rise to the theory of wave-particle duality.
World War I and World War II:
In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three“, a manifesto which endorsed the war and justified Germany’s participation. However, by 1915, Planck revoked parts of the Manifesto, and by 1916, he became an outspoken opponent of Germany’s annexation of other territories.
After the war, Planck was considered to be the German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.
The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.
Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when threatened. In 1936, he resigned his position as head of the KWS due to his continued support of Jewish colleagues in the Society. In 1938, he resigned as president of the Prussian Academy of Sciences due to the Nazi Party assuming control of it.
Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. In 1944, Planck’s son Erwin was arrested for his involvement in the attempted assassination of Hitler in the July 20th plot, for which he was executed by the Gestapo in early 1945. This event caused Planck to descend into a depression from which he did not recover before his death.
Death and Legacy:
Planck died on October 4th, 1947 in Göttingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.
In recognition of his fundamental contribution to a new branch of physics, he was awarded the Nobel Prize in Physics in 1918. He was also elected to Foreign Membership of the Royal Society in 1926, and was awarded the Society’s Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.
He was also greatly respected by his colleagues and contemporaries and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.
The Max Planck Society was also created in the city of Göttingen in 1948 to honor his life and his achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as being a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.
In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.
Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.
Since ancient times, philosophers and scholars have sought to understand light. In addition to trying to discern its basic properties (i.e. what is it made of – particle or wave, etc.) they have also sought to make finite measurements of how fast it travels. Since the late-17th century, scientists have been doing just that, and with increasing accuracy.
In so doing, they have gained a better understanding of light’s mechanics and the important role it plays in physics, astronomy and cosmology. Put simply, light moves at incredible speeds and is the fastest moving thing in the Universe. Its speed is considered a constant and an unbreakable barrier, and is used as a means of measuring distance. But just how fast does it travel?
Speed of Light (c):
Light travels at a constant speed of 1,079,252,848.8 km (1.07 billion km) per hour. That works out to 299,792,458 m/s, or about 670,616,629 mph (miles per hour). To put that in perspective, if you could travel at the speed of light, you would be able to circumnavigate the globe approximately seven and a half times in one second. Meanwhile, a person flying at an average speed of about 800 km/h (500 mph) would take over 50 hours to circle the planet just once.
To put that into an astronomical perspective, the average distance from the Earth to the Moon is 384,398.25 km (238,854 miles). So light crosses that distance in a little over a second. Meanwhile, the average distance from the Sun to the Earth is ~149,597,886 km (92,955,817 miles), which means that light takes only about 8 minutes to make that journey.
Little wonder then why the speed of light is the metric used to determine astronomical distances. When we say a star like Proxima Centauri is 4.25 light years away, we are saying that it would take – traveling at a constant speed of 1.07 billion km per hour (670,616,629 mph) – about 4 years and 3 months to get there. But just how did we arrive at this highly specific measurement for “light-speed”?
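These travel times follow directly from dividing distance by the speed of light. A quick back-of-envelope check in Python, using the figures quoted above:

```python
c_km_s = 299_792.458  # speed of light in km/s

def light_travel_time_s(distance_km):
    """Time (in seconds) for light to cross the given distance."""
    return distance_km / c_km_s

earth_moon = light_travel_time_s(384_398.25)   # average Earth-Moon distance, km
sun_earth = light_travel_time_s(149_597_886)   # average Sun-Earth distance, km

print(round(earth_moon, 2), "seconds")      # about 1.28 seconds
print(round(sun_earth / 60, 1), "minutes")  # about 8.3 minutes
```

The same division, with Proxima Centauri’s distance plugged in, returns the 4.25 years quoted above — which is exactly why astronomers quote such distances in light-years in the first place.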
History of Study:
Until the 17th century, scholars were unsure whether light traveled at a finite speed or instantaneously. From the days of the ancient Greeks to medieval Islamic scholars and scientists of the early modern period, the debate went back and forth. It was not until the work of Danish astronomer Ole Rømer (1644-1710) that the first quantitative measurement was made.
In 1676, Rømer observed that the periods of Jupiter’s innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when it was receding from it. From this, he concluded that light travels at a finite speed, and estimated that it takes about 22 minutes to cross the diameter of Earth’s orbit.
Christiaan Huygens used this estimate and combined it with an estimate of the diameter of the Earth’s orbit to obtain an estimate of 220,000 km/s. Isaac Newton also spoke about Rømer’s calculations in his seminal work Opticks (1706). Adjusting for the distance between the Earth and the Sun, he calculated that it would take light seven or eight minutes to travel from one to the other. In both cases, they were off by a relatively small margin.
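Huygens’ arithmetic is easy to reproduce. The sketch below uses the modern Earth–Sun distance; Huygens worked from a smaller 17th-century estimate of the orbit, which is why his published figure came out lower:

```python
au_km = 149_597_886            # modern mean Earth-Sun distance, km
orbit_diameter_km = 2 * au_km  # diameter of Earth's orbit
roemer_time_s = 22 * 60        # Rømer's estimate: 22 minutes for light to cross it

speed_estimate = orbit_diameter_km / roemer_time_s
print(round(speed_estimate), "km/s")  # roughly 227,000 km/s, in the same range as Huygens' 220,000 km/s
```

Given that the true value is just under 300,000 km/s, an error of about 25% from a 17th-century timing of Jupiter’s moons is a remarkably small margin indeed.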
Later measurements made by French physicists Hippolyte Fizeau (1819 – 1896) and Léon Foucault (1819 – 1868) refined these measurements further – resulting in a value of 315,000 km/s (192,625 mi/s). And by the latter half of the 19th century, scientists became aware of the connection between light and electromagnetism.
This began with physicists Wilhelm Eduard Weber and Rudolf Kohlrausch, who measured the ratio of electromagnetic and electrostatic units of charge and found that its numerical value was very close to the speed of light (as measured by Fizeau). Building on this, James Clerk Maxwell showed that electromagnetic waves would propagate in empty space at this same speed, and proposed that light was an electromagnetic wave.
The next great breakthrough came during the early 20th century. In his 1905 paper, titled “On the Electrodynamics of Moving Bodies”, Albert Einstein asserted that the speed of light in a vacuum, measured by a non-accelerating observer, is the same in all inertial reference frames and independent of the motion of the source or observer.
Using this and Galileo’s principle of relativity as a basis, Einstein derived the Theory of Special Relativity, in which the speed of light in vacuum (c) was a fundamental constant. Prior to this, the working consensus among scientists held that space was filled with a “luminiferous aether” that was responsible for its propagation – i.e. that light traveling through a moving medium would be dragged along by the medium.
This in turn meant that the measured speed of the light would be a simple sum of its speed through the medium plus the speed of that medium. However, Einstein’s theory effectively made the concept of the stationary aether useless and revolutionized the concepts of space and time.
Not only did it advance the idea that the speed of light is the same in all inertial reference frames, it also introduced the idea that major changes occur when things move close to the speed of light. These include the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer (i.e. time dilation, in which time slows for a moving object as its speed approaches that of light).
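The size of this time-dilation effect is given by the Lorentz factor, γ = 1/√(1 − v²/c²) – a standard result of special relativity not quoted in the text above. A short sketch shows why the effect is invisible at everyday speeds:

```python
import math

def lorentz_gamma(v_fraction_of_c):
    """Time-dilation factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# Gamma stays near 1 at modest speeds, then grows without bound as v approaches c:
for fraction in (0.1, 0.5, 0.9, 0.99):
    print(fraction, round(lorentz_gamma(fraction), 3))
```

At 10% of light speed the correction is only half a percent; at 99% a moving clock runs about seven times slow, and the factor diverges as v → c, which is one way of seeing why the speed of light is an unreachable barrier for anything with mass.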
His observations also reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations by doing away with extraneous explanations used by other scientists, and accorded with the directly observed speed of light.
During the second half of the 20th century, increasingly accurate measurements using laser interferometers and cavity resonance techniques would further refine estimates of the speed of light. By 1972, a group at the US National Bureau of Standards in Boulder, Colorado, used the laser interferometry technique to measure the speed of light to within about a meter per second of the currently-recognized value of 299,792,458 m/s.
Role in Modern Astrophysics:
Einstein’s theory that the speed of light in vacuum is independent of the motion of the source and the inertial reference frame of the observer has since been consistently confirmed by many experiments. It also sets an upper limit on the speeds at which all massless particles and waves (which includes light) can travel in a vacuum.
One of the outgrowths of this is that cosmologists now treat space and time as a single, unified structure known as spacetime – in which the speed of light can be used to define values for both (i.e. “lightyears”, “light minutes”, and “light seconds”). The measurement of the speed of light has also become a major factor when determining the rate of cosmic expansion.
Beginning in the 1920s with the observations of Lemaître and Hubble, scientists and astronomers became aware that the Universe is expanding from a point of origin. Hubble also observed that the farther away a galaxy is, the faster it appears to be moving. In what is now referred to as the Hubble Parameter, the rate at which the Universe is expanding has been calculated to be about 68 km/s per megaparsec.
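Hubble’s relation is simply v = H₀ × d. A minimal sketch, using the 68 km/s per megaparsec figure quoted above:

```python
H0 = 68.0  # Hubble parameter, km/s per megaparsec

def recession_velocity_km_s(distance_mpc):
    """Hubble's law: apparent recession velocity v = H0 * d."""
    return H0 * distance_mpc

c_km_s = 299_792.458
print(recession_velocity_km_s(100))  # a galaxy 100 Mpc away recedes at 6,800 km/s
print(round(c_km_s / H0), "Mpc")     # beyond roughly 4,409 Mpc, apparent recession exceeds c
```

The second figure marks the distance at which the apparent recession velocity reaches the speed of light, which is the back-of-envelope version of the “cosmological event horizon” discussed below.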
This phenomenon, which has been theorized to mean that some galaxies could actually be receding faster than the speed of light, may place a limit on what is observable in our Universe. Essentially, galaxies receding faster than the speed of light would cross a “cosmological event horizon”, where they are no longer visible to us.
Also, by the 1990s, redshift measurements of distant galaxies showed that the expansion of the Universe has been accelerating for the past few billion years. This has led to theories like “Dark Energy“, where an unseen force is driving the expansion of space itself instead of objects moving through it (thus not placing constraints on the speed of light or violating relativity).
Along with special and general relativity, the modern value of the speed of light in a vacuum has gone on to inform cosmology, quantum physics, and the Standard Model of particle physics. It remains a constant when talking about the upper limit at which massless particles can travel, and remains an unachievable barrier for particles that have mass.
Perhaps, someday, we will find a way to exceed the speed of light. While we have no practical ideas for how this might happen, the smart money seems to be on technologies that will allow us to circumvent the laws of spacetime, either by creating warp bubbles (aka. the Alcubierre Warp Drive), or tunneling through it (aka. wormholes).
Until that time, we will just have to be satisfied with the Universe we can see, and to stick to exploring the part of it that is reachable using conventional methods.
Ever since it was first discovered in 1974, astronomers have been dying to get a better look at the Supermassive Black Hole (SBH) at the center of our galaxy, known as Sagittarius A*. So far, scientists have only been able to gauge the position and mass of this SBH by measuring the effect it has on the stars that orbit it. More detailed observations have eluded them, thanks in part to all the gas and dust that obscures it.
Luckily, the European Southern Observatory (ESO) recently began work with the GRAVITY interferometer, the latest component in their Very Large Telescope (VLT). Using this instrument, which combines near-infrared imaging, adaptive-optics, and vastly improved resolution and accuracy, they have managed to capture images of the stars orbiting Sagittarius A*. And what they have observed was quite fascinating.
One of the primary purposes of GRAVITY is to study the gravitational field around Sagittarius A* in order to make precise measurements of the stars that orbit it. In so doing, the GRAVITY team – which consists of astronomers from the ESO, the Max Planck Institute, and multiple European research institutes – will be able to test Einstein’s theory of General Relativity like never before.
In what was the first observation conducted using the new instrument, the GRAVITY team used its powerful interferometric imaging capabilities to study S2, a faint star which orbits Sagittarius A* with a period of only 16 years. This test demonstrated the effectiveness of the GRAVITY instrument – which is 15 times more sensitive than the individual 8.2-metre Unit Telescopes the VLT currently relies on.
This was an historic accomplishment, as a clear view of the center of our galaxy is something that has eluded astronomers in the past. As GRAVITY’s lead scientist, Frank Eisenhauer – from the Max Planck Institute for Extraterrestrial Physics in Garching, Germany – explained to Universe Today via email:
“First, the Galactic Center is hidden behind a huge amount of interstellar dust, and it is practically invisible at optical wavelengths. The stars are only observable in the infrared, so we first had to develop the necessary technology and instruments for that. Second, there are so many stars concentrated in the Galactic Center that a normal telescope is not sharp enough to resolve them. It was only in the late 1990s and in the beginning of this century when we learned to sharpen the images with the help of speckle interferometry and adaptive optics to see the stars and observe their dance around the central black hole.”
But more than that, the observation of S2 was very well timed. In 2018, the star will be at the closest point in its orbit to Sagittarius A* – just 17 light-hours from it. As you can see from the video below, it is at this point that S2 will be moving much faster than at any other point in its orbit (the orbit of S2 is highlighted in red and the position of the central black hole is marked with a red cross).
When it makes its closest approach, S2 will accelerate to speeds of almost 30 million km per hour, which is 2.5% the speed of light. Another opportunity to view this star reach such high speeds will not come again for another 16 years – in 2034. And having shown just how sensitive the instrument is already, the GRAVITY team expects to be able to make very precise measurements of the star’s position.
In fact, they anticipate that the level of accuracy will be comparable to that of measuring the positions of objects on the surface of the Moon, right down to the centimeter-scale. As such, they will be able to determine whether the motion of the star as it orbits the black hole is consistent with Einstein’s theories of general relativity.
“[I]t is not the speed itself to cause the general relativistic effects,” explained Eisenhauer, “but the strong gravitation around the black hole. But the very high orbital speed is a direct consequence and measure of the gravitation, so we refer to it in the press release because the comparison with the speed of light and the ISS illustrates so nicely the extreme conditions.”
As recent simulations of the expansion of galaxies in the Universe have shown, Einstein’s theories are still holding up after many decades. However, these tests will offer hard evidence, obtained through direct observation. A star traveling at a portion of the speed of light around a supermassive black hole at the center of our galaxy will certainly prove to be a fitting test.
And Eisenhauer and his colleagues expect to see some very interesting things. “We hope to see a ‘kick’ in the orbit,” he said. “The general relativistic effects increase very strongly when you approach the black hole, and when the star swings by, these effects will slightly change the direction of the orbit.”
While those of us here at Earth will not be able to “star gaze” on this occasion and see S2 whipping past Sagittarius A*, we will still be privy to all the results. And then, we just might see if Einstein really was correct when he proposed what is still the predominant theory of gravitation in physics, over a century later.
Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists ventured back and forth between the notion that light was a particle or a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.
These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can be apparently without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?
Theory of Light to the 19th Century:
During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.
In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.
According to this corpuscular view, every source of light emits large numbers of tiny particles, known as corpuscles, into the medium surrounding the source; these corpuscles were held to be perfectly elastic, rigid, and weightless.
This represented a challenge to “wave theory”, which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his “Traité de la lumière“ (“Treatise on Light“). In it, he argued a revised version of Descartes’ views, in which the speed of light is finite and light is propagated by means of spherical waves emitted along the wave front.
By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.
The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.
According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.
The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead to scientists considering yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
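The spacing of Young’s bright and dark bands follows a standard wave-optics formula not quoted above, Δy = λL/d (valid for small angles). The numbers in this sketch are illustrative:

```python
# Two-slit interference: adjacent bright fringes are separated by
# delta_y = (wavelength * L) / d, for small angles. Values are illustrative.
wavelength_m = 550e-9      # green light
screen_distance_m = 1.0    # slit-to-screen distance L
slit_separation_m = 1e-4   # slit separation d (0.1 mm)

fringe_spacing_m = wavelength_m * screen_distance_m / slit_separation_m
print(fringe_spacing_m * 1000, "mm")  # about 5.5 mm between adjacent bright bands
```

Note what the formula implies: the pattern depends on the wavelength of the light, which is precisely why the bands were taken as evidence of wave behavior, since colliding particles would produce no such wavelength-dependent structure.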
Electromagnetism and Special Relativity:
Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used timings of the eclipses of Jupiter’s moon Io to demonstrate that light travels at a finite speed (rather than instantaneously).
By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. From the measured electric and magnetic constants, he was able to calculate the speed at which such waves travel in a vacuum – a speed that matched the measured speed of light (represented as c).
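Maxwell’s result can be reproduced from the two measured constants of electromagnetism, since his equations give c = 1/√(μ₀ε₀). A quick check in Python:

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability (classical defined value), H/m
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)  # approximately 2.998e8 m/s, matching the measured speed of light
```

That two constants measured with coils, charges and currents combine to yield the speed of light was the decisive clue that light and electromagnetism are one and the same phenomenon.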
In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.
Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.
For one, it introduced the idea that major changes occur when things move close to the speed of light, with time appearing to slow down and lengths contracting in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was fixed at 299,792,458 m/s in 1975.
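The size of those effects is captured by the Lorentz factor γ = 1/√(1 − v²/c²). A minimal sketch (the speeds chosen are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """Gamma: the factor by which moving clocks slow and lengths contract."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the effect is utterly negligible...
print(lorentz_factor(300.0))     # roughly airliner speed: barely above 1
# ...but it grows without bound as v approaches c
print(lorentz_factor(0.9 * C))   # ~2.29: moving clocks run less than half as fast
print(lorentz_factor(0.999 * C)) # ~22.4
```

This is why relativistic effects went unnoticed for centuries: at any speed humans had ever traveled, γ differs from 1 by less than a part in a trillion.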
Einstein and the Photon:
In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein’s idea built on Planck’s earlier work with “black bodies” – materials that absorb all incident electromagnetic energy rather than reflecting it (as “white bodies” do).
At the time, Einstein’s work was an attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, made more pressing by the discovery of the electron, which had happened only eight years earlier (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).
At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light which he named “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
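Einstein's photoelectric relation, E = hf − φ, is simple enough to sketch. In the hedged example below, the ~2.3 eV work function is an illustrative value (roughly that of sodium); light below the threshold frequency ejects no electrons at all, no matter its intensity, which is exactly what the wave picture could not explain.

```python
# Einstein's photoelectric relation: the maximum kinetic energy of an
# ejected electron is E_k = h*f - phi, where phi is the metal's work function.
PLANCK_H = 6.62607015e-34   # J*s
EV = 1.602176634e-19        # joules per electronvolt

def max_kinetic_energy_eV(frequency_hz, work_function_eV):
    """Returns max electron energy in eV, or None if no emission occurs."""
    e_k = PLANCK_H * frequency_hz / EV - work_function_eV
    return e_k if e_k > 0 else None

# Illustrative work function (~2.3 eV, roughly sodium's)
phi = 2.3
print(max_kinetic_energy_eV(4.0e14, phi))   # red light: None (no electrons)
print(max_kinetic_energy_eV(7.5e14, phi))   # UV-ish light: ~0.8 eV
```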
Subsequent theories on the behavior of light would further refine this idea, including French physicist Louis-Victor de Broglie’s calculation of the wavelength at which light functioned. This was followed by Heisenberg’s “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum, and vice versa) and Schrödinger’s proposal that all particles have a “wave function”.
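De Broglie's relation λ = h/(mv) can be illustrated directly; the particle masses and speeds below are illustrative choices, and show why wave behavior matters for electrons but is hopelessly unobservable for everyday objects.

```python
PLANCK_H = 6.62607015e-34  # J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie: any moving particle has wavelength lambda = h / (m*v)."""
    return PLANCK_H / (mass_kg * speed_m_s)

ELECTRON_MASS = 9.1093837015e-31  # kg

# An electron at 1% of light speed: wavelength comparable to atomic spacing
print(de_broglie_wavelength(ELECTRON_MASS, 3.0e6))  # ~2.4e-10 m

# A thrown baseball (~0.145 kg at 40 m/s): immeasurably tiny wavelength
print(de_broglie_wavelength(0.145, 40.0))           # ~1.1e-34 m
```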
In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. Upon measurement at some location, the wave function will randomly “collapse” (or rather “decohere”) to a sharply peaked function. This was illustrated in Schrödinger’s famous thought experiment involving a closed box, a cat, and a vial of poison (known as the “Schrödinger’s Cat” paradox).
According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass this equation has well-behaved solutions, but massless particles like the photon require a different, relativistic treatment. Further runs of the Double-Slit Experiment, in which measuring devices were incorporated to observe the photons as they passed through the slits, confirmed the dual nature of photons.
When this was done, the photons appeared in the form of particles, and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. With an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.
Quantum Field Theory (QFT) was developed over the following decades to resolve much of the ambiguity around wave-particle duality. In time, the theory was shown to apply to other particles and fundamental forces of interaction (such as the weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that act as force carriers; the photon itself has no mass.
So how does light travel? Basically, it travels at incredible speed (299,792,458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through media (like air and water) as well as the vacuum of space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).
What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.
For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!
On June 30th, 1905, Albert Einstein started a revolution with the publication of his theory of Special Relativity. This theory, among other things, stated that the speed of light in a vacuum is the same for all observers, regardless of the motion of the source. In 1915, he followed this up with the publication of his theory of General Relativity, which asserted that gravity has a warping effect on space-time. For over a century, these theories have been an essential tool in astrophysics, explaining the behavior of the Universe on the large scale.
However, since the 1990s, astronomers have been aware of the fact that the Universe is expanding at an accelerated rate. In an effort to explain the mechanics behind this, suggestions have ranged from the possible existence of an invisible energy (i.e. Dark Energy) to the possibility that Einstein’s field equations of General Relativity could be breaking down. But thanks to the recent work of an international research team, it is now known that Einstein had it right all along.
Four fundamental forces govern all interactions within the Universe: the weak nuclear force, the strong nuclear force, electromagnetism, and gravity. Of these, gravity is perhaps the most mysterious. While it has been understood for some time how this force operates on the macro-scale – governing our Solar System, galaxies, and superclusters – how it interacts with the three other fundamental forces remains a mystery.
Naturally, human beings have had a basic understanding of this force since time immemorial. And when it comes to our modern understanding of gravity, credit is owed to one man who deciphered its properties and how it governs all things great and small – Sir Isaac Newton. Thanks to this 17th century English physicist and mathematician, our understanding of the Universe and the laws that govern it would forever be changed.
While we are all familiar with the iconic image of a man sitting beneath an apple tree and having one fall on his head, Newton’s theories on gravity also represented a culmination of years worth of research, which in turn was based on centuries of accumulated knowledge. He would present these theories in his magnum opus, Philosophiae Naturalis Principia Mathematica (“Mathematical Principles of Natural Philosophy”), which was first published in 1687.
The 17th century was a very auspicious time for the sciences, with major breakthroughs occurring in the fields of mathematics, physics, astronomy, biology and chemistry. Among the greatest developments of the period were the heliocentric model of the Solar System advanced by Nicolaus Copernicus, the pioneering work with telescopes and observational astronomy by Galileo Galilei, and the development of modern optics.
It was also during this period that Johannes Kepler developed his Laws of Planetary Motion. Formulated between 1609 and 1619, these laws described the motion of the then-known planets (Mercury, Venus, Earth, Mars, Jupiter, and Saturn) around the Sun. They stated that:
Planets move around the Sun in ellipses, with the Sun at one focus
The line connecting the Sun to a planet sweeps equal areas in equal times.
The square of the orbital period of a planet is proportional to the cube (3rd power) of its mean distance from the Sun – in other words, of the “semi-major axis” of the ellipse, half the sum of its smallest and greatest distances from the Sun.
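The third law is easy to verify with modern orbital data. The sketch below uses approximate textbook values for the six planets Kepler knew, and checks that T²/a³ comes out the same for each (with T in Earth years and a in astronomical units, the constant is 1):

```python
# Kepler's third law: T^2 / a^3 is the same constant for every planet.
# Orbital data (approximate): semi-major axis in AU, period in Earth years.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

for name, (a, t) in planets.items():
    print(f"{name:8s} T^2/a^3 = {t**2 / a**3:.3f}")  # all close to 1.000
```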
These laws resolved the remaining mathematical issues raised by Copernicus’ heliocentric model, thus removing all doubt that it was the correct model of the Solar System. Working from these, Sir Isaac Newton began considering gravitation and its effect on the orbits of planets.
Newton’s Three Laws:
In 1678, Newton suffered a complete nervous breakdown due to overwork and a feud with fellow astronomer Robert Hooke. For the next few years, he withdrew from correspondence with other scientists, except where they initiated it, and renewed his interest in mechanics and astronomy. In the winter of 1680-81, the appearance of a comet, about which he corresponded with John Flamsteed (England’s Astronomer Royal) also renewed his interest in astronomy.
After reviewing Kepler’s Laws of Planetary Motion, Newton developed a mathematical proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton communicated these results to Edmond Halley (after whom “Halley’s Comet” is named) and to the Royal Society in his De motu corporum in gyrum.
This tract, presented in 1684, contained the seed of what Newton would expand to form his magnum opus, the Philosophiae Naturalis Principia Mathematica. This treatise, which was published in July of 1687, contained Newton’s three laws of motion, which stated that:
When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
The vector sum of the external forces (F) on an object is equal to the mass (m) of that object multiplied by the acceleration vector (a) of the object. In mathematical form, this is expressed as: F=ma
When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
Together, these laws described the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. The laws also allowed Newton to calculate the mass of each planet, the flattening of the Earth at the poles, and the bulge at the equator, and how the gravitational pull of the Sun and Moon create the Earth’s tides.
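The second law is the workhorse of those calculations. A minimal illustration (the forces and masses are arbitrary example values):

```python
def acceleration(force_newtons, mass_kg):
    """Newton's second law rearranged: a = F / m."""
    return force_newtons / mass_kg

# The same 10 N force accelerates a 2 kg mass five times as much as a 10 kg one
print(acceleration(10.0, 2.0))   # 5.0 m/s^2
print(acceleration(10.0, 10.0))  # 1.0 m/s^2
```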
In the same work, Newton presented a calculus-like method of geometrical analysis using ‘first and last ratios’, worked out the speed of sound in air (based on Boyle’s Law), accounted for the precession of the equinoxes (which he showed resulted from the Moon’s gravitational attraction on the Earth’s equatorial bulge), initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more.
Newton and the “Apple Incident”:
The story of Newton coming up with his theory of universal gravitation as a result of an apple falling on his head has become a staple of popular culture. And while it has often been argued that the story is apocryphal and Newton did not devise his theory at any one moment, Newton himself told the story many times and claimed that the incident had inspired him.
In addition, the writings of William Stukeley – an English clergyman, antiquarian and fellow member of the Royal Society – have confirmed the story. But rather than the comical representation of the apple striking Newton on the head, Stukeley described in his Memoirs of Sir Isaac Newton’s Life (1752) a conversation in which Newton described pondering the nature of gravity while watching an apple fall.
“…we went into the garden, & drank thea under the shade of some appletrees; only he, & my self. amidst other discourse, he told me, he was just in the same situation, as when formerly, the notion of gravitation came into his mind. “why should that apple always descend perpendicularly to the ground,” thought he to himself; occasion’d by the fall of an apple…”
John Conduitt, Newton’s assistant at the Royal Mint (who eventually married his niece), also described hearing the story in his own account of Newton’s life. According to Conduitt, the incident took place in 1666, when Newton was traveling to meet his mother in Lincolnshire. While meandering in the garden, he contemplated how gravity’s influence extended far beyond Earth, responsible for the falling of an apple as well as the Moon’s orbit.
Similarly, Voltaire wrote in his Essay on Epic Poetry (1727) that Newton had first thought of the system of gravitation while walking in his garden and watching an apple fall from a tree. This is consistent with Newton’s notes from the 1660s, which show that he was grappling with the idea of how terrestrial gravity extends, in an inverse-square proportion, to the Moon.
However, it would take him two more decades to fully develop his theories to the point that he was able to offer mathematical proofs, as demonstrated in the Principia. Once that was complete, he deduced that the same force that makes an object fall to the ground was responsible for other orbital motions. Hence, he named it “universal gravitation”.
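Newton's own "Moon test" of the inverse-square idea can be redone in a few lines. The sketch below uses modern approximate values: it scales surface gravity down by (R_Earth/r_Moon)² and compares the result with the Moon's actual centripetal acceleration.

```python
import math

# Newton's "Moon test": if gravity falls off as 1/r^2, then at the Moon's
# distance (~60 Earth radii) the pull should be g * (R_Earth / r_Moon)^2.
g = 9.81                     # surface gravity, m/s^2
earth_radius = 6.371e6       # m
moon_distance = 3.844e8      # m (mean Earth-Moon distance)
moon_period = 27.32 * 86400  # sidereal month, seconds

# Prediction from the inverse-square law
predicted = g * (earth_radius / moon_distance) ** 2

# The Moon's actual centripetal acceleration: a = 4*pi^2*r / T^2
observed = 4 * math.pi ** 2 * moon_distance / moon_period ** 2

print(predicted, observed)  # both ~2.7e-3 m/s^2: they agree to ~1%
```

The agreement to about one percent is the quantitative heart of universal gravitation: the same force pulling the apple down also holds the Moon in orbit.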
Various trees are claimed to be “the” apple tree which Newton describes. The King’s School, Grantham, claims their school purchased the original tree, uprooted it, and transported it to the headmaster’s garden some years later. However, the National Trust, which holds the Woolsthorpe Manor (where Newton grew up) in trust, claims that the tree still resides in their garden. A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there.
Newton’s work would have a profound effect on the sciences, with its principles remaining canon for the following 200 years. It also informed the concept of universal gravitation, which became the mainstay of modern astronomy, and would not be revised until the 20th century – with the discovery of quantum mechanics and Einstein’s theory of General Relativity.
The early 20th century was a very auspicious time for the sciences. In addition to Ernest Rutherford and Niels Bohr giving birth to the modern model of the atom, it was also a period of breakthroughs in the field of quantum mechanics. Thanks to ongoing studies on the behavior of electrons, scientists began to propose theories whereby these elementary particles behaved in ways that defied classical, Newtonian physics.
One such example is the Electron Cloud Model proposed by Erwin Schrodinger. Thanks to this model, electrons were no longer depicted as particles moving around a central nucleus in a fixed orbit. Instead, Schrodinger proposed a model whereby scientists could only make educated guesses as to the positions of electrons. Hence, their locations could only be described as being part of a ‘cloud’ around the nucleus where the electrons are likely to be found.
Atomic Physics To The 20th Century:
The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.
It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800’s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton’s Atomic Theory.
This theory expanded on the laws of conservation of mass and definite proportions, and came down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; and atoms can neither be created nor destroyed in a chemical reaction – only the grouping ever changes.
Discovery Of The Electron:
By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.
Through a series of experiments using cathode ray tubes (known as Crookes tubes), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles that were about 1,000 times smaller and 1,800 times lighter than hydrogen.
This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged “corpuscles” were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.
These corpuscles would later be named “electrons”, after the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. The Plum Pudding Model got its name because it closely resembled the English dessert consisting of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK’s Philosophical Magazine, to wide acclaim.
Development Of The Standard Model:
Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”
In this experiment, Geiger and Marsden measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.
Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson’s model. Since alpha particles are just helium nuclei (which are positively charged), this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that the undeflected particles passed through unimpeded meant that these concentrations of positive charge were separated by vast gulfs of empty space.
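The scale of that "tiny volume" can be estimated from energy conservation alone. The sketch below (the 5 MeV alpha energy is an illustrative, typical value) finds the head-on distance of closest approach to a gold nucleus, where all kinetic energy has converted to Coulomb potential energy:

```python
# Head-on alpha particle vs. gold nucleus: the particle turns around where
# its kinetic energy equals the Coulomb potential energy, k*q1*q2/d = E_k.
K_COULOMB = 8.9875e9        # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C

alpha_charge = 2 * E_CHARGE        # helium nucleus
gold_charge = 79 * E_CHARGE        # gold nucleus (Z = 79)
kinetic_energy = 5.0e6 * E_CHARGE  # a typical ~5 MeV alpha, in joules

closest_approach = K_COULOMB * alpha_charge * gold_charge / kinetic_energy
print(closest_approach)  # ~4.5e-14 m, thousands of times smaller than an atom
```

Since atoms are around 10⁻¹⁰ m across, this simple estimate already shows the positive charge must be crammed into a region thousands of times smaller than the atom itself.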
By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson’s model of the atom. Instead, he proposed a model in which the atom consisted of mostly empty space, with all its positive charge concentrated in a very tiny volume at its center, surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.
Subsequent work by Antonius van den Broek and Niels Bohr refined the model further. While van den Broek suggested that the atomic number of an element is equal to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where a nucleus contains the atomic number of positive charge and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).
The Electron Cloud Model:
During the 1920s, Austrian physicist Erwin Schrödinger became fascinated by the theories of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and other physicists. During this time, he also became involved in the fields of atomic theory and spectra, researching at the University of Zurich and then the Friedrich Wilhelm University in Berlin (where he succeeded Planck in 1927).
In 1926, Schrödinger tackled the issue of wave functions and electrons in a series of papers. In addition to describing what would come to be known as the Schrodinger equation – a partial differential equation that describes how the quantum state of a quantum system changes with time – he also used mathematical equations to describe the likelihood of finding an electron in a certain position.
This became the basis of what would come to be known as the Electron Cloud (or quantum mechanical) Model, as well as the Schrodinger equation. Based on quantum theory, which states that all matter has properties associated with a wave function, the Electron Cloud Model differs from the Bohr Model in that it does not define the exact path of an electron.
Instead, it predicts the likely position of the location of the electron based on a function of probabilities. The probability function basically describes a cloud-like region where the electron is likely to be found, hence the name. Where the cloud is most dense, the probability of finding the electron is greatest; and where the electron is less likely to be, the cloud is less dense.
These dense regions are known as “electron orbitals”, since they are the most likely locations where an orbiting electron will be found. Extending this “cloud” model to three-dimensional space, we see a barbell- or flower-shaped atom (as in the image at the top). Here, the branching-out regions are the ones where we are most likely to find the electrons.
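For hydrogen's ground state the cloud can be written down exactly. A minimal sketch that evaluates the 1s radial probability density, which peaks at exactly one Bohr radius (the sample radii are illustrative):

```python
import math

BOHR_RADIUS = 5.29177e-11  # m

def radial_probability_1s(r):
    """Probability density of finding the hydrogen 1s electron at radius r.

    P(r) = 4*pi*r^2 * |psi_1s|^2, with psi_1s = exp(-r/a0) / sqrt(pi*a0^3).
    """
    a0 = BOHR_RADIUS
    psi_sq = math.exp(-2 * r / a0) / (math.pi * a0 ** 3)
    return 4 * math.pi * r ** 2 * psi_sq

# The "cloud" is densest at exactly one Bohr radius, and thins out on
# either side of it -- there is no sharp orbit, only a peak of likelihood
for x in [0.5, 1.0, 2.0, 4.0]:  # radii in units of a0
    print(f"r = {x:.1f} a0 -> P = {radial_probability_1s(x * BOHR_RADIUS):.3e}")
```

Note that the electron is never *at* the Bohr radius in the classical sense; the curve only says where a position measurement is most likely to find it.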
Thanks to Schrodinger’s work, scientists began to understand that in the realm of quantum mechanics, it was impossible to know the exact position and momentum of an electron at the same time. Regardless of what the observer knows initially about a particle, they can only predict its succeeding location or momentum in terms of probabilities.
At no given time will they be able to ascertain both at once. In fact, the more they know about the momentum of a particle, the less they will know about its location, and vice versa. This is what is known today as the “Uncertainty Principle”.
Note that the orbitals mentioned in the previous paragraph are formed by a hydrogen atom (i.e. with just one electron). When dealing with atoms that have more electrons, the electron orbital regions spread out evenly into a spherical fuzzy ball. This is where the term ‘electron cloud’ is most appropriate.
This contribution was universally recognized as being one of the most important of the 20th century, and one which triggered a revolution in the fields of physics, quantum mechanics and indeed all the sciences. Thenceforth, scientists were no longer working in a universe characterized by absolutes of time and space, but in quantum uncertainties and time-space relativity!