LIGO Will Squeeze Light To Overcome The Quantum Noise Of Empty Space

The LIGO Hanford Observatory in Washington State. Credit: LIGO Observatory

When two black holes merge, they release a tremendous amount of energy. When LIGO detected the first black hole merger in 2015, we found that three solar masses worth of energy was released as gravitational waves. But gravitational waves don’t interact strongly with matter. The effects of gravitational waves are so small that you’d need to be extremely close to a merger to feel them. So how can we possibly observe the gravitational waves of merging black holes across millions of light-years?

Continue reading “LIGO Will Squeeze Light To Overcome The Quantum Noise Of Empty Space”

French Scientists Claim to Have Created Metallic Hydrogen

Scientists have long speculated that at the heart of a gas giant, the laws of material physics undergo some radical changes. In these kinds of extreme pressure environments, hydrogen gas is compressed to the point that it actually becomes a metal. For years, scientists have been looking for a way to create metallic hydrogen synthetically because of the endless applications it would offer.

At present, the only known way to do this is to compress hydrogen atoms using a diamond anvil until they change their state. And after decades of attempts (and 80 years after it was first theorized), a team of French scientists may have finally created metallic hydrogen in a laboratory setting. While there is plenty of skepticism, there are many in the scientific community who believe this latest claim could be true.

Continue reading “French Scientists Claim to Have Created Metallic Hydrogen”

Antimatter Behaves Exactly the Same as Regular Matter in Double Slit Experiments

In 1924, French physicist Louis de Broglie proposed that photons – the subatomic particles that constitute light – behave as both particles and waves. Known as "particle-wave duality", this property has been tested and shown to apply to other subatomic particles (electrons and neutrons) as well as larger, more complex molecules.

Recently, an experiment conducted by researchers with the QUantum Interferometry and Gravitation with Positrons and LAsers (QUPLAS) collaboration demonstrated that this same property applies to antimatter. This was done using the same kind of interference test (aka. double-slit experiment) that helped scientists to propose particle-wave duality in the first place.

Continue reading “Antimatter Behaves Exactly the Same as Regular Matter in Double Slit Experiments”

The Coldest Place in Space Has Been Created. Next Challenge, Coldest Place in the Universe

Despite decades of ongoing research, scientists are still trying to understand how the four fundamental forces of the Universe fit together. Whereas quantum mechanics explains how three of these forces work on the smallest of scales (electromagnetism and the weak and strong nuclear forces), General Relativity explains how things behave on the largest of scales (i.e. gravity). In this respect, gravity remains the holdout.

To understand how gravity interacts with matter on the tiniest of scales, scientists have developed some truly cutting-edge experiments. One of these is NASA’s Cold Atom Laboratory (CAL), located aboard the ISS, which recently achieved a milestone by creating clouds of atoms known as Bose-Einstein condensates (BECs). This was the first time that BECs have been created in orbit, and offers new opportunities to probe the laws of physics.

Originally predicted by Satyendra Nath Bose and Albert Einstein in the mid-1920s, BECs are essentially clouds of ultracold atoms chilled to temperatures just above absolute zero, the point at which atoms should (in theory) stop moving entirely. These particles are long-lived and precisely controlled, which makes them the ideal platform for studying quantum phenomena.

The Cold Atom Laboratory (CAL), which consists of two standardized containers that will be installed on the International Space Station. Credit: NASA/JPL-Caltech/Tyler Winn

This is the purpose of the CAL facility: to study ultracold quantum gases in a microgravity environment. The laboratory was installed in the US Science Lab aboard the ISS in late May and is the first of its kind in space. It is designed to advance scientists' ability to make precision measurements of gravity and study how it interacts with matter at the smallest of scales.

As Robert Thompson, the CAL project scientist and a physicist at NASA’s Jet Propulsion Laboratory, explained in a recent press release:

“Having a BEC experiment operating on the space station is a dream come true. It’s been a long, hard road to get here, but completely worth the struggle, because there’s so much we’re going to be able to do with this facility.”

About two weeks ago, CAL scientists confirmed that the facility had produced BECs from atoms of rubidium – a soft, silvery-white metallic element in the alkali group. According to their report, they had reached temperatures as low as 100 nanokelvin, just one ten-millionth of a kelvin above absolute zero (-273 °C; -459 °F). This is roughly 3 K colder than the average background temperature of space, which is about 2.7 K (-270 °C; -454 °F).

Because of their unique behavior, BECs are characterized as a fifth state of matter, distinct from gases, liquids, solids and plasma. In BECs, atoms act more like waves than particles on the macroscopic scale, whereas this behavior is usually only observable on the microscopic scale. In addition, the atoms all assume their lowest energy state and take on the same wave identity, making them indistinguishable from one another.
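
Condensation sets in when the atoms' matter waves begin to overlap, which for the densities typical of trapped-atom experiments only happens at a few hundred nanokelvin. Here is a back-of-the-envelope sketch of the ideal-gas transition temperature; the density below is an assumed, typical value for such experiments, not a CAL specification:

```python
# Rough estimate of the BEC transition temperature for an ideal gas of
# rubidium-87 atoms. The density is an assumed, typical trap value.
import math

hbar = 1.054571817e-34        # reduced Planck constant (J*s)
k_B  = 1.380649e-23           # Boltzmann constant (J/K)
m_Rb = 86.909 * 1.66054e-27   # mass of a rubidium-87 atom (kg)
n    = 1e20                   # assumed atom number density (atoms / m^3)
zeta_32 = 2.6124              # Riemann zeta(3/2)

# Ideal-gas result: T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)
T_c = (2 * math.pi * hbar**2 / (m_Rb * k_B)) * (n / zeta_32) ** (2 / 3)
print(f"T_c ~ {T_c * 1e9:.0f} nK")   # ~400 nK for these numbers
```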

The "physics package" inside the Cold Atom Lab, where ultracold clouds of atoms called Bose-Einstein condensates are produced. Credit: NASA/JPL-Caltech/Tyler Winn

In short, the atom clouds begin to behave like a single "super atom" rather than individual atoms, which makes them easier to study. The first BECs were produced in a lab in 1995 by Eric Cornell and Carl Wieman, with Wolfgang Ketterle independently producing one shortly thereafter; the three shared the 2001 Nobel Prize in Physics for the accomplishment. Since that time, hundreds of BEC experiments have been conducted on Earth and some have even been sent into space aboard sounding rockets.

But the CAL facility is unique in that it is the first of its kind on the ISS, where scientists can conduct daily studies over long periods. The facility consists of two standardized containers: the larger "quad locker" and the smaller "single locker". The quad locker houses CAL's physics package, the compartment where CAL produces its clouds of ultra-cold atoms.

This is done by using magnetic fields or focused lasers to create frictionless containers known as “atom traps”. As the atom cloud decompresses inside the atom trap, its temperature naturally drops, getting colder the longer it remains in the trap. On Earth, when these traps are turned off, gravity causes the atoms to begin moving again, which means they can only be studied for fractions of a second.

Aboard the ISS, which is a microgravity environment, BECs can decompress to colder temperatures than is possible with any instrument on Earth. Scientists are also able to observe individual BECs for five to ten seconds at a time and repeat these measurements for up to six hours per day. And since the facility is controlled remotely from the Earth Orbiting Missions Operation Center at JPL, day-to-day operations require no intervention from astronauts aboard the station.

JPL scientists and members of the Cold Atom Lab’s atomic physics team (left to right) David Aveline, Ethan Elliott and Jason Williams. Credit: NASA/JPL-Caltech

Robert Shotwell, the chief engineer of JPL’s astronomy and physics directorate, has overseen the project since February 2017. As he indicated in a recent NASA press release:

“CAL is an extremely complicated instrument. Typically, BEC experiments involve enough equipment to fill a room and require near-constant monitoring by scientists, whereas CAL is about the size of a small refrigerator and can be operated remotely from Earth. It was a struggle and required significant effort to overcome all the hurdles necessary to produce the sophisticated facility that’s operating on the space station today.”

Looking ahead, the CAL scientists want to go even further and achieve temperatures lower than anything achieved on Earth. In addition to rubidium, the CAL team is also working towards making BECs using two different isotopes of potassium. At the moment, CAL is still in a commissioning phase, which consists of the operations team conducting a long series of tests to see how the facility operates in microgravity.

However, once it is up and running, five science groups – including groups led by Cornell and Ketterle – will conduct experiments at the facility during its first year. The science phase is expected to begin in early September and will last three years. As Kamal Oudrhiri, JPL’s mission manager for CAL, put it:

“There is a globe-spanning team of scientists ready and excited to use this facility. The diverse range of experiments they plan to perform means there are many techniques for manipulating and cooling the atoms that we need to adapt for microgravity, before we turn the instrument over to the principal investigators to begin science operations.”

Given time, the Cold Atom Lab (CAL) may help scientists to understand how gravity works on the tiniest of scales. Combined with high-energy experiments conducted by CERN and other particle physics laboratories around the world, this could eventually lead to a Theory of Everything (ToE) and a complete understanding of how the Universe works.

And be sure to check out this cool video (no pun intended!) of the CAL facility as well, courtesy of NASA:

Further Reading: NASA

Physicists Take Big Step Towards Quantum Computing and Encryption with new Experiment

Quantum entanglement remains one of the most challenging fields of study for modern physicists. Famously described by Einstein as "spooky action at a distance", this aspect of quantum mechanics has long resisted reconciliation with classical mechanics. Essentially, the fact that two particles can be connected over great distances violates the rules of locality and realism.

Formally, this is a violation of Bell's Inequality, a theorem that sets strict limits on the correlations any theory based on locality and realism can produce – limits that quantum mechanics exceeds. In a recent study, a team of researchers from the Ludwig-Maximilian University (LMU) and the Max Planck Institute for Quantum Optics in Munich conducted tests which once again violated Bell's Inequality and demonstrated the existence of entanglement.

Their study, titled "Event-Ready Bell Test Using Entangled Atoms Simultaneously Closing Detection and Locality Loopholes", was recently published in Physical Review Letters. Led by Wenjamin Rosenfeld, a physicist at LMU and the Max Planck Institute for Quantum Optics, the team sought to test Bell's Inequality by entangling two particles at a distance.

John Bell, the Irish physicist who devised a test to show that nature does not 'hide variables' as Einstein had proposed. Credit: CERN

Bell's Inequality (named after Irish physicist John Bell, who proposed it in 1964) rests on two assumptions: that the properties of objects exist independent of their being observed (realism), and that no information or physical influence can propagate faster than the speed of light (locality). Any theory obeying both must satisfy the inequality. These rules perfectly describe the reality we human beings experience on a daily basis, where things are rooted in a particular space and time and exist independent of an observer.

However, at the quantum level, things do not appear to follow these rules. Not only can particles be connected in non-local ways over large distances (i.e. entanglement), but the properties of these particles cannot be defined until they are measured. And while all experiments have confirmed that the predictions of quantum mechanics are correct, some scientists have continued to argue that there are loopholes that allow for local realism.
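
To see what a violation looks like in numbers, consider the CHSH form of Bell's Inequality. The sketch below is a generic textbook illustration, not the Munich team's actual analysis: for entangled spins in the singlet state, quantum mechanics predicts a correlation of E(a, b) = -cos(a - b) between detectors set at angles a and b, and a suitable choice of angles pushes the CHSH quantity past the local-realist bound of 2:

```python
# Quantum prediction for the CHSH form of Bell's Inequality.
# Any local-realist theory obeys |S| <= 2; quantum mechanics reaches 2*sqrt(2).
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -math.cos(a - b)

# Standard choice of analyzer angles that maximizes the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (local realism requires |S| <= 2)")   # 2.828
```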

To address this, the Munich team conducted an experiment using two laboratories at LMU. While the first lab was located in the basement of the physics department, the second was located in the basement of the economics department – roughly 400 meters away. In both labs, the teams captured a single rubidium atom in an optical trap and then excited it until it released a single photon.

As Dr. Wenjamin Rosenfeld explained in a Max Planck Institute press release:

“Our two observer stations are independently operated and are equipped with their own laser and control systems. Because of the 400 meters distance between the laboratories, communication from one to the other would take 1328 nanoseconds, which is much more than the duration of the measurement process. So, no information on the measurement in one lab can be used in the other lab. That’s how we close the locality loophole.”

The experiment was performed in two locations 398 meters apart at the Ludwig Maximilian University campus in Munich, Germany. Credit: Rosenfeld et al/American Physical Society
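
The locality argument boils down to simple arithmetic: light cannot cross the gap between the labs before each measurement is over. A quick check of the figure Rosenfeld quotes:

```python
# Light-travel time between the two laboratories. Any signal carrying news
# of one measurement to the other lab would need at least this long.
c = 299_792_458      # speed of light (m/s)
distance = 398       # separation between the labs (m)
print(f"{distance / c * 1e9:.0f} ns")   # ~1328 ns, longer than the measurement
```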

Once the two rubidium atoms were excited to the point of releasing a photon, the spin-states of the rubidium atoms and the polarization states of the photons were effectively entangled. The photons were then coupled into optical fibers and guided to a set-up where they were brought to interference. After conducting a measurement run for eight days, the scientists were able to collect around 10,000 events to check for signs of entanglement.

This would be indicated by the spins of the two trapped rubidium atoms, which would point in the same direction (or in the opposite direction, depending on the kind of entanglement). What the Munich team found was that for the vast majority of the events the atoms were in the same state (or in the opposite state), with only six events deviating – a correlation far too strong to be compatible with Bell's Inequality.

These results were also statistically more significant than those obtained by a team of Dutch physicists in 2015, who conducted experiments using electrons in diamonds at labs 1.3 km apart. In the end, their results (and other recent tests of Bell's Inequality) demonstrated that quantum entanglement is real, effectively ruling out local realism.

As Wenjamin Rosenfeld explained, the tests conducted by his team also went beyond these other experiments by addressing another major issue. “We were able to determine the spin-state of the atoms very fast and very efficiently,” he said. “Thereby we closed a second potential loophole: the assumption, that the observed violation is caused by an incomplete sample of detected atom pairs”.

By obtaining proof of the violation of Bell's Inequality, scientists are not only helping to resolve an enduring incongruity between classical and quantum physics. They are also opening the door to some exciting possibilities. For instance, for years, scientists have anticipated the development of quantum processors, which rely on superposition and entanglement to go beyond the simple zeros and ones of binary code.

Computers that rely on quantum mechanics could be exponentially faster than conventional microprocessors for certain problems, and would usher in a new age of research and development. The same principles have been proposed for cybersecurity, where quantum encryption would be used to protect information, making it invulnerable to hackers who rely on conventional computers.

Last, but certainly not least, there is the concept of Quantum Entanglement Communications, a proposed method for transmitting information faster than the speed of light. Imagine the possibilities for space travel and exploration if we were no longer bound by the limits of relativistic communication! (It must be said, though, that the no-communication theorem indicates entanglement alone cannot carry usable information faster than light.)

Einstein wasn't wrong when he characterized quantum entanglement as "spooky action". Indeed, many of the implications of this phenomenon are still as frightening as they are fascinating to physicists. But the closer we come to understanding it, the closer we will be to developing an understanding of how all the known physical forces of the Universe fit together – aka. a Theory of Everything!

Further Reading: LMU, Physical Review Letters

New Explanation for Dark Energy? Tiny Fluctuations of Time and Space

Since the late 1920s, astronomers have been aware of the fact that the Universe is in a state of expansion. Initially predicted by Einstein’s Theory of General Relativity, this realization has gone on to inform the most widely-accepted cosmological model – the Big Bang Theory. However, things became somewhat confusing during the 1990s, when improved observations showed that the Universe’s rate of expansion has been accelerating for billions of years.

This led to the theory of Dark Energy, a mysterious invisible force that is driving the expansion of the cosmos. Much like Dark Matter, which was invoked to explain the "missing mass" of the Universe, it then became necessary to find this elusive energy, or at least provide a coherent theoretical framework for it. A new study from the University of British Columbia (UBC) seeks to do just that by postulating that the Universe is expanding due to fluctuations in space and time.

The study – which was recently published in the journal Physical Review D – was led by Qingdi Wang, a PhD student with the Department of Physics and Astronomy at UBC. Under the supervision of UBC Professor William Unruh (the man who proposed the Unruh Effect) and with assistance from Zhen Zhu (another PhD student at UBC), Wang provides a new take on Dark Energy.

Diagram showing the Lambda-CDM universe, from the Big Bang to the current era. Credit: Alex Mittelmann/Coldcreation

The team began by addressing the inconsistencies arising out of the two main theories that together explain all natural phenomena in the Universe. These theories are none other than General Relativity and quantum mechanics, which effectively explain how the Universe behaves on the largest of scales (i.e. stars, galaxies, clusters) and the smallest (subatomic particles).

Unfortunately, these two theories are not consistent when it comes to a little matter known as gravity, which scientists are still unable to explain in terms of quantum mechanics. The existence of Dark Energy and the expansion of the Universe are another point of disagreement. For starters, candidate theories like vacuum energy – one of the most popular explanations for Dark Energy – present serious incongruities.

According to quantum mechanics, vacuum energy would have an incredibly large energy density. But if this were true, then General Relativity predicts that this energy would have an incredibly strong gravitational effect, one powerful enough to cause the Universe to explode in size. As Prof. Unruh shared with Universe Today via email:

"The problem is that any naive calculation of the vacuum energy gives huge values. If one assumes that there is some sort of cutoff so one cannot get energy densities much greater than the Planck energy density (or about 10^95 Joules/meter³), then one finds that one gets a Hubble constant – the time scale on which the Universe roughly doubles in size – of the order of 10^-44 sec. So, the usual approach is to say that somehow something reduces that down so that one gets the actual expansion rate of about 10 billion years instead. But that 'somehow' is pretty mysterious and no one has come up with an even half convincing mechanism."

Timeline of the Big Bang and the expansion of the Universe. Credit: NASA

Whereas other scientists have sought to modify the theories of General Relativity and quantum mechanics in order to resolve these inconsistencies, Wang and his colleagues sought a different approach. As Wang explained to Universe Today via email:

"Previous studies are either trying to modify quantum mechanics in some way to make vacuum energy small or trying to modify General Relativity in some way to make gravity numb for vacuum energy. However, quantum mechanics and General Relativity are the two most successful theories that explain how our Universe works… Instead of trying to modify quantum mechanics or General Relativity, we believe that we should first understand them better. We take the large vacuum energy density predicted by quantum mechanics seriously and just let it gravitate according to General Relativity without modifying either of them."

For the sake of their study, Wang and his colleagues performed new sets of calculations on vacuum energy that took its predicted high energy density into account. They then considered the possibility that on the tiniest of scales – billions of times smaller than electrons – the fabric of spacetime is subject to wild fluctuations, oscillating at every point between expansion and contraction.

Could fluctuations at the tiniest levels of space time explain Dark Energy and the expansion of the cosmos? Credit: University of Washington

As spacetime swings back and forth, the net effect of these oscillations is a Universe that expands slowly, but at an accelerating rate. After performing their calculations, the team noted that such an explanation was consistent with both the existence of quantum vacuum energy density and General Relativity. On top of that, it is also consistent with what scientists have been observing in our Universe for almost a century. As Unruh described it:

“Our calculations showed that one could consistently regard [that] the Universe on the tiniest scales is actually expanding and contracting at an absurdly fast rate; but that on a large scale, because of an averaging over those tiny scales, physics would not notice that ‘quantum foam’. It has a tiny residual effect in giving an effective cosmological constant (dark energy type effect). In some ways it is like waves on the ocean which travel as if the ocean were perfectly smooth but really we know that there is this incredible dance of the atoms that make up the water, and waves average over those fluctuations, and act as if the surface was smooth.”
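
As a purely illustrative toy model (not the paper's actual calculation, and with every number below chosen arbitrarily), one can integrate a Hubble rate that oscillates wildly around a tiny positive bias and watch the fast wobble average away, leaving only the slow residual expansion:

```python
# Toy illustration of fast oscillations averaging out to a slow net expansion.
import numpy as np

H_fast = 20.0    # amplitude of the rapid expansion/contraction (arbitrary units)
omega  = 2000.0  # oscillation frequency: much faster than it is strong
H_res  = 0.01    # tiny residual bias toward expansion

t = np.linspace(0, 100, 2_000_000)
H = H_fast * np.cos(omega * t) + H_res    # wildly oscillating Hubble rate
ln_a = np.cumsum(H) * (t[1] - t[0])       # ln a(t) = integral of H dt

# The wobble contributes at most H_fast/omega = 0.01 to ln(a), so at late
# times the smooth residual term dominates: slow exponential growth.
print(f"net growth rate ~ {ln_a[-1] / t[-1]:.4f}  (compare H_res = {H_res})")
```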

In contrast to theories in which the various forces governing the Universe must precisely cancel one another out, Wang and his colleagues present a picture where the Universe is constantly in motion. In this scenario, the effects of vacuum energy are actually self-cancelling, and also give rise to the expansion and acceleration we have been observing all this time.

While it may be too soon to tell, this image of a Universe that is highly-dynamic (even on the tiniest scales) could revolutionize our understanding of spacetime. At the very least, these theoretical findings are sure to stimulate debate within the scientific community, as well as experiments designed to offer direct evidence. And that, as we know, is the only way we can advance our understanding of this thing known as the Universe.

Further Reading: UBC News, Physical Review D

Team Creates Negative Effective Mass In The Lab

Credit: ESA/Hubble, ESO, M. Kornmesser

When it comes to objects and forces, Isaac Newton's Three Laws of Motion are pretty straightforward. Apply force to an object in a specific direction, and the object will move in that direction. And unless there's something acting against it (like friction or gravity), it will keep moving in that direction until something stops it. But when it comes to "negative mass", the exact opposite is true.

As the name would suggest, the term refers to matter whose mass is opposite that of normal matter. Until a few years ago, negative mass was predominantly a theoretical concept and had only been observed in very specific settings. But according to a recent study by an international team of researchers, a fluid with a "negative effective mass" has been created under laboratory conditions for the first time.

To put it in the simplest terms, matter can have a negative mass in the same way that a particle can have a negative charge. When it comes to the Universe that we know and study on a regular basis, we have encountered only the positive form of mass. In fact, one could say that it is the same situation as with matter and antimatter: theoretical physics tells us both exist, but we only ever see one of them on a regular basis.

Credit: shock.wsu.edu

As Dr. Michael McNeil Forbes – a Professor at Washington State University, a Fellow at the Institute for Nuclear Theory, and a co-author on the study – explained in a WSU press release:

“That’s what most things that we’re used to do. With negative mass, if you push something, it accelerates toward you. Once you push, it accelerates backwards. It looks like the rubidium hits an invisible wall.”

According to the team’s study, which was recently published in the Physical Review Letters (under the title “Negative-Mass Hydrodynamics in a Spin-Orbit–Coupled Bose-Einstein Condensate“), a negative effective mass can be created by altering the spin-orbit coupling of atoms. Led by Peter Engels – a professor of physics and astronomy at Washington State University – this consisted of using lasers to control the behavior of rubidium atoms.

They began by using a single laser to keep rubidium atoms in a bowl-shaped trap measuring less than 100 microns across. This had the effect of slowing the atoms down and cooling them to just above absolute zero, which resulted in the rubidium becoming a Bose-Einstein condensate. Named after Satyendra Nath Bose and Albert Einstein (who predicted how such atoms would behave), these condensates behave like a superfluid.

Velocity-distribution data (3 views) for a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Credit: NIST/JILA/CU-Boulder

Basically, this means that the particles move very slowly and behave like waves, without losing energy. A second set of lasers was then applied to kick the atoms back and forth, effectively changing the way they spin. Before this change in their spins, the superfluid had regular mass: if the bowl were broken, the atoms would push outward and expand away from their center of mass.

But after the application of the second set of lasers, the rubidium rushed out and accelerated in the opposite direction – consistent with how a negative mass would behave. This represented a break with previous laboratory experiments, in which researchers had been unable to get atoms to behave in a way consistent with negative mass. But as Forbes explained, the WSU experiment avoided some of the underlying defects of those earlier attempts:

“What’s a first here is the exquisite control we have over the nature of this negative mass, without any other complications. It provides another environment to study a fundamental phenomenon that is very peculiar.”

And while news of this experiment has been met with fanfare and claims to the effect that the researchers had “rewritten the laws of physics”, it is important to emphasize that this research has created a “negative effective mass” – which is fundamentally different from a negative mass.

Artist’s rendering of an outburst on an ultra-magnetic neutron star, also called a magnetar.
Credit: NASA/Goddard Space Flight Center

As Sabine Hossenfelder, a Research Fellow at the Frankfurt Institute for Advanced Studies, wrote on her website Backreaction in response to the news:

“Physicists use the preamble ‘effective’ to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention. The ‘effective radius’ of a galaxy, for example, is not its radius. The ‘effective nuclear charge’ is not the charge of the nucleus. And the ‘effective negative mass’ – you guessed it – is not a negative mass. The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.”

In other words, the researchers were able to get atoms to behave as if they had negative mass, rather than creating negative mass itself. Nevertheless, their experiment demonstrates the level of control researchers now have when conducting quantum experiments, and also serves to clarify how negative mass behaves in other systems. Basically, physicists can use the results of these kinds of experiments to probe mysteries of the Universe where direct experimentation is impossible.

These include what goes on inside neutron stars or what transpires beneath the veil of an event horizon. Perhaps they could even shed some light on questions relating to dark energy.
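
One way to make the "effective" in effective mass concrete: for an atom in a dressed energy band, the inertia that appears in Newton's second law is set by the band's curvature, m_eff = ħ²/(d²E/dk²). The sketch below uses a schematic, textbook-style lower band for spin-orbit coupling in dimensionless recoil units – the coupling strength is an assumption chosen for illustration, not the WSU team's setting. Wherever the curvature turns negative, a push produces acceleration in the opposite direction:

```python
# Effective mass from band curvature: m_eff is proportional to 1/(d^2 E/dk^2).
# E(k) below is a schematic lower dressed band for spin-orbit coupling,
# written in dimensionless recoil units.
import numpy as np

Omega = 1.0                      # assumed Raman coupling strength (recoil units)
k = np.linspace(-2, 2, 4001)
E = k**2 / 2 - np.sqrt(k**2 + (Omega / 2)**2)   # lower dressed band

curvature = np.gradient(np.gradient(E, k), k)   # d^2 E / dk^2
negative = k[curvature < 0]
print(f"effective mass is negative for |k| < ~{negative.max():.2f}")
```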

Further Reading: Physical Review Letters, WSU

Who was Max Planck?

Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would be attached to a series of units designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.

If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!

Early Life and Education:

Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians Gymnasium. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.

Illustration of Friedrich Wilhelms University, with the statue of Frederick the Great (ca. 1850). Credit: Wikipedia Commons/A. Carse

He graduated early, at the age of 17, and went on to study theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, and the two became close friends; eventually, Planck decided to adopt thermodynamics as his field of research.

In October 1878, he passed his qualifying exams, and in February of 1879 he defended his dissertation, titled "On the second law of thermodynamics". In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to derive: "It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir."

For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – i.e. black body radiation and special states of equilibrium.

With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community did not pay much attention to him, he continued his work on heat theory and came to independently discover the same theory of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist credited with the discovery.

Professors Michael Bonitz and Frank Hohmann, holding a facsimile of Planck’s Nobel prize certificate, which was given to the University of Kiel in 2013. Credit and Copyright: CAU/Schimmelpfennig

In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, where he continued his studies in physical chemistry and heat systems. By 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until his retirement in January 1926, when he was succeeded by Erwin Schrodinger.

Black Body Radiation:

It was in 1894, while under commission from electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body's temperature and the frequency of the radiation (i.e., the color of the light).

In time, he resolved this problem by suggesting that electromagnetic energy did not flow in a continuous stream, but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency of the radiation, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.
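
A quick worked example shows just how tiny these packets are (the frequency here is simply a representative value for green light):

```python
# Planck's postulate E = h*nu: energy is exchanged in discrete quanta.
h  = 6.62607015e-34   # Planck constant (J*s)
nu = 5.5e14           # representative frequency of green light (Hz)

E = h * nu
print(f"E = {E:.2e} J per quantum")                       # ~3.64e-19 J
print(f"a 1-watt source emits ~{1 / E:.1e} quanta/sec")   # ~2.7e18
```

With some 10^18 quanta per second streaming from even a feeble lamp, it is little wonder the graininess of light went unnoticed for so long.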

A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck's constant would prove to have a revolutionary impact, causing scientists to break with classical physics and leading to the creation of the Planck units (length, time, mass, etc.).

From left to right: W. Nernst, A. Einstein, M. Planck, R.A. Millikan and von Laue at a dinner given by von Laue in Berlin, 1931. Credit: Wikipedia Commons

Quantum Mechanics:

By the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that would prove Planck's quantum theory to be correct. The first was his theory of light quanta (what we now call photons), proposed in 1905 to explain the photoelectric effect, which contradicted classical electrodynamics and its assumption that light was a wave that needed a medium to propagate.

The second was Einstein's study of the anomalous behavior of specific heats at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein's special relativity, he initially rejected the idea that light could be made up of discrete quanta of energy (in this case, photons).

However, in 1911, Planck and Walther Nernst (a colleague of Planck's) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and over the course of the proceedings was able to convince Planck of his theories regarding specific heats. The two became friends and colleagues, and in 1914 Planck created a professorship for Einstein at the University of Berlin.

During the 1920s, a new interpretation of quantum mechanics emerged, which came to be known as the "Copenhagen interpretation". This interpretation, largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, stated that quantum mechanics can only predict probabilities, and that in general, physical systems do not have definite properties prior to being measured.

Photograph of the first Solvay Conference in 1911 at the Hotel Metropole in Brussels, Belgium. Credit: International Solvay Institutes/Benjamin Couprie

This was rejected by Planck, however, who hoped that wave mechanics would soon render the probabilistic interpretation unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the "chaos" of quantum theory. However, time would prove that both formulations were correct (and mathematically equivalent), giving rise to theories of particle-wave duality.

World War I and World War II:

In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three“, a manifesto which endorsed the war and justified Germany’s participation. However, by 1915, Planck revoked parts of the Manifesto, and by 1916, he became an outspoken opponent of Germany’s annexation of other territories.

After the war, Planck was considered the German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.

The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.

Entrance of the administrative headquarters of the Max Planck Society in Munich. Credit: Wikipedia Commons/Maximilian Dörrbecker

Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when threatened. In 1936, he resigned his position as head of the KWS due to his continued support of Jewish colleagues in the Society. In 1938, he resigned as president of the Prussian Academy of Sciences after the Nazi Party assumed control of it.

Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. In 1944, Planck's son Erwin was arrested for complicity in the attempted assassination of Hitler in the July 20th plot, for which he was executed by the Gestapo in early 1945. This event caused Planck to descend into a depression from which he did not recover before his death.

Death and Legacy:

Planck died on October 4th, 1947 in Gottingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.

In recognition of his fundamental contribution to a new branch of physics, he was awarded the Nobel Prize in Physics in 1918. He was also elected to Foreign Membership of the Royal Society in 1926 and awarded the Society's Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.

The Max Planck Medal, issued by the German Physical Society in recognition of scientific contributions. Credit: dpg-physik.de

He was also greatly respected by his colleagues and contemporaries, and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.

The Max Planck Society was also created in the city of Gottingen in 1948 to honor his life and his achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as being a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.

In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.

Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.

We have written many articles about Max Planck for Universe Today. Here’s What is Planck Time?, Planck’s First Light?, All-Sky Stunner from Planck, What is Schrodinger’s Cat?, What is the Double Slit Experiment?, and here’s a list of stories about the spacecraft that bears his name.

If you’d like more info on Max Planck, check out Max Planck’s biography from Science World and Space and Motion.

We’ve also recorded an entire episode of Astronomy Cast all about Max Planck. Listen here, Episode 218: Max Planck.


What is Absolute Zero?

Canadians don’t have much to be proud of, but we can regale you with our ability to withstand freezing cold temperatures. Now, I live on the West Coast, so I’m soft and weak, rarely experiencing temperatures below freezing.

But for some of my Canadian brethren, temperatures can dip down to levels your mind and body can scarcely comprehend. For example, I have a friend who lives in Winnipeg, Manitoba. For a day last winter, the temperatures there dipped down to -31C, but with the windchill, it felt like -50C. On that same day, it was a balmy -29C on Mars. On Mars!

But for scientists, and the Universe, it can get much much colder. So cold, in fact, that they use a completely different temperature scale – Kelvin – to measure how far away things are from the coldest possible temperature: Absolute Zero.

Nowhere close to absolute zero. Credit: Osccarr (CC BY 2.0)

On the Celsius scale, Absolute Zero is -273.15 degrees. And in Fahrenheit, it’s -459.67 degrees. In the Kelvin scale, however, it’s very simple. Absolute Zero is 0 kelvin.
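
If you want to check any of the numbers in this article yourself, the conversions are one-liners (a quick sketch in Python):

```python
# Kelvin and Celsius share the same degree size, so converting is just an
# offset; Fahrenheit needs a scale factor as well.
def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

print(f"{celsius_to_kelvin(-273.15):.2f} K")      # 0.00 K  -- absolute zero
print(f"{celsius_to_fahrenheit(-273.15):.2f} F")  # -459.67 F
print(f"{celsius_to_kelvin(-50):.2f} K")          # 223.15 K -- Winnipeg windchill
```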

At this point, a science explainer is going to stumble into a minefield of incorrect usage. It’s not 0 degrees kelvin, you don’t say the degrees part, just the kelvin part. Just kelvin.

This is because when you measure from an arbitrary reference point – like saying you've changed course by 15 degrees from your old heading – the "degrees" signals a relative measurement. But when you measure from an absolute point defined by nature, like the lowest physical temperature possible, you drop the degrees. It's an absolute. An Absolute Zero.

Of course, I’ve probably gotten that wrong too. This stuff is hard.

Anyway, back to Absolute Zero.

Still not cold enough. Credit: Lori Cuthbert (CC BY 2.0)

Absolute Zero is the coldest possible temperature that can theoretically be reached. At this point, no heat energy can be extracted from a system, and no work can be done. It's dead, Jim.

But it’s completely theoretical. It’s practically impossible to cool something down to Absolute Zero. In order to cool something down, you need to do work to extract heat from it. The colder you get, the more work you need to do. In order to get to Absolute Zero, you’d need to put in an infinite amount of work. And that’s ridiculous.

As you probably learned in physics or chemistry class, the temperature of a gas translates to the motion of the particles in the gas. As you cool a gas down, by extracting heat from it, the particles slow down.

You would think, then, that by cooling something down to Absolute Zero, all particle motion in that something would stop. But that’s not true.

From a quantum mechanics point of view, you can never know both the exact position and the exact momentum of a particle at the same time. If the particles stopped, you'd know their momentum (zero) and their position… right there. The Universe and its laws of physics just can't allow that to happen. Thank Heisenberg's Uncertainty Principle.
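
You can even put a number on that leftover jitter. A rough estimate, with an assumed, illustrative confinement size rather than a figure from any particular experiment:

```python
# Minimum quantum "jitter" of a rubidium atom confined to a 1-micron region,
# via Heisenberg: delta_p >= hbar / (2 * delta_x).
hbar = 1.054571817e-34       # reduced Planck constant (J*s)
k_B  = 1.380649e-23          # Boltzmann constant (J/K)
m    = 87 * 1.66054e-27      # mass of a rubidium-87 atom (kg)

delta_x = 1e-6               # assumed confinement size: 1 micron
delta_p = hbar / (2 * delta_x)
E_min   = delta_p**2 / (2 * m)    # minimum kinetic energy of the jitter
T_floor = 2 * E_min / k_B         # equivalent temperature scale

print(f"residual speed ~ {delta_p / m * 1e6:.0f} um/s")
print(f"temperature scale ~ {T_floor * 1e9:.2f} nK")
```

Notice the scale: even a perfectly cooled atom confined to a micron still carries nanokelvin-scale motion, which is why "no motion at all" is off the table.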

Therefore, there’s always a little motion, even if you could get to Absolute Zero, which you can’t. But you can’t extract any more heat from it.

The physicist Robert Boyle was one of the first to consider the possibility that there was a lowest possible temperature, which he called the primum frigidum. In 1702, Guillaume Amontons created a thermometer that he calculated would bottom out at -240 C. Pretty close, actually.

But it was Lord Kelvin who created the absolute scale in 1848, with its starting point at -273 C: 0 kelvin.

A photograph of Lord Kelvin.

By this measurement, even with its windchill, Winnipeg was a balmy 223 kelvin on that wintry day.

The surface of Pluto, on the other hand, varies from a low of 33 kelvin to a high of 55 kelvin. That's -240 C to -218 C.

The average background temperature across the entire Universe is just 2.7 kelvin. You won’t find many places that cold, unless you get out to the vast cosmic voids that separate galaxy clusters.

Over time, the background temperature of the Universe will continue to drop, but it'll never actually reach Absolute Zero – not even in a googol years, when the last supermassive black hole has finally evaporated and there's no usable heat left in the entire Universe.

In fact, astronomers call this bleak future the “heat death” of the Universe. It’s heat death, as in, the death of all heat. And happiness.

You might be surprised to know that the coldest temperature in the entire Universe is right here on Earth. Well, sometimes, anyway. And assuming the aliens haven’t got better technology than us, which they probably do.

At the time that I’m recording this video, physicists have used lasers to cool down Rubidium-87 gas to just 170 nanokelvin, a tiny fraction above Absolute Zero. In fact, they won a Nobel Prize for their work in discovering Bose-Einstein condensates.

NASA is actually working on a new experiment called the Cold Atom Lab that will send a version of this technology to the International Space Station, where it should be able to cool material down to 100 picokelvin. That’s cold.

The Cold Atom Lab is planned to launch in August 2017. Credit: NASA / JPL

Here are your takeaways. Absolute Zero is the coldest possible temperature that can ever be reached, the point at which no further heat energy can be extracted from a system. Never say degrees kelvin; you'll cause so much wincing. The Universe can't match our cold-generating abilities… yet. Take that, Universe.

I’d love to hear the coldest temperature you’ve ever personally experienced. For me, it was visiting Buffalo in December. That’s not right.

What Is The Electron Cloud Model?

The early 20th century was a very auspicious time for the sciences. In addition to Ernest Rutherford and Niels Bohr giving birth to the modern model of the atom, it was also a period of breakthroughs in the field of quantum mechanics. Thanks to ongoing studies of the behavior of electrons, scientists began to propose theories whereby these elementary particles behaved in ways that defied classical, Newtonian physics.

One such example is the Electron Cloud Model proposed by Erwin Schrodinger. Thanks to this model, electrons were no longer depicted as particles moving around a central nucleus in a fixed orbit. Instead, Schrodinger proposed a model whereby scientists could only make educated guesses as to the positions of electrons. Hence, their locations could only be described as being part of a ‘cloud’ around the nucleus where the electrons are likely to be found.

Atomic Physics To The 20th Century:

The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.

Various atoms and molecules as depicted in John Dalton's A New System of Chemical Philosophy (1808). Credit: Public Domain

It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton's Atomic Theory.

This theory expanded on the laws of conservation of mass and definite proportions and came down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; and atoms can neither be created nor destroyed in chemical reactions – only their groupings change.

Discovery Of The Electron:

By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.

Through a series of experiments using cathode ray tubes (known as Crookes tubes), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles that were 1,000 times smaller and 1,800 times lighter than hydrogen.

The Plum Pudding model of the atom proposed by J.J. Thomson. Credit: britannica.com

This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged "corpuscles" were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.

These corpuscles would later be named "electrons", based on the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. And from this, the Plum Pudding Model was born, so named because it closely resembled the English dessert that consists of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK's Philosophical Magazine, to wide acclaim.

Development Of The Standard Model:

Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”

In this experiment, Geiger and Marsden measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.

A depiction of the atomic structure of the helium atom. Credit: Creative Commons

Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson’s model. Since alpha particles are just helium nuclei (which are positively charged) this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that those particles that were not deflected passed through unimpeded meant that these positive spaces were separated by vast gulfs of empty space.

By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson’s model of the atom. Instead, he proposed a model where the atom consisted of mostly empty space, with all its positive charge concentrated in its center in a very tiny volume, that was surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.

Subsequent experiments by Antonius Van den Broek and Niels Bohr refined the model further. While Van den Broek suggested that the atomic number of an element is equal to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where the nucleus contains a positive charge equal to the atomic number and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).

The Electron Cloud Model:

During the 1920s, Austrian physicist Erwin Schrodinger became fascinated by the theories of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and other physicists. During this time, he also became involved in the fields of atomic theory and spectra, researching at the University of Zurich and then the Friedrich Wilhelm University in Berlin (where he succeeded Planck in 1927).

Artist's concept of the Electron Cloud model, which described the likely location of electron orbitals over time. Credit: Pearson Prentice Hall

In 1926, Schrödinger tackled the issue of wave functions and electrons in a series of papers. In addition to describing what would come to be known as the Schrodinger equation – a partial differential equation that describes how the quantum state of a quantum system changes with time – he also used mathematical equations to describe the likelihood of finding an electron in a certain position.

This became the basis of what would come to be known as the Electron Cloud (or quantum mechanical) Model, as well as the Schrodinger equation. Based on quantum theory, which states that all matter has properties associated with a wave function, the Electron Cloud Model differs from the Bohr Model in that it does not define the exact path of an electron.

Instead, it predicts the likely position of the electron based on a function of probabilities. The probability function describes a cloud-like region where the electron is likely to be found, hence the name. Where the cloud is most dense, the probability of finding the electron is greatest; where the cloud is less dense, the electron is less likely to be found.

These dense regions are known as "electron orbitals", since they are the most likely locations where an orbiting electron will be found. Extending this "cloud" model to three-dimensional space, we see a barbell- or flower-shaped atom (as in the image at the top). Here, the branching-out regions are the ones where we are most likely to find the electrons.
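
For the simplest case – the hydrogen 1s orbital – the cloud can be written down exactly, and a few lines of code recover its most familiar feature (a minimal sketch using the standard textbook wavefunction):

```python
# The "cloud" made concrete: for the hydrogen 1s orbital the radial
# probability density is P(r) = (4/a0^3) * r^2 * exp(-2r/a0). The electron
# has no orbit, but it is most likely to be found one Bohr radius out.
import numpy as np

a0 = 5.29177e-11                       # Bohr radius (m)
r = np.linspace(1e-13, 5 * a0, 100_000)
P = (4 / a0**3) * r**2 * np.exp(-2 * r / a0)

r_peak = r[np.argmax(P)]
print(f"most probable radius = {r_peak / a0:.3f} Bohr radii")   # ~1.000

# Sanity check: the probability out to 5*a0 is nearly 1.
print(f"total probability out to 5*a0 = {np.trapz(P, r):.3f}")  # ~0.997
```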

Thanks to Schrodinger’s work, scientists began to understand that in the realm of quantum mechanics, it was impossible to know the exact position and momentum of an electron at the same time. Regardless of what the observer knows initially about a particle, they can only predict its succeeding location or momentum in terms of probabilities.

At no time will they be able to ascertain both at once. In fact, the more they know about the momentum of a particle, the less they will know about its location, and vice versa. This is what is known today as the "Uncertainty Principle".

Note that the orbitals mentioned in the previous paragraph are formed by a hydrogen atom (i.e. with just one electron). When dealing with atoms that have more electrons, the electron orbital regions spread out evenly into a spherical fuzzy ball. This is where the term ‘electron cloud’ is most appropriate.

This contribution was universally recognized as being one of the most important of the 20th century, one which triggered a revolution in the fields of physics, quantum mechanics, and indeed all the sciences. Thenceforth, scientists were no longer working in a universe characterized by absolutes of time and space, but in one of quantum uncertainties and space-time relativity!

We have written many interesting articles about atoms and atomic models here at Universe Today. Here’s What Is John Dalton’s Atomic Model?, What Is The Plum Pudding Model?, What Is Bohr’s Atomic Model?, Who Was Democritus?, and What Are The Parts Of An Atom?

For more information, be sure to check What Is Quantum Mechanics? from Live Science.

Astronomy Cast also has episodes on the topic, like Episode 130: Radio Astronomy, Episode 138: Quantum Mechanics, and Episode 252: Heisenberg Uncertainty Principle.