The Coldest Place in Space Has Been Created. Next Challenge, Coldest Place in the Universe

This series of graphs shows the changing density of a cloud of atoms as it is cooled to lower and lower temperatures (going from left to right), approaching absolute zero. Credit: NASA/JPL-Caltech

Despite decades of ongoing research, scientists are still trying to understand how the four fundamental forces of the Universe fit together. Whereas quantum mechanics can explain how three of these forces (electromagnetism and the weak and strong nuclear forces) work together on the smallest of scales, General Relativity explains how things behave on the largest of scales (i.e. gravity). In this respect, gravity remains the holdout.

To understand how gravity interacts with matter on the tiniest of scales, scientists have developed some truly cutting-edge experiments. One of these is NASA’s Cold Atom Laboratory (CAL), located aboard the ISS, which recently achieved a milestone by creating clouds of atoms known as Bose-Einstein condensates (BECs). This was the first time that BECs have been created in orbit, and offers new opportunities to probe the laws of physics.

Originally predicted by Satyendra Nath Bose and Albert Einstein in the mid-1920s, BECs are essentially clouds of ultracold atoms cooled to temperatures just above absolute zero, the point at which atoms should (in theory) stop moving entirely. These particles are long-lived and precisely controlled, which makes them an ideal platform for studying quantum phenomena.

The Cold Atom Laboratory (CAL), which consists of two standardized containers that will be installed on the International Space Station. Credit: NASA/JPL-Caltech/Tyler Winn

This is the purpose of the CAL facility: to study ultracold quantum gases in a microgravity environment. The laboratory was installed in the US Science Lab aboard the ISS in late May and is the first of its kind in space. It is designed to advance scientists’ ability to make precision measurements of gravity and to study how it interacts with matter at the smallest of scales.

As Robert Thompson, the CAL project scientist and a physicist at NASA’s Jet Propulsion Laboratory, explained in a recent press release:

“Having a BEC experiment operating on the space station is a dream come true. It’s been a long, hard road to get here, but completely worth the struggle, because there’s so much we’re going to be able to do with this facility.”

About two weeks ago, CAL scientists confirmed that the facility had produced BECs from atoms of rubidium – a soft, silvery-white metallic element in the alkali group. According to their report, they had reached temperatures as low as 100 nanokelvin, one ten-millionth of a kelvin above absolute zero (-273 °C; -459 °F). That is colder than the average temperature of space itself, which is roughly 3 K (-270 °C; -454 °F).

Because of their unique behavior, BECs are characterized as a fifth state of matter, distinct from gases, liquids, solids and plasma. In BECs, atoms act more like waves than particles on the macroscopic scale, whereas this behavior is usually only observable on the microscopic scale. In addition, the atoms all assume their lowest energy state and take on the same wave identity, making them indistinguishable from one another.

The “physics package” inside the Cold Atom Lab, where ultracold clouds of atoms called Bose-Einstein condensates are produced. Credit: NASA/JPL-Caltech/Tyler Winn

In short, the atom clouds begin to behave like a single “super atom” rather than a collection of individual atoms, which makes them easier to study. The first BECs were produced in a lab in 1995 by Eric Cornell and Carl Wieman, with Wolfgang Ketterle independently achieving the same feat shortly thereafter; the three shared the 2001 Nobel Prize in Physics for their accomplishment. Since that time, hundreds of BEC experiments have been conducted on Earth and some have even been sent into space aboard sounding rockets.

But the CAL facility is unique in that it is the first of its kind on the ISS, where scientists can conduct daily studies over long periods. The facility consists of two standardized containers: the larger “quad locker” and the smaller “single locker”. The quad locker contains CAL’s physics package, the compartment where CAL produces its clouds of ultracold atoms.

This is done by using magnetic fields or focused lasers to create frictionless containers known as “atom traps”. As the atom cloud decompresses inside the atom trap, its temperature naturally drops, getting colder the longer it remains in the trap. On Earth, when these traps are turned off, gravity causes the atoms to begin moving again, which means they can only be studied for fractions of a second.
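To get a feel for the numbers, here is a minimal toy sketch (in Python, and not CAL’s actual control software) of why simply relaxing a harmonic trap cools a thermal cloud: for a slow, adiabatic ramp, the ratio T/ω stays roughly fixed, so lowering the trap frequency lowers the temperature in proportion. The starting values below are illustrative assumptions, not CAL’s published parameters.

```python
import math

# Toy model: adiabatic decompression of a thermal cloud in a harmonic
# trap. For a slow ramp, T/omega is (approximately) conserved, so
# relaxing the trap cools the gas. Numbers are illustrative only.
T_initial = 1e-6                     # 1 microkelvin, typical after laser cooling
omega_initial = 2 * math.pi * 100.0  # initial trap frequency (100 Hz)
omega_final = 2 * math.pi * 1.0      # very weak trap, feasible in microgravity (1 Hz)

T_final = T_initial * (omega_final / omega_initial)
print(f"Temperature after decompression: {T_final * 1e9:.0f} nanokelvin")
```

On Earth, such a weak trap cannot hold the cloud up against gravity, which is exactly the limitation the next paragraph describes.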

Aboard the ISS, which is a microgravity environment, BECs can decompress to colder temperatures than is possible with any instrument on Earth, and scientists are able to observe individual BECs for five to ten seconds at a time, repeating these measurements for up to six hours per day. And since the facility is controlled remotely from the Earth Orbiting Missions Operation Center at JPL, day-to-day operations require no intervention from astronauts aboard the station.

JPL scientists and members of the Cold Atom Lab’s atomic physics team (left to right) David Aveline, Ethan Elliott and Jason Williams. Credit: NASA/JPL-Caltech

Robert Shotwell, the chief engineer of JPL’s astronomy and physics directorate, has overseen the project since February 2017. As he indicated in a recent NASA press release:

“CAL is an extremely complicated instrument. Typically, BEC experiments involve enough equipment to fill a room and require near-constant monitoring by scientists, whereas CAL is about the size of a small refrigerator and can be operated remotely from Earth. It was a struggle and required significant effort to overcome all the hurdles necessary to produce the sophisticated facility that’s operating on the space station today.”

Looking ahead, the CAL scientists want to go even further and achieve temperatures lower than anything achieved on Earth. In addition to rubidium, the CAL team is also working towards making BECs using two different isotopes of potassium. At the moment, CAL is still in its commissioning phase, which consists of the operations team conducting a long series of tests to see how the facility will operate in microgravity.

However, once it is up and running, five science groups – including groups led by Cornell and Ketterle – will conduct experiments at the facility during its first year. The science phase is expected to begin in early September and will last three years. As Kamal Oudrhiri, JPL’s mission manager for CAL, put it:

“There is a globe-spanning team of scientists ready and excited to use this facility. The diverse range of experiments they plan to perform means there are many techniques for manipulating and cooling the atoms that we need to adapt for microgravity, before we turn the instrument over to the principal investigators to begin science operations.”

Given time, the Cold Atom Lab (CAL) may help scientists to understand how gravity works on the tiniest of scales. Combined with high-energy experiments conducted by CERN and other particle physics laboratories around the world, this could eventually lead to a Theory of Everything (ToE) and a complete understanding of how the Universe works.

And be sure to check out this cool video (no pun intended!) of the CAL facility as well, courtesy of NASA:

Further Reading: NASA

Physicists Take Big Step Towards Quantum Computing and Encryption with new Experiment

Artist’s concept of the experiment in which two atoms are being entangled over a distance of 400 meters. Credit: Wenjamin Rosenfeld

Quantum entanglement remains one of the most challenging fields of study for modern physicists. Described by Einstein as “spooky action at a distance”, it is an aspect of quantum mechanics that scientists have long sought to reconcile with classical mechanics. Essentially, the fact that two particles can be connected over great distances violates the rules of locality and realism.

Formally, this is a violation of Bell’s Inequality, a mathematical bound that any theory based on locality and realism must satisfy, and one that quantum mechanics has been shown to violate for decades. In a recent study, a team of researchers from the Ludwig-Maximilian University (LMU) and the Max Planck Institute for Quantum Optics in Munich conducted tests which once again violated Bell’s Inequality and proved the existence of entanglement.

Their study, titled “Event-Ready Bell Test Using Entangled Atoms Simultaneously Closing Detection and Locality Loopholes”, was recently published in Physical Review Letters. Led by Wenjamin Rosenfeld, a physicist at LMU and the Max Planck Institute for Quantum Optics, the team sought to test Bell’s Inequality by entangling two particles at a distance.

John Bell, the Irish physicist who devised a test to show that nature does not ‘hide variables’ as Einstein had proposed. Credit: CERN

Bell’s Inequality (named after Irish physicist John Bell, who proposed it in 1964) rests on the assumptions that properties of objects exist independent of being observed (realism), and that no information or physical influence can propagate faster than the speed of light (locality). These rules perfectly describe the reality we human beings experience on a daily basis, where things are rooted in a particular space and time and exist independent of an observer.

However, at the quantum level, things do not appear to follow these rules. Not only can particles be connected in non-local ways over large distances (i.e. entanglement), but the properties of these particles cannot be defined until they are measured. And while all experiments have confirmed that the predictions of quantum mechanics are correct, some scientists have continued to argue that there are loopholes that allow for local realism.
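The standard way to make this quantitative is the CHSH form of Bell’s Inequality. Below is a minimal sketch (a textbook calculation, not the Munich team’s analysis): for two spin-1/2 particles in the singlet state, quantum mechanics predicts a correlation E(a, b) = -cos(a - b) between detectors set at angles a and b, and the CHSH combination of four such correlations exceeds the local-realist bound of 2.

```python
import math

def correlation(a: float, b: float) -> float:
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -math.cos(a - b)

# Standard angle settings that maximize the quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))

print(f"|S| = {abs(S):.3f}")  # 2.828 = 2*sqrt(2), above the classical limit of 2
```

Any local realist theory must give |S| ≤ 2, so measuring a value near 2.8 is direct evidence against local realism.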

To address this, the Munich team conducted an experiment using two laboratories at LMU. While the first lab was located in the basement of the physics department, the second was located in the basement of the economics department – roughly 400 meters away. In both labs, teams captured a single rubidium atom in an optical trap and then excited it until it released a single photon.

As Dr. Wenjamin Rosenfeld explained in a Max Planck Institute press release:

“Our two observer stations are independently operated and are equipped with their own laser and control systems. Because of the 400 meters distance between the laboratories, communication from one to the other would take 1328 nanoseconds, which is much more than the duration of the measurement process. So, no information on the measurement in one lab can be used in the other lab. That’s how we close the locality loophole.”

The experiment was performed in two locations 398 meters apart at the Ludwig Maximilian University campus in Munich, Germany. Credit: Rosenfeld et al/American Physical Society

Once the two rubidium atoms were excited to the point of releasing a photon, the spin states of the rubidium atoms and the polarization states of the photons were effectively entangled. The photons were then coupled into optical fibers and guided to a set-up where they were brought to interference. After conducting a measurement run for eight days, the scientists were able to collect around 10,000 events to check for signs of entanglement.

Entanglement would be indicated by the spins of the two trapped rubidium atoms pointing in the same direction (or in the opposite direction, depending on the kind of entanglement). What the Munich team found was that for the vast majority of the events the atoms were indeed in the same state (or in the opposite state), with only six deviations – a correlation strong enough to violate Bell’s Inequality.

These results were also statistically more significant than those obtained by a team of Dutch physicists in 2015. For the sake of that study, the Dutch team conducted experiments using electrons in diamonds at labs that were 1.3 km apart. In the end, their results (and other recent tests of Bell’s Inequality) demonstrated that quantum entanglement is real, effectively closing the local realism loophole.

As Wenjamin Rosenfeld explained, the tests conducted by his team also went beyond these other experiments by addressing another major issue. “We were able to determine the spin-state of the atoms very fast and very efficiently,” he said. “Thereby we closed a second potential loophole: the assumption, that the observed violation is caused by an incomplete sample of detected atom pairs”.

By obtaining proof of the violation of Bell’s Inequality, scientists are not only helping to resolve an enduring incongruity between classical and quantum physics. They are also opening the door to some exciting possibilities. For instance, for years, scientists have anticipated the development of quantum processors, which rely on entanglement to store and process information in quantum analogues of the zeros and ones of binary code.

Computers that rely on quantum mechanics would be exponentially faster than conventional microprocessors for certain tasks, and would usher in a new age of research and development. The same principles have been proposed for cybersecurity, where quantum encryption would be used to encrypt information, making it invulnerable to hackers who rely on conventional computers.

Last, but certainly not least, there is the speculative concept of Quantum Entanglement Communications, a method that – its proponents hope – would allow us to transmit information faster than the speed of light. Imagine the possibilities for space travel and exploration if we were no longer bound by the limits of relativistic communication!

Einstein wasn’t wrong when he characterized quantum entanglement as “spooky action”. Indeed, many of the implications of this phenomenon are still as frightening as they are fascinating to physicists. But the closer we come to understanding it, the closer we will be to developing an understanding of how all the known physical forces of the Universe fit together – aka. a Theory of Everything!

Further Reading: LMU, Physical Review Letters

New Explanation for Dark Energy? Tiny Fluctuations of Time and Space

A new study from researchers from the University of British Columbia offers a new explanation of Dark Energy. Credit: NASA

Since the late 1920s, astronomers have been aware of the fact that the Universe is in a state of expansion. Initially predicted by Einstein’s Theory of General Relativity, this realization has gone on to inform the most widely-accepted cosmological model – the Big Bang Theory. However, things became somewhat confusing during the 1990s, when improved observations showed that the Universe’s rate of expansion has been accelerating for billions of years.

This led to the theory of Dark Energy, a mysterious invisible force that is driving the expansion of the cosmos. Much like Dark Matter, which was invoked to explain the “missing mass” of the Universe, it then became necessary to find this elusive energy, or at least provide a coherent theoretical framework for it. A new study from the University of British Columbia (UBC) seeks to do just that by postulating that the Universe is expanding due to fluctuations in space and time.

The study – which was recently published in the journal Physical Review D – was led by Qingdi Wang, a PhD student with the Department of Physics and Astronomy at UBC. Under the supervision of UBC Professor William Unruh (the man who proposed the Unruh Effect) and with assistance from Zhen Zhu (another PhD student at UBC), they provide a new take on Dark Energy.

Diagram showing the Lambda-CDM universe, from the Big Bang to the current era. Credit: Alex Mittelmann/Coldcreation

The team began by addressing the inconsistencies arising out of the two main theories that together explain all natural phenomena in the Universe. These theories are none other than General Relativity and quantum mechanics, which effectively explain how the Universe behaves on the largest of scales (i.e. stars, galaxies, clusters) and the smallest (subatomic particles).

Unfortunately, these two theories are not consistent when it comes to a little matter known as gravity, which scientists are still unable to explain in terms of quantum mechanics. The existence of Dark Energy and the expansion of the Universe are another point of disagreement. For starters, candidate theories like vacuum energy – one of the most popular explanations for Dark Energy – present serious incongruities.

According to quantum mechanics, vacuum energy would have an incredibly large energy density. But if this is true, then General Relativity predicts that this energy would have an incredibly strong gravitational effect, one which would be powerful enough to cause the Universe to explode in size. As Prof. Unruh shared with Universe Today via email:

“The problem is that any naive calculation of the vacuum energy gives huge values. If one assumes that there is some sort of cutoff so one cannot get energy densities much greater than the Planck energy density (or about 10⁹⁵ Joules/meter³) then one finds that one gets a Hubble constant – the time scale on which the Universe roughly doubles in size – of the order of 10⁻⁴⁴ sec. So, the usual approach is to say that somehow something reduces that down so that one gets the actual expansion rate of about 10 billion years instead. But that ‘somehow’ is pretty mysterious and no one has come up with an even half convincing mechanism.”
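To see the size of the mismatch Unruh is describing, here is a rough back-of-the-envelope sketch in Python (our illustration, not a calculation from the team’s paper), comparing a Planck-scale vacuum energy density to the observed dark-energy density:

```python
import math

hbar = 1.055e-34   # reduced Planck constant (J*s)
c = 3.0e8          # speed of light (m/s)
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)

# Planck-scale cutoff: one Planck energy per cubic Planck length
l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
E_p = math.sqrt(hbar * c**5 / G)   # Planck energy, ~2e9 J
rho_vacuum = E_p / l_p**3          # ~5e113 J/m^3

# Observed dark energy: roughly 69% of the critical density
H0 = 2.2e-18                       # Hubble constant (~68 km/s/Mpc, in 1/s)
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)
rho_dark = 0.69 * rho_crit         # ~5e-10 J/m^3

print(f"Naive vacuum energy density:  {rho_vacuum:.1e} J/m^3")
print(f"Observed dark-energy density: {rho_dark:.1e} J/m^3")
print(f"Mismatch: about 10^{math.log10(rho_vacuum / rho_dark):.0f}")
```

The result is a discrepancy of roughly 120 orders of magnitude, which is why vacuum energy is often called the worst theoretical prediction in the history of physics.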

Timeline of the Big Bang and the expansion of the Universe. Credit: NASA

Whereas other scientists have sought to modify the theories of General Relativity and quantum mechanics in order to resolve these inconsistencies, Wang and his colleagues sought a different approach. As Wang explained to Universe Today via email:

“Previous studies are either trying to modify quantum mechanics in some way to make vacuum energy small or trying to modify General Relativity in some way to make gravity numb for vacuum energy. However, quantum mechanics and General Relativity are the two most successful theories that explain how our Universe works… Instead of trying to modify quantum mechanics or General Relativity, we believe that we should first understand them better. We take the large vacuum energy density predicted by quantum mechanics seriously and just let them gravitate according to General Relativity without modifying either of them.”

For the sake of their study, Wang and his colleagues performed new sets of calculations on vacuum energy that took its predicted high energy density into account. They then considered the possibility that on the tiniest of scales – billions of times smaller than electrons – the fabric of spacetime is subject to wild fluctuations, oscillating at every point between expansion and contraction.

Could fluctuations at the tiniest levels of spacetime explain Dark Energy and the expansion of the cosmos? Credit: University of Washington

As spacetime swings back and forth, the net effect of these oscillations is that the Universe expands slowly, but at an accelerating rate. After performing their calculations, they noted that such an explanation was consistent with both the existence of quantum vacuum energy density and General Relativity. On top of that, it is also consistent with what scientists have been observing in our Universe for almost a century. As Unruh described it:

“Our calculations showed that one could consistently regard [that] the Universe on the tiniest scales is actually expanding and contracting at an absurdly fast rate; but that on a large scale, because of an averaging over those tiny scales, physics would not notice that ‘quantum foam’. It has a tiny residual effect in giving an effective cosmological constant (dark energy type effect). In some ways it is like waves on the ocean which travel as if the ocean were perfectly smooth but really we know that there is this incredible dance of the atoms that make up the water, and waves average over those fluctuations, and act as if the surface was smooth.”

In contrast to theories in which the various forces governing the Universe cannot be reconciled and must simply cancel each other out, Wang and his colleagues present a picture where the Universe is constantly in motion. In this scenario, the effects of vacuum energy are actually self-cancelling, and also give rise to the expansion and acceleration we have been observing all this time.

While it may be too soon to tell, this image of a Universe that is highly-dynamic (even on the tiniest scales) could revolutionize our understanding of spacetime. At the very least, these theoretical findings are sure to stimulate debate within the scientific community, as well as experiments designed to offer direct evidence. And that, as we know, is the only way we can advance our understanding of this thing known as the Universe.

Further Reading: UBC News, Physical Review D

Team Creates Negative Effective Mass In The Lab

Researchers at WSU have created a fluid with a negative effective mass for the first time, which could open the door to studying the deeper mysteries of the Universe. Credit: ESA/Hubble, ESO, M. Kornmesser

When it comes to objects and force, Isaac Newton’s Three Laws of Motion are pretty straightforward. Apply force to an object in a specific direction, and the object will move in that direction. And unless there’s something acting against it (like gravity or air resistance), it will keep moving in that direction until something stops it. But when it comes to “negative mass”, the exact opposite is true.

As the name would suggest, the term refers to matter whose mass is opposite that of normal matter. Until a few years ago, negative mass was predominantly a theoretical concept and had only been observed in very specific settings. But according to a recent study, a team of researchers managed to create a fluid with a “negative effective mass” under laboratory conditions for the first time.

To put it in the simplest terms, matter can have a negative mass in the same way that a particle can have a negative charge. When it comes to the Universe that we know and study on a regular basis, one could say that we have encountered only the positive form of mass. In fact, one could say that it is much the same situation as with matter and antimatter: theoretical physics tells us both exist, but we only encounter one of them on a regular basis.


As Dr. Michael McNeil Forbes – a Professor at Washington State University, a Fellow at the Institute for Nuclear Theory, and a co-author on the study – explained in a WSU press release:

“That’s what most things that we’re used to do. With negative mass, if you push something, it accelerates toward you. Once you push, it accelerates backwards. It looks like the rubidium hits an invisible wall.”
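A one-line application of Newton’s second law shows the sign flip Forbes is describing. This is just a minimal sketch with made-up numbers, not the team’s model:

```python
# Newton's second law, a = F / m, with an ordinary and a negative
# (effective) mass. Values are arbitrary, for illustration only.
force = 1.0        # newtons, applied in the +x direction
m_ordinary = 2.0   # kg
m_negative = -2.0  # kg, a negative effective mass

print(force / m_ordinary)  # +0.5 m/s^2: accelerates with the push
print(force / m_negative)  # -0.5 m/s^2: accelerates back against the push
```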

According to the team’s study, which was recently published in Physical Review Letters (under the title “Negative-Mass Hydrodynamics in a Spin-Orbit–Coupled Bose-Einstein Condensate”), a negative effective mass can be created by altering the spin-orbit coupling of atoms. Led by Peter Engels – a professor of physics and astronomy at Washington State University – this consisted of using lasers to control the behavior of rubidium atoms.

They began by using a single laser to keep rubidium atoms in a bowl-shaped trap measuring less than 100 microns across. This had the effect of slowing the atoms down and cooling them to just a few degrees above absolute zero, which resulted in the rubidium becoming a Bose-Einstein condensate. Named after Satyendra Nath Bose and Albert Einstein (who predicted how such atoms would behave), these condensates behave like superfluids.

Velocity-distribution data (3 views) for a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Credit: NIST/JILA/CU-Boulder

Basically, this means that their particles move very slowly and behave like waves, without losing energy. A second set of lasers was then applied to move the atoms back and forth, effectively changing the way they spin. Prior to this change in their spins, the superfluid had a regular (positive) mass, and breaking open the bowl would have resulted in the atoms pushing outward, expanding away from their center of mass.

But after the application of the second laser, the rubidium rushed out and accelerated in the opposite direction – consistent with how a negative mass would behave. This represented a break with previous laboratory experiments, in which researchers were unable to get atoms to behave in a way consistent with negative mass. But as Forbes explained, the WSU experiment avoided some of the underlying defects encountered in those experiments:

“What’s a first here is the exquisite control we have over the nature of this negative mass, without any other complications. It provides another environment to study a fundamental phenomenon that is very peculiar.”

And while news of this experiment has been met with fanfare and claims to the effect that the researchers had “rewritten the laws of physics”, it is important to emphasize that this research has created a “negative effective mass” – which is fundamentally different from a negative mass.

Artist’s rendering of an outburst on an ultra-magnetic neutron star, also called a magnetar.
Credit: NASA/Goddard Space Flight Center

As Sabine Hossenfelder, a Research Fellow at the Frankfurt Institute for Advanced Studies, wrote on her website Backreaction in response to the news:

“Physicists use the preamble ‘effective’ to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention. The ‘effective radius’ of a galaxy, for example, is not its radius. The ‘effective nuclear charge’ is not the charge of the nucleus. And the ‘effective negative mass’ – you guessed it – is not a negative mass. The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.”
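Hossenfelder’s point can be made concrete. In cold-atom and condensed-matter physics, the effective mass is read off from the curvature of the energy-momentum (dispersion) relation, m_eff = ħ²/(d²E/dk²); wherever the band curves downward, m_eff is negative. Below is a minimal dimensionless toy sketch of a spin-orbit-coupled lower band (our illustration, with assumed parameters, not the WSU experiment’s actual band structure):

```python
import numpy as np

# Toy lower band of a spin-orbit-coupled BEC in dimensionless units
# (hbar = mass = recoil momentum = 1); Omega is the Raman coupling.
Omega = 1.0
k = np.linspace(-2.0, 2.0, 4001)
E = k**2 / 2 - np.sqrt(k**2 + (Omega / 2)**2)

# Effective mass from band curvature: m_eff = 1 / (d^2 E / dk^2)
curvature = np.gradient(np.gradient(E, k), k)
m_eff = 1.0 / curvature

print(f"m_eff at k = 0: {m_eff[len(k) // 2]:.2f}")  # negative near k = 0
print(f"m_eff at k = 2: {m_eff[-1]:.2f}")           # positive far from k = 0
```

The atoms themselves always have their ordinary mass; it is the dispersion of the condensate that makes them respond to a push as if their mass were negative.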

In other words, the researchers were able to get atoms to behave as a negative mass, rather than creating one. Nevertheless, their experiment demonstrates the level of control researchers now have when conducting quantum experiments, and also serves to clarify how negative mass behaves in other systems. Basically, physicists can use the results of these kinds of experiments to probe the mysteries of the Universe where experimentation is impossible.

These include what goes on inside neutron stars or what transpires beneath the veil of an event horizon. Perhaps they could even shed some light on questions relating to dark energy.

Further Reading: Physical Review Letters, WSU

Who was Max Planck?

Portrait of Max Planck (c. 1930). Credit: Smithsonian Libraries

Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would even be attached to a series of units, designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.

If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!

Early Life and Education:

Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians Gymnasium school. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.

Illustration of Friedrich Wilhelms University, with the statue of Frederick the Great (ca. 1850). Credit: Wikipedia Commons/A. Carse

He graduated early, at the age of 17, and began studying theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, and the two became close friends; eventually, Planck decided to adopt thermodynamics as his field of research.

In October 1878, he passed his qualifying exams and defended his dissertation in February of 1879 – titled “On the second law of thermodynamics”. In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to be derived: “It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.”

For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – i.e. black body radiation and special states of equilibrium.

With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community did not pay much attention to him, he continued his work on heat theory and came to independently discover the same theory of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist who is credited with the discovery.

Professors Michael Bonitz and Frank Hohmann, holding a facsimile of Planck’s Nobel prize certificate, which was given to the University of Kiel in 2013. Credit and Copyright: CAU/Schimmelpfennig

In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, where he continued his studies in physical chemistry and heat systems. By 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until he retired in January 1926, when he was succeeded by Erwin Schrodinger.

Black Body Radiation:

It was in 1894, when he was under a commission from the electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body’s temperature and the frequency of the radiation (i.e., the color of the light).

In time, he resolved this problem by suggesting that electromagnetic energy did not flow in a continuous stream, but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.
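To put the formula to work, here is a quick worked example (ours, not Planck’s): the energy carried by a single quantum of green light, which has a frequency of about 5.4 × 10¹⁴ Hz.

```python
h = 6.626e-34   # Planck constant (J*s)
nu = 5.4e14     # frequency of green light (Hz)

E = h * nu      # Planck postulate: E = h * nu
print(f"E = {E:.2e} J ({E / 1.602e-19:.2f} eV)")  # ~3.6e-19 J, about 2.2 eV
```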

A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he only came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck’s constant would prove to have a revolutionary impact, causing scientists to break with classical physics, and leading to the creation of the Planck units (length, time, mass, etc.).

From left to right: W. Nernst, A. Einstein, M. Planck, R.A. Millikan and von Laue at a dinner given by von Laue in Berlin, 1931. Credit: Wikipedia Commons

Quantum Mechanics:

By the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that would prove Planck’s quantum theory to be correct. The first was his theory of photons (put forward to explain the photoelectric effect), which contradicted classical physics and the theory of electrodynamics, which held that light was a wave that needed a medium to propagate.

The second was Einstein’s study of the anomalous behavior of specific heats at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein’s special relativity, he initially rejected the idea that light could be made up of discrete quanta of energy (in this case, photons).

However, in 1911, Planck and Walther Nernst (a colleague of Planck’s) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and during the course of the proceedings was able to convince Planck of his theories regarding specific heats. The two became friends and colleagues, and in 1914, Planck created a professorship for Einstein at the University of Berlin.

During the 1920s, a new interpretation of quantum mechanics emerged, which came to be known as the “Copenhagen interpretation”. This interpretation, which was largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, stated that quantum mechanics can only predict probabilities; and that in general, physical systems do not have definite properties prior to being measured.

Photograph of the first Solvay Conference in 1911 at the Hotel Metropole in Brussels, Belgium. Credit: International Solvay Institutes/Benjamin Couprie

This was rejected by Planck, however, who felt that wave mechanics would soon render the probabilistic interpretation unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the “chaos” of quantum theory. However, time would prove that both interpretations were correct (and mathematically equivalent), giving rise to the theory of wave-particle duality.

World War I and World War II:

In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three”, which endorsed the war and justified Germany’s participation in it. However, by 1915, Planck had revoked parts of the Manifesto, and by 1916, he had become an outspoken opponent of Germany’s annexation of other territories.

After the war, Planck was considered to be the German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.

The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.

Entrance of the administrative headquarters of the Max Planck Society in Munich. Credit: Wikipedia Commons/Maximilian Dörrbecker

Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when they were threatened. In 1936, he resigned his position as head of the KWS due to his continued support of Jewish colleagues in the Society, and in 1938, he resigned as president of the Prussian Academy of Sciences after the Nazi Party assumed control of it.

Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. In 1944, Planck’s son Erwin was arrested for his involvement in the attempted assassination of Hitler in the July 20th plot, for which he was executed by the Gestapo in early 1945. This event caused Planck to descend into a depression from which he did not recover before his death.

Death and Legacy:

Planck died on October 4th, 1947 in Göttingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.

In recognition of his fundamental contribution to a new branch of physics, he was awarded the Nobel Prize in Physics in 1918. He was also elected a Foreign Member of the Royal Society in 1926 and awarded the Society’s Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.

The Max Planck Medal, issued by the German Physical Society in recognition of scientific contributions. Credit: dpg-physik.de

He was also greatly respected by his colleagues and contemporaries, and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.

The Max Planck Society was also created in the city of Göttingen in 1948 to honor his life and achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.

In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.

Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.

We have written many articles about Max Planck for Universe Today. Here’s What is Planck Time?, Planck’s First Light?, All-Sky Stunner from Planck, What is Schrodinger’s Cat?, What is the Double Slit Experiment?, and here’s a list of stories about the spacecraft that bears his name.

If you’d like more info on Max Planck, check out Max Planck’s biography from Science World and Space and Motion.

We’ve also recorded an entire episode of Astronomy Cast all about Max Planck. Listen here, Episode 218: Max Planck.


What is Absolute Zero?


Canadians don’t have much to be proud of, but we can regale you with our ability to withstand freezing cold temperatures. Now, I live on the West Coast, so I’m soft and weak, rarely experiencing temperatures below freezing.

But for some of my Canadian brethren, temperatures can dip down to levels your mind and body can scarcely comprehend. For example, I have a friend who lives in Winnipeg, Manitoba. For a day last winter, the temperature there dipped down to -31C, but with the windchill, it felt like -50C. On that same day, it was a balmy -29C on Mars. On Mars!

But for scientists, and the Universe, it can get much much colder. So cold, in fact, that they use a completely different temperature scale – Kelvin – to measure how far away things are from the coldest possible temperature: Absolute Zero.

Nowhere close to absolute zero. Credit: Osccarr (CC BY 2.0)

On the Celsius scale, Absolute Zero is -273.15 degrees. In Fahrenheit, it’s -459.67 degrees. On the Kelvin scale, however, it’s very simple: Absolute Zero is 0 kelvin.
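If you want to check those numbers yourself, the conversions are one-liners. A quick sketch:

```python
def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

print(celsius_to_kelvin(-273.15))      # 0.0 -- absolute zero, in kelvin
print(celsius_to_fahrenheit(-273.15))  # -459.67 degrees Fahrenheit
```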

At this point, a science explainer is going to stumble into a minefield of incorrect usage. It’s not 0 degrees kelvin, you don’t say the degrees part, just the kelvin part. Just kelvin.

This is because when you measure something from an arbitrary reference point, like the direction you just turned from, you say you’ve changed course 15 degrees. But if you’re measuring from an absolute point, like the lowest physical temperature defined by nature, you drop the degrees because it’s an absolute. An Absolute Zero.

Of course, I’ve probably gotten that wrong too. This stuff is hard.

Anyway, back to Absolute Zero.

Still not cold enough. Credit: Lori Cuthbert (CC BY 2.0)

Absolute Zero is the coldest possible temperature that can theoretically be reached. At this point, no heat energy can be extracted from a system and no work can be done. It’s dead, Jim.

But it’s completely theoretical. It’s practically impossible to cool something down to Absolute Zero. In order to cool something down, you need to do work to extract heat from it. The colder you get, the more work you need to do. In order to get to Absolute Zero, you’d need to put in an infinite amount of work. And that’s ridiculous.

As you probably learned in physics or chemistry class, the temperature of a gas translates to the motion of the particles in the gas. As you cool a gas down, by extracting heat from it, the particles slow down.

You would think, then, that by cooling something down to Absolute Zero, all particle motion in that something would stop. But that’s not true.

From a quantum mechanics point of view, you can never know the position and momentum of particles at the same time. If the particles stopped, you’d know their momentum (zero) and their position… right there. The Universe and its laws of physics just can’t allow that to happen. Thank Heisenberg’s Uncertainty Principle.

Therefore, there’s always a little motion, even if you could get to Absolute Zero, which you can’t. But you can’t extract any more heat from it.

The physicist Robert Boyle was one of the first to consider the possibility that there was a lowest possible temperature, which he called the primum frigidum. In 1702, Guillaume Amontons created a thermometer that he calculated would bottom out at -240 C. Pretty close, actually.

But it was Lord Kelvin who created this absolute scale in 1848, starting at -273 C, or 0 kelvin.

A photograph of Lord Kelvin.

By this measurement, even with its windchill, Winnipeg was a balmy 223 kelvin on that wintry day.

The surface of Pluto, on the other hand, varies from a low of 33 kelvin to a high of 55 kelvin. That’s -240 C to -218 C.

The average background temperature across the entire Universe is just 2.7 kelvin. You won’t find many places that cold, unless you get out to the vast cosmic voids that separate galaxy clusters.

Over time, the background temperature of the Universe will continue to drop, but it’ll never actually reach Absolute Zero. Not even in a googol years, when the last supermassive black hole has finally evaporated and there’s no usable heat left in the entire Universe.

In fact, astronomers call this bleak future the “heat death” of the Universe. It’s heat death, as in, the death of all heat. And happiness.

You might be surprised to know that the coldest temperature in the entire Universe is right here on Earth. Well, sometimes, anyway. And assuming the aliens haven’t got better technology than us, which they probably do.

At the time that I’m recording this video, physicists have used lasers to cool rubidium-87 gas down to just 170 nanokelvin, a tiny fraction of a kelvin above Absolute Zero. In fact, the researchers won a Nobel Prize for their work in discovering Bose-Einstein condensates.

NASA is actually working on a new experiment called the Cold Atom Lab that will send a version of this technology to the International Space Station, where it should be able to cool material down to 100 picokelvin. That’s cold.

The Cold Atom Lab is planned to launch in August 2017. Credit: NASA / JPL

Here are your takeaways. Absolute Zero is the coldest possible temperature that can ever be reached, the point at which no further heat energy can be extracted from a system. Never say degrees kelvin, you’ll cause so much wincing. The Universe can’t match our cold-generating abilities… yet. Take that, Universe.

I’d love to hear the coldest temperature you’ve ever personally experienced. For me, it was visiting Buffalo in December. That’s not right.

What Is The Electron Cloud Model?

3D model of electron orbitals, based on the electron cloud model. Credit: Wikipedia Commons/Patricia.fidi

The early 20th century was a very auspicious time for the sciences. In addition to Ernest Rutherford and Niels Bohr giving birth to the Standard Model of particle physics, it was also a period of breakthroughs in the field of quantum mechanics. Thanks to ongoing studies on the behavior of electrons, scientists began to propose theories whereby these elementary particles behaved in ways that defied classical, Newtonian physics.

One such example is the Electron Cloud Model proposed by Erwin Schrodinger. Thanks to this model, electrons were no longer depicted as particles moving around a central nucleus in a fixed orbit. Instead, Schrodinger proposed a model whereby scientists could only make educated guesses as to the positions of electrons. Hence, their locations could only be described as being part of a ‘cloud’ around the nucleus where the electrons are likely to be found.

Atomic Physics To The 20th Century:

The earliest known examples of atomic theory come from ancient Greece and India, where philosophers such as Democritus postulated that all matter was composed of tiny, indivisible and indestructible units. The term “atom” was coined in ancient Greece and gave rise to the school of thought known as “atomism”. However, this theory was more of a philosophical concept than a scientific one.

Various atoms and molecules as depicted in John Dalton’s A New System of Chemical Philosophy (1808). Credit: Public Domain

It was not until the 19th century that the theory of atoms became articulated as a scientific matter, with the first evidence-based experiments being conducted. For example, in the early 1800s, English scientist John Dalton used the concept of the atom to explain why chemical elements reacted in certain observable and predictable ways. Through a series of experiments involving gases, Dalton went on to develop what is known as Dalton’s Atomic Theory.

This theory expanded on the laws of conservation of mass and definite proportions, and came down to five premises: elements, in their purest state, consist of particles called atoms; atoms of a specific element are all the same, down to the very last atom; atoms of different elements can be told apart by their atomic weights; atoms of elements unite to form chemical compounds; and atoms can neither be created nor destroyed in chemical reactions, only the grouping ever changes.

Discovery Of The Electron:

By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.

Through a series of experiments using cathode ray tubes (known as Crookes’ tubes), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles roughly 1,000 times smaller and 1,800 times lighter than hydrogen.

The Plum Pudding model of the atom proposed by J.J. Thomson. Credit: britannica.com

This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged “corpuscles” were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.

These corpuscles would later be named “electrons”, based on the theoretical particle predicted by Anglo-Irish physicist George Johnstone Stoney in 1874. And from this, the Plum Pudding Model was born, so named because it closely resembled the English dessert that consists of plum cake and raisins. The concept was introduced to the world in the March 1904 edition of the UK’s Philosophical Magazine, to wide acclaim.

Development Of The Standard Model:

Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”

In this experiment, Geiger and Marsden measured the scattering pattern of the alpha particles with a fluorescent screen. If Thomson’s model were correct, the alpha particles would pass through the atomic structure of the foil unimpeded. However, they noted instead that while most shot straight through, some of them were scattered in various directions, with some going back in the direction of the source.

A depiction of the atomic structure of the helium atom. Credit: Creative Commons
A depiction of the atomic structure of the helium atom. Credit: Creative Commons

Geiger and Marsden concluded that the particles had encountered an electrostatic force far greater than that allowed for by Thomson’s model. Since alpha particles are just helium nuclei (which are positively charged), this implied that the positive charge in the atom was not widely dispersed, but concentrated in a tiny volume. In addition, the fact that those particles that were not deflected passed through unimpeded meant that these concentrations of positive charge were separated by vast gulfs of empty space.

By 1911, physicist Ernest Rutherford interpreted the Geiger-Marsden experiments and rejected Thomson’s model of the atom. Instead, he proposed a model where the atom consisted of mostly empty space, with all its positive charge concentrated in its center in a very tiny volume, that was surrounded by a cloud of electrons. This came to be known as the Rutherford Model of the atom.

Subsequent experiments by Antonius Van den Broek and Niels Bohr refined the model further. While Van den Broek suggested that the atomic number of an element is very similar to its nuclear charge, the latter proposed a Solar-System-like model of the atom, where a nucleus contains the atomic number of positive charge and is surrounded by an equal number of electrons in orbital shells (aka. the Bohr Model).

The Electron Cloud Model:

During the 1920s, Austrian physicist Erwin Schrodinger became fascinated by the theories of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and other physicists. During this time, he also became involved in the fields of atomic theory and spectra, researching at the University of Zurich and then the Friedrich Wilhelms University in Berlin (where he succeeded Planck in 1927).

Artist’s concept of the Electron Cloud model, which described the likely location of electron orbitals over time. Credit: Pearson Prentice Hall

In 1926, Schrödinger tackled the issue of wave functions and electrons in a series of papers. In addition to describing what would come to be known as the Schrodinger equation – a partial differential equation that describes how the quantum state of a quantum system changes with time – he also used mathematical equations to describe the likelihood of finding an electron in a certain position.

This became the basis of what would come to be known as the Electron Cloud (or quantum mechanical) Model, as well as the Schrodinger equation. Based on quantum theory, which states that all matter has properties associated with a wave function, the Electron Cloud Model differs from the Bohr Model in that it does not define the exact path of an electron.

Instead, it predicts the likely position of the electron based on a function of probabilities. The probability function basically describes a cloud-like region where the electron is likely to be found, hence the name. Where the cloud is most dense, the probability of finding the electron is greatest; where the cloud is less dense, the electron is less likely to be found.

These dense regions are known as “electron orbitals”, since they are the most likely location where an orbiting electron will be found. Extending this “cloud” model to a 3-dimensional space, we see a barbell or flower-shaped atom (as in the image at the top). Here, the branching-out regions are the ones where we are most likely to find the electrons.
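For the simplest case – the single electron of a hydrogen atom in its ground state – the cloud can be written down exactly. Here is a minimal sketch (our illustration, using the textbook 1s wave function) that finds where the radial probability density peaks; unsurprisingly, the answer is the Bohr radius.

```python
import numpy as np

a0 = 5.29e-11  # Bohr radius (m)

# Radial probability density of the hydrogen 1s orbital:
# P(r) = (4 / a0^3) * r^2 * exp(-2 r / a0)
r = np.linspace(1e-13, 5 * a0, 10_000)
P = (4 / a0**3) * r**2 * np.exp(-2 * r / a0)

r_peak = r[np.argmax(P)]
print(f"Most probable radius: {r_peak:.2e} m (Bohr radius: {a0:.2e} m)")
```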

Thanks to Schrodinger’s work, scientists began to understand that in the realm of quantum mechanics, it was impossible to know the exact position and momentum of an electron at the same time. Regardless of what the observer knows initially about a particle, they can only predict its succeeding location or momentum in terms of probabilities.

At no given time will they be able to ascertain both at once. In fact, the more they know about the momentum of a particle, the less they will know about its location, and vice versa. This is what is known today as the “Uncertainty Principle”.
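The principle has teeth even in this simple setting. A quick sketch with our own illustrative numbers: confine an electron to a region the size of an atom, about 10⁻¹⁰ m, and the relation Δx·Δp ≥ ħ/2 forces a minimum velocity spread of hundreds of kilometers per second.

```python
hbar = 1.055e-34  # reduced Planck constant (J*s)
m_e = 9.109e-31   # electron mass (kg)

dx = 1e-10            # confinement to roughly one atomic diameter (m)
dp = hbar / (2 * dx)  # minimum momentum uncertainty from dx * dp >= hbar / 2
dv = dp / m_e         # corresponding velocity uncertainty

print(f"Minimum velocity spread: {dv:.1e} m/s")  # ~5.8e5 m/s
```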

Note that the orbitals mentioned in the previous paragraph are formed by a hydrogen atom (i.e. with just one electron). When dealing with atoms that have more electrons, the electron orbital regions spread out evenly into a spherical fuzzy ball. This is where the term ‘electron cloud’ is most appropriate.

This contribution was universally recognized as being one of the most important of the 20th century, and one which triggered a revolution in the fields of physics, quantum mechanics and indeed all the sciences. Thenceforth, scientists were no longer working in a universe characterized by absolutes of time and space, but in one of quantum uncertainties and space-time relativity!

We have written many interesting articles about atoms and atomic models here at Universe Today. Here’s What Is John Dalton’s Atomic Model?, What Is The Plum Pudding Model?, What Is Bohr’s Atomic Model?, Who Was Democritus?, and What Are The Parts Of An Atom?

For more information, be sure to check What Is Quantum Mechanics? from Live Science.

Astronomy Cast also has episode on the topic, like Episode 130: Radio Astronomy, Episode 138: Quantum Mechanics, and Episode 252: Heisenberg Uncertainty Principle

Who was Stephen Hawking?

In honor of Dr. Stephen Hawking, the COSMOS center will be creating the most detailed 3D mapping effort of the Universe to date. Credit: BBC, Illus.: T.Reyes

When we think of major figures in the history of science, many names come to mind. Einstein, Newton, Kepler, Galileo – all great theorists and thinkers who left an indelible mark during their lifetimes. In many cases, the full extent of their contributions would not be appreciated until after their deaths. But those of us who are alive today were fortunate to have had a great scientist among us who made considerable contributions – Dr. Stephen Hawking.

Considered by many to be the “modern Einstein”, Hawking’s work in cosmology and theoretical physics was unmatched among his contemporaries. In addition to his work on gravitational singularities and quantum mechanics, he was also responsible for discovering that black holes emit radiation. On top of that, Hawking was a cultural icon, endorsing countless causes, appearing on many television shows as himself, and penning several books that have made science accessible to a wider audience.

Early Life:

Hawking was born on January 8th, 1942 (the 300th anniversary of the death of Galileo) in Oxford, England. His parents, Frank and Isobel Hawking, were both students at Oxford University, where Frank studied medicine and Isobel studied philosophy, politics and economics. The couple originally lived in Highgate, a suburb of London, but moved to Oxford to get away from the bombings during World War II and give birth to their child in safety. The two would go on to have two daughters, Philippa and Mary, and one adopted son, Edward.

The family moved again in 1950, this time to St. Albans, Hertfordshire, because Stephen’s father became the head of parasitology at the National Institute for Medical Research (now part of the Francis Crick Institute). While there, the family gained a reputation for being highly intelligent, if somewhat eccentric. They lived frugally in a large, cluttered and poorly maintained house, drove around in a converted taxicab, and read constantly (even at the dinner table).

Stephen Hawking as a young man. Credit: gazettereview.com

Education:

Hawking began his schooling at the Byron House School, where he experienced difficulty in learning to read (which he later blamed on the school’s “progressive methods”). While in St. Albans, the eight-year-old Hawking attended St. Albans High School for Girls for a few months (which was permitted at the time for younger boys). In September of 1952, he was enrolled at Radlett School for a year, but would remain at St. Albans for the majority of his teen years due to the family’s financial constraints.

While there, Hawking made many friends, with whom he played board games, manufactured fireworks, model airplanes and boats, and had long discussions on subjects ranging from religion to extrasensory perception. From 1958, with the help of mathematics teacher Dikran Tahta, Hawking and his friends built a computer from clock parts, an old telephone switchboard and other recycled components.

Though he was not initially successful academically, Hawking showed considerable aptitude for scientific subjects and was nicknamed “Einstein”. Inspired by his teacher Tahta, he decided to study mathematics at university. His father had hoped that his son would attend Oxford and study medicine, but since it was not possible to study math there at the time, Hawking chose physics and chemistry instead.

Stephen Hawking (holding the handkerchief) and the Oxford Boat Club. Credit: focusfeatures.com

In 1959, when he was just 17, Hawking took the Oxford entrance exam and was awarded a scholarship. For the first 18 months, he was bored and lonely, as he was younger than his peers and found the work “ridiculously easy”. During his second and third years, Hawking made greater attempts to bond with his peers and developed into a popular student, joining the Oxford Boat Club and cultivating an interest in classical music and science fiction.

When it came time for his final exam, Hawking’s performance was lackluster. Instead of answering all the questions, he chose to focus on theoretical physics questions and avoided any that required factual knowledge. The result was a score that put him on the borderline between first- and second-class honors. Needing first-class honors for his planned graduate studies in cosmology at Cambridge, he was forced to take a viva (oral exam).

Concerned that he was viewed as a lazy and difficult student, Hawking described his future plans as follows during the viva: “If you award me a First, I will go to Cambridge. If I receive a Second, I shall stay in Oxford, so I expect you will give me a First.” However, Hawking was held in higher regard than he believed, and received a first-class BA (Hons.) degree, thus allowing him to pursue graduate work at Cambridge University in October 1962.

Hawking on graduation day in 1962. Credit: telegraph.co.uk

Hawking experienced some initial difficulty during his first year of doctoral studies. He found his background in mathematics inadequate for work in general relativity and cosmology, and was assigned Dennis William Sciama (one of the founders of modern cosmology) as his supervisor, rather than noted astronomer Fred Hoyle (whom he had been hoping for).

In addition, it was during his graduate studies that Hawking was diagnosed with early-onset amyotrophic lateral sclerosis (ALS). During his final year at Oxford, he had fallen down a flight of stairs, and had also begun to experience difficulties when rowing and episodes of slurred speech. When the diagnosis came in 1963, he fell into a state of depression and felt there was little point in continuing his studies.

However, his outlook soon changed, as the disease progressed more slowly than the doctors had predicted – initially, he was given two years to live. Then, with the encouragement of Sciama, he returned to his work, and quickly gained a reputation for brilliance and brashness. This was demonstrated when he publicly challenged the work of noted astronomer Fred Hoyle, who was famous for rejecting the Big Bang theory, at a lecture in June of 1964.

Stephen Hawking and Jane Wilde on their wedding day, July 14, 1966. Credit: telegraph.co.uk

When Hawking began his graduate studies, there was much debate in the physics community about the prevailing theories of the creation of the universe: the Big Bang and the Steady State theories. In the former, the universe was conceived in a gigantic explosion, in which all matter in the known universe was created. In the latter, new matter is constantly created as the universe expands. Hawking quickly joined the debate.

Hawking became inspired by Roger Penrose’s theorem that a spacetime singularity – a point where the quantities used to measure the gravitational field of a celestial body become infinite – exists at the center of a black hole. Hawking applied the same thinking to the entire universe, and wrote his 1965 thesis on the topic. He went on to receive a research fellowship at Gonville and Caius College and obtained his PhD degree in cosmology in 1966.

It was also during this time that Hawking met his first wife, Jane Wilde. Though he had met her shortly before his diagnosis with ALS, their relationship continued to grow as he returned to complete his studies. The two became engaged in October of 1964 and were married on July 14th, 1966. Hawking would later say that his relationship with Wilde gave him “something to live for”.

Scientific Achievements:

In his doctoral thesis, which built on Penrose’s work on singularities, Hawking extended the existence of singularities to the notion that the universe might have started as a singularity. His essay – entitled “Singularities and the Geometry of Space-Time” – was a runner-up in the 1968 Gravity Research Foundation competition, and shared top honors with one by Penrose to win Cambridge’s most prestigious Adams Prize for that year.

In 1970, Hawking became part of the Sherman Fairchild Distinguished Scholars visiting professorship program, which allowed him to lecture at the California Institute of Technology (Caltech). It was during this time that he and Penrose published a proof that incorporated the theories of General Relativity and the physical cosmology developed by Alexander Friedmann.

Based on Einstein’s equations, Friedmann asserted that the universe was dynamic and changed in size over time. He also asserted that space-time has a geometry, one determined by the universe’s overall mass/energy density. If that density is equal to the critical density, the universe has zero curvature (a flat configuration); if it is less than critical, the universe has negative curvature (an open configuration); and if it is greater than critical, the universe has positive curvature (a closed configuration).
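
In modern notation – a summary added here for reference, not the article’s own equations – the critical density and the density parameter that decides among the three cases are:

$$ \rho_c = \frac{3H^2}{8\pi G}, \qquad \Omega \equiv \frac{\rho}{\rho_c} $$

where H is the Hubble parameter and G the gravitational constant. Ω > 1 gives a closed universe, Ω = 1 a flat one, and Ω < 1 an open one.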

According to the Hawking-Penrose singularity theorem, if the universe truly obeyed the models of general relativity, then it must have begun as a singularity. This essentially meant that, prior to the Big Bang, the entire universe existed as a point of infinite density that contained all of the mass and space-time of the universe, before quantum fluctuations caused it to rapidly expand.

Per the Friedmann equations, the geometry of the universe is determined by its overall mass/energy density, and can have either flat, negative, or positive curvature. Credit: NASA/GSFC

Also in 1970, Hawking postulated what became known as the second law of black hole mechanics. With James M. Bardeen and Brandon Carter, he proposed the four laws of black hole mechanics, drawing an analogy with the four laws of thermodynamics.

These four laws state that: the horizon of a stationary black hole has constant surface gravity (the zeroth law); for perturbations of stationary black holes, the change of energy is related to the changes of area, angular momentum, and electric charge (the first law); the horizon area is, assuming the weak energy condition, a non-decreasing function of time (the second law); and it is not possible to form a black hole with vanishing surface gravity (the third law).
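
The first of these is usually written as follows – a standard form added here for reference, in geometrized units (G = c = 1):

$$ dM = \frac{\kappa}{8\pi}\, dA + \Omega_H\, dJ + \Phi_H\, dQ $$

where M is the black hole’s mass, κ its surface gravity, A its horizon area, Ω_H its angular velocity, J its angular momentum, Φ_H its electrostatic potential, and Q its charge – a direct analogue of dE = T dS plus work terms in ordinary thermodynamics.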

In 1971, Hawking released an essay titled “Black Holes in General Relativity” in which he conjectured that the surface area of black holes can never decrease, and therefore certain limits can be placed on the amount of energy they emit. This essay won Hawking the Gravity Research Foundation Award in January of that year.

In 1973, Hawking’s first book, which he wrote during his post-doc studies with George Ellis, was published. Titled, The Large Scale Structure of Space-Time, the book describes the foundation of space itself and the nature of its infinite expansion, using differential geometry to examine the consequences of Einstein’s General Theory of Relativity.

Hawking was elected a Fellow of the Royal Society (FRS) in 1974, a few weeks after the announcement of Hawking radiation (see below). In 1975, he returned to Cambridge and was given a new position as Reader, which is reserved for senior academics with a distinguished international reputation in research or scholarship.

The mid-to-late 1970s was a time of growing interest in black holes, as well as the researchers associated with them. As such, Hawking’s public profile began to grow and he received increased academic and public recognition, appearing in print and television interviews and receiving numerous honorary positions and awards.

In 1979, Hawking was elected Lucasian Professor of Mathematics at the University of Cambridge, an honorary position created in 1663 which is considered one of the most prestigious academic posts in the world. Its previous holders included such scientific greats as Sir Isaac Newton, Joseph Larmor, Charles Babbage, George Stokes, and Paul Dirac.

His inaugural lecture as Lucasian Professor of Mathematics was titled “Is the End in Sight for Theoretical Physics?”. During the speech, he proposed N=8 Supergravity – a quantum field theory which incorporates gravity with eight supersymmetries – as the leading candidate to solve many of the outstanding problems physicists were studying.

Hawking’s promotion coincided with a health crisis which forced him to accept some nursing services at home. At the same time, he began a transition in his approach to physics, becoming more intuitive and speculative rather than insisting on mathematical proofs. By 1981, this saw Hawking begin to focus his attention on cosmological inflation theory and the origins of the universe.

Inflation theory – which had been proposed by Alan Guth that same year – posits that following the Big Bang, the universe initially expanded very rapidly before settling into a slower rate of expansion. In response, Hawking presented work at the Vatican conference that year, where he suggested that there might be no boundary or beginning to the universe.

During the summer of 1982, he and his colleague Gary Gibbons organized a three-week workshop on the subject, titled “The Very Early Universe”, at Cambridge University. With Jim Hartle, an American physicist and professor of physics at the University of California, he proposed that during the earliest period of the universe (aka. the Planck epoch), the universe had no boundary in space-time.

In 1983, they published this model, known as the Hartle-Hawking state. Among other things, it asserted that before the Big Bang, time did not exist, and that the concept of a beginning of the universe is therefore meaningless. It also replaced the initial singularity of the Big Bang with a region akin to the North Pole: just as one cannot travel north of the North Pole – it is simply the point where the lines of longitude meet – there is no boundary there.

This proposal predicted a closed universe, which had many existential implications, particularly about the existence of God. At no point did Hawking rule out the existence of God, choosing to use God in a metaphorical sense when explaining the mysteries of the universe. However, he would often suggest that the existence of God was unnecessary to explain the origin of the universe, or the existence of a unified field theory.

In 1982, he also began work on a book that would explain the nature of the universe, relativity and quantum mechanics in a way that would be accessible to the general public. This led him to sign a contract with Bantam Books to publish A Brief History of Time, the first draft of which he completed in 1984.

After multiple revisions, the final draft was published in 1988 and met with much critical acclaim. The book was translated into multiple languages, remained at the top of bestseller lists in both the US and UK for months, and ultimately sold an estimated 9 million copies. Media attention was intense, and a Newsweek magazine cover and a television special both described him as “Master of the Universe”.

Further work by Hawking in the area of arrows of time led to the 1985 publication of a paper theorizing that if the no-boundary proposition were correct, then when the universe stopped expanding and eventually collapsed, time would run backwards. He would later withdraw this concept after independent calculations disputed it, but the theory did provide valuable insight into the possible connections between time and cosmic expansion.

During the 1990s, Hawking continued to publish and lecture on his theories regarding physics, black holes and the Big Bang. In 1993, he co-edited a book with Gary Gibbons on Euclidean quantum gravity, a theory they had been working on together since the late 70s. According to this theory, a section of a gravitational field in a black hole can be evaluated using a functional integral approach, such that it can avoid the singularities.

That same year, a popular-level collection of essays, interviews and talks titled Black Holes and Baby Universes and Other Essays was also published. In 1994, Hawking and Penrose delivered a series of six lectures at Cambridge’s Newton Institute, which were published in 1996 under the title “The Nature of Space and Time”.

It was also in the 1990s that major developments happened in Hawking’s personal life. In 1990, he and Jane Hawking commenced divorce proceedings after many years of strained relations, owing to his disability, the constant presence of care-givers, and his celebrity status. Hawking remarried in 1995 to Elaine Mason, his caregiver of many years.

Stephen Hawking lectured regularly throughout the 90s and 2000s. Credit: educatinghumanity.com

In the 2000s, Hawking produced many new books and new editions of older ones. These included The Universe in a Nutshell (2001), A Briefer History of Time (2005), and God Created the Integers (2006). He also began collaborating with Jim Hartle of the University of California, Santa Barbara, and the European Organization for Nuclear Research (CERN) to produce new cosmological theories.

Foremost of these was Hawking’s “top-down cosmology”, which states that the universe had not one unique initial state but many different ones, and that predicting the universe’s current state from a single initial state is therefore inappropriate. Consistent with quantum mechanics, top-down cosmology posits that the present “selects” the past from a superposition of many possible histories.

In so doing, the theory also offered a possible resolution of the “fine-tuning question”, which addresses the possibility that life can only exist when certain physical constants lie within a narrow range. By offering this new model of cosmology, Hawking opened up the possibility that life may not be bound by such restrictions and could be much more plentiful than previously thought.

In 2006, Hawking and his second wife, Elaine Mason, quietly divorced, and Hawking resumed closer relationships with his first wife Jane, his children (Robert, Lucy and Timothy), and his grandchildren. In 2009, he retired as Lucasian Professor of Mathematics, as required by Cambridge University regulations. He then worked as director of research at the Cambridge University Department of Applied Mathematics and Theoretical Physics until his death.

“Hawking Radiation” and the “Black Hole Information Paradox”:

In the early 1970s, Hawking began working on what is known as the “no-hair theorem”. Based on the Einstein-Maxwell equations of gravitation and electromagnetism in general relativity, the theorem states that all black holes can be completely characterized by only three externally observable classical parameters: mass, electric charge, and angular momentum.

In this scenario, all other information about the matter which formed a black hole or is falling into it (for which “hair” is used as a metaphor) “disappears” behind the black-hole event horizon, and is therefore preserved but permanently inaccessible to external observers.

In 1973, Hawking traveled to Moscow and met with Soviet scientists Yakov Borisovich Zel’dovich and Alexei Starobinsky. During his discussions with them about their work, they showed him how the uncertainty principle demonstrated that black holes should emit particles. This contradicted Hawking’s second law of black hole mechanics (i.e. that black holes can’t get smaller), since it meant that by losing energy they must be losing mass.

What’s more, it supported a theory advanced by Jacob Bekenstein, a graduate student of John Wheeler’s at Princeton University, that black holes should have a finite, non-zero temperature and entropy. All of this contradicted the “no-hair theorem” about black holes. Hawking revised the theorem shortly thereafter, showing that when quantum mechanical effects are taken into account, black holes emit thermal radiation at a characteristic temperature.
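
That temperature – the standard result, quoted here for reference rather than from the article – depends only on the black hole’s mass:

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B} $$

For a black hole of one solar mass, this works out to roughly 6 × 10⁻⁸ K, far colder than even the cosmic microwave background.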

From 1974 onward, Hawking presented these results, showing that black holes emit radiation. This came to be known as “Hawking radiation”, and was initially controversial. However, by the late 1970s, following the publication of further research, the discovery was widely accepted as a significant breakthrough in theoretical physics.

However, one of the outgrowths of this theory was the likelihood that black holes gradually lose mass and energy. Because of this, black holes that lose more mass than they gain through other means are expected to shrink and ultimately vanish – a phenomenon known as black hole “evaporation”.
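
As a rough back-of-the-envelope illustration (added here; these are the standard Hawking formulas rather than anything quoted from the article), both the temperature and the approximate lifetime of a black hole follow from its mass:

```python
# Hawking temperature and approximate evaporation time for a black hole
# of mass M, using the standard formulas (a sketch, not from the article).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
kB = 1.380649e-23        # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(M):
    """T_H = hbar*c^3 / (8*pi*G*M*kB), in Kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def evaporation_time(M):
    """t ~ 5120*pi*G^2*M^3 / (hbar*c^4), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"T_H  = {hawking_temperature(M_sun):.2e} K")  # ~6.2e-08 K
print(f"t_ev = {evaporation_time(M_sun):.2e} s")     # ~6.6e74 s, ~2e67 years
```

The cubic dependence on mass means tiny black holes evaporate quickly, while stellar-mass ones outlive the current age of the universe by dozens of orders of magnitude.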

In 1981, Hawking proposed that information in a black hole is irretrievably lost when it evaporates – what came to be known as the “Black Hole Information Paradox”. It states that physical information could permanently disappear in a black hole, allowing many distinct physical states to devolve into the same state.

This was controversial because it violated two fundamental tenets of quantum physics. In principle, quantum physics tells us that complete information about a physical system – i.e. the state of its matter (mass, position, spin, temperature, etc.) – is encoded in its wave function up to the point when that wave function collapses. This in turn gives rise to two other principles.

The first is Quantum Determinism, which states that – given a present wave function – future changes are uniquely determined by the evolution operator. The second is Reversibility, which states that the evolution operator has an inverse, meaning that the past wave functions are similarly unique. The combination of these means that the information about the quantum state of matter must always be preserved.
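
Expressed compactly (in notation added here), both principles follow from the unitarity of quantum time evolution:

$$ |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U^{\dagger}U = \mathbb{1} \;\Rightarrow\; |\psi(0)\rangle = U^{\dagger}(t)\,|\psi(t)\rangle $$

Determinism is the statement that the forward map U(t) is unique; Reversibility is the statement that its inverse exists. Together, they forbid quantum information from simply vanishing.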

By proposing that this information disappears once a black hole evaporates, Hawking essentially created a fundamental paradox: if a black hole can evaporate, causing all the information about a quantum wave function to disappear, then information can in fact be lost forever. This has been the subject of ongoing debate among scientists, one which has remained largely unresolved.

However, by 2003, the growing consensus among physicists was that Hawking was wrong about the loss of information in a black hole. In a 2004 lecture in Dublin, he conceded a bet he had made in 1997 with fellow physicist John Preskill of Caltech, but described his own, somewhat controversial solution to the paradox – that black holes may have more than one topology.

In the 2005 paper he published on the subject – “Information Loss in Black Holes” – he argued that the information paradox was explained by examining all the alternative histories of universes, with the information loss in those with black holes being cancelled out by those without. In January 2014, Hawking described the Black Hole Information Paradox as his “biggest blunder”.

Other Accomplishments:

In addition to advancing our understanding of black holes and cosmology through the application of general relativity and quantum mechanics, Stephen Hawking was also pivotal in bringing science to a wider audience. Over the course of his career, he published many popular books, traveled and lectured extensively, made numerous appearances and did voice-over work for television shows and movies, and even provided narration for the Pink Floyd song “Keep Talking”.

Stephen Hawking’s theories on black holes became the subject of television specials, such as “Stephen Hawking’s Universe” on PBS. Credit: discovery.com

A film version of A Brief History of Time, directed by Errol Morris and produced by Steven Spielberg, premiered in 1992. Hawking had wanted the film to be scientific rather than biographical, but he was persuaded otherwise. In 1997, a six-part television series Stephen Hawking’s Universe premiered on PBS, with a companion book also being released.

In 2007, Hawking and his daughter Lucy published George’s Secret Key to the Universe, a children’s book designed to explain theoretical physics in an accessible fashion, featuring characters similar to those in the Hawking family. The book was followed by three sequels – George’s Cosmic Treasure Hunt (2009), George and the Big Bang (2011), and George and the Unbreakable Code (2014).

From the 1990s onward, Hawking was also a major role model for people dealing with disabilities and degenerative illnesses, and his outreach for disability awareness and research was unparalleled. At the turn of the century, he and eleven other luminaries joined with Rehabilitation International to sign the Charter for the Third Millennium on Disability, which called on governments around the world to prevent disabilities and protect the rights of the disabled.

Professor Stephen Hawking participating in a zero-gravity flight (aka. the “Vomit Comet”) in 2007. Credit: gozerog.com

Motivated by the desire to increase public interest in spaceflight and to show the potential of people with disabilities, Hawking participated in a zero-gravity flight in 2007 aboard a “Vomit Comet” – a specially fitted aircraft that dips and climbs through the air to simulate the feeling of weightlessness – courtesy of the Zero Gravity Corporation. During the flight, he experienced weightlessness eight times.

In August 2012, Hawking narrated the “Enlightenment” segment of the 2012 Summer Paralympics opening ceremony. In September of 2013, he expressed support for the legalization of assisted suicide for the terminally ill. In August of 2014, Hawking accepted the Ice Bucket Challenge to promote ALS/MND awareness and raise contributions for research. As he had pneumonia in 2013, he was advised not to have ice poured over him, but his children volunteered to accept the challenge on his behalf.

During his career, Hawking was also a committed educator, having personally supervised 39 successful PhD students. He also lent his name to the ongoing search for extraterrestrial intelligence and to the debate regarding the development of robots and artificial intelligence. On July 20th, 2015, Stephen Hawking helped launch Breakthrough Initiatives, an effort to search for extraterrestrial life in the universe.

Also in 2015, Hawking lent his voice and celebrity status to the promotion of The Global Goals, a series of 17 goals adopted by the United Nations Sustainable Development Summit to end extreme poverty and social inequality and to fight climate change over the course of the next 15 years.

President Barack Obama talks with Stephen Hawking in the Blue Room of the White House before a ceremony presenting him and 15 others the Presidential Medal of Freedom, August 12th, 2009. Credit: Pete Souza/White House photo stream

Honors and Legacy:

As already noted, in 1974, Hawking was elected a Fellow of the Royal Society (FRS), and was one of the youngest scientists to become a Fellow. At that time, his nomination read:

Hawking has made major contributions to the field of general relativity. These derive from a deep understanding of what is relevant to physics and astronomy, and especially from a mastery of wholly new mathematical techniques. Following the pioneering work of Penrose he established, partly alone and partly in collaboration with Penrose, a series of successively stronger theorems establishing the fundamental result that all realistic cosmological models must possess singularities. Using similar techniques, Hawking has proved the basic theorems on the laws governing black holes: that stationary solutions of Einstein’s equations with smooth event horizons must necessarily be axisymmetric; and that in the evolution and interaction of black holes, the total surface area of the event horizons must increase. In collaboration with G. Ellis, Hawking is the author of an impressive and original treatise on “Space-time in the Large”.

Other important work by Hawking relates to the interpretation of cosmological observations and to the design of gravitational wave detectors.

Peter Higgs and Stephen Hawking visiting the “Collider” exhibition at London’s Science Museum in 2013, in honor of the discovery of the Higgs Boson. Credit: sciencemuseum.org.uk

In 1975, he was awarded both the Eddington Medal and the Pius XI Gold Medal, and in 1976 the Dannie Heineman Prize, the Maxwell Prize and the Hughes Medal. In 1977, he was appointed a professor with a chair in gravitational physics, and the following year he received the Albert Einstein Medal and an honorary doctorate from the University of Oxford.

In 1981, Hawking was awarded the American Franklin Medal, followed by a Commander of the Order of the British Empire (CBE) medal the following year. For the remainder of the decade, he was honored three times, first with the Gold Medal of the Royal Astronomical Society in 1985, the Paul Dirac Medal in 1987 and, jointly with Penrose, with the prestigious Wolf Prize in 1988. In 1989, he was appointed Member of the Order of the Companions of Honour (CH), but reportedly declined a knighthood.

In 1999, Hawking was awarded the Julius Edgar Lilienfeld Prize of the American Physical Society. In 2002, following a UK-wide vote, the BBC included him in their list of the 100 Greatest Britons. More recently, Hawking has been awarded the Copley Medal from the Royal Society (2006), the Presidential Medal of Freedom, America’s highest civilian honor (2009), and the Russian Special Fundamental Physics Prize (2013).

Several buildings have been named after him, including the Stephen W. Hawking Science Museum in San Salvador, El Salvador, the Stephen Hawking Building in Cambridge, and the Stephen Hawking Center at Perimeter Institute in Canada. And given Hawking’s association with time, he was chosen to unveil the mechanical “Chronophage” – aka. the Corpus Clock – at Corpus Christi College Cambridge in September of 2008.

Stephen Hawking being presented by his daughter Lucy Hawking at the lecture he gave for NASA’s 50th anniversary. Credit: NASA/Paul Alers

Also in 2008, while traveling to Spain, Hawking received the Fonseca Prize – an annual award created by the University of Santiago de Compostela for outstanding achievement in science communication. Hawking was singled out for the award because of his “exceptional mastery in the popularization of complex concepts in Physics at the very edge of our current understanding of the Universe, combined with the highest scientific excellence, and for becoming a public reference of science worldwide.”

Multiple films have been made about Stephen Hawking over the years as well. These include the previously mentioned A Brief History of Time, the 1992 documentary directed by Errol Morris and produced by Steven Spielberg; Hawking, a 2004 BBC drama starring Benedict Cumberbatch in the title role; and Hawking, a 2013 documentary by Stephen Finnigan.

Most recently, there was the 2014 film The Theory of Everything that chronicled the life of Stephen Hawking and his wife Jane. Directed by James Marsh, the movie stars Eddie Redmayne as Professor Hawking and Felicity Jones as Jane Hawking.

Death:

Dr. Stephen Hawking passed away in the early hours of Wednesday, March 14th, 2018 at his home in Cambridge. According to a statement made by his family, he died peacefully. He was 76 years old, and is survived by his first wife, Jane Wilde, and their three children – Lucy, Robert and Tim.

When all is said and done, Stephen Hawking was arguably the most famous scientist of the modern era. His work in the fields of astrophysics and quantum mechanics led to breakthroughs in our understanding of time and space, and will likely be pored over by scientists for decades. In addition, he did more than perhaps any other scientist of his generation to make science accessible and interesting to the general public.

Stephen Hawking holding a public lecture at the Stockholm Waterfront congress center, 24 August 2015. Credit: Public Domain/photo by Alexandar Vujadinovic

To top it off, he traveled all over the world and lectured on topics ranging from science and cosmology to human rights, artificial intelligence, and the future of the human race. He also used the celebrity status afforded him to advance the causes of scientific research, space exploration, disability awareness, and humanitarian causes wherever possible.

In all of these respects, he was very much like his predecessor, Albert Einstein – another influential scientist-turned-celebrity who was sure to use his powers to combat ignorance and promote humanitarian causes. But what was especially impressive in all of this is that Hawking managed to maintain his commitment to science and a very busy schedule while dealing with a degenerative disease.

For over 50 years, Hawking lived with a disease that doctors initially thought would take his life within just two. And yet, he not only managed to make his greatest scientific contributions while dealing with ever-increasing problems of mobility and speech, he also became a jet-setting personality who travelled all around the world to address audiences and inspire people.

His passing was mourned by millions worldwide and, in the words of famed scientist and science communicator Neil deGrasse Tyson, “left an intellectual vacuum in its wake”. Without a doubt, history will place Dr. Hawking among such luminaries as Einstein, Newton, Galileo and Curie as one of the greatest scientific minds that ever lived.

We have many great articles about Stephen Hawking here at Universe Today. Here is one about Hawking Radiation, How Do Black Holes Evaporate?, why Hawking could be Wrong About Black Holes, and recent experiments to Replicate Hawking Radiation in a Laboratory.

And here are some video interviews where Hawking addresses how God is not necessary for the creation of the Universe, and the trailer for Theory of Everything.

Astronomy Cast has a number of great podcasts that deal with Hawking and his discoveries, like Episode 138: Quantum Mechanics, and the Questions Show: Hidden Fusion, the Speed of Neutrinos, and Hawking Radiation.

For more information, check out Stephen Hawking’s website, and his page at Biography.com

A Universe of 10 Dimensions

Superstrings may exist in 11 dimensions at once. Via National Institute of Technology Tiruchirappalli.

When someone mentions “different dimensions,” we tend to think of things like parallel universes – alternate realities that exist parallel to our own but where things work differently. However, the reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization.

To break it down, dimensions are simply the different facets of what we perceive to be reality. We are immediately aware of the three dimensions that surround us – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).

Beyond these three visible dimensions, scientists believe that there may be many more. In fact, the theoretical framework of Superstring Theory posits that the Universe exists in ten different dimensions. These different aspects govern the Universe, the fundamental forces of nature, and all the elementary particles contained within.

The first dimension, as already noted, is the one that gives an object its length (aka. the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities. Add to that a second dimension, the y-axis (or height), and you get a 2-dimensional shape (like a square).

The third dimension involves depth (the z-axis) and gives all objects a sense of area and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume. Beyond these three dimensions reside the seven that are not immediately apparent to us but can still be perceived as having a direct effect on the Universe and reality as we know it.
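
One way to make this concrete – an illustration added here, not part of the original article – is to treat each dimension as an independent coordinate: locating a point takes one number per dimension, and the same distance formula works in any number of them.

```python
# A "dimension" is just an independent coordinate: a point needs one
# number per dimension to locate it (an illustration, not from the article).
import math

line_point = (4.0,)             # 1D: length only (x)
square_point = (4.0, 3.0)       # 2D: length and height (x, y)
cube_point = (4.0, 3.0, 2.0)    # 3D: length, height, depth (x, y, z)

def distance_from_origin(point):
    """Euclidean distance works the same way in any number of dimensions."""
    return math.sqrt(sum(coord**2 for coord in point))

for p in (line_point, square_point, cube_point):
    print(f"{len(p)}D point {p}: distance {distance_from_origin(p):.3f}")
```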

The timeline of the Universe, beginning with the Big Bang. According to String Theory, this is just one of many possible worlds. Credit: NASA

Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object’s position in time is essential to plotting its position in the Universe. The other dimensions are where the deeper possibilities come into play, and explaining their interaction with the others is where things get particularly tricky for physicists.

According to Superstring Theory, the fifth and sixth dimensions are where the notion of possible worlds arises. If we could see through to the fifth dimension, we would see a world slightly different from our own, giving us a means of measuring the similarity and differences between our world and other possible ones.

In the sixth, we would see a plane of possible worlds, where we could compare and position all the possible universes that start with the same initial conditions as this one (i.e., the Big Bang). In theory, if you could master the fifth and sixth dimensions, you could travel back in time or go to different futures.

In the seventh dimension, you have access to the possible worlds that start with different initial conditions. Whereas in the fifth and sixth the initial conditions were the same and only subsequent actions were different, here everything is different from the very beginning of time. The eighth dimension again gives us a plane of such possible universe histories, each of which begins with different initial conditions and branches out infinitely (hence why they are called infinities).

In the ninth dimension, we can compare all the possible universe histories, starting with all the different possible laws of physics and initial conditions. In the tenth and final dimension, we arrive at the point where everything possible and imaginable is covered. Beyond this, nothing can be imagined by us lowly mortals, which makes it the natural limitation of what we can conceive in terms of dimensions.

String space – superstring theory lives in 10 dimensions, which means that six of the dimensions have to be “compactified” in order to explain why we can only perceive four. The best way to do this is to use a complicated 6D geometry called a Calabi-Yau manifold, in which all the intrinsic properties of elementary particles are hidden. Credit: A Hanson.

The existence of these additional six dimensions, which we cannot perceive, is necessary for String Theory to be consistent. The fact that we can perceive only four dimensions can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, to which all known particles and forces apart from gravity would be restricted (aka. brane theory).

If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold (shown above). While imperceptible as far as our senses are concerned, they would have governed the formation of the Universe from the very beginning. Hence, scientists believe that by peering back through time and using telescopes to observe light from the early Universe (i.e., billions of years ago), they might be able to see how the existence of these additional dimensions could have influenced the evolution of the cosmos.

Much like other candidates for a grand unifying theory – aka the Theory of Everything (TOE) – the belief that the Universe is made up of ten dimensions (or more, depending on which model of string theory you use) is an attempt to reconcile the standard model of particle physics with the existence of gravity. In short, it is an attempt to explain how all known forces within our Universe interact and how other possible universes themselves might work.

For additional information, here’s an article on Universe Today about parallel Universes and another on a parallel Universe that scientists thought they’d found, but doesn’t actually exist.

There are also some other great resources online. There is a great video that explains the ten dimensions in detail. You can also look at the PBS website for the TV show Elegant Universe. It has a great page on the ten dimensions.

You can also listen to Astronomy Cast. You might find Episode 137: Large Scale Structure of the Universe very interesting.

Source: PBS

Spooky Experiment on ISS Could Pioneer New Quantum Communications Network

The cameras mounted in the ISS's cupola could serve as the platform for the first-ever quantum optics experiment in space.

With its 180-degree views of Earth and space, the ISS’s cupola is the perfect place for photography. But Austrian researchers want to use this unique panoramic platform to test the limits of “spooky action at a distance” in hopes of creating a new quantum communications network.

In a new study published April 9, 2012 in the New Journal of Physics, a group of Austrian researchers propose equipping a camera already aboard the ISS – the Nikon 400 mm NightPOD camera – with an optical receiver that would be key to performing the first-ever quantum optics experiment in space. The NightPOD camera faces the ground in the cupola and can track ground targets for up to 70 seconds, allowing researchers to bounce a secret encryption key across longer distances than is currently possible with optical fiber networks on Earth.

“During a few months a year, the ISS passes five to six times in a row in the correct orientation for us to do our experiments. We envision setting up the experiment for a whole week and therefore having more than enough links to the ISS available,” said co-author of the study Professor Rupert Ursin from the Austrian Academy of Sciences.

Albert Einstein first coined the phrase “spooky action at a distance” during his philosophical battles with Niels Bohr in the 1930s to express his frustration with the inadequacies of the new theory called quantum mechanics. Quantum mechanics explains actions on the tiniest scales, in the domain of atoms and elementary particles. While classical physics explains motion, matter and energy on the level that we can see, 19th century scientists observed phenomena in both the macro and micro worlds that could not easily be explained using classical physics.

In particular, Einstein was dissatisfied with the idea of entanglement. Entanglement occurs when two particles are so deeply connected that they share the same existence – meaning they share the same mathematical relationships of position, spin, momentum and polarization. This can happen when two particles are created at the same point and instant in spacetime. Over time, as the two particles become widely separated in space, even by light-years, quantum mechanics suggests that a measurement of one would immediately affect the other. Einstein was quick to point out that this seemed to violate the universal speed limit set out by special relativity. It was this paradox that Einstein referred to as spooky action.
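
For polarization-entangled photons – the kind the proposal relies on – the canonical example (written here in standard notation, not taken from the paper) is the Bell state:

$$ |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|H\rangle_A |H\rangle_B + |V\rangle_A |V\rangle_B\right) $$

Neither photon has a definite polarization on its own, but measuring one (horizontal H or vertical V) immediately fixes the outcome for the other, however far apart they are.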

CERN physicist John Bell partially resolved this mystery in 1964 by showing that no local theory can reproduce the predictions of quantum mechanics for entangled particles – the idea of non-local phenomena. While entanglement allows one particle to be instantaneously correlated with its counterpart, no usable classical information travels faster than light.
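
Bell’s result is usually tested through the CHSH inequality: any local classical theory must satisfy |S| ≤ 2, while quantum mechanics predicts values up to 2√2. Here is a minimal Python sketch – an illustration using the standard quantum prediction E(a, b) = cos 2(a − b) for polarization-entangled photons, not code from the study:

```python
# CHSH test with the quantum prediction for the |Phi+> polarization state.
# Local hidden variable (classical) theories require |S| <= 2.
import math

def E(a, b):
    """Correlation of polarization measurements at analyzer angles a, b (radians)."""
    return math.cos(2.0 * (a - b))

# Standard CHSH analyzer settings
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3.0 * math.pi / 8

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f}")  # ~2.828 = 2*sqrt(2), violating the classical bound of 2
```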

The orbital pass of the ISS over an optical ground station could be used for quantum communication from inside the Cupola Module, as long as the OGS is not more than 36° off the NADIR direction. Credit: T Scheidl, E Wille and R Ursin.

The ISS experiment proposes using a “Bell experiment” to test the theoretical contradiction between the predictions of quantum and classical physics. For the Bell experiment, a pair of entangled photons would be generated on the ground; one would be sent from the ground station to the modified camera aboard the ISS, while the other would be measured locally on the ground for later comparison. So far, researchers have only sent secret keys to receivers a few hundred kilometers apart.

“According to quantum physics, entanglement is independent of distance. Our proposed Bell-type experiment will show that particles are entangled, over large distances — around 500 km — for the very first time in an experiment,” says Ursin. “Our experiments will also enable us to test potential effects gravity may have on quantum entanglement.”

The researchers point out that making this minor alteration to a camera already aboard the ISS would save the time and money needed to build and launch a series of dedicated satellites to test their ideas.