Back in 2013, the European Space Agency released its first analysis of the data gathered by the Planck observatory. Between 2009 and 2013, this spacecraft observed the remnants of the radiation that filled the Universe immediately after the Big Bang – the Cosmic Microwave Background (CMB) – with the highest sensitivity of any mission to date and in multiple wavelengths.
In addition to largely confirming current theories on how the Universe evolved, Planck’s first map also revealed a number of temperature anomalies – like the CMB “Cold Spot” – that are difficult to explain. Unfortunately, with the latest analysis of the mission data, the Planck Collaboration team has found no new evidence for these anomalies, which means that astrophysicists are still short of an explanation.
According to the Big Bang cosmological model, our Universe began 13.8 billion years ago when all the matter and energy in the cosmos began expanding. This period of “cosmic inflation” is believed to be what accounts for the large-scale structure of the Universe and why space and the Cosmic Microwave Background (CMB) appear to be largely uniform in all directions.
However, to date, no evidence has been discovered that can definitively prove the cosmic inflation scenario or rule out alternative theories. But thanks to a new study by a team of astronomers from Harvard University and the Harvard-Smithsonian Center for Astrophysics (CfA), scientists may have a new means of testing one of the key parts of the Big Bang cosmological model.
For thousands of years, human beings have been contemplating the Universe and seeking to determine its true extent. And whereas ancient philosophers believed that the world consisted of a disk, a ziggurat or a cube surrounded by celestial oceans or some kind of ether, the development of modern astronomy opened their eyes to new frontiers. By the 20th century, scientists began to understand just how vast (and maybe even unending) the Universe really is.
And in the course of looking farther out into space, and deeper back in time, cosmologists have discovered some truly amazing things. For example, during the 1960s, astronomers became aware of microwave background radiation that was detectable in all directions. Known as the Cosmic Microwave Background (CMB), the existence of this radiation has helped to inform our understanding of how the Universe began.
For decades, scientists have theorized that beyond the edge of the Solar System, at a distance of up to 50,000 AU (0.79 ly) from the Sun, there lies a massive cloud of icy planetesimals known as the Oort Cloud. Named in honor of Dutch astronomer Jan Oort, this cloud is believed to be where long-term comets originate from. However, to date, no direct evidence has been provided to confirm the Oort Cloud’s existence.
This is due to the fact that the Oort Cloud is very difficult to observe, being rather far from the Sun and dispersed over a very large region of space. However, in a recent study, a team of astrophysicists from the University of Pennsylvania proposed a radical idea. Using maps of the Cosmic Microwave Background (CMB) created by the Planck mission and other telescopes, they believe that Oort Clouds around other stars can be detected.
The study – “Probing Oort clouds around Milky Way stars with CMB surveys”, which recently appeared online – was led by Eric J. Baxter, a postdoctoral researcher from the Department of Physics and Astronomy at the University of Pennsylvania. He was joined by Pennsylvania professors Cullen H. Blake and Bhuvnesh Jain (Baxter’s primary mentor).
To recap, the Oort Cloud is a hypothetical region of space that is thought to extend from between 2,000 and 5,000 AU (0.03 and 0.08 ly) to as far as 50,000 AU (0.79 ly) from the Sun – though some estimates indicate it could reach as far as 100,000 to 200,000 AU (1.58 and 3.16 ly). Like the Kuiper Belt and the Scattered Disc, the Oort Cloud is a reservoir of trans-Neptunian objects, though it is over a thousand times more distant from our Sun than these other two.
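For reference, the AU-to-light-year conversions quoted here are easy to verify. A quick sketch, using the standard value of roughly 63,241 AU per light-year:

```python
# Check the AU-to-light-year conversions quoted above.
# 1 light-year ~ 63,241 AU (light speed times one Julian year, divided by 1 AU).
AU_PER_LY = 63_241

def au_to_ly(au):
    """Convert a distance in astronomical units to light-years."""
    return au / AU_PER_LY

print(round(au_to_ly(50_000), 2))   # outer-edge estimate of the Oort Cloud -> 0.79
print(round(au_to_ly(200_000), 2))  # most generous estimate -> 3.16
```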
This cloud is believed to have originated from a population of small, icy bodies within 50 AU of the Sun that were present when the Solar System was still young. Over time, it is theorized that orbital perturbations caused by the giant planets caused those objects that had highly-stable orbits to form the Kuiper Belt along the ecliptic plane, while those that had more eccentric and distant orbits formed the Oort Cloud.
According to Baxter and his colleagues, because the existence of the Oort Cloud played an important role in the formation of the Solar System, it is therefore logical to assume that other star systems have their own Oort Clouds – which they refer to as exo-Oort Clouds (EXOCs). As Dr. Baxter explained to Universe Today via email:
“One of the proposed mechanisms for the formation of the Oort cloud around our sun is that some of the objects in the protoplanetary disk of our solar system were ejected into very large, elliptical orbits by interactions with the giant planets. The orbits of these objects were then affected by nearby stars and galactic tides, causing them to depart from orbits restricted to the plane of the solar system, and to form the now-spherical Oort cloud. You could imagine that a similar process could occur around another star with giant planets, and we know that there are many stars out there that do have giant planets.”
As Baxter and his colleagues indicated in their study, detecting EXOCs is difficult, largely for the same reasons why there is no direct evidence for the Solar System’s own Oort Cloud. For one, there is not a lot of material in the cloud, with estimates ranging from a few to twenty times the mass of the Earth. Second, these objects are very far away from our Sun, which means they do not reflect much light or have strong thermal emissions.
For this reason, Baxter and his team recommended using maps of the sky at the millimeter and submillimeter wavelengths to search for signs of Oort Clouds around other stars. Such maps already exist, thanks to missions like the Planck telescope which have mapped the Cosmic Microwave Background (CMB). As Baxter indicated:
“In our paper, we use maps of the sky at 545 GHz and 857 GHz that were generated from observations by the Planck satellite. Planck was pretty much designed *only* to map the CMB; the fact that we can use this telescope to study exo-Oort clouds and potentially processes connected to planet formation is pretty surprising!”
This is a rather revolutionary idea, as the detection of EXOCs was not part of the intended purpose of the Planck mission. By mapping the CMB, which is “relic radiation” left over from the Big Bang, astronomers have sought to learn more about how the Universe has evolved since the early Universe – circa 378,000 years after the Big Bang. However, their study does build on previous work led by Alan Stern (the principal investigator of the New Horizons mission).
In 1991, along with John Stocke (of the University of Colorado, Boulder) and Paul Weissmann (from NASA’s Jet Propulsion Laboratory), Stern conducted a study titled “An IRAS search for extra-solar Oort clouds“. In this study, they suggested using data from the Infrared Astronomical Satellite (IRAS) for the purpose of searching for EXOCs. However, whereas this study focused on certain wavelengths and 17 star systems, Baxter and his team relied on data for tens of thousands of systems and at a wider range of wavelengths.
“Furthermore, the Gaia satellite has recently mapped out very accurately the positions and distances of stars in our galaxy,” Baxter added. “This makes choosing targets for exo-Oort cloud searches relatively straightforward. We used a combination of Gaia and Planck data in our analysis.”
To test their theory, Baxter and his team constructed a series of models for the thermal emission of exo-Oort clouds. “These models suggested that detecting exo-Oort clouds around nearby stars (or at least putting limits on their properties) was feasible given existing telescopes and observations,” he said. “In particular, the models suggested that data from the Planck satellite could potentially come close to detecting an exo-Oort cloud like our own around a nearby star.”
In addition, Baxter and his team also detected a hint of a signal around some of the stars that they considered in their study – specifically in the Vega and Fomalhaut systems. Using this data, they were able to place constraints on the possible existence of EXOCs at a distance of 10,000 to 100,000 AUs from these stars, which roughly coincides with the distance between our Sun and the Oort Cloud.
However, additional surveys will be needed before the existence of any EXOCs can be confirmed. These surveys will likely involve the James Webb Space Telescope, which is scheduled to launch in 2021. In the meantime, this study has some rather significant implications for astronomers, and not just because it involves the use of existing CMB maps for extra-solar studies. As Baxter put it:
“Just detecting an exo-Oort cloud would be really interesting, since as I mentioned above, we don’t have any direct evidence for the existence of our own Oort cloud. If you did get a detection of an exo-Oort cloud, it could in principle provide insights into processes connected to planet formation and the evolution of protoplanetary disks. For instance, imagine that we only detected exo-Oort clouds around stars that have giant planets. That would provide pretty convincing evidence that the formation of an Oort cloud is connected to giant planets, as suggested by popular theories of the formation of our own Oort cloud.”
As our knowledge of the Universe expands, scientists become increasingly interested in what our Solar System has in common with other star systems. This, in turn, helps us to learn more about the formation and evolution of our own system. It also provides possible hints as to how the Universe changed over time, and maybe even where life could be found someday.
In the 1920s, Edwin Hubble made the groundbreaking discovery that the Universe was in a state of expansion. Originally predicted as a consequence of Einstein’s Theory of General Relativity, the rate of this expansion came to be known as the Hubble Constant. Today, and with the help of next-generation telescopes – like the aptly-named Hubble Space Telescope (HST) – astronomers have remeasured and revised this value many times.
These measurements confirmed that the rate of expansion has increased over time, though scientists are still unsure why. The latest measurements were conducted by an international team using Hubble, who then compared their results with data obtained by the European Space Agency’s (ESA) Gaia observatory. This has led to the most precise measurements of the Hubble Constant to date, though questions about cosmic acceleration remain.
Since 2005, Adam Riess – a Nobel Laureate Professor with the Space Telescope Science Institute and the Johns Hopkins University – has been working to refine the Hubble Constant value by streamlining and strengthening the “cosmic distance ladder”. Along with his team, known as Supernova H0 for the Equation of State (SH0ES), they have successfully reduced the uncertainty associated with the rate of cosmic expansion to just 2.2%.
To break it down, astronomers have traditionally used the “cosmic distance ladder” to measure distances in the Universe. This consists of relying on distance markers like Cepheid variables in distant galaxies – pulsating stars whose distances can be inferred by comparing their intrinsic brightness with their apparent brightness. These measurements are then compared to the way light from distant galaxies is redshifted to determine how fast the space between galaxies is expanding.
From this, the Hubble Constant is derived. Another method that is used is to observe the Cosmic Microwave Background (CMB) to trace the expansion of the cosmos during the early Universe – circa 378,000 years after the Big Bang – and then using physics to extrapolate that to the present expansion rate. Together, the measurements should provide an end-to-end measurement of how the Universe has expanded over time.
However, astronomers have known for some time that the two measurements don’t match up. In a previous study, Riess and his team conducted measurements using Hubble to obtain a Hubble Constant value of 73 km/s (45.36 mps) per megaparsec (3.3 million light-years). Meanwhile, results based on the ESA’s Planck observatory (which observed the CMB between 2009 and 2013) predicted that the Hubble Constant value should now be 67 km/s (41.63 mps) per megaparsec and no higher than 69 km/s (42.87 mps) – which represents a discrepancy of 9%.
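The quoted 9% figure follows directly from the two published values. A quick check:

```python
# Check the ~9% discrepancy between the two Hubble Constant measurements.
h0_local = 73.0  # km/s/Mpc, from the Cepheid/supernova distance ladder
h0_cmb = 67.0    # km/s/Mpc, extrapolated from Planck's CMB observations

# Fractional difference, measured relative to the CMB-based value
discrepancy = (h0_local - h0_cmb) / h0_cmb * 100
print(round(discrepancy, 1))  # -> 9.0 (percent)
```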
“The tension seems to have grown into a full-blown incompatibility between our views of the early and late time universe. At this point, clearly it’s not simply some gross error in any one measurement. It’s as though you predicted how tall a child would become from a growth chart and then found the adult he or she became greatly exceeded the prediction. We are very perplexed.”
In this case, Riess and his colleagues used Hubble to gauge the brightness of distant Cepheid variables while Gaia provided the parallax information – the apparent change in an objects position based on different points of view – needed to determine the distance. Gaia also added to the study by measuring the distance to 50 Cepheid variables in the Milky Way, which were combined with brightness measurements from Hubble.
This allowed the astronomers to more accurately calibrate the Cepheids and then use those seen outside the Milky Way as milepost markers. Using both the Hubble measurements and newly released data from Gaia, Riess and his colleagues were able to refine their measurements on the present rate of expansion to 73.5 kilometers (45.6 miles) per second per megaparsec.
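The parallax method that Gaia relies on reduces to simple geometry: a star’s distance in parsecs is the reciprocal of its parallax angle in arcseconds. A minimal sketch (the Cepheid parallax value below is invented purely for illustration):

```python
def parallax_to_distance(parallax_mas):
    """Distance in parsecs from a parallax angle in milliarcseconds.
    d [pc] = 1 / p [arcsec] = 1000 / p [mas]."""
    return 1000.0 / parallax_mas

# Hypothetical Cepheid with a measured parallax of 0.5 milliarcseconds:
d_pc = parallax_to_distance(0.5)
print(d_pc)          # -> 2000.0 parsecs
print(d_pc * 3.262)  # -> ~6524 light-years (1 pc ~ 3.262 ly)
```

The smaller the parallax, the larger the distance – which is why Gaia’s microarcsecond-level precision matters so much for calibrating the distance ladder.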
As Stefano Casertano, of the Space Telescope Science Institute and a member of the SH0ES team, added:
“Hubble is really amazing as a general-purpose observatory, but Gaia is the new gold standard for calibrating distance. It is purpose-built for measuring parallax—this is what it was designed to do. Gaia brings a new ability to recalibrate all past distance measures, and it seems to confirm our previous work. We get the same answer for the Hubble constant if we replace all previous calibrations of the distance ladder with just the Gaia parallaxes. It’s a crosscheck between two very powerful and precise observatories.”
Looking to the future, Riess and his team hope to continue to work with Gaia so they can reduce the uncertainty associated with the value of the Hubble Constant to just 1% by the early 2020s. In the meantime, the discrepancy between modern rates of expansion and those based on the CMB will continue to be a puzzle to astronomers.
In the end, this may be an indication that other physics are at work in our Universe, that dark matter interacts with normal matter in a way that is different than what scientists suspect, or that dark energy could be even more exotic than previously thought. Whatever the cause, it is clear the Universe still has some surprises in store for us!
For decades, the predominant cosmological model used by scientists has been based on the theory that in addition to baryonic matter – aka. “normal” or “luminous” matter, which we can see – the Universe also contains a substantial amount of invisible mass. This “Dark Matter” accounts for roughly 26.8% of the mass of the Universe, whereas normal matter accounts for just 4.9%.
While the search for Dark Matter is ongoing and direct evidence is yet to be found, scientists have also been aware that roughly 90% of the Universe’s normal matter still remained undetected. According to two new studies that were recently published, much of this normal matter – which consists of filaments of hot, diffuse gas that links galaxies together – may have finally been found.
Based on cosmological simulations, the predominant theory has been that the previously-undetected normal matter of the Universe consists of strands of baryonic matter – i.e. protons, neutrons and electrons – that are floating between galaxies. These regions are what is known as the “Cosmic Web”, where low-density gas exists at temperatures of 10⁵ to 10⁷ K.
For the sake of their studies, both teams consulted data from the Planck Collaboration, the venture maintained by the European Space Agency (ESA) that includes all those who contributed to the Planck mission. This data was released in 2015, when it was used to create a thermal map of the Universe by measuring the influence of the Sunyaev-Zeldovich (SZ) effect.
This effect refers to a spectral distortion in the Cosmic Microwave Background, where photons are scattered by ionized gas in galaxies and larger structures. During its mission to study the cosmos, the Planck satellite measured the spectral distortion of CMB photons with great sensitivity, and the resulting thermal map has since been used to chart the large-scale structure of the Universe.
However, the filaments between galaxies appeared too faint for scientists to examine at the time. To remedy this, the two teams consulted data from the North and South CMASS galaxy catalogues, which were produced from the 12th data release of the Sloan Digital Sky Survey (SDSS). From this data set, they then selected pairs of galaxies and focused on the space between them.
They then stacked the thermal data obtained by Planck for these areas on top of each other in order to strengthen the signals caused by the SZ effect between galaxies. As Dr. Hideki Tanimura told Universe Today via email:
“The SDSS galaxy survey gives a shape of the large-scale structure of the Universe. The Planck observation provides an all-sky map of gas pressure with a better sensitivity. We combine these data to probe the low-dense gas in the cosmic web.”
While Tanimura and his team stacked data from 260,000 galaxy pairs, de Graaff and her team stacked data from over a million. In the end, the two teams came up with strong evidence of gas filaments, though their measurements differed somewhat. Whereas Tanimura’s team found that the density of these filaments was around three times the average density in the surrounding void, de Graaff and her team found that they were six times the average density.
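The power of the stacking method is easy to demonstrate: averaging many cutouts suppresses uncorrelated noise by roughly the square root of their number, letting a faint common signal emerge. A toy sketch (with invented numbers, not actual Planck data):

```python
import numpy as np

# Toy illustration of stacking: a faint, identical signal buried in noise
# in each "cutout" becomes visible once many cutouts are averaged, since
# uncorrelated noise shrinks as 1/sqrt(N). Numbers here are invented.
rng = np.random.default_rng(0)

signal = 0.05          # faint common signal present in every cutout
n_cutouts = 10_000     # cf. the 260,000 to >1,000,000 galaxy pairs stacked
cutouts = signal + rng.normal(0.0, 1.0, size=n_cutouts)  # signal + unit noise

stacked = cutouts.mean()
residual_noise = 1.0 / np.sqrt(n_cutouts)  # expected noise after stacking: 0.01

# The stacked value recovers the buried signal to within the residual noise,
# even though each individual cutout is dominated by noise 20x the signal.
print(round(stacked, 3))
```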
“We detect the low-dense gas in the cosmic web statistically by a stacking method,” said Tanimura. “The other team uses almost the same method. Our results are very similar. The main difference is that we are probing a nearby Universe, on the other hand, they are probing a relatively farther Universe.”
This particular aspect is particularly interesting, in that it hints that over time, baryonic matter in the Cosmic Web has become less dense. Between these two results, the studies accounted for between 15 and 30% of the total baryonic content of the Universe. While that would mean that a significant amount of the Universe’s baryonic matter still remains to be found, it is nevertheless an impressive find.
As Tanimura explained, their results not only support the current cosmological model of the Universe (the Lambda CDM model) but also go beyond it:
“The detail in our universe is still a mystery. Our results shed light on it and reveals a more precise picture of the Universe. When people went out to the ocean and started making a map of our world, it was not used for most of the people then, but we use the world map now to travel abroad. In the same way, a map of the entire universe may not be valuable now because we do not have a technology to go far out to the space. However, it could be valuable 500 years later. We are in the first stage of making a map of the entire Universe.”
It also opens up opportunities for future studies of the Cosmic Web, which will no doubt benefit from the deployment of next-generation instruments like the James Webb Space Telescope, the Atacama Cosmology Telescope and the Q/U Imaging ExperimenT (QUIET). With any luck, they will be able to spot the remaining missing matter. Then, perhaps we can finally zero in on all the invisible mass!
What is the Universe? That is one immensely loaded question! No matter what angle one takes to answer it, one could spend years on the reply and still barely scratch the surface. In terms of time and space, it is unfathomably large (and possibly even infinite) and incredibly old by human standards. Describing it in detail is therefore a monumental task. But we here at Universe Today are determined to try!
So what is the Universe? Well, the short answer is that it is the sum total of all existence. It is the entirety of time, space, matter and energy that began expanding some 13.8 billion years ago and has continued to expand ever since. No one is entirely certain how extensive the Universe truly is, and no one is entirely sure how it will all end. But ongoing research and study has taught us a great deal in the course of human history.
The term “the Universe” is derived from the Latin word “universum”, which was used by Roman statesman Cicero and later Roman authors to refer to the world and the cosmos as they knew it. This consisted of the Earth and all living creatures that dwelt therein, as well as the Moon, the Sun, the then-known planets (Mercury, Venus, Mars, Jupiter, Saturn) and the stars.
The term “cosmos” is often used interchangeably with the Universe. It is derived from the Greek word kosmos, which literally means “the world”. Other words commonly used to define the entirety of existence include “Nature” (derived from the Germanic word natur) and the English word “everything”, whose use can be seen in scientific terminology – i.e. “Theory Of Everything” (TOE).
Today, this term is often used to refer to all things that exist within the known Universe – the Solar System, the Milky Way, and all known galaxies and superstructures. In the context of modern science, astronomy and astrophysics, it also refers to all spacetime, all forms of energy (i.e. electromagnetic radiation and matter) and the physical laws that bind them.
Origin of the Universe:
The current scientific consensus is that the Universe expanded from a point of super high matter and energy density roughly 13.8 billion years ago. This theory, known as the Big Bang Theory, is not the only cosmological model for explaining the origins of the Universe and its evolution – for example, there is the Steady State Theory or the Oscillating Universe Theory.
It is, however, the most widely-accepted and popular. This is due to the fact that the Big Bang theory alone is able to explain the origin of all known matter, the laws of physics, and the large scale structure of the Universe. It also accounts for the expansion of the Universe, the existence of the Cosmic Microwave Background, and a broad range of other phenomena.
Working backwards from the current state of the Universe, scientists have theorized that it must have originated at a single point of infinite density and finite time that began to expand. After the initial expansion, the theory maintains that the Universe cooled sufficiently to allow for the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies.
This all began roughly 13.8 billion years ago, and is thus considered to be the age of the Universe. Through the testing of theoretical principles, experiments involving particle accelerators and high-energy states, and astronomical studies that have observed the deep Universe, scientists have constructed a timeline of events that began with the Big Bang and has led to the current state of cosmic evolution.
However, the earliest times of the Universe – lasting from approximately 10⁻⁴³ to 10⁻¹¹ seconds after the Big Bang – are the subject of extensive speculation. Given that the laws of physics as we know them could not have existed at this time, it is difficult to fathom how the Universe could have been governed. What’s more, experiments that can create the kinds of energies involved are in their infancy.
Still, many theories prevail as to what took place in this initial instant in time, many of which are compatible. In accordance with many of these theories, the instant following the Big Bang can be broken down into the following time periods: the Singularity Epoch, the Inflation Epoch, and the Cooling Epoch.
Also known as the Planck Epoch (or Planck Era), the Singularity Epoch was the earliest known period of the Universe. At this time, all matter was condensed on a single point of infinite density and extreme heat. During this period, it is believed that the quantum effects of gravity dominated physical interactions and that no other physical forces were of equal strength to gravitation.
This Planck period of time extends from point 0 to approximately 10⁻⁴³ seconds, and is so named because it can only be measured in Planck time. Due to the extreme heat and density of matter, the state of the Universe was highly unstable. It thus began to expand and cool, leading to the manifestation of the fundamental forces of physics. From approximately 10⁻⁴³ to 10⁻³⁶ seconds, the Universe began to cross transition temperatures.
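That 10⁻⁴³-second figure is not arbitrary: the Planck time follows from three fundamental constants of nature. A quick sketch of the calculation (constants from standard references):

```python
import math

# The Planck time, t_P = sqrt(hbar * G / c^5), sets the ~10^-43 s scale
# below which quantum effects of gravity are expected to dominate.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8        # speed of light, m/s

t_planck = math.sqrt(HBAR * G / C**5)
print(f"{t_planck:.2e} s")  # ~5.39e-44 s, i.e. of order 10^-43 seconds
```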
It is here that the fundamental forces that govern the Universe are believed to have begun separating from each other. The first step in this was the force of gravitation separating from the gauge forces, which account for the strong and weak nuclear forces and electromagnetism. Then, from 10⁻³⁶ to 10⁻³² seconds after the Big Bang, the temperature of the Universe was low enough (10²⁸ K) that electromagnetism and the weak nuclear force were able to separate as well.
With the creation of the first fundamental forces of the Universe, the Inflation Epoch began, lasting from 10⁻³² seconds in Planck time to an unknown point. Most cosmological models suggest that the Universe at this point was filled homogeneously with a high-energy density, and that the incredibly high temperatures and pressure gave rise to rapid expansion and cooling.
This began at 10⁻³⁷ seconds, when the phase transition that caused the separation of forces also led to a period where the Universe grew exponentially. It was also at this point in time that baryogenesis occurred, which refers to a hypothetical event where temperatures were so high that the random motions of particles occurred at relativistic speeds.
As a result of this, particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions, which is believed to have led to the predominance of matter over antimatter in the present Universe. After inflation stopped, the Universe consisted of a quark–gluon plasma, as well as all other elementary particles. From this point onward, the Universe began to cool and matter coalesced and formed.
As the Universe continued to decrease in density and temperature, the Cooling Epoch began. This was characterized by the energy of particles decreasing and phase transitions continuing until the fundamental forces of physics and elementary particles changed into their present form. Since particle energies would have dropped to values that can be obtained by particle physics experiments, this period onward is subject to less speculation.
For example, scientists believe that about 10⁻¹¹ seconds after the Big Bang, particle energies dropped considerably. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons, and a small excess of quarks over antiquarks led to a small excess of baryons over antibaryons.
Since temperatures were not high enough to create new proton-antiproton pairs (or neutron-antineutron pairs), mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and neutrons and none of their antiparticles. A similar process happened at about 1 second after the Big Bang for electrons and positrons.
After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the Universe was dominated by photons – and to a lesser extent, neutrinos. A few minutes into the expansion, the period known as Big Bang nucleosynthesis also began.
Thanks to temperatures dropping to 1 billion kelvin and energy densities dropping to about the equivalent of air, neutrons and protons began to combine to form the Universe’s first deuterium (a stable isotope of hydrogen) and helium atoms. However, most of the Universe’s protons remained uncombined as hydrogen nuclei.
After about 379,000 years, electrons combined with these nuclei to form atoms (again, mostly hydrogen), while the radiation decoupled from matter and continued to expand through space, largely unimpeded. This radiation is now known to be what constitutes the Cosmic Microwave Background (CMB), which today is the oldest light in the Universe.
As the CMB expanded, it gradually lost density and energy, and is currently estimated to have a temperature of 2.7260 ± 0.0013 K (-270.424 °C / -454.763 °F) and an energy density of 0.25 eV/cm³ (or 4.005×10⁻¹⁴ J/m³; 400–500 photons/cm³). The CMB can be seen in all directions at a light-travel distance of roughly 13.8 billion light years, but estimates of its actual (comoving) distance place it at about 46 billion light years from us.
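Those two figures – the temperature and the energy density – are consistent with each other, since the CMB is near-perfect blackbody radiation. A rough sketch of the check, using the blackbody relation u = aT⁴ (constants from standard references):

```python
# Consistency check: the CMB's energy density follows from its temperature
# via the blackbody relation u = a * T^4, where a is the radiation constant.
SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8      # speed of light, m/s
A_RAD = 4 * SIGMA / C # radiation constant, ~7.566e-16 J m^-3 K^-4

T_CMB = 2.726                # measured CMB temperature, K
u = A_RAD * T_CMB**4         # energy density in J/m^3

EV = 1.602177e-19            # joules per electron-volt
u_ev_cm3 = u / EV / 1e6      # convert J/m^3 -> eV/cm^3

print(f"{u:.2e} J/m^3")          # ~4.2e-14, close to the quoted value
print(f"{u_ev_cm3:.2f} eV/cm^3") # ~0.26, close to the quoted ~0.25
```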
Evolution of the Universe:
Over the course of the several billion years that followed, the slightly denser regions of the Universe’s matter (which was almost uniformly distributed) began to become gravitationally attracted to each other. They therefore grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures that we regularly observe today.
This is what is known as the Structure Epoch, since it was during this time that the modern Universe began to take shape. This consisted of visible matter distributed in structures of various sizes (ranging from stars and planets to galaxies, galaxy clusters, and superclusters) where matter is concentrated, and which are separated by enormous gulfs containing few galaxies.
The details of this process depend on the amount and type of matter in the Universe. Cold dark matter, warm dark matter, hot dark matter, and baryonic matter are the four suggested types. However, the Lambda-Cold Dark Matter model (Lambda-CDM), in which the dark matter particles moved slowly compared to the speed of light, is considered to be the standard model of Big Bang cosmology, as it best fits the available data.
In this model, cold dark matter is estimated to make up about 23% of the matter/energy of the Universe, while baryonic matter makes up about 4.6%. The Lambda refers to the Cosmological Constant, a theory originally proposed by Albert Einstein that attempted to show that the balance of mass-energy in the Universe remains static.
In this case, it is associated with dark energy, which served to accelerate the expansion of the Universe and keep its large-scale structure largely uniform. The existence of dark energy is based on multiple lines of evidence, all of which indicate that the Universe is permeated by it. Based on observations, it is estimated that 73% of the Universe is made up of this energy.
During the earliest phases of the Universe, when all of the baryonic matter was more closely spaced together, gravity predominated. However, after billions of years of expansion, the growing abundance of dark energy led it to begin dominating interactions between galaxies. This triggered an acceleration, which is known as the Cosmic Acceleration Epoch.
When this period began is subject to debate, but it is estimated to have begun roughly 8.8 billion years after the Big Bang (5 billion years ago). Cosmologists rely on both quantum mechanics and Einstein’s General Relativity to describe the process of cosmic evolution that took place during this period and any time after the Inflationary Epoch.
Through a rigorous process of observations and modeling, scientists have determined that this evolutionary period does accord with Einstein’s field equations, though the true nature of dark energy remains elusive. What’s more, there are no well-supported models that are capable of determining what took place in the Universe prior to roughly 10⁻¹⁵ seconds after the Big Bang.
However, ongoing experiments using CERN’s Large Hadron Collider (LHC) seek to recreate the energy conditions that would have existed during the Big Bang, which is also expected to reveal physics that go beyond the realm of the Standard Model.
Any breakthroughs in this area will likely lead to a unified theory of quantum gravitation, where scientists will finally be able to understand how gravity interacts with the three other fundamental forces of physics – electromagnetism, the weak nuclear force and the strong nuclear force. This, in turn, will also help us to understand what truly happened during the earliest epochs of the Universe.
Structure of the Universe:
The actual size, shape and large-scale structure of the Universe has been the subject of ongoing research. Whereas the oldest light in the Universe that can be observed is 13.8 billion light years away (the CMB), this is not the actual extent of the Universe. Given that the Universe has been in a state of expansion for billions of years, and at velocities that exceed the speed of light, the actual boundary extends far beyond what we can see.
Our current cosmological models indicate that the Universe measures some 91 billion light years (28 billion parsecs) in diameter. In other words, the observable Universe extends outwards from our Solar System to a distance of roughly 46 billion light years in all directions. However, given that the edge of the Universe is not observable, it is not yet clear whether the Universe actually has an edge. For all we know, it goes on forever!
Within the observable Universe, matter is distributed in a highly structured fashion. Within galaxies, this consists of large concentrations – i.e. planets, stars, and nebulas – interspersed with large areas of empty space (i.e. interplanetary space and the interstellar medium).
Things are much the same at larger scales, with galaxies being separated by volumes of space filled with gas and dust. At the largest scale, where galaxy clusters and superclusters exist, you have a wispy network of large-scale structures consisting of dense filaments of matter and gigantic cosmic voids.
In terms of its shape, spacetime may exist in one of three possible configurations – positively-curved, negatively-curved and flat. These possibilities are based on the existence of at least four dimensions of space-time (an x-coordinate, a y-coordinate, a z-coordinate, and time), and depend upon the nature of cosmic expansion and whether or not the Universe is finite or infinite.
A positively-curved (or closed) Universe would resemble a four-dimensional sphere that would be finite in space and with no discernible edge. A negatively-curved (or open) Universe would look like a four-dimensional “saddle” and would have no boundaries in space or time.
In the former scenario, the Universe would have to stop expanding due to an overabundance of energy. In the latter, it would contain too little energy to ever stop expanding. In the third and final scenario – a flat Universe – a critical amount of energy would exist and its expansion would only halt after an infinite amount of time.
Fate of the Universe:
Hypothesizing that the Universe had a starting point naturally gives rise to questions about a possible end point. If the Universe began as a tiny point of infinite density that started to expand, does that mean it will continue to expand indefinitely? Or will it one day run out of expansive force, and begin retreating inward until all matter crunches back into a tiny ball?
Answering this question has been a major focus of cosmologists ever since the debate about which model of the Universe was the correct one began. With the acceptance of the Big Bang Theory, but prior to the observation of dark energy in the 1990s, cosmologists had come to agree on two scenarios as being the most likely outcomes for our Universe.
In the first, commonly known as the “Big Crunch” scenario, the Universe will reach a maximum size and then begin to collapse in on itself. This will only be possible if the mass density of the Universe is greater than the critical density. In other words, as long as the density of matter remains at or above a certain value (1–3 × 10⁻²⁶ kg of matter per m³), the Universe will eventually contract.
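The critical density mentioned above follows from the Friedmann equations as ρc = 3H²/8πG. A minimal sketch of the calculation, assuming a Hubble constant of roughly 70 km/s per megaparsec:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
H0_KM_S_MPC = 70.0   # assumed Hubble constant, km/s per megaparsec
MPC_IN_M = 3.086e22  # metres in one megaparsec

# Convert H0 to SI units (1/s)
H0 = H0_KM_S_MPC * 1000.0 / MPC_IN_M

# Critical density: rho_c = 3 H^2 / (8 pi G)
rho_c = 3 * H0**2 / (8 * math.pi * G)

print(f"Critical density: {rho_c:.2e} kg/m^3")  # on the order of 1e-26 kg/m^3
```

The result, roughly 9 × 10⁻²⁷ kg/m³, sits near the lower end of the range quoted above; the exact value depends on the Hubble constant assumed.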
Alternatively, if the density in the Universe were equal to or below the critical density, the expansion would slow down but never stop. In this scenario, known as the “Big Freeze”, the Universe would go on until star formation eventually ceased with the consumption of all the interstellar gas in each galaxy. Meanwhile, all existing stars would burn out and become white dwarfs, neutron stars, and black holes.
Very gradually, collisions between these black holes would result in mass accumulating into larger and larger black holes. The average temperature of the Universe would approach absolute zero, and black holes would evaporate after emitting the last of their Hawking radiation. Finally, the entropy of the Universe would increase to the point where no organized form of energy could be extracted from it (a scenario known as “heat death”).
Modern observations, which include the existence of dark energy and its influence on cosmic expansion, have led to the conclusion that more and more of the currently visible Universe will pass beyond our event horizon (i.e. the CMB, the edge of what we can see) and become invisible to us. The eventual result of this is not currently known, but “heat death” is considered a likely end point in this scenario too.
Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion. This scenario is known as the “Big Rip”, in which the expansion of the Universe itself will eventually be its undoing.
History of Study:
Strictly speaking, human beings have been contemplating and studying the nature of the Universe since prehistoric times. As such, the earliest accounts of how the Universe came to be were mythological in nature and passed down orally from one generation to the next. In these stories, the world, space, time, and all life began with a creation event, where a God or Gods were responsible for creating everything.
Astronomy also began to emerge as a field of study by the time of the Ancient Babylonians. Systems of constellations and astrological calendars prepared by Babylonian scholars as early as the 2nd millennium BCE would go on to inform the cosmological and astrological traditions of cultures for thousands of years to come.
By Classical Antiquity, the notion of a Universe that was dictated by physical laws began to emerge. Between Greek and Indian scholars, explanations for creation began to become philosophical in nature, emphasizing cause and effect rather than divine agency. The earliest examples include Thales and Anaximander, two pre-Socratic Greek scholars who argued that everything was born of a primordial form of matter.
By the 5th century BCE, pre-Socratic philosopher Empedocles became the first western scholar to propose a Universe composed of four elements – earth, air, water and fire. This philosophy became very popular in western circles, and was similar to the Chinese system of five elements – metal, wood, water, fire, and earth – that emerged around the same time.
It was not until Democritus, the 5th/4th century BCE Greek philosopher, that a Universe composed of indivisible particles (atoms) was proposed. The Indian philosopher Kanada (who lived in the 6th or 2nd century BCE) took this philosophy further by proposing that light and heat were the same substance in different forms. The 5th century CE Buddhist philosopher Dignāga took this even further, proposing that all matter was made up of energy.
The notion of finite time was also a key feature of the Abrahamic religions – Judaism, Christianity and Islam. Perhaps inspired by the Zoroastrian concept of the Day of Judgement, the belief that the Universe had a beginning and end would go on to inform western concepts of cosmology even to the present day.
Between the 2nd millennium BCE and the 2nd century CE, astronomy and astrology continued to develop and evolve. In addition to monitoring the proper motions of the planets and the movement of the constellations through the Zodiac, Greek astronomers also articulated the geocentric model of the Universe, where the Sun, planets and stars revolve around the Earth.
These traditions are best described in the 2nd century CE mathematical and astronomical treatise, the Almagest, which was written by Greek-Egyptian astronomer Claudius Ptolemaeus (aka. Ptolemy). This treatise and the cosmological model it espoused would be considered canon by medieval European and Islamic scholars for over a thousand years to come.
However, even before the Scientific Revolution (ca. 16th to 18th centuries), there were astronomers who proposed a heliocentric model of the Universe – where the Earth, planets and stars revolved around the Sun. These included Greek astronomer Aristarchus of Samos (ca. 310 – 230 BCE), and Hellenistic astronomer and philosopher Seleucus of Seleucia (190 – 150 BCE).
During the Middle Ages, Indian, Persian and Arabic philosophers and scholars maintained and expanded on Classical astronomy. In addition to keeping Ptolemaic and non-Aristotelian ideas alive, they also proposed revolutionary ideas like the rotation of the Earth. Some scholars – such as Indian astronomer Aryabhata and Persian astronomers Albumasar and Al-Sijzi – even advanced versions of a heliocentric Universe.
By the 16th century, Nicolaus Copernicus proposed the most complete concept of a heliocentric Universe by resolving lingering mathematical problems with the theory. His ideas were first expressed in the 40-page manuscript titled Commentariolus (“Little Commentary”), which described a heliocentric model based on seven general principles. These seven principles stated that:
Celestial bodies do not all revolve around a single point
The center of Earth is the center of the lunar sphere—the orbit of the moon around Earth; all the spheres rotate around the Sun, which is near the center of the Universe
The distance between Earth and the Sun is an insignificant fraction of the distance from Earth and Sun to the stars, so parallax is not observed in the stars
The stars are immovable – their apparent daily motion is caused by the daily rotation of Earth
Earth is moved in a sphere around the Sun, causing the apparent annual migration of the Sun
Earth has more than one motion
Earth’s orbital motion around the Sun causes the seeming reverse in direction of the motions of the planets.
A more comprehensive treatment of his ideas was completed in 1532, when Copernicus finished his magnum opus – De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres). In it, he advanced his seven major arguments, but in more detailed form and with detailed computations to back them up. Due to fears of persecution and backlash, this volume was not published until 1543, the year of his death.
His ideas would be further refined by the 16th/17th century mathematician, astronomer and inventor Galileo Galilei. Using a telescope of his own creation, Galileo made recorded observations of the Moon, the Sun, and Jupiter which demonstrated flaws in the geocentric model of the Universe while also showcasing the internal consistency of the Copernican model.
His observations were published in several different volumes throughout the early 17th century. His observations of the cratered surface of the Moon and of Jupiter and its largest moons were detailed in 1610 in his Sidereus Nuncius (The Starry Messenger), while his observations of sunspots were described in On the Spots Observed in the Sun (1610).
Galileo also recorded in the Starry Messenger his observations of the Milky Way, which was previously believed to be nebulous. Instead, Galileo found that it was a multitude of stars packed so densely together that it appeared from a distance to look like clouds – stars that were much farther away than previously thought.
In 1632, Galileo finally addressed the “Great Debate” in his treatise Dialogo sopra i due massimi sistemi del mondo (Dialogue Concerning the Two Chief World Systems), in which he advocated the heliocentric model over the geocentric. Using his own telescopic observations, modern physics and rigorous logic, Galileo’s arguments effectively undermined the basis of Aristotle’s and Ptolemy’s system for a growing and receptive audience.
Johannes Kepler advanced the model further with his theory of the elliptical orbits of the planets. Combined with accurate tables that predicted the positions of the planets, the Copernican model was effectively proven. From the middle of the seventeenth century onward, there were few astronomers who were not Copernicans.
The next great contribution came from Sir Isaac Newton, who laid out his three laws of motion in his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The first of these states that when viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
The vector sum of the external forces (F) on an object is equal to the mass (m) of that object multiplied by the acceleration vector (a) of the object. In mathematical form, this is expressed as: F=ma
When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
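The second law in particular lends itself to a trivial worked example; the numbers below are arbitrary, chosen only to illustrate F = ma:

```python
# Newton's second law: F = m * a (arbitrary illustrative values)
mass = 10.0          # kg
acceleration = 2.5   # m/s^2

force = mass * acceleration
print(f"Force: {force} N")  # 25.0 N

# By the third law, the reaction force is equal in magnitude and opposite in direction
reaction_force = -force
print(f"Reaction force: {reaction_force} N")  # -25.0 N
```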
Together, these laws described the relationship between any object, the forces acting upon it, and the resulting motion, thus laying the foundation for classical mechanics. The laws also allowed Newton to calculate the mass of each planet, calculate the flattening of the Earth at the poles and the bulge at the equator, and how the gravitational pull of the Sun and Moon create the Earth’s tides.
His calculus-like method of geometrical analysis was also able to account for the speed of sound in air (based on Boyle’s Law), the precession of the equinoxes – which he showed were a result of the Moon’s gravitational attraction to the Earth – and determine the orbits of the comets. This volume would have a profound effect on the sciences, with its principles remaining canon for the following 200 years.
Another major discovery took place in 1755, when Immanuel Kant proposed that the Milky Way was a large collection of stars held together by mutual gravity. Just like the Solar System, this collection of stars would be rotating and flattened out as a disk, with the Solar System embedded within it.
Astronomer William Herschel attempted to actually map out the shape of the Milky Way in 1785, but he didn’t realize that large portions of the galaxy are obscured by gas and dust, which hides its true shape. The next great leap in the study of the Universe and the laws that govern it did not come until the 20th century, with the development of Einstein’s theories of Special and General Relativity.
Einstein’s groundbreaking theories about space and time (summed up simply as E=mc²) were in part the result of his attempts to reconcile Newton’s laws of mechanics with the laws of electromagnetism (as characterized by Maxwell’s equations and the Lorentz force law). Eventually, Einstein would resolve the inconsistency between these two fields by proposing Special Relativity in his 1905 paper, “On the Electrodynamics of Moving Bodies”.
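Mass–energy equivalence is easy to illustrate with a one-line calculation: even a single kilogram of matter corresponds to an enormous amount of rest energy.

```python
# E = m * c^2: the rest energy of one kilogram of matter
C = 2.998e8  # speed of light in m/s

mass = 1.0  # kg
energy = mass * C**2

print(f"Rest energy of 1 kg: {energy:.2e} J")  # roughly 9e16 joules
```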
Basically, this theory stated that the speed of light is the same in all inertial reference frames. This broke with the previously-held consensus that light traveling through a moving medium would be dragged along by that medium, which implied that the speed of light is the sum of its speed through the medium plus the speed of that medium. That assumption led to multiple problems that proved insurmountable prior to Einstein’s theory.
Special Relativity not only reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, but also simplified the mathematical calculations by doing away with extraneous explanations used by other scientists. It also made the existence of a medium entirely superfluous, accorded with the directly observed speed of light, and accounted for the observed aberrations.
Between 1907 and 1911, Einstein began considering how Special Relativity could be applied to gravity fields – what would come to be known as the Theory of General Relativity. This culminated in 1911 with the publication of “On the Influence of Gravitation on the Propagation of Light“, in which he predicted that time is relative to the observer and dependent on their position within a gravity field.
He also advanced what is known as the Equivalence Principle, which states that gravitational mass is identical to inertial mass. Einstein also predicted the phenomenon of gravitational time dilation – where two observers situated at varying distances from a gravitating mass perceive a difference in the amount of time between two events. Another major outgrowth of his theories was the existence of Black Holes and an expanding Universe.
In 1915, a few months after Einstein had published his Theory of General Relativity, German physicist and astronomer Karl Schwarzschild found a solution to the Einstein field equations that described the gravitational field of a point and spherical mass. From this solution comes what is now called the Schwarzschild radius, which describes the point at which a sphere’s mass is so compressed that the escape velocity from its surface would equal the speed of light.
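The Schwarzschild radius follows from setting the escape velocity equal to the speed of light, which gives rs = 2GM/c². A short sketch, using the Sun’s mass as an example:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius to which a mass must be compressed for light not to escape."""
    return 2 * G * mass_kg / C**2

rs = schwarzschild_radius(M_SUN)
print(f"Schwarzschild radius of the Sun: {rs:.0f} m")  # roughly 3 km
```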
In 1931, Indian-American astrophysicist Subrahmanyan Chandrasekhar calculated, using Special Relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass would collapse in on itself. In 1939, Robert Oppenheimer and others concurred with Chandrasekhar’s analysis, claiming that neutron stars above a prescribed limit would collapse into black holes.
Another consequence of General Relativity was the prediction that the Universe was either in a state of expansion or contraction. In 1929, Edwin Hubble confirmed that the former was the case. At the time, this appeared to disprove Einstein’s theory of a Cosmological Constant, which was a force which “held back gravity” to ensure that the distribution of matter in the Universe remained uniform over time.
To this, Edwin Hubble demonstrated using redshift measurements that galaxies were moving away from the Milky Way. What’s more, he showed that galaxies farther from Earth appeared to be receding faster – a phenomenon that would come to be known as Hubble’s Law. Hubble also attempted to constrain the value of the expansion factor – which he estimated at 500 km/sec per Megaparsec of space (a value that has since been revised).
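Hubble’s Law is a simple linear relation, v = H₀ × d. The sketch below contrasts Hubble’s original estimate of the constant with an approximate modern value (~70 km/s/Mpc, assumed here for illustration):

```python
def recession_velocity(distance_mpc: float, h0: float) -> float:
    """Hubble's Law: v = H0 * d, returning velocity in km/s."""
    return h0 * distance_mpc

DISTANCE = 100.0  # Mpc, an arbitrary example distance

v_hubble_1929 = recession_velocity(DISTANCE, 500.0)  # Hubble's original estimate
v_modern = recession_velocity(DISTANCE, 70.0)        # approximate modern value

print(f"At {DISTANCE} Mpc: {v_hubble_1929} km/s (1929) vs {v_modern} km/s (modern)")
```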
And then in 1931, Georges Lemaitre, a Belgian physicist and Roman Catholic priest, articulated an idea that would give rise to the Big Bang Theory. After confirming independently that the Universe was in a state of expansion, he suggested that the current expansion of the Universe meant that the farther back in time one went, the smaller the Universe would be.
In other words, at some point in the past, the entire mass of the Universe would have been concentrated on a single point. These discoveries triggered a debate between physicists throughout the 1920s and 30s, with the majority advocating that the Universe was in a steady state (i.e. the Steady State Theory). In this model, new matter is continuously created as the Universe expands, thus preserving the uniformity and density of matter over time.
After World War II, the debate came to a head between proponents of the Steady State Model and proponents of the Big Bang Theory – which was growing in popularity. Eventually, the observational evidence began to favor the Big Bang over the Steady State, which included the discovery and confirmation of the CMB in 1965. Since that time, astronomers and cosmologists have sought to resolve theoretical problems arising from this model.
In the 1960s, for example, Dark Matter (originally proposed in 1932 by Jan Oort) was proposed as an explanation for the apparent “missing mass” of the Universe. In addition, papers submitted by Stephen Hawking and other physicists showed that singularities were an inevitable initial condition of general relativity and a Big Bang model of cosmology.
In 1981, physicist Alan Guth theorized a period of rapid cosmic expansion (aka. the “Inflation” Epoch) that resolved other theoretical problems. The 1990s also saw the rise of Dark Energy as an attempt to resolve outstanding issues in cosmology. In addition to providing an explanation for the Universe’s missing mass (along with Dark Matter), it also explained why the expansion of the Universe is accelerating, and offered a resolution to Einstein’s Cosmological Constant.
Significant progress has been made in our study of the Universe thanks to advances in telescopes, satellites, and computer simulations. These have allowed astronomers and cosmologists to see farther into the Universe (and hence, farther back in time). This has in turn helped them to gain a better understanding of its true age, and make more precise calculations of its matter-energy density.
For example, in June of 2016, NASA announced findings that indicate that the Universe is expanding even faster than previously thought. Based on new data provided by the Hubble Space Telescope (which was then compared to data from the WMAP and the Planck Observatory) it appeared that the Hubble Constant was 5% to 9% greater than expected.
Next-generation telescopes like the James Webb Space Telescope (JWST) and ground-based telescopes like the Extremely Large Telescope (ELT) are also expected to allow for additional breakthroughs in our understanding of the Universe in the coming years and decades.
Without a doubt, the Universe is beyond the reckoning of our minds. Our best estimates say that it is unfathomably vast, but for all we know, it could very well extend to infinity. What’s more, its age is almost impossible to contemplate in strictly human terms. In the end, our understanding of it is nothing less than the result of thousands of years of constant and progressive study.
And in spite of that, we’ve only really begun to scratch the surface of the grand enigma that is the Universe. Perhaps some day we will be able to see to the edge of it (assuming it has one) and be able to resolve the most fundamental questions about how all things in the Universe interact. Until that time, all we can do is measure what we don’t know by what we do, and keep exploring!
To speed you on your way, here is a list of topics we hope you will enjoy and that will answer your questions. Good luck with your exploration!
Since the 1960s, astronomers have been aware of the electromagnetic background radiation that pervades the Universe. Known as the Cosmic Microwave Background, this radiation is the oldest light in the Universe and what is left over from the Big Bang. By 2004, astronomers also became aware that a large region within the CMB appeared to be colder than its surroundings.
Known as the “CMB Cold Spot”, scientists have puzzled over this anomaly for years, with explanations ranging from a data artifact to it being caused by a supervoid. According to a new study conducted by a team of scientists from Durham University, the presence of a supervoid has been ruled out. This conclusion once again opens the door to more exotic explanations – like the existence of a parallel Universe!
The Cold Spot is one of several anomalies that astronomers have been studying since the first maps of the CMB were created using data from the Wilkinson Microwave Anisotropy Probe (WMAP). These anomalies are regions in the CMB that fall beneath the average background temperature of 2.73 degrees above absolute zero (-270.42 °C; -454.76 °F). In the case of the Cold Spot, the area is just 0.00015 degrees colder than its surroundings.
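The quoted temperature conversions are straightforward to verify with a few lines of Python:

```python
def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k: float) -> float:
    return t_k * 9.0 / 5.0 - 459.67

T_CMB = 2.73  # average CMB temperature in kelvin

print(f"{T_CMB} K = {kelvin_to_celsius(T_CMB):.2f} C")     # -270.42 C
print(f"{T_CMB} K = {kelvin_to_fahrenheit(T_CMB):.2f} F")  # -454.76 F
```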
And yet, this temperature difference is enough that the Cold Spot has become something of a thorn in the side of standard models of cosmology. Previously, the smart money appeared to be on it being caused by a supervoid – an area of space measuring billions of light years across which contains few galaxies. To test this theory, the Durham team conducted a survey of the galaxies in the region.
The survey relied on redshift measurements – a technique that measures the extent to which visible light coming from an object is shifted towards the red end of the spectrum, and which has been the standard method for determining the distance to other galaxies for over a century. Using data from the Anglo-Australian Telescope, the Durham team measured the redshifts of 7,000 nearby galaxies.
Based on this high-fidelity dataset, the researchers found no evidence that the Cold Spot corresponded to a relative lack of galaxies. In other words, there was no indication that the region is a supervoid. The results of their study will be published in the Monthly Notices of the Royal Astronomical Society (MNRAS) under the title “Evidence Against a Supervoid Causing the CMB Cold Spot“.
“The voids we have detected cannot explain the Cold Spot under standard cosmology. There is the possibility that some non-standard model could be proposed to link the two in the future but our data place powerful constraints on any attempt to do that.”
Specifically, the Durham team found that the Cold Spot region could be split into smaller voids, each of which was surrounded by clusters of galaxies. This distribution was consistent with a control field chosen for the study, both of which exhibited the same “soap bubble” structure. The question therefore arises: if the Cold Spot is not the result of a void or a relative lack of galaxies, what is causing it?
This is where the more exotic explanations come in, which emphasize that the Cold Spot may be due to something that exists outside the standard model of cosmology. As Tom Shanks, a Professor with the Dept. of Physics at Durham and a co-author of the study, explained:
“Perhaps the most exciting of these is that the Cold Spot was caused by a collision between our universe and another bubble Universe. If further, more detailed, analysis of CMB data proves this to be the case then the Cold Spot might be taken as the first evidence for the multiverse – and billions of other Universes may exist like our own.”
Multiverse Theory, a term first coined by philosopher and psychologist William James, states that there may be multiple – or even an infinite number of – Universes that exist parallel to our own. Together, these Universes comprise the entirety of existence and all cosmological phenomena – i.e. space, time, matter, energy, and all of the physical laws that bind them.
Whereas it is often treated as a philosophical concept, the theory arose in part from the study of cosmological forces, like black holes and problems arising from the Big Bang Theory. In addition, variations on multiverse theory have been suggested as potential resolutions to theories that go beyond the Standard Model of particle physics – such as String Theory and M-theory.
Another variation – the Many-Worlds interpretation – has also been offered as a possible resolution for the wavefunction of subatomic particles. Essentially, it states that all possible outcomes in quantum mechanics exist in alternate universes, and there really is no such thing as “wavefunction collapse”. Could it therefore be argued that an alternate or parallel Universe is simply too close to our own, and thus responsible for the anomalies we see in the CMB?
As explanations go, it certainly is exciting, if perhaps a bit fantastic. And the Durham team is not prepared to rule out that the Cold Spot could be the result of fluctuations that can be explained by the standard model of cosmology. Right now, the only thing that can be said definitively is that the Cold Spot cannot be explained by something as straightforward as a supervoid and the absence of galaxies.
And in the meantime, additional surveys and experiments need to be conducted. Otherwise, this mystery may become a real sticking point for cosmology!
Back in 1997, a team of leading scientists and cosmologists came together to establish the COSMOS supercomputing center at Cambridge University. Under the auspices of famed physicist Stephen Hawking, this facility and its supercomputer are dedicated to the research of cosmology, astrophysics and particle physics – ultimately, for the purpose of unlocking the deeper mysteries of the Universe.
Yesterday, in what was themed as a “tribute to Stephen Hawking”, the COSMOS center announced that it will be embarking on what is perhaps the boldest experiment in cosmological mapping. Essentially, they intend to create the most detailed 3D map of the early universe to date, plotting the position of billions of cosmic structures including supernovas, black holes, and galaxies.
This map will be created using the facility’s supercomputer, located in Cambridge’s Department of Applied Mathematics and Theoretical Physics. Currently, it is the largest shared-memory computer in Europe, boasting 1,856 Intel Xeon E5 processor cores, 31 Intel Many Integrated Core (MIC) co-processors, and 14.5 terabytes of globally shared memory.
The 3D map will also rely on data obtained by two previous surveys – the ESA’s Planck satellite and the Dark Energy Survey. From the former, the COSMOS team will use the detailed images of the Cosmic Microwave Background (CMB) – the radiation left over from the Big Bang – that were released in 2013. These images of the oldest light in the cosmos allowed physicists to refine their estimates for the age of the Universe (13.82 billion years) and its rate of expansion.
This information will be combined with data from the Dark Energy Survey which shows the expansion of the Universe over the course of the last 10 billion years. From all of this, the COSMOS team will compare the early distribution of matter in the Universe with its subsequent expansion to see how the two link up.
The project is also expected to receive a boost from the deployment of the ESA’s Euclid probe, which is scheduled for launch in 2020. This mission will measure the shapes and redshifts of galaxies (looking 10 billion years into the past), thereby helping scientists to understand the geometry of the “dark Universe” – i.e. how dark matter and dark energy influence it as a whole.
The plans for the COSMOS center’s 3D map will be unveiled at the Starmus science conference, taking place from June 27th to July 2nd, 2016, in Tenerife – the largest of the Canary Islands, located off the coast of Spain. At this conference, Hawking will be discussing the details of the COSMOS project.
In addition to being the man who brought the COSMOS team together, the theme of the project – “Beyond the Horizon – Tribute to Stephen Hawking” – was selected because of Hawking’s long-standing commitment to physics and cosmology. “Hawking is a great theorist but he always wants to test his theories against observations,” said Prof. Shellard in a Cambridge press release. “What will emerge is a 3D map of the universe with the positions of billions of galaxies.”
Hawking will also present the first ever Stephen Hawking Medal for Science Communication, an award established by Hawking that will be bestowed on those who help promote science to the public through media – i.e. cinema, music, writing and art. Other speakers attending the event include Neil deGrasse Tyson, Chris Hadfield, Martin Rees, Adam Riess, Rusty Schweickart, Eric Betzig, Neil Turok, and Kip Thorne.
Naturally, it is hoped that the creation of this 3D map will confirm current cosmological theories – including the accepted age of the Universe and whether or not the Standard Model of cosmology, aka the Lambda Cold Dark Matter (ΛCDM) model, is in fact the correct one. As Hawking is surely hoping, this could bring us one step closer to a Theory of Everything!
Something’s up in cosmology that may force us to re-write a few textbooks. It’s all centred around the measurement of the expansion of the Universe, which is, obviously, a pretty key part of our understanding of the cosmos.
The expansion of the Universe is regulated by two things: Dark Energy and Dark Matter. They’re like the yin and yang of the cosmos: one drives expansion, while the other puts the brakes on it. Dark Energy pushes the Universe to continually expand, while Dark Matter (together with ordinary matter) provides the gravity that retards that expansion. And up until now, Dark Energy has appeared to be a constant force, never wavering.
How is this known? Well, the Cosmic Microwave Background (CMB) is one way the expansion is measured. The CMB is like an echo from the early days of the Universe. It is the light released about 380,000 years after the Big Bang, when the Universe had cooled enough for neutral atoms to form and photons could finally travel freely through space. The CMB is the source for most of what we know of Dark Energy and Dark Matter. (You can hear the CMB for yourself by turning on a household radio and tuning into static. A small percentage of that static is from the CMB. It’s like listening to the echo of the Big Bang.)
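Why does the echo of the Big Bang show up in the microwave band at all? The CMB today is thermal radiation at a measured temperature of about 2.725 K, and Wien’s displacement law tells us where a thermal spectrum peaks. A minimal sketch (the constants are standard physical values, not from the article):

```python
# Wien's displacement law: lambda_peak = b / T.
# For the CMB's measured temperature of ~2.725 K, the spectrum
# peaks at millimetre wavelengths -- squarely in the microwave band.
b = 2.898e-3      # Wien's displacement constant, metre-kelvin
t_cmb = 2.725     # CMB temperature today, kelvin

lam = b / t_cmb   # peak wavelength in metres
print(f"peak wavelength ~ {lam * 1000:.2f} mm")  # ~1.06 mm: microwaves
```

That millimetre-scale peak is why the relic radiation is “microwave” background, and why a small fraction of it leaks into ordinary radio static.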
The CMB has been measured and studied pretty thoroughly, most notably by the ESA’s Planck Observatory and by the Wilkinson Microwave Anisotropy Probe (WMAP). Planck, in particular, has given us a snapshot of the early Universe that has allowed cosmologists to predict the expansion of the Universe. But our understanding of that expansion doesn’t just come from studying the CMB; it also comes from the Hubble Constant.
The Hubble Constant is named after Edwin Hubble, an American astronomer who observed that the recession velocity of galaxies can be determined from their redshift. Hubble also observed Cepheid variable stars, a type of standard candle that gives us reliable measurements of distances to galaxies. Combining the two observations – the velocity and the distance – yielded a measurement for the expansion of the Universe.
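The relationship Hubble found is linear: velocity equals the Hubble Constant times distance, v = H0 × d. A minimal sketch of how H0 falls out of velocity–distance pairs (the numbers below are illustrative stand-ins, not real survey data):

```python
# Hubble's law: v = H0 * d. Recession velocity (km/s) grows in
# proportion to distance (megaparsecs). Illustrative values only.
distances_mpc = [50.0, 100.0, 200.0, 400.0]          # standard-candle distances
velocities_kms = [3500.0, 7100.0, 14200.0, 28400.0]  # velocities from redshift

# Least-squares fit of a line through the origin:
# H0 = sum(v*d) / sum(d*d)
h0 = sum(v * d for v, d in zip(velocities_kms, distances_mpc)) / \
     sum(d * d for d in distances_mpc)
print(f"H0 ~ {h0:.1f} km/s/Mpc")  # ~71 km/s/Mpc for these made-up points
```

Real measurements work the same way in spirit, just with far more galaxies and a careful error budget on each rung of the distance ladder.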
So we’ve had two ways to measure the expansion of the Universe, and they mostly agree with each other. There’ve been discrepancies between the two of a few percentage points, but that has been within the realm of measurement errors.
But now something’s changed.
In a new paper, Dr. Adam Riess of Johns Hopkins University and his team have reported a more stringent measurement of the expansion of the Universe. Riess and his team used the Hubble Space Telescope to observe standard candles in 18 host galaxies, reducing some of the uncertainty inherent in past studies of standard candles.
The result of this more accurate measurement is that the Hubble constant has been refined. And that, in turn, has increased the difference between the two ways the expansion of the Universe is measured. The gap between what the Hubble constant tells us is the rate of expansion, and what the CMB, as measured by the Planck spacecraft, tells us is the rate of expansion, is now 8%. And 8% is too large a discrepancy to be explained away as measurement error.
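To see where a figure like 8% comes from, compare the two headline values of the Hubble Constant (the numbers below are the approximate published values, rounded: Planck’s CMB-based estimate and the local standard-candle estimate):

```python
# Approximate H0 values in km/s/Mpc (rounded published figures):
h0_planck = 67.8   # early-Universe value, inferred from the CMB
h0_local = 73.2    # late-Universe value, from Cepheids and supernovae

# Fractional discrepancy of the local value relative to Planck's
discrepancy = (h0_local - h0_planck) / h0_planck * 100
print(f"discrepancy ~ {discrepancy:.1f}%")  # ~8%, as quoted above
```

The point of the new measurement is that the error bars on both numbers are now too small for an 8% gap to be shrugged off as noise.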
The fallout from this is that we may need to revise our standard model of cosmology to account for this, somehow. And right now, we can only guess what might need to be changed. There are at least a couple candidates, though.
It might be centred around Dark Matter, and how it behaves. It’s possible that Dark Matter is affected by a force in the Universe that doesn’t act on anything else. Since so little is known about Dark Matter, and the name itself is little more than a placeholder for something we are almost completely ignorant about, that could be it.
Or, it could be something to do with Dark Energy. Its name, too, is really just a placeholder for something we know almost nothing about. Maybe Dark Energy is not constant, as we have thought, but changes over time to become stronger now than in the past. That could account for the discrepancy.
A third possibility is that standard candles are not the reliable indicators of distance that we thought they were. We’ve refined our measurements of standard candles before, maybe we will again.
Where this all leads is open to speculation at this point. The rate of expansion of the Universe has changed before; about 7.5 billion years ago it accelerated. Maybe it’s changing again, right now in our time. Since Dark Energy occupies so-called empty space, maybe more of it is being created as expansion continues. Maybe we’re reaching another tipping or balancing point.
The only thing certain is that it is a mystery. One that we are driven to understand.