It’s Been Three Years Since We’ve Had a Supernova This Close

Artistic impression of a star going supernova, casting its chemically enriched contents into the universe. Credit: NASA/Swift/Skyworks Digital/Dana Berry

A supernova is one of the most impressive astronomical events anyone can possibly witness. Characterized by a massive explosion that takes place during the final stages of a massive star’s life (after millions of years of evolution), this sort of event is understandably quite rare. In fact, within the Milky Way Galaxy, a supernova is likely to happen just once a century.

But within the Fireworks Galaxy (aka. the spiral galaxy NGC 6946), which is located 22 million light years from Earth and has half as many stars as our galaxy, supernovae are about ten times more frequent. On May 13th, while examining this galaxy from his home in Utah, amateur astronomer Patrick Wiggins spotted what was later confirmed to be a Type II supernova.
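Putting those two figures together shows just how prolific the Fireworks Galaxy is on a per-star basis. A quick illustrative check in Python (the rates are the article's rough figures, not precise measurements):

```python
# Rough per-star supernova rate comparison, using the article's figures.
milky_way_rate = 1 / 100              # ~1 supernova per century in the Milky Way
fireworks_rate = 10 * milky_way_rate  # "about ten times more frequent"
star_ratio = 0.5                      # Fireworks has ~half as many stars

# Supernova rate per star, relative to the Milky Way:
per_star_factor = (fireworks_rate / star_ratio) / milky_way_rate
print(round(per_star_factor))  # -> 20
```

In other words, star for star, the Fireworks Galaxy produces supernovae at roughly twenty times the Milky Way's rate.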

To break this magnificent astronomical event down, most supernovae can be placed into one of two categories. Type I supernovae occur when a smaller star (typically a white dwarf) has consumed all of its nuclear fuel and then detonates after accreting additional matter from a nearby companion star. Type II supernovae are the result of massive stars undergoing core collapse on their own.

The confirmed supernova, “SN 2017eaw”, which can be seen on the top right side of the “Fireworks Galaxy”. Credit: Patrick Wiggins

In both cases, the result is a sudden and extreme increase in brightness, where the star blows off its outer layers and may temporarily outshine all the other stars in its galaxy. It then spends the next few months slowly fading, eventually leaving behind only a compact remnant such as a neutron star or a black hole. It was while surveying the Fireworks Galaxy with his own telescope that Wiggins noticed such a sudden burst in brightness, which had not been there just two nights before.

Wiggins’ finding was confirmed a day later (May 14th) by two supernova experts – Subo Dong and Krzysztof Z. Stanek, professors at Peking University and Ohio State University, respectively. After conducting observations of their own, they determined that what Wiggins had witnessed was a Type II supernova, which has since been designated SN 2017eaw.

In addition to being an amateur astronomer, Patrick Wiggins is also the public outreach educator for the University of Utah’s Department of Physics & Astronomy and the NASA Solar System Ambassador to Utah. This supernova, the third Wiggins has observed in his lifetime, is also the closest seen in three years, at a distance of about 22 million light years.

The last time a supernova was observed exploding this close to Earth was on January 22nd, 2014. At the time, students at the University of London Observatory spotted an exploding star (SN 2014J) in the nearby Cigar Galaxy (aka. M82), which is located around 12 million light years away. This was the closest supernova to be observed in recent decades.

Animation showing a comparison between M82 on Jan. 22nd, 2014, and Nov. 22nd, 2013. Credit: E. Guido/N. Howes/M. Nicolini

As such, the observation of a supernova at a comparatively close distance to Earth just three years later is a pretty impressive feat. And it is an additional feather in the cap of an amateur astronomer whose resume is already quite impressive! Besides the three supernovae he has observed, Wiggins has received many accolades over the years for his contributions to astronomy.

These include the Distinguished Public Service Medal, which is the highest civilian honor NASA can bestow. In addition, he discovered an asteroid in 2008 which the IAU – at Wiggins’ request – officially named “Univofutah”, in honor of the University of Utah. He is also a member of the Phun with Physics team, which provides free scientific lessons at the Natural History Museum of Utah.

Further Reading: University of Utah UNews

We Might Have a New Way to Push Back Space Radiation

Artist's depiction with cutaway section of the two giant donuts of radiation, called the Van Allen Belts, that surround Earth. Credit: NASA

Human beings have known for quite some time that our behavior has a significant influence on our planet. In fact, during the 20th century, humanity’s impact on the natural environment and climate became so profound that some geologists began to refer to the modern era as the “Anthropocene”. In this age, human agency is considered the dominant force shaping the planet.

But according to a comprehensive new study by an Anglo-American team of researchers, human beings might be shaping the near-space environment as well. According to the study, radio communications, EM radiation from nuclear testing and other human actions have led to the creation of a barrier around Earth that is shielding it against high-energy space radiation.

The study, which was published in the journal Space Science Reviews under the title “Anthropogenic Space Weather“, was conducted by a team of scientists from the US and Imperial College, London. Led by Dr. Tamas Gombosi, a professor at the University of Michigan and the director at the Center for Space Modelling, the team reviewed the impact anthropogenic processes have on Earth’s near-space environment.

These processes include very low frequency (VLF) and radio-frequency (RF) communications, which began in earnest during the 19th century and grew considerably during the 20th century. Things became more intense during the 1960s, when the United States and the Soviet Union began conducting high-altitude nuclear tests, which resulted in massive electromagnetic pulses (EMPs) in Earth’s atmosphere.

To top it off, the creation of large-scale power grids has also had an impact on the near-space environment. As they state in their study:

“The permanent existence, and growth, of power grids and of VLF transmitters around the globe means that it is unlikely that Earth’s present-day space environment is entirely “natural” – that is, that the environment today is the environment that existed at the onset of the 19th century. This can be concluded even though there continue to exist major uncertainties as to the nature of the physical processes that operate under the influence of both the natural environment and the anthropogenically-produced waves.”

The existence of radiation belts (or “toroids”) around Earth has been well known since the late 1950s. These belts were found to be the result of charged particles coming from the Sun (i.e. the “solar wind”) that are captured and held around Earth by its magnetic field. They were named the Van Allen Radiation Belts after their discoverer, the American space scientist James Van Allen.

The twin Radiation Belt Storm Probes, later renamed the Van Allen Probes. Credit: NASA/JHUAPL

The extent of these belts, their energy distribution, and their particle makeup have been the subject of multiple space missions since then. Similarly, studies began to be mounted around the same time to discover how human-generated charged particles, which would interact with Earth’s magnetic fields once they reached near-space, could contribute to artificial radiation belts.

However, it has been with the deployment of orbital missions like the Van Allen Probes (formerly the Radiation Belt Storm Probes) that scientists have been truly able to study these belts. In addition to the aforementioned Van Allen Belts, they have also taken note of the VLF bubble that radio transmissions have surrounded Earth with. As Phil Erickson, the assistant director at the MIT Haystack Observatory, said in a NASA press release:

“A number of experiments and observations have figured out that, under the right conditions, radio communications signals in the VLF frequency range can in fact affect the properties of the high-energy radiation environment around the Earth.”

One thing that the probes have noticed is the interesting way that the outward extent of the VLF bubble corresponds almost exactly with the inner and outer Van Allen radiation belts. What’s more, comparisons of the modern extent of the radiation belts with Van Allen Probe data show that the inner boundary is much farther away than it appeared to be during the 1960s (when VLF transmissions were lower).

Two giant belts of radiation surround Earth. The inner belt is dominated by protons and the outer one by electrons. Credit: NASA

What this could mean is that the VLF bubble we humans have been creating for over a century and a half has been removing excess radiation from the near-Earth environment. This could be good news for us, since the effects of charged particles on electronics and human health are well documented. And during periods of intense space weather – aka. solar flares – the effects can be downright devastating.

Given the opportunity for further study, we may find ways to predictably and reliably use VLF transmissions to make the near-Earth environment friendlier to humans and electronics alike. And with companies like SpaceX planning to bring internet access to the world through broadband satellites, and even larger plans for the commercialization of near-Earth orbit, anything that can mitigate the risk posed by radiation is welcome.

And be sure to check this video that illustrates the Van Allen Probes findings, courtesy of NASA:

Further Reading: NASA, Space Science Reviews

Dawn Gets Right in Between the Sun and Ceres and Takes this Video

Artist's rendition of the Dawn mission on approach to the protoplanet Ceres. Credit: NASA/JPL

The Dawn probe continues to excite and amaze! Since it achieved orbit around Ceres in March of 2015, it has been sending back an impressive stream of data and images on the protoplanet. In addition to capturing pictures of the mysterious “bright spots” on Ceres’ surface, it has also revealed evidence of cryovolcanism and the possibility of an interior ocean that could even support life.

Most recently, the Dawn probe conducted observations of the protoplanet while it was at opposition – directly between the Sun and Ceres’ surface – on April 29th. From this position, the craft was able to capture pictures of the Occator Crater, which contains the brightest spot on Ceres. These images were then stitched together by members of the mission team to create a short movie showcasing the view Dawn had of the protoplanet.

The images were snapped when the Dawn probe was at an altitude of about 20,000 km (12,000 mi) from Ceres’ surface. As you can see (by clicking on the image below), the short movie shows the protoplanet rotating so that the Occator Crater is featured prominently. This crater is unmistakable thanks to the way its bright spots (two side by side white dots) stand out from the bland, grey landscape.

NASA movie made of images taken by NASA’s Dawn spacecraft, from a position exactly between the sun and Ceres’ surface. Credits: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

This increase in brightness is attributable to the size of the grains of material on the surface, as well as their degree of porosity. As scientists have known for some time (thanks to Dawn mission data), these bright spots are salt deposits, which stand out because they are more reflective than their surroundings. But for the sake of the movie, this contrast was enhanced further in order to highlight the difference.

The observations were conducted as part of the latest phase of the Dawn mission, where it is recording cosmic rays in order to refine its earlier measurements of Ceres’ underground environment. In order to conduct these readings, the probe has been placed through an intricate set of maneuvers designed to shift its orbit around Ceres. Towards the end of April, this placed the probe in a position directly between the Sun and Ceres.

Based on previous data collected by ground-based telescopes and spacecraft that have viewed planetary bodies at opposition, the Dawn team predicted that Ceres would appear brighter from this vantage point. But rather than simply providing for some beautiful images of Ceres’ surface, the pictures are expected to reveal new details of the surface that are not discernible by visual inspection.

A view of Ceres in natural colour, pictured by the Dawn spacecraft in May 2015. Credit: NASA/JPL/Planetary Society/Justin Cowart

For more than two years now, the Dawn probe has been observing Ceres from a wider range of illumination angles than for just about any other body in the Solar System. This has provided scientists with the opportunity to gain new insights into its surface features and properties, and the forces that shape them. Such observations will come in very handy as they continue to probe Ceres’ surface for hints of what lies beneath.

For years, scientists have been of the opinion that Ceres harbors an interior ocean that could support life. In fact, the Dawn probe has already gathered spectral data that hint at the presence of organic molecules on the surface, which were reasoned to have been kicked up when a meteor impacted the surface. Characterizing the surface and subsurface environments will help determine whether this astronomical body really could support life.

At present, the Dawn probe is maintaining an elliptical orbit that is taking it farther away from Ceres. As of May 11th, NASA reported that the probe was in good health and functioning well, despite the malfunction that took place in April, when its third reaction wheel failed. The Dawn mission has already been extended, and it is expected to continue operating around Ceres through 2017.

Further Reading: NASA

Could the Closest Extrasolar Planet Be Habitable? Astronomers Plan to Find Out

Artist’s impression of Proxima b, which was discovered using the Radial Velocity method. Credit: ESO/M. Kornmesser

The extra-solar planet known as Proxima b has occupied a special place in the public mind ever since its existence was announced in August of 2016. As the closest exoplanet to our Solar System, its discovery has raised questions about the possibility of exploring it in the not-too-distant future. And even more tantalizing are the questions relating to its potential habitability.

Despite numerous studies that have attempted to determine whether the planet could be suitable for life as we know it, nothing definitive has been produced. Fortunately, a team of astrophysicists from the University of Exeter – with the help of meteorology experts from the UK’s Met Office – has taken the first tentative steps towards determining if Proxima b has a habitable climate.

According to their study, which appeared recently in the journal Astronomy & Astrophysics, the team conducted a series of simulations using the state-of-the-art Met Office Unified Model (UM). This numerical model has been used for decades to study Earth’s atmosphere, with applications ranging from weather prediction to the effects of climate change.

Artist’s impression of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri. The double star Alpha Centauri AB is visible to the upper right of Proxima itself. Credit: ESO

With this model, the team simulated what the climate of Proxima b would be like if it had an atmospheric composition similar to Earth’s. They also conducted simulations of what the planet would be like if it had a much simpler atmosphere – one composed of nitrogen with trace amounts of carbon dioxide. Last, but not least, they made allowances for variations in the planet’s orbit.

For instance, given the planet’s distance from its sun – 0.05 AU (7.5 million km; 4.66 million mi) – there have been questions about the planet’s orbital characteristics. On the one hand, it could be tidally-locked, where one face is constantly facing towards Proxima Centauri. On the other, the planet could be in a 3:2 orbital resonance with its sun, where it rotates three times on its axis for every two orbits (much like Mercury experiences with our Sun).

In either case, this would result in one side of the planet being exposed to quite a bit of radiation. Given the nature of M-type red dwarf stars, which are highly variable and unstable compared to other types of stars, the sun-facing side would be periodically irradiated. Also, in both orbital scenarios, the planet would be subject to significant variations in temperature that would make it difficult for liquid water to exist.

For example, on a tidally-locked planet, the main atmospheric gases on the night side would be likely to freeze, leaving the daylight zone exposed and dry. And on a planet in a 3:2 orbital resonance, a single solar day would last a very long time (a solar day on Mercury lasts 176 Earth days), causing one side to become too hot and dry and the other too cold and dry.
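The 176-day figure for Mercury follows directly from combining its rotation rate and orbital rate. A minimal sketch in Python, using rounded approximate periods:

```python
# For a prograde rotator, the solar day (noon to noon) satisfies:
#   1/D_solar = 1/P_rot - 1/P_orb
# In a 3:2 spin-orbit resonance, P_rot = (2/3) * P_orb, so D_solar = 2 * P_orb.

def solar_day(p_rot, p_orb):
    """Length of one solar day, given rotation and orbital periods (same units)."""
    return 1.0 / (1.0 / p_rot - 1.0 / p_orb)

p_orb = 87.97                # Mercury's orbital period, in Earth days
p_rot = (2.0 / 3.0) * p_orb  # 3:2 resonance: rotation period ~58.6 days

print(round(solar_day(p_rot, p_orb)))  # -> 176 (i.e. two Mercury years)
```

So in a 3:2 resonance one solar day spans two full orbits, which is why each hemisphere bakes (or freezes) for so long.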

This infographic compares the orbit of the planet around Proxima Centauri (Proxima b) with the same region of the Solar System. Credit: ESO

By taking all this into account, the team’s simulations allowed for some crucial comparisons with previous studies, but also allowed the team to reach beyond them. As Dr. Ian Boutle, an Honorary University Fellow at the University of Exeter and the lead author of the paper, explained in a University press release:

“Our research team looked at a number of different scenarios for the planet’s likely orbital configuration using a set of simulations. As well as examining how the climate would behave if the planet was ‘tidally-locked’ (where one day is the same length as one year), we also looked at how an orbit similar to Mercury, which rotates three times on its axis for every two orbits around the sun (a 3:2 resonance), would affect the environment.”

In the end, the results were quite favorable, as the team found that Proxima b would have a remarkably stable climate with either atmosphere and in either orbital configuration. Essentially, the UM software simulations showed that when both atmospheres and both the tidally-locked and 3:2 resonance configurations were accounted for, there would still be regions on the planet where water was able to exist in liquid form.

Naturally, the 3:2 resonance example resulted in more substantial areas of the planet falling within this temperature range. They also found that an eccentric orbit, where the distance between the planet and Proxima Centauri varied to a significant degree over the course of a single orbital period, would lead to a further increase in potential habitability.

Artist’s depiction of a watery exoplanet orbiting a distant red dwarf star. New research indicates that Proxima b could be especially watery. Credit: CfA

As Dr James Manners, another Honorary University Fellow and one of the co-authors on the paper, said:

“One of the main features that distinguishes this planet from Earth is that the light from its star is mostly in the near infra-red. These frequencies of light interact much more strongly with water vapor and carbon dioxide in the atmosphere which affects the climate that emerges in our model.”

Of course, much more work needs to be done before we can truly understand whether this planet is capable of supporting life as we know it. Beyond feeding the hopes of those who would like to see it colonized someday, studies of Proxima b’s conditions are also extremely important for determining whether or not indigenous life exists there right now.

But in the meantime, studies such as this are extremely helpful when it comes to anticipating what kinds of environments we might find on distant planets. Dr Nathan Mayne – the scientific lead on exoplanet modelling at the University of Exeter and a co-author on the paper – also indicated that climate studies of this kind could have applications for scientists here at home.

“With the project we have at Exeter we are trying to not only understand the somewhat bewildering diversity of exoplanets being discovered, but also exploit this to hopefully improve our understanding of how our own climate has and will evolve,” he said. What’s more, it helps to illustrate how conditions here on Earth can be used to predict what may exist in extra-solar environments.

While that might sound a bit Earth-centric, it is entirely reasonable to assume that planets in other star systems are subject to processes and mechanics similar to those we’ve seen among the planets of our Solar System. And this is something we are invariably forced to do when it comes to searching for habitable planets and life beyond our Solar System. Until we can go there directly, we will be forced to measure what we don’t know by what we do.

Further Reading: University of Exeter, Astronomy & Astrophysics

New Explanation for Dark Energy? Tiny Fluctuations of Time and Space

A new study from researchers from the University of British Columbia offers a new explanation of Dark Energy. Credit: NASA

Since the late 1920s, astronomers have been aware of the fact that the Universe is in a state of expansion. Initially predicted by Einstein’s Theory of General Relativity, this realization has gone on to inform the most widely-accepted cosmological model – the Big Bang Theory. However, things became somewhat confusing during the 1990s, when improved observations showed that the Universe’s rate of expansion has been accelerating for billions of years.

This led to the theory of Dark Energy, a mysterious invisible force that is driving the expansion of the cosmos. Much like Dark Matter, which was proposed to explain the “missing mass”, it then became necessary to find this elusive energy, or at least provide a coherent theoretical framework for it. A new study from the University of British Columbia (UBC) seeks to do just that by postulating that the Universe is expanding due to fluctuations in space and time.

The study – which was recently published in the journal Physical Review D – was led by Qingdi Wang, a PhD student with the Department of Physics and Astronomy at UBC. Working under the supervision of UBC Professor William Unruh (the man who proposed the Unruh Effect) and with assistance from Zhen Zhu (another PhD student at UBC), Wang provides a new take on Dark Energy.

Diagram showing the Lambda-CDM universe, from the Big Bang to the current era. Credit: Alex Mittelmann/Coldcreation

The team began by addressing the inconsistencies arising out of the two main theories that together explain all natural phenomena in the Universe. These theories are none other than General Relativity and quantum mechanics, which effectively explain how the Universe behaves on the largest of scales (i.e. stars, galaxies, clusters) and the smallest (subatomic particles).

Unfortunately, these two theories are not consistent when it comes to a little matter known as gravity, which scientists are still unable to explain in terms of quantum mechanics. The existence of Dark Energy and the expansion of the Universe are another point of disagreement. For starters, candidate theories like vacuum energy – one of the most popular explanations for Dark Energy – present serious incongruities.

According to quantum mechanics, vacuum energy would have an incredibly large energy density to it. But if this is true, then General Relativity predicts that this energy would have an incredibly strong gravitational effect, one which would be powerful enough to cause the Universe to explode in size. As Prof. Unruh shared with Universe Today via email:

“The problem is that any naive calculation of the vacuum energy gives huge values. If one assumes that there is some sort of cutoff so one cannot get energy densities much greater than the Planck energy density (or about 10⁹⁵ Joules/meter³) then one finds that one gets a Hubble constant – the time scale on which the Universe roughly doubles in size – of the order of 10⁻⁴⁴ sec. So, the usual approach is to say that somehow something reduces that down so that one gets the actual expansion rate of about 10 billion years instead. But that ‘somehow’ is pretty mysterious and no one has come up with an even half convincing mechanism.”
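The scale of the mismatch Unruh describes can be sketched by plugging an energy density into the Friedmann relation, H = √(8πGρ/(3c²)), and comparing the resulting expansion timescale 1/H with observation. A rough, illustrative Python check (constants rounded; the cutoff here is taken as the full Planck energy density, c⁷/ħG², roughly 10¹¹³ J/m³, and 6×10⁻¹⁰ J/m³ is an approximate measured dark-energy density):

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

def hubble_time(rho):
    """Expansion timescale 1/H from the Friedmann equation,
    H = sqrt(8*pi*G*rho / (3*c**2)), for energy density rho in J/m^3."""
    return 1.0 / math.sqrt(8 * math.pi * G * rho / (3 * c**2))

planck_density   = c**7 / (hbar * G**2)  # ~4.6e113 J/m^3
observed_density = 6e-10                 # approximate dark-energy density

print(f"{hubble_time(planck_density):.0e} s")    # ~2e-44 s
print(f"{hubble_time(observed_density):.0e} s")  # ~5e17 s (~16 billion years)
```

The naive cutoff yields a doubling time of order 10⁻⁴⁴ seconds, while the observed density gives a timescale of billions of years, a discrepancy of roughly 120 orders of magnitude in energy density.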

Timeline of the Big Bang and the expansion of the Universe. Credit: NASA

Whereas other scientists have sought to modify the theories of General Relativity and quantum mechanics in order to resolve these inconsistencies, Wang and his colleagues sought a different approach. As Wang explained to Universe Today via email:

“Previous studies are either trying to modify quantum mechanics in some way to make vacuum energy small or trying to modify General Relativity in some way to make gravity numb for vacuum energy. However, quantum mechanics and General Relativity are the two most successful theories that explain how our Universe works… Instead of trying to modify quantum mechanics or General Relativity, we believe that we should first understand them better. We takes the large vacuum energy density predicted by quantum mechanics seriously and just let them gravitate according to General Relativity without modifying either of them.”

For the sake of their study, Wang and his colleagues performed new sets of calculations on vacuum energy that took its predicted high energy density into account. They then considered the possibility that on the tiniest of scales – billions of times smaller than electrons – the fabric of spacetime is subject to wild fluctuations, oscillating at every point between expansion and contraction.

Could fluctuations at the tiniest levels of space time explain Dark Energy and the expansion of the cosmos? Credit: University of Washington

As it swings back and forth, the result of these oscillations is a net effect where the Universe expands slowly, but at an accelerating rate. After performing their calculations, they noted that such an explanation was consistent with both the existence of quantum vacuum energy density and General Relativity. On top of that, it is also consistent with what scientists have been observing in our Universe for almost a century. As Unruh described it:

“Our calculations showed that one could consistently regard [that] the Universe on the tiniest scales is actually expanding and contracting at an absurdly fast rate; but that on a large scale, because of an averaging over those tiny scales, physics would not notice that ‘quantum foam’. It has a tiny residual effect in giving an effective cosmological constant (dark energy type effect). In some ways it is like waves on the ocean which travel as if the ocean were perfectly smooth but really we know that there is this incredible dance of the atoms that make up the water, and waves average over those fluctuations, and act as if the surface was smooth.”

In contrast to conflicting theories of a Universe where the various forces that govern it cannot be resolved and must cancel each other out, Wang and his colleagues present a picture where the Universe is constantly in motion. In this scenario, the effects of vacuum energy are actually self-cancelling, and they also give rise to the expansion and acceleration we have been observing all this time.

While it may be too soon to tell, this image of a Universe that is highly-dynamic (even on the tiniest scales) could revolutionize our understanding of spacetime. At the very least, these theoretical findings are sure to stimulate debate within the scientific community, as well as experiments designed to offer direct evidence. And that, as we know, is the only way we can advance our understanding of this thing known as the Universe.

Further Reading: UBC News, Physical Review D

Finding Alien Megastructures Around Nearby Pulsars

Artist's representation of a Dyson ring, orbiting a star at a distance of 1 AU. Credit: Wikipedia Commons/Falcorian

During the 1960s, Freeman Dyson and Nikolai Kardashev captured the imaginations of people everywhere by making some radical proposals. Whereas Dyson proposed that intelligent species could eventually create megastructures to harness the energy of their stars, Kardashev offered a three-tiered classification system for intelligent species based on their ability to harness the energy of their planet, solar system and galaxy, respectively.


Astronomers Find a Rogue Supermassive Black Hole, Kicked out by a Galactic Collision

Using data from Chandra and other telescopes, astronomers have found a possible "recoiling" black hole. Credit: NASA/CXC/M.Weiss

When galaxies collide, all manner of chaos can ensue. Though the process takes millions of years, the merger of two galaxies can result in Supermassive Black Holes (SMBHs, which reside at their centers) merging and becoming even larger. It can also result in stars being kicked out of their galaxies, sending them and even their systems of planets into space as “rogue stars”.

But according to a new study by an international team of astronomers, it appears that in some cases, SMBHs could also be ejected from their galaxies after a merger occurs. Using data from NASA’s Chandra X-ray Observatory and other telescopes, the team detected what could be a “renegade supermassive black hole” that is traveling away from its galaxy.

According to the team’s study – which appeared in the Astrophysical Journal under the title A Potential Recoiling Supermassive Black Hole, CXO J101527.2+625911 – the renegade black hole was detected at a distance of about 3.9 billion light years from Earth. It appears to have come from within an elliptical galaxy, and has a mass equivalent to 160 million times that of our Sun.

Hubble data showing the two bright points near the middle of the galaxy. Credit: NASA/CXC/NRAO/D.-C.Kim/STScI

The team found this black hole while searching through thousands of galaxies for evidence of black holes that showed signs of being in motion. This consisted of sifting through data obtained by the Chandra X-ray telescope for bright X-ray sources – a common feature of rapidly-growing SMBHs – that were observed as part of the Sloan Digital Sky Survey (SDSS).

They then looked at Hubble data of all these X-ray-bright galaxies to see if any of them revealed two bright peaks at their center. Such bright peaks would be a telltale indication that a pair of supermassive black holes was present, or that a recoiling black hole was moving away from the center of the galaxy. Last, the astronomers examined the SDSS spectral data, which show how the amount of optical light varies with wavelength.

From all of this, the researchers eventually found what they considered to be a good candidate for a renegade black hole. With the help of data from the SDSS and the Keck telescope in Hawaii, they determined that this candidate was located near, but visibly offset from, the center of its galaxy. They also noted that it had a velocity that differed from that of the galaxy – properties which suggested that it was moving on its own.

The image below, which was generated from Hubble data, shows the two bright points near the center of the galaxy. Whereas the one on the left was located within the center, the one on the right (the renegade SMBH) was located about 3,000 light years away from the center. Between the X-ray and optical data, all indications pointed towards it being a black hole that was kicked from its galaxy.

The bright X-ray source detected with Chandra (left), and data obtained from the SDSS and the Keck telescope in Hawaii. Credit: NASA/CXC/NRAO/D.-C.Kim/STScI

In terms of what could have caused this, the team ventured that the black hole might have “recoiled” when two smaller supermassive black holes collided and merged. This collision would have generated gravitational waves that then pushed the newly formed black hole out of the galaxy’s center.

Another possible explanation is that two SMBHs are located in the center of this galaxy, but one of them is not producing detectable radiation – which would mean that it is growing too slowly. However, the researchers favor the explanation that what they observed was a renegade black hole, as it seems to be more consistent with the evidence. For example, their study showed signs that the host galaxy was experiencing some disturbance in its outer regions.

This is a possible indication that a merger between two galaxies occurred in the relatively recent past. Since SMBH mergers are thought to occur when their host galaxies merge, this observation favors the renegade black hole theory. In addition, the data showed that stars were forming in this galaxy at a high rate, which agrees with computer simulations predicting that merging galaxies experience an enhanced rate of star formation.

But of course, additional research is needed before any conclusions can be reached. In the meantime, the findings are likely to be of particular interest to astronomers. Not only does this study involve a truly rare phenomenon – a SMBH that is in motion, rather than resting at the center of a galaxy – but its unique properties could help us learn more about these rare and enigmatic objects.

Detection of an unusually bright X-Ray flare from Sagittarius A*, a supermassive black hole in the center of the Milky Way galaxy. Credit: NASA/CXC/Stanford/I. Zhuravleva et al.

For one, the study of recoiling SMBHs could reveal more about the rate and direction of spin of these enigmatic objects before they merge. From this, astronomers would be better able to predict when and where SMBHs are about to merge. Studying the speed of recoiling black holes could also reveal additional information about gravitational waves, which could unlock further secrets about the nature of spacetime.

And above all, witnessing a renegade black hole is an opportunity to see some pretty amazing forces at work. Assuming the observations are correct, there will no doubt be follow-up surveys designed to see where the SMBH is traveling and what effect it is having on the surrounding cosmic environment.

Ever since the 1970s, scientists have been of the opinion that most galaxies have SMBHs at their center. In the years and decades that followed, research confirmed the presence of black holes not only at the center of our galaxy – Sagittarius A* – but at the center of almost all known massive galaxies. Ranging in mass from hundreds of thousands to billions of Solar masses, these objects exert a powerful influence on their respective galaxies.

Be sure to enjoy this video, courtesy of the Chandra X-Ray Observatory:

Further Reading: Chandra X-ray Observatory, arXiv

Asteroid Strikes on Mars Spun Out Supersonic Tornadoes that Scoured the Surface

Asteroid impacts on Mars could have generated supersonic winds that shaped the surface, according to a new study. Credit: geol.umd.edu

The study of another planet’s surface features can provide a window into its deep past. Take Mars for example, a planet whose surface is a mishmash of features that speak volumes. In addition to ancient volcanoes and alluvial fans that are indications of past geological activity and liquid water once flowing on the surface, there are also the many impact craters that dot its surface.

In some cases, these impact craters have strange bright streaks emanating from them, ones which reach much farther than basic ejecta patterns would allow. According to a new research study by a team from Brown University, these features are the result of large impacts that generated massive plumes. These would have interacted with Mars’ atmosphere, they argue, causing supersonic winds that scoured the surface.

These features were noticed years ago by Peter H. Schultz, a professor of geological science with the Department of Earth, Environmental, and Planetary Sciences (DEEPS) at Brown University. When studying images taken at night by the Mars Odyssey orbiter using its THEMIS instrument, he noticed streaks that only appeared when imaged at infrared wavelengths.

Artist’s conception of the Mars Odyssey spacecraft. Credit: NASA/JPL

These streaks were only visible in IR because it was only at this wavelength that contrasts in heat retention on the surface were visible. Essentially, brighter regions at night indicate surfaces that retain more heat during the day and take longer to cool. As Schultz explained in a Brown University press release, this allowed for features to be discerned that would otherwise not be noticed:

“You couldn’t see these things at all in visible wavelength images, but in the nighttime infrared they’re very bright. Brightness in the infrared indicates blocky surfaces, which retain more heat than surfaces covered by powder and debris. That tells us that something came along and scoured those surfaces bare.”

Schultz then teamed up with Stephanie N. Quintana, a graduate student from DEEPS, to consider explanations that went beyond basic ejecta patterns. As they indicate in their study – which recently appeared in the journal Icarus under the title “Impact-generated winds on Mars” – this consisted of combining geological observations, laboratory impact experiments and computer modeling of impact processes.

Ultimately, Schultz and Quintana concluded that crater-forming impacts led to vortex-like storms that reached speeds of up to 800 km/h (500 mph) – in other words, the equivalent of an F8 tornado here on Earth. These storms would have scoured the surface and ultimately led to the observed streak patterns. This conclusion was based in part on work Schultz has done in the past at NASA’s Vertical Gun Range.

An infrared image revealing strange bright streaks extending from Santa Fe crater on Mars. Credit: NASA/JPL-Caltech/Arizona State University.

This high-powered cannon, which can fire projectiles at speeds up to 24,000 km/h (15,000 mph), is used to conduct impact experiments. These experiments have shown that during an impact event, vapor plumes travel outwards from the impact point (just above the surface) at incredible speeds. For the sake of their study, Schultz and Quintana scaled the size of the impacts up, to the point where they corresponded to the impact craters on Mars.

The results indicated that the vapor plume speed would be supersonic, and that its interaction with the Martian atmosphere would generate powerful winds. However, the plume and associated winds would not be responsible for the strange streaks themselves. Since they would be travelling just above the surface, they would not be capable of causing the kind of deep scouring that exists in the streaked areas.

Instead, Schultz and Quintana showed that when the plume struck a raised surface feature – like the ridges of a smaller impact crater – it would create more powerful vortices that would then fall to the surface. It is these, according to their study, that are responsible for the scouring patterns they observed. This conclusion was based on the fact that bright streaks were almost always associated with the downward side of a crater rim.

IR images showing the correlation between the streaks and smaller craters that were in place when the larger crater was formed. Credit: NASA/JPL-Caltech/Arizona State University

As Schultz explained, the study of these streaks could prove useful in helping to establish the rate at which erosion and dust deposition occur on certain areas of the Martian surface:

“Where these vortices encounter the surface, they sweep away the small particles that sit loose on the surface, exposing the bigger blocky material underneath, and that’s what gives us these streaks. We know these formed at the same time as these large craters, and we can date the age of the craters. So now we have a template for looking at erosion.”

In addition, these streaks could reveal more about the state of Mars at the time of the impacts. For example, Schultz and Quintana noted that the streaks tend to form around craters that are about 20 km (12.4 mi) in diameter, though not always. Their experiments also revealed that the presence of volatile compounds (such as surface or subsurface water ice) would affect the amount of vapor generated by an impact.

In other words, the presence of streaks around some craters and not others could indicate where and when there was water ice on the Martian surface in the past. It has been known for some time that the disappearance of Mars’ atmosphere over the course of several hundred million years also resulted in the loss of its surface water. By being able to put dates to impact events, we might be able to learn more about Mars’ fateful transformation.

The study of these streaks could also be used to differentiate between the impacts of asteroids and comets on Mars – the latter of which would have had higher concentrations of water ice in them. Once again, detailed studies of Mars’ surface features are allowing scientists to construct a more detailed timeline of its evolution, thus determining how and when it became the cold, dry place we know today!

Further Reading: Brown University, Science Direct

 

Europa Lander Could Carry a Microphone and “Listen” to the Ice to Find Out What’s Underneath

Artist's rendering of a possible Europa Lander mission, which would explore the surface of the icy moon in the coming decades. Credit: NASA/JPL-Caltech

Between the Europa Clipper and the proposed Europa Lander, NASA has made it clear that it intends to send a mission to this icy moon of Jupiter in the coming decade. Ever since the Voyager 1 and 2 probes conducted their historic flybys of Jupiter in 1979 – which offered the first indications of a warm-water ocean in the moon’s interior – scientists have been eager to peek beneath the surface and see what is there.

Towards this end, NASA has issued a grant to a team of researchers from Arizona State University to build and test a specially-designed seismometer that the lander would use to listen to Europa’s interior. Known as the Seismometer for Exploring the Subsurface of Europa (SESE), this device will help scientists determine if the interior of Europa is conducive to life.

According to the mission profile for the Europa Lander, the seismometer would be mounted on the robotic probe. Once the lander reached the surface of the moon, the seismometer would begin collecting information on Europa’s subsurface environment. This would include data on the moon’s natural tides and movements within the shell, which would be used to determine the thickness of the icy surface.

Image of Europa’s ice shell, taken by the Galileo spacecraft, of fractured “chaos terrain”. Credit: NASA/JPL-Caltech

It would also determine if the surface has pockets of water – i.e. subsurface lakes – and see how often water rises to the surface. For some time, scientists have suspected that Europa’s “chaos terrain” would be the ideal place to search for evidence of life. These features, which are basically a jumbled mess of ridges, cracks, and plains, are believed to be spots where the subsurface ocean is interacting with the icy crust.

As such, any evidence of organic molecules or biological organisms would be easiest to find there. In addition, astronomers have also detected water plumes coming from Europa’s surface. These are also considered to be one of the best bets for finding evidence of life in the interior. But before they can be explored directly, determining where reservoirs of water reside beneath the ice and if they are connected to the interior ocean is paramount.

And this is where instruments like the SESE would come into play. Hongyu Yu is an exploration system engineer from ASU’s School of Earth and Space Exploration and the leader of the SESE team. As he stated in a recent article by ASU Now, “We want to hear what Europa has to tell us. And that means putting a sensitive ‘ear’ on Europa’s surface.”

While the idea of a Europa Lander is still in the concept-development stage, NASA is working to develop all the necessary components for such a mission. As such, they have provided the ASU team with a grant to develop and test their miniature seismometer, which measures no more than 10 cm (4 inches) on a side and could easily be fitted aboard a robotic lander.

Europa’s “Great Lake.” Scientists speculate many more exist throughout the shallow regions of the moon’s icy shell. Credit: Britney Schmidt/Dead Pixel FX/Univ. of Texas at Austin.

More importantly, their seismometer differs from conventional designs in that it does not rely on a mass-and-spring sensor. Such a design would be ill-suited for a mission to another body in our Solar System since it needs to be positioned upright, which requires that it be carefully planted and not disturbed. What’s more, the sensor needs to be placed within a complete vacuum to ensure accurate measurements.

By using a micro-electrical system with a liquid electrolyte for a sensor, Yu and his team have created a seismometer that can operate under a wider range of conditions. “Our design avoids all these problems,” he said. “This design has a high sensitivity to a wide range of vibrations, and it can operate at any angle to the surface. And if necessary, they can hit the ground hard on landing.”

As Lenore Dai – a chemical engineer and the director of the ASU’s School for Engineering of Matter, Transport and Energy – explained, the design also makes the SESE well suited for exploring extreme environments – like Europa’s icy surface. “We’re excited at the opportunity to develop electrolytes and polymers beyond their traditional temperature limits,” she said. “This project also exemplifies collaboration across disciplines.”

The SESE can also take a beating without compromising its sensor readings – the team tested this by striking it with a sledgehammer and found that it still worked afterwards. According to seismologist Edward Garnero, who is also a member of the SESE team, this ruggedness will come in handy. Landers typically have six to eight legs, he notes, which could be mated with seismometers to turn them into scientific instruments.

Artist’s concept of chloride salts bubbling up from Europa’s liquid ocean and reaching the frozen surface.  Credit: NASA/JPL-Caltech

Having this many sensors on the lander would give scientists the ability to combine data, allowing them to overcome the issue of variable seismic vibrations recorded by each. As such, ensuring that they are rugged is a must.

“Seismometers need to connect with the solid ground to operate most effectively. If each leg carries a seismometer, these could be pushed into the surface on landing, making good contact with the ground. We can also sort out high frequency signals from longer wavelength ones. For example, small meteorites hitting the surface not too far away would produce high frequency waves, and tides of gravitational tugs from Jupiter and Europa’s neighbor moons would make long, slow waves.”

Such a device could also prove crucial to missions to other “ocean worlds” within the Solar System, which include Ceres, Ganymede, Callisto, Enceladus, Titan and others. On these bodies as well, it is believed that life could very well exist in warm-water oceans that lie beneath the surface. As such, a compact, rugged seismometer that is capable of working in extreme-temperature environments would be ideal for studying their interiors.

What’s more, missions of this kind would be able to reveal where the ice sheets on these bodies are thinnest, and hence where the interior oceans are most accessible. Once that’s done, NASA and other space agencies will know exactly where to send in the probe (or possibly the robotic submarine). Though we might have to wait a few decades on that one!

Further Reading: ASU Now

The Universe

Artist's illustration of the expansion of the Universe (Credit: NASA, Goddard Space Flight Center)

What is the Universe? That is one immensely loaded question! No matter what angle one takes, one could spend years on the answer and still barely scratch the surface. In terms of time and space, the Universe is unfathomably large (and possibly even infinite) and incredibly old by human standards. Describing it in detail is therefore a monumental task. But we here at Universe Today are determined to try!

So what is the Universe? Well, the short answer is that it is the sum total of all existence. It is the entirety of time, space, matter and energy that began expanding some 13.8 billion years ago and has continued to expand ever since. No one is entirely certain how extensive the Universe truly is, and no one is entirely sure how it will all end. But ongoing research and study has taught us a great deal in the course of human history.

Definition:

The term “the Universe” is derived from the Latin word “universum”, which was used by Roman statesman Cicero and later Roman authors to refer to the world and the cosmos as they knew it. This consisted of the Earth and all living creatures that dwelt therein, as well as the Moon, the Sun, the then-known planets (Mercury, Venus, Mars, Jupiter, Saturn) and the stars.

Illuminated illustration of the Ptolemaic geocentric conception of the Universe by Portuguese cosmographer and cartographer Bartolomeu Velho (?-1568) in his work Cosmographia (1568). Credit: Bibliothèque nationale de France, Paris

The term “cosmos” is often used interchangeably with the Universe. It is derived from the Greek word kosmos, which literally means “the world”. Other words commonly used to denote the entirety of existence include “Nature” (derived from the Germanic word natur) and the English word “everything”, whose use can be seen in scientific terminology – i.e. the “Theory of Everything” (TOE).

Today, the term is often used to refer to all things that exist within the known Universe – the Solar System, the Milky Way, and all known galaxies and superstructures. In the context of modern science, astronomy and astrophysics, it also refers to all spacetime, all forms of energy (i.e. electromagnetic radiation and matter) and the physical laws that bind them.

Origin of the Universe:

The current scientific consensus is that the Universe expanded from a point of super high matter and energy density roughly 13.8 billion years ago. This theory, known as the Big Bang Theory, is not the only cosmological model for explaining the origins of the Universe and its evolution – for example, there is the Steady State Theory or the Oscillating Universe Theory.

It is, however, the most widely-accepted and popular. This is due to the fact that the Big Bang theory alone is able to explain the origin of all known matter, the laws of physics, and the large scale structure of the Universe. It also accounts for the expansion of the Universe, the existence of the Cosmic Microwave Background, and a broad range of other phenomena.

The Big Bang Theory: A history of the Universe starting from a singularity and expanding ever since. Credit: grandunificationtheory.com

Working backwards from the current state of the Universe, scientists have theorized that it must have originated at a single point of infinite density that began to expand at a finite time in the past. After the initial expansion, the theory maintains that the Universe cooled sufficiently to allow for the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies.

This all began roughly 13.8 billion years ago, and is thus considered to be the age of the Universe. Through the testing of theoretical principles, experiments involving particle accelerators and high-energy states, and astronomical studies that have observed the deep Universe, scientists have constructed a timeline of events that began with the Big Bang and has led to the current state of cosmic evolution.

However, the earliest times of the Universe – lasting from approximately 10⁻⁴³ to 10⁻¹¹ seconds after the Big Bang – are the subject of extensive speculation. Given that the laws of physics as we know them could not have existed at this time, it is difficult to fathom how the Universe could have been governed. What’s more, experiments that can create the kinds of energies involved are in their infancy.

Still, many theories prevail as to what took place in this initial instant in time, many of which are compatible. In accordance with many of these theories, the instant following the Big Bang can be broken down into the following time periods: the Singularity Epoch, the Inflation Epoch, and the Cooling Epoch.

Also known as the Planck Epoch (or Planck Era), the Singularity Epoch was the earliest known period of the Universe. At this time, all matter was condensed into a single point of infinite density and extreme heat. During this period, it is believed that the quantum effects of gravity dominated physical interactions and that no other physical forces were of equal strength to gravitation.

This Planck period extends from point 0 to approximately 10⁻⁴³ seconds, and is so named because it can only be measured in units of Planck time. Due to the extreme heat and density of matter, the state of the Universe was highly unstable. It thus began to expand and cool, leading to the manifestation of the fundamental forces of physics. From approximately 10⁻⁴³ to 10⁻³⁶ seconds, the Universe began to cross transition temperatures.
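To see why the epoch boundary sits near 10⁻⁴³ seconds, the Planck time can be computed directly from fundamental constants. A quick back-of-the-envelope sketch in Python, using standard SI values:

```python
import math

# Planck time t_P = sqrt(hbar * G / c^5) - the natural unit of time
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)

print(f"t_P ≈ {t_planck:.3e} s")   # ≈ 5.39e-44 s, i.e. of order 10^-43 s
```

Below this timescale, describing events would require a quantum theory of gravity, which is why the Planck Epoch marks the edge of current physics.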

It is here that the fundamental forces that govern the Universe are believed to have begun separating from each other. The first step in this was the force of gravitation separating from the gauge forces, which account for the strong and weak nuclear forces and electromagnetism. Then, from 10⁻³⁶ to 10⁻³² seconds after the Big Bang, the temperature of the Universe was low enough (10²⁸ K) that electromagnetism and the weak nuclear force were able to separate as well.

With the creation of the first fundamental forces of the Universe, the Inflation Epoch began, lasting from 10⁻³² seconds in Planck time to an unknown point. Most cosmological models suggest that the Universe at this point was filled homogeneously with a high energy density, and that the incredibly high temperatures and pressure gave rise to rapid expansion and cooling.

This began at 10⁻³⁷ seconds, when the phase transition that caused the separation of forces also led to a period in which the Universe grew exponentially. It was also at this point in time that baryogenesis occurred – a hypothetical event in which temperatures were so high that the random motions of particles occurred at relativistic speeds.

As a result of this, particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions, which is believed to have led to the predominance of matter over antimatter in the present Universe. After inflation stopped, the Universe consisted of a quark–gluon plasma, as well as all other elementary particles. From this point onward, the Universe began to cool and matter coalesced and formed.

As the Universe continued to decrease in density and temperature, the Cooling Epoch began. This was characterized by the energy of particles decreasing and phase transitions continuing until the fundamental forces of physics and elementary particles changed into their present form. Since particle energies would have dropped to values that can be obtained by particle physics experiments, this period onward is subject to less speculation.

For example, scientists believe that about 10⁻¹¹ seconds after the Big Bang, particle energies dropped considerably. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons, and a small excess of quarks over antiquarks led to a small excess of baryons over antibaryons.

Since temperatures were not high enough to create new proton-antiproton pairs (or neutron-antineutron pairs), mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and neutrons and none of their antiparticles. A similar process happened at about 1 second after the Big Bang for electrons and positrons.

After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the Universe was dominated by photons – and to a lesser extent, neutrinos. A few minutes into the expansion, the period known as Big Bang nucleosynthesis also began.

Thanks to temperatures dropping to 1 billion kelvin and energy densities dropping to about the equivalent of air, neutrons and protons began to combine to form the Universe’s first deuterium (a stable isotope of hydrogen) and helium atoms. However, most of the Universe’s protons remained uncombined as hydrogen nuclei.

After about 379,000 years, electrons combined with these nuclei to form atoms (again, mostly hydrogen), while the radiation decoupled from matter and continued to expand through space, largely unimpeded. This radiation is now known to be what constitutes the Cosmic Microwave Background (CMB), which today is the oldest light in the Universe.

As the CMB expanded, it gradually lost density and energy, and is currently estimated to have a temperature of 2.7260 ± 0.0013 K (-270.424 °C / -454.763 °F) and an energy density of 0.25 eV/cm³ (or 4.005×10⁻¹⁴ J/m³; 400–500 photons/cm³). The CMB can be seen in all directions at a distance of roughly 13.8 billion light years, but estimates of its actual distance place it at about 46 billion light years from us.
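The quoted energy density is simply the blackbody value for a photon gas at the CMB temperature, u = aT⁴, where a is the radiation constant. A minimal sketch using standard SI constants, which reproduces the figure above to within a few percent:

```python
import math

# Physical constants (SI)
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

# Radiation constant a = 8 pi^5 k_B^4 / (15 h^3 c^3)
a = 8 * math.pi**5 * k_B**4 / (15 * h**3 * c**3)  # J m^-3 K^-4

T_cmb = 2.7260          # CMB temperature, K
u = a * T_cmb**4        # blackbody energy density, J/m^3

# Convert to eV per cubic centimetre
eV = 1.602176634e-19
u_eV_cm3 = u / eV / 1e6

print(f"u ≈ {u:.2e} J/m^3 ≈ {u_eV_cm3:.2f} eV/cm^3")  # ≈ 4.2e-14 J/m^3 ≈ 0.26 eV/cm^3
```

The small difference from the quoted 0.25 eV/cm³ reflects rounding in the published figure rather than any physics.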

Evolution of the Universe:

Over the course of the several billion years that followed, the slightly denser regions of the Universe’s matter (which was almost uniformly distributed) began to become gravitationally attracted to each other. They therefore grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures that we regularly observe today.

This is what is known as the Structure Epoch, since it was during this time that the modern Universe began to take shape. This consisted of visible matter distributed in structures of various sizes (i.e. stars and planets to galaxies, galaxy clusters, and super clusters) where matter is concentrated, and which are separated by enormous gulfs containing few galaxies.

The details of this process depend on the amount and type of matter in the Universe. Cold dark matter, warm dark matter, hot dark matter, and baryonic matter are the four suggested types. However, the Lambda-Cold Dark Matter model (Lambda-CDM), in which the dark matter particles moved slowly compared to the speed of light, is considered to be the standard model of Big Bang cosmology, as it best fits the available data.

In this model, cold dark matter is estimated to make up about 23% of the matter/energy of the Universe, while baryonic matter makes up about 4.6%. The Lambda refers to the Cosmological Constant, a theory originally proposed by Albert Einstein that attempted to show that the balance of mass-energy in the Universe remains static.

In this case, it is associated with dark energy, which served to accelerate the expansion of the Universe and keep its large-scale structure largely uniform. The existence of dark energy is based on multiple lines of evidence, all of which indicate that the Universe is permeated by it. Based on observations, it is estimated that 73% of the Universe is made up of this energy.

During the earliest phases of the Universe, when all of the baryonic matter was more closely spaced together, gravity predominated. However, after billions of years of expansion, dark energy came to dominate interactions between galaxies. This triggered an acceleration, which is known as the Cosmic Acceleration Epoch.

When this period began is subject to debate, but it is estimated to have begun roughly 8.8 billion years after the Big Bang (5 billion years ago). Cosmologists rely on both quantum mechanics and Einstein’s General Relativity to describe the process of cosmic evolution that took place during this period and any time after the Inflationary Epoch.

Through a rigorous process of observation and modeling, scientists have determined that this evolutionary period does accord with Einstein’s field equations, though the true nature of dark energy remains elusive. What’s more, there are no well-supported models capable of determining what took place in the Universe prior to 10⁻¹⁵ seconds after the Big Bang.

However, ongoing experiments using CERN’s Large Hadron Collider (LHC) seek to recreate the energy conditions that would have existed during the Big Bang, which is also expected to reveal physics that go beyond the realm of the Standard Model.

Any breakthroughs in this area will likely lead to a unified theory of quantum gravitation, where scientists will finally be able to understand how gravity interacts with the three other fundamental forces of physics – electromagnetism, the weak nuclear force and the strong nuclear force. This, in turn, will also help us to understand what truly happened during the earliest epochs of the Universe.

Structure of the Universe:

The actual size, shape and large-scale structure of the Universe has been the subject of ongoing research. Whereas the oldest light in the Universe that can be observed is 13.8 billion light years away (the CMB), this is not the actual extent of the Universe. Given that the Universe has been in a state of expansion for billions of years, and at velocities that exceed the speed of light, the actual boundary extends far beyond what we can see.

Our current cosmological models indicate that the Universe measures some 91 billion light years (28 billion parsecs) in diameter. In other words, the observable Universe extends outwards from our Solar System to a distance of roughly 46 billion light years in all directions. However, given that the edge of the Universe is not observable, it is not yet clear whether the Universe actually has an edge. For all we know, it goes on forever!
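As a quick check, the two figures in the paragraph above are the same measurement in different units – converting 28 billion parsecs at the standard 3.2616 light years per parsec:

```python
# Convert the comoving diameter of the observable Universe
# from parsecs to light years (1 pc ≈ 3.2616 ly).
PC_TO_LY = 3.2616

diameter_pc = 28e9                 # ~28 billion parsecs
diameter_ly = diameter_pc * PC_TO_LY

radius_ly = diameter_ly / 2        # distance from us to the "edge"

print(f"diameter ≈ {diameter_ly / 1e9:.0f} billion ly")   # ≈ 91 billion ly
print(f"radius   ≈ {radius_ly / 1e9:.0f} billion ly")     # ≈ 46 billion ly
```

The ~46-billion-light-year radius exceeds the 13.8-billion-year light-travel time because space itself has expanded while the light was in transit.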

Diagram showing the Lambda-CDM Universe, from the Big Bang to the current era. Credit: Alex Mittelmann/Coldcreation

Within the observable Universe, matter is distributed in a highly structured fashion. Within galaxies, this consists of large concentrations – i.e. planets, stars, and nebulas – interspersed with large areas of empty space (i.e. interplanetary space and the interstellar medium).

Things are much the same at larger scales, with galaxies being separated by volumes of space filled with gas and dust. At the largest scale, where galaxy clusters and superclusters exist, you have a wispy network of large-scale structures consisting of dense filaments of matter and gigantic cosmic voids.

In terms of its shape, spacetime may exist in one of three possible configurations – positively-curved, negatively-curved and flat. These possibilities are based on the existence of at least four dimensions of space-time (an x-coordinate, a y-coordinate, a z-coordinate, and time), and depend upon the nature of cosmic expansion and whether or not the Universe is finite or infinite.

A positively-curved (or closed) Universe would resemble a four-dimensional sphere that would be finite in space and with no discernible edge. A negatively-curved (or open) Universe would look like a four-dimensional “saddle” and would have no boundaries in space or time.

Various possible shapes of the observable Universe, depending on whether the mass/energy density is too high, too low, or just right – in which case Euclidean geometry applies and the three angles of a triangle add up to 180 degrees. Credit: Wikipedia Commons

In the former scenario, the Universe would eventually stop expanding due to an overabundance of energy. In the latter, it would contain too little energy to ever stop expanding. In the third and final scenario – a flat Universe – a critical amount of energy would exist and its expansion would only halt after an infinite amount of time.

Fate of the Universe:

Hypothesizing that the Universe had a starting point naturally gives rise to questions about a possible end point. If the Universe began as a tiny point of infinite density that started to expand, does that mean it will continue to expand indefinitely? Or will it one day run out of expansive force, and begin retreating inward until all matter crunches back into a tiny ball?

Answering this question has been a major focus of cosmologists ever since the debate about which model of the Universe was the correct one began. With the acceptance of the Big Bang Theory, but prior to the observation of dark energy in the 1990s, cosmologists had come to agree on two scenarios as being the most likely outcomes for our Universe.

In the first, commonly known as the “Big Crunch” scenario, the Universe will reach a maximum size and then begin to collapse in on itself. This will only be possible if the mass density of the Universe is greater than the critical density. In other words, as long as the density of matter remains at or above a certain value (1–3 × 10⁻²⁶ kg of matter per m³), the Universe will eventually contract.
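The critical density mentioned above follows from the Friedmann equations as ρ_c = 3H₀²/(8πG). As a minimal sketch (assuming a present-day Hubble constant of roughly 70 km/s/Mpc, which is not stated in the text), it can be computed directly:

```python
# Critical density of the Universe: rho_c = 3 * H0^2 / (8 * pi * G)
# Assumption: H0 ~ 70 km/s/Mpc, a commonly quoted modern value.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22     # one megaparsec in metres
H0 = 70e3 / MPC_IN_M     # Hubble constant converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")  # ~9.2e-27 kg/m^3
```

The result, on the order of 10⁻²⁶ kg/m³, matches the range quoted above; a Universe denser than this would eventually contract.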

Alternatively, if the density in the Universe were equal to or below the critical density, the expansion would slow down but never stop. In this scenario, known as the “Big Freeze”, the Universe would go on until star formation eventually ceased with the consumption of all the interstellar gas in each galaxy. Meanwhile, all existing stars would burn out and become white dwarfs, neutron stars, and black holes.

Very gradually, collisions between these black holes would result in mass accumulating into larger and larger black holes. The average temperature of the Universe would approach absolute zero, and black holes would evaporate after emitting the last of their Hawking radiation. Finally, the entropy of the Universe would increase to the point where no organized form of energy could be extracted from it (a scenario known as “heat death”).

Modern observations, which include the existence of dark energy and its influence on cosmic expansion, have led to the conclusion that more and more of the currently visible Universe will pass beyond our event horizon (i.e. the CMB, the edge of what we can see) and become invisible to us. The eventual result of this is not currently known, but “heat death” is considered a likely end point in this scenario too.

Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion. This scenario is known as the “Big Rip”, in which the expansion of the Universe itself will eventually be its undoing.

History of Study:

Strictly speaking, human beings have been contemplating and studying the nature of the Universe since prehistoric times. As such, the earliest accounts of how the Universe came to be were mythological in nature and passed down orally from one generation to the next. In these stories, the world, space, time, and all life began with a creation event, where a God or Gods were responsible for creating everything.

Astronomy also began to emerge as a field of study by the time of the Ancient Babylonians. Systems of constellations and astrological calendars prepared by Babylonian scholars as early as the 2nd millennium BCE would go on to inform the cosmological and astrological traditions of cultures for thousands of years to come.

By Classical Antiquity, the notion of a Universe that was dictated by physical laws began to emerge. Between Greek and Indian scholars, explanations for creation began to become philosophical in nature, emphasizing cause and effect rather than divine agency. The earliest examples include Thales and Anaximander, two pre-Socratic Greek scholars who argued that everything was born of a primordial form of matter.

By the 5th century BCE, pre-Socratic philosopher Empedocles became the first western scholar to propose a Universe composed of four elements – earth, air, water and fire. This philosophy became very popular in western circles, and was similar to the Chinese system of five elements – metal, wood, water, fire, and earth – that emerged around the same time.

Early atomic theory stated that different materials had differently shaped atoms. Credit: github.com

It was not until Democritus, the 5th/4th century BCE Greek philosopher, that a Universe composed of indivisible particles (atoms) was proposed. The Indian philosopher Kanada (who lived in the 6th or 2nd century BCE) took this philosophy further by proposing that light and heat were the same substance in different form. The 5th century CE Buddhist philosopher Dignāga took this even further, proposing that all matter was made up of energy.

The notion of finite time was also a key feature of the Abrahamic religions – Judaism, Christianity and Islam. Perhaps inspired by the Zoroastrian concept of the Day of Judgement, the belief that the Universe had a beginning and end would go on to inform western concepts of cosmology even to the present day.

Between the 2nd millennium BCE and the 2nd century CE, astronomy and astrology continued to develop and evolve. In addition to monitoring the proper motions of the planets and the movement of the constellations through the Zodiac, Greek astronomers also articulated the geocentric model of the Universe, where the Sun, planets and stars revolve around the Earth.

These traditions are best described in the 2nd century CE mathematical and astronomical treatise, the Almagest, which was written by Greek-Egyptian astronomer Claudius Ptolemaeus (aka. Ptolemy). This treatise and the cosmological model it espoused would be considered canon by medieval European and Islamic scholars for over a thousand years to come.

A comparison of the geocentric and heliocentric models of the Universe. Credit: history.ucsb.edu

However, even before the Scientific Revolution (ca. 16th to 18th centuries), there were astronomers who proposed a heliocentric model of the Universe – where the Earth, planets and stars revolved around the Sun. These included Greek astronomer Aristarchus of Samos (ca. 310 – 230 BCE), and Hellenistic astronomer and philosopher Seleucus of Seleucia (190 – 150 BCE).

During the Middle Ages, Indian, Persian and Arabic philosophers and scholars maintained and expanded on Classical astronomy. In addition to keeping Ptolemaic and non-Aristotelian ideas alive, they also proposed revolutionary ideas like the rotation of the Earth. Some scholars – such as Indian astronomer Aryabhata and Persian astronomers Albumasar and Al-Sijzi – even advanced versions of a heliocentric Universe.

By the 16th century, Nicolaus Copernicus proposed the most complete concept of a heliocentric Universe by resolving lingering mathematical problems with the theory. His ideas were first expressed in the 40-page manuscript titled Commentariolus (“Little Commentary”), which described a heliocentric model based on seven general principles. These seven principles stated that:

  1. Celestial bodies do not all revolve around a single point
  2. The center of Earth is the center of the lunar sphere—the orbit of the moon around Earth; all the spheres rotate around the Sun, which is near the center of the Universe
  3. The distance between Earth and the Sun is an insignificant fraction of the distance from Earth and Sun to the stars, so parallax is not observed in the stars
  4. The stars are immovable – their apparent daily motion is caused by the daily rotation of Earth
  5. Earth is moved in a sphere around the Sun, causing the apparent annual migration of the Sun
  6. Earth has more than one motion
  7. Earth’s orbital motion around the Sun causes the seeming reverse in direction of the motions of the planets.

Frontispiece and title page of the Dialogue, 1632. Credit: moro.imss.fi.it

A more comprehensive treatment of his ideas was completed around 1532, when Copernicus finished his magnum opus – De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres). In it, he advanced his seven major arguments, but in more detailed form and with detailed computations to back them up. Due to fears of persecution and backlash, this volume was not published until 1543, the year of his death.

His ideas would be further refined by the 16th/17th century mathematician, astronomer and inventor Galileo Galilei. Using a telescope of his own creation, Galileo made recorded observations of the Moon, the Sun, and Jupiter which demonstrated flaws in the geocentric model of the Universe while also showcasing the internal consistency of the Copernican model.

His observations were published in several different volumes throughout the early 17th century. His observations of the cratered surface of the Moon and of Jupiter and its largest moons were detailed in 1610 in his Sidereus Nuncius (The Starry Messenger), while his observations of sunspots were described in On the Spots Observed in the Sun (1610).

Galileo also recorded observations of the Milky Way in the Starry Messenger. Previously believed to be nebulous, it proved instead to be a multitude of stars packed so densely together that from a distance it looked like clouds – stars that were much farther away than previously thought.

In 1632, Galileo finally addressed the “Great Debate” in his treatise Dialogo sopra i due massimi sistemi del mondo (Dialogue Concerning the Two Chief World Systems), in which he advocated the heliocentric model over the geocentric. Using his own telescopic observations, modern physics and rigorous logic, Galileo’s arguments effectively undermined the basis of Aristotle’s and Ptolemy’s system for a growing and receptive audience.

Johannes Kepler advanced the model further with his theory of the elliptical orbits of the planets. Combined with accurate tables that predicted the positions of the planets, the Copernican model was effectively proven. From the middle of the seventeenth century onward, there were few astronomers who were not Copernicans.

The next great contribution came from Sir Isaac Newton (1642/43 – 1727), whose work with Kepler’s Laws of Planetary Motion led him to develop his theory of Universal Gravitation. In 1687, he published his famous treatise Philosophiæ Naturalis Principia Mathematica (“Mathematical Principles of Natural Philosophy”), which detailed his Three Laws of Motion. These laws stated that:

  1. When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
  2. The vector sum of the external forces (F) on an object is equal to the mass (m) of that object multiplied by the acceleration vector (a) of the object. In mathematical form, this is expressed as: F=ma
  3. When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
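The second law, combined with Newton’s law of universal gravitation (F = GMm/r²), is enough to recover everyday results such as the acceleration due to gravity at Earth’s surface. A minimal sketch, using standard Earth values (mass and radius are assumptions, not figures from the text):

```python
# Surface gravity from Newton's second law plus universal gravitation:
# F = m*g = G*M*m / r^2  =>  g = G*M / r^2
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24 # mass of the Earth, kg
R_EARTH = 6.371e6  # mean radius of the Earth, m

g = G * M_EARTH / R_EARTH**2
print(f"surface gravity ~ {g:.2f} m/s^2")  # ~9.82 m/s^2
```

The same relation, applied to orbital data, is what let Newton estimate planetary masses as described below.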

Animated diagram showing the spacing of the Solar System’s planets, the unusually closely spaced orbits of six of the most distant KBOs, and the possible “Planet 9”. Credit: Caltech/nagualdesign

Together, these laws described the relationship between any object, the forces acting upon it, and the resulting motion, thus laying the foundation for classical mechanics. The laws also allowed Newton to calculate the mass of each planet, calculate the flattening of the Earth at the poles and the bulge at the equator, and show how the gravitational pull of the Sun and Moon creates the Earth’s tides.

His calculus-like method of geometrical analysis was also able to account for the speed of sound in air (based on Boyle’s Law) and the precession of the equinoxes – which he showed was a result of the Moon’s gravitational attraction to the Earth – and to determine the orbits of comets. This volume would have a profound effect on the sciences, with its principles remaining canon for the following 200 years.

Another major discovery took place in 1755, when Immanuel Kant proposed that the Milky Way was a large collection of stars held together by mutual gravity. Just like the Solar System, this collection of stars would be rotating and flattened out as a disk, with the Solar System embedded within it.

Astronomer William Herschel attempted to actually map out the shape of the Milky Way in 1785, but he didn’t realize that large portions of the galaxy are obscured by gas and dust, which hides its true shape. The next great leap in the study of the Universe and the laws that govern it did not come until the 20th century, with the development of Einstein’s theories of Special and General Relativity.

Einstein’s groundbreaking theories about space and time (summed up simply as E=mc²) were in part the result of his attempts to resolve Newton’s laws of mechanics with the laws of electromagnetism (as characterized by Maxwell’s equations and the Lorentz force law). Eventually, Einstein would resolve the inconsistency between these two fields by proposing Special Relativity in his 1905 paper, “On the Electrodynamics of Moving Bodies“.

Basically, this theory stated that the speed of light is the same in all inertial reference frames. This broke with the previously-held consensus that light traveling through a moving medium would be dragged along by that medium, which meant that the speed of the light is the sum of its speed through the medium plus the speed of that medium. That older view led to multiple issues that proved insurmountable prior to Einstein’s theory.

Special Relativity not only reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, but also simplified the mathematical calculations by doing away with extraneous explanations used by other scientists. It also made the existence of a medium entirely superfluous, accorded with the directly observed speed of light, and accounted for the observed aberrations.

Between 1907 and 1911, Einstein began considering how Special Relativity could be applied to gravity fields – what would come to be known as the Theory of General Relativity. This culminated in 1911 with the publication of “On the Influence of Gravitation on the Propagation of Light“, in which he predicted that time is relative to the observer and dependent on their position within a gravity field.

He also advanced what is known as the Equivalence Principle, which states that gravitational mass is identical to inertial mass. Einstein also predicted the phenomenon of gravitational time dilation – where two observers situated at varying distances from a gravitating mass perceive a difference in the amount of time between two events. Other major outgrowths of his theories were the existence of black holes and of an expanding Universe.

In 1915, a few months after Einstein had published his Theory of General Relativity, German physicist and astronomer Karl Schwarzschild found a solution to the Einstein field equations that described the gravitational field of a point and spherical mass. From this solution comes the Schwarzschild radius, which describes the radius within which the mass of a sphere is so compressed that the escape velocity from its surface would equal the speed of light.
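The Schwarzschild radius follows directly from equating escape velocity with the speed of light: r_s = 2GM/c². As a minimal sketch (the Sun’s mass is used here purely as an illustrative value):

```python
# Schwarzschild radius: r_s = 2*G*M / c^2 -- the radius at which the
# escape velocity from a mass M equals the speed of light.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # mass of the Sun, kg (illustrative example)

r_s = 2 * G * M_SUN / C**2
print(f"Schwarzschild radius of the Sun ~ {r_s/1000:.1f} km")  # ~3.0 km
```

In other words, the Sun would only become a black hole if its entire mass were compressed into a sphere roughly 3 km in radius.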

In 1931, Indian-American astrophysicist Subrahmanyan Chandrasekhar calculated, using Special Relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass would collapse in on itself. In 1939, Robert Oppenheimer and others concurred with Chandrasekhar’s analysis, claiming that neutron stars above a prescribed limit would collapse into black holes.

Another consequence of General Relativity was the prediction that the Universe was either in a state of expansion or contraction. In 1929, Edwin Hubble confirmed that the former was the case. At the time, this appeared to disprove Einstein’s theory of a Cosmological Constant, which was a force which “held back gravity” to ensure that the distribution of matter in the Universe remained uniform over time.

Hubble demonstrated, using redshift measurements, that galaxies were moving away from the Milky Way. What’s more, he showed that galaxies farther from Earth appeared to be receding faster – a phenomenon that would come to be known as Hubble’s Law. Hubble also attempted to constrain the value of the expansion factor – which he estimated at 500 km/sec per megaparsec of space (a figure that has since been revised considerably downward).
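Hubble’s Law is simply v = H₀ × d: recession velocity is proportional to distance. A minimal sketch comparing Hubble’s original 1929 estimate with a modern value (~70 km/s/Mpc, an assumed figure not given in the text):

```python
# Hubble's Law: recession velocity v = H0 * d
def recession_velocity(distance_mpc, h0=70.0):
    """Recession velocity in km/s for a distance in megaparsecs,
    given a Hubble constant h0 in km/s/Mpc."""
    return h0 * distance_mpc

# A hypothetical galaxy 100 Mpc away:
print(recession_velocity(100))       # 7000.0 km/s with a modern H0
print(recession_velocity(100, 500))  # 50000.0 km/s with Hubble's 1929 figure
```

The factor-of-seven gap between the two values shows how drastically the expansion factor has been revised since Hubble’s original measurement.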

And then in 1931, Georges Lemaitre, a Belgian physicist and Roman Catholic priest, articulated an idea that would give rise to the Big Bang Theory. After confirming independently that the Universe was in a state of expansion, he suggested that the current expansion of the Universe meant that the farther back in time one went, the smaller the Universe would be.

In other words, at some point in the past, the entire mass of the Universe would have been concentrated in a single point. These discoveries triggered a debate between physicists throughout the 1920s and 30s, with the majority advocating that the Universe was in a steady state (i.e. the Steady State Theory). In this model, new matter is continuously created as the Universe expands, thus preserving the uniformity and density of matter over time.

After World War II, the debate came to a head between proponents of the Steady State Model and proponents of the Big Bang Theory – which was growing in popularity. Eventually, the observational evidence began to favor the Big Bang over the Steady State, which included the discovery and confirmation of the CMB in 1965. Since that time, astronomers and cosmologists have sought to resolve theoretical problems arising from this model.

In the 1960s, for example, Dark Matter (originally proposed in 1932 by Jan Oort) was proposed as an explanation for the apparent “missing mass” of the Universe. In addition, papers submitted by Stephen Hawking and other physicists showed that singularities were an inevitable initial condition of general relativity and a Big Bang model of cosmology.

In 1981, physicist Alan Guth theorized a period of rapid cosmic expansion (aka. the “Inflation” Epoch) that resolved other theoretical problems. The 1990s also saw the rise of Dark Energy as an attempt to resolve outstanding issues in cosmology. In addition to providing an explanation for the Universe’s missing mass (along with Dark Matter), it also explained why the Universe’s expansion is accelerating, and offered a resolution to the problem of Einstein’s Cosmological Constant.

Significant progress has been made in our study of the Universe thanks to advances in telescopes, satellites, and computer simulations. These have allowed astronomers and cosmologists to see farther into the Universe (and hence, farther back in time). This has in turn helped them to gain a better understanding of its true age, and make more precise calculations of its matter-energy density.

The introduction of space telescopes – such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope, the Wilkinson Microwave Anisotropy Probe (WMAP) and the Planck Observatory – has also been of immeasurable value. These have not only allowed for deeper views of the cosmos, but allowed astronomers to test theoretical models against observations.

Illustration of the depth by which Hubble imaged galaxies in prior Deep Field initiatives, in units of the Age of the Universe. Credit: NASA and A. Feild (STScI)

For example, in June of 2016, NASA announced findings that indicate that the Universe is expanding even faster than previously thought. Based on new data provided by the Hubble Space Telescope (which was then compared to data from the WMAP and the Planck Observatory) it appeared that the Hubble Constant was 5% to 9% greater than expected.

Next-generation telescopes like the James Webb Space Telescope (JWST) and ground-based telescopes like the Extremely Large Telescope (ELT) are also expected to allow for additional breakthroughs in our understanding of the Universe in the coming years and decades.

Without a doubt, the Universe is beyond the reckoning of our minds. Our best estimates say that it is unfathomably vast, but for all we know, it could very well extend to infinity. What’s more, its age is almost impossible to contemplate in strictly human terms. In the end, our understanding of it is nothing less than the result of thousands of years of constant and progressive study.

And in spite of that, we’ve only really begun to scratch the surface of the grand enigma that is the Universe. Perhaps some day we will be able to see to the edge of it (assuming it has one) and be able to resolve the most fundamental questions about how all things in the Universe interact. Until that time, all we can do is measure what we don’t know by what we do, and keep exploring!
