Two new Super-Earths Discovered Around a Red Dwarf Star

K2-18b and its neighbour, the newly discovered K2-18c, orbit the red dwarf star K2-18, located 111 light-years away in the constellation Leo. Credit: Alex Boersma

The search for extra-solar planets has turned up some very interesting discoveries. Aside from planets that are more massive versions of their Solar counterparts (aka. Super-Jupiters and Super-Earths), there have been plenty of planets that straddle the line between classifications. And there have been times when follow-up observations led to the discovery of multiple planetary systems.

This was certainly the case when it came to K2-18, a red dwarf star system located about 111 light-years from Earth in the constellation Leo. Using the ESO’s High Accuracy Radial Velocity Planet Searcher (HARPS), an international team of astronomers was recently examining a previously-discovered exoplanet in this system (K2-18b) when they found evidence of a second exoplanet.

The study detailing their findings – “Characterization of the K2-18 multi-planetary system with HARPS” – is scheduled to be published in the journal Astronomy and Astrophysics. The research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Institute for Research on Exoplanets – a consortium of scientists and students from the University of Montreal and McGill University.

Artist’s impression of a Super-Earth planet orbiting a Sun-like star. Credit: ESO/M. Kornmesser

Led by Ryan Cloutier, a PhD student at the University of Toronto’s Center for Planet Science and the University of Montréal’s Institute for Research on Exoplanets (iREx), the team included members from the University of Geneva, the University Grenoble Alpes, and the University of Porto. Together, the team conducted a study of K2-18b in the hopes of characterizing this exoplanet and determining its true nature.

When K2-18b was first discovered in 2015, it was found to be orbiting within the star’s habitable zone (aka. “Goldilocks Zone”). The team responsible for the discovery also determined that, given its distance from its star, K2-18b’s surface receives a similar amount of radiation as Earth’s. However, the initial estimates of the planet’s size left astronomers uncertain as to whether the planet was a Super-Earth or a mini-Neptune.

For this reason, Cloutier and his team sought to characterize the planet’s mass, a necessary step towards determining its atmospheric properties and bulk composition. To this end, they obtained radial velocity measurements of K2-18 using the HARPS spectrograph. These measurements allowed them to place mass constraints on the previously-discovered exoplanet, but also revealed something extra.

As Ryan Cloutier explained in a UTSC press statement:

“Being able to measure the mass and density of K2-18b was tremendous, but to discover a new exoplanet was lucky and equally exciting… If you can get the mass and radius, you can measure the bulk density of the planet and that can tell you what the bulk of the planet is made of.”

Artist’s impression of a super-Earth with a dense atmosphere, which is what scientists now believe K2-18b is. Credit: NASA/JPL

Essentially, their radial velocity measurements revealed that K2-18b has a mass of about 8.0 ± 1.9 Earth masses and a bulk density of 3.3 ± 1.2 g/cm³. This is consistent with a terrestrial (aka. rocky) planet with a significant gaseous envelope and a water mass fraction that is equal to or less than 50%. In other words, it is either a Super-Earth with a small gaseous atmosphere (like Earth’s) or a “water world” with a thick layer of ice on top.
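
As a rough illustration of how a bulk density follows from a measured mass and radius, here is a minimal Python sketch. The mass is the value quoted above; the radius of roughly 2.3 Earth radii is an assumed, illustrative figure (broadly consistent with published estimates for K2-18b) rather than a number from this article.

```python
import math

M_EARTH = 5.972e24   # Earth mass in kg
R_EARTH = 6.371e6    # Earth radius in m

def bulk_density(mass_earths, radius_earths):
    """Bulk density in g/cm^3 for a planet of the given mass and radius (Earth units)."""
    mass_kg = mass_earths * M_EARTH
    radius_m = radius_earths * R_EARTH
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (mass_kg / volume_m3) / 1000.0   # kg/m^3 -> g/cm^3

# Mass of ~8.0 Earth masses is from the article; the ~2.3 Earth-radii radius is assumed here.
print(f"K2-18b bulk density: {bulk_density(8.0, 2.3):.1f} g/cm^3")   # ~3.6, close to the quoted 3.3 +/- 1.2
```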

They also found evidence for a second “warm” Super-Earth named K2-18c, which has a mass of 7.5 ± 1.3 Earth masses, an orbital period of 9 days, and a semi-major axis roughly 2.4 times smaller than that of K2-18b. After re-examining the original light curves obtained from K2-18, they concluded that K2-18c had not been detected earlier because its orbit does not lie in the same plane, so it does not transit the star from our vantage point. As Cloutier described the discovery:

“When we first threw the data on the table we were trying to figure out what it was. You have to ensure the signal isn’t just noise, and you need to do careful analysis to verify it, but seeing that initial signal was a good indication there was another planet… It wasn’t a eureka moment because we still had to go through a checklist of things to do in order to verify the data. Once all the boxes were checked it sunk in that, wow, this actually is a planet.”
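
As a quick plausibility check on the orbital figures reported above, Kepler’s third law says that, for planets orbiting the same star, the semi-major axis scales as the orbital period raised to the two-thirds power. The 9-day period of K2-18c is quoted above; K2-18b’s period of roughly 33 days is an assumed, illustrative value that is not stated in this article.

```python
# Kepler's third law for planets around the same star: a is proportional to P**(2/3).
P_b_days = 33.0   # assumed, illustrative period for K2-18b (not stated in the article)
P_c_days = 9.0    # period of K2-18c quoted in the article

ratio = (P_b_days / P_c_days) ** (2.0 / 3.0)
print(f"a_b / a_c ~ {ratio:.1f}")   # ~2.4, matching the "roughly 2.4 times smaller" figure
```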

Unfortunately, the newly-discovered K2-18c orbits too close to its star to be within its habitable zone. However, K2-18b itself remains potentially habitable, though that depends on its bulk composition. In the end, this system will benefit from additional surveys that will more than likely involve NASA’s James Webb Space Telescope (JWST) – which is scheduled for launch in 2019.

Artist’s impression of Super-Earth orbiting closely to its red dwarf star. Credit: M. Weiss/CfA

These surveys are expected to resolve the remaining mystery about this planet: whether it is Earth-like or a “water world”. “With the current data, we can’t distinguish between those two possibilities,” said Cloutier. “But with the James Webb Space Telescope (JWST) we can probe the atmosphere and see whether it has an extensive atmosphere or it’s a planet covered in water.”

As René Doyon – the principal investigator for the Near-Infrared Imager and Slitless Spectrograph (NIRISS), the Canadian Space Agency instrument on board JWST, and a co-author on the paper – explained:

“There’s a lot of demand to use this telescope, so you have to be meticulous in choosing which exoplanets to look at. K2-18b is now one of the best targets for atmospheric study, it’s going to the near top of the list.”

The discovery of this second Super-Earth in the K2-18 system is yet another indication of how prevalent multi-planet systems are around M-type (red dwarf) stars. The proximity of this system, which has at least one planet with a thick atmosphere, also makes it well-suited to studies that will teach astronomers more about the nature of exoplanet atmospheres.

Expect to hear more about this star and its planetary system in the coming years!

Further Reading: University of Toronto Scarborough, Astronomy and Astrophysics

What is the Transit Method?

In a series of papers, Professor Loeb and Michael Hippke indicate that conventional rockets would have a hard time escaping from certain kinds of extra-solar planets. Credit: NASA/Tim Pyle

Welcome all to the first in our series on Exoplanet-hunting methods. Today we begin with the most popular and widely-used, known as the Transit Method (aka. Transit Photometry).

For centuries, astronomers have speculated about the existence of planets beyond our Solar System. After all, with between 100 and 400 billion stars in the Milky Way Galaxy alone, it seemed unlikely that ours was the only one to have a system of planets. But it has only been within the past few decades that astronomers have confirmed the existence of extra-solar planets (aka. exoplanets).

Astronomers use various methods to confirm the existence of exoplanets, most of which are indirect in nature. Of these, the most widely-used and effective to date has been Transit Photometry, a method that measures the light curve of distant stars for periodic dips in brightness. These are the result of exoplanets passing in front of the star (i.e. transiting) relative to the observer.

Description:

These changes in brightness are very small and last for fixed periods of time, typically in the vicinity of 1/10,000th of the star’s overall brightness and only for a matter of hours. These changes are also periodic, producing the same dip in brightness each time and for the same amount of time. Based on the extent to which stars dim, astronomers are also able to obtain vital information about exoplanets.

For all of these reasons, Transit Photometry is considered a very robust and reliable method of exoplanet detection. Of the 3,526 extra-solar planets that have been confirmed to date, the transit method has accounted for 2,771 discoveries – which is more than all the other methods combined.

Advantages:

One of the greatest advantages of Transit Photometry is the way it can provide accurate constraints on the size of detected planets. Obviously, this is based on the extent to which a star’s light curve changes as a result of a transit.  Whereas a small planet will cause a subtle change in brightness, a larger planet will cause a more noticeable change.
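
In rough terms, the fractional dip in brightness is the square of the planet-to-star radius ratio. A minimal sketch, using a Sun-like star with Earth- and Jupiter-sized planets as illustrative cases:

```python
R_SUN_KM = 696_000.0      # Solar radius in km
R_EARTH_KM = 6_371.0      # Earth radius in km
R_JUPITER_KM = 69_911.0   # Jupiter radius in km

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    """Fractional dip in brightness, approximated as the ratio of projected disk areas."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth-size planet:   {transit_depth(R_EARTH_KM):.6f}")    # ~0.00008, about 1/12,000
print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.4f}")  # ~0.01, about 1%
```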

When combined with the Radial Velocity method (which can determine the planet’s mass) one can determine the density of the planet. From this, astronomers are able to assess a planet’s physical structure and composition – i.e. determining if it is a gas giant or rocky planet. The planets that have been studied using both of these methods are by far the best-characterized of all known exoplanets.

In addition to revealing the diameter of planets, Transit Photometry can allow for a planet’s atmosphere to be investigated through spectroscopy. As light from the star passes through the planet’s atmosphere, the resulting spectra can be analyzed to determine what elements are present, thus providing clues as to the chemical composition of the atmosphere.

Artist’s impression of an extra-solar planet transiting its star. Credit: QUB Astrophysics Research Center

Last, but not least, the transit method can also reveal things about a planet’s temperature and radiation based on secondary eclipses (when the planet passes behind its star). On these occasions, astronomers measure the star’s photometric intensity during the eclipse and subtract it from measurements taken just before it. This allows for measurements of the planet’s temperature and can even indicate the presence of cloud formations in the planet’s atmosphere.

Disadvantages:

Transit Photometry also suffers from a few major drawbacks. For one, planetary transits are observable only when the planet’s orbit happens to be perfectly aligned with the astronomers’ line of sight. The probability of a planet’s orbit coinciding with an observer’s vantage point is equivalent to the ratio of the diameter of the star to the diameter of the orbit.

Only about 10% of planets with short orbital periods experience such an alignment, and this decreases for planets with longer orbital periods. As a result, this method cannot guarantee that a particular star being observed does indeed host any planets. For this reason, the transit method is most effective when surveying thousands or hundreds of thousands of stars at a time.
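
A back-of-the-envelope version of that geometric argument: for a circular orbit, the chance of seeing transits is roughly the star’s radius divided by the orbital radius. The orbital distances below are illustrative assumptions, not survey figures.

```python
AU_KM = 1.496e8         # one astronomical unit in km
R_SUN_KM = 696_000.0    # Solar radius in km

def transit_probability(orbit_radius_au, star_radius_km=R_SUN_KM):
    """Approximate chance that a randomly oriented circular orbit shows transits (~R_star / a)."""
    return star_radius_km / (orbit_radius_au * AU_KM)

print(f"Hot Jupiter at 0.05 AU: {transit_probability(0.05):.0%}")   # ~9%
print(f"Earth-like at 1 AU:     {transit_probability(1.0):.2%}")    # ~0.5%
```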

It also suffers from a substantial rate of false positives; in some cases, as high as 40% in single-planet systems (based on a 2012 study of the Kepler mission). This necessitates that follow-up observations be conducted, often relying on another method. However, the rate of false positives drops off for stars where multiple candidates have been detected.

Number of extrasolar planet discoveries per year through September 2014, with colors indicating method of detection – radial velocity (blue), transit (green), timing (yellow), direct imaging (red), microlensing (orange). Credit: Public domain

While transits can reveal much about a planet’s diameter, they cannot place accurate constraints on a planet’s mass. For this, the Radial Velocity method (as noted earlier) is the most reliable, where astronomers look for signs of “wobble” in a star’s motion to measure the gravitational forces acting on it (which are caused by orbiting planets).

In short, the transit method has some limitations and is most effective when paired with other methods. Nevertheless, it remains the most widely-used means of “primary detection” – detecting candidates which are later confirmed using a different method – and is responsible for more exoplanet discoveries than all other methods combined.

Examples of Transit Photometry Surveys:

Transit Photometry is performed by multiple Earth-based and space-based observatories around the world. The majority, however, are Earth-based, and rely on existing telescopes combined with state-of-the-art photometers. Examples include the Super Wide Angle Search for Planets (SuperWASP) survey, an international exoplanet-hunting survey that relies on the Roque de los Muchachos Observatory and the South African Astronomical Observatory.

There’s also the Hungarian Automated Telescope Network (HATNet), which consists of six small, fully-automated telescopes and is maintained by the Harvard-Smithsonian Center for Astrophysics. The MEarth Project is another, a National Science Foundation-funded robotic observatory that combines the Fred Lawrence Whipple Observatory (FLWO) in Arizona with the Cerro Tololo Inter-American Observatory (CTIO) in Chile.

The SuperWasp Cameras at the South African Astronomical Observatory. Credit: SuperWASP project & David Anderson

Then there’s the Kilodegree Extremely Little Telescope (KELT), an astronomical survey jointly administered by Ohio State University, Vanderbilt University, Lehigh University, and the South African Astronomical Observatory (SAAO). This survey consists of two telescopes, one at the Winer Observatory in southeastern Arizona and the other at the Sutherland Astronomical Observation Station in South Africa.

In terms of space-based observatories, the most notable example is NASA’s Kepler Space Telescope. During its initial mission, which ran from 2009 to 2013, Kepler detected 4,496 planetary candidates and confirmed the existence of 2,337 exoplanets. In November of 2013, after the failure of two of its reaction wheels, the telescope began its K2 mission, during which time an additional 515 planets have been detected and 178 have been confirmed.

The Hubble Space Telescope also conducted transit surveys during its many years in orbit. For instance, the Sagittarius Window Eclipsing Extrasolar Planet Search (SWEEPS) – which took place in 2006 – consisted of Hubble observing 180,000 stars in the central bulge of the Milky Way Galaxy. This survey revealed the existence of 16 additional exoplanets.

Other examples include the CNES/ESA COnvection ROtation et Transits planétaires (COROT) mission – in English, “Convection rotation and planetary transits” – which operated from 2006 to 2012. Then there’s the ESA’s Gaia mission, which launched in 2013 with the purpose of creating the largest 3D catalog ever made, consisting of over 1 billion astronomical objects.

NASA’s Kepler space telescope was the first agency mission capable of detecting Earth-size planets. Credit: NASA/Wendy Stenzel

In March of 2018, NASA’s Transiting Exoplanet Survey Satellite (TESS) is scheduled to be launched into orbit. Using the transit method, TESS will detect exoplanets and also select targets for further study by the James Webb Space Telescope (JWST), which will be deployed in 2019. Between these two missions, the confirmation and characterization of many thousands of exoplanets is anticipated.

Thanks to improvements in technology and methodology, exoplanet discovery has grown by leaps and bounds in recent years. With thousands of exoplanets confirmed, the focus has gradually shifted towards characterizing these planets to learn more about their atmospheres and surface conditions.

In the coming decades, thanks in part to the deployment of new missions, some very profound discoveries are expected to be made!

We have many interesting articles about exoplanet-hunting here at Universe Today. Here’s What are Extra Solar Planets?, What are Planetary Transits?, What is the Radial Velocity Method?, What is the Direct Imaging Method?, What is the Gravitational Microlensing Method?, and Kepler’s Universe: More Planets in our Galaxy than Stars.

Astronomy Cast also has some interesting episodes on the subject. Here’s Episode 364: The COROT Mission.

For more information, be sure to check out NASA’s page on Exoplanet Exploration, the Planetary Society’s page on Extrasolar Planets, and the NASA/Caltech Exoplanet Archive.

A New Survey Takes the Hubble Deep Field to the Next Level, Analyzing Distance and Properties of 1,600 Galaxies

Images from the Hubble Ultra Deep Field (HUDF). Credit: NASA/ESA/S. Beckwith (STScI)/HUDF Team

Since its deployment in 1990, the Hubble Space Telescope has given us some of the richest and most detailed images of our Universe. Many of these images were taken while observing a patch of sky located in the Fornax constellation between September 2003 and January 2004. This region, known as the Hubble Ultra Deep Field (HUDF), contains an estimated 10,000 galaxies, all of which existed roughly 13 billion years ago.

Looking to this region of space, multiple teams of astronomers used the MUSE instrument on the ESO’s Very Large Telescope (VLT) to discover 72 previously unseen galaxies. In a series of ten recently released studies, these teams indicate how they measured the distance and properties of 1600 very faint galaxies in the Ultra Deep Field, revealing new information about star formation and the motions of galaxies in the early Universe.

The original HUDF images, which were published in 2004, were a major milestone for astronomy and cosmology. The thousands of galaxies they captured were dated to less than a billion years after the Big Bang, with some seen as they were just 400 to 800 million years after it. This area was subsequently observed many times using Hubble and other telescopes, which has resulted in the deepest views of the Universe to date.

One such telescope is the European Southern Observatory’s (ESO) Very Large Telescope, located at the Paranal Observatory in Chile. Central to the studies of the HUDF was the Multi Unit Spectroscopic Explorer (MUSE), a panoramic integral-field spectrograph operating in the visible wavelength range. It was the data accumulated by this instrument that allowed for 72 new galaxies to be discovered in this tiny area of sky.

The MUSE HUDF Survey team, which was led by Roland Bacon of the Centre de recherche astrophysique de Lyon (CRAL) and the National Center for Scientific Research (CNRS), included members from multiple European observatories, research institutes and universities. Together, they produced ten studies detailing the precise spectroscopic measurements they conducted of 1600 HUDF galaxies.

This was an unprecedented accomplishment, given that it is ten times as many galaxies as have had similar measurements performed on them in the last decade using ground-based telescopes. As Bacon indicated in an ESO press release:

“MUSE can do something that Hubble can’t — it splits up the light from every point in the image into its component colors to create a spectrum. This allows us to measure the distance, colors and other properties of all the galaxies we can see — including some that are invisible to Hubble itself.”

The galaxies detected in this survey were also 100 times fainter than any galaxies studied in previous surveys. Given their age and their very dim and distant nature, the study of these 1,600 galaxies is sure to add to an already richly-observed field. This, in turn, can only deepen our understanding of how galaxies formed and evolved during the past 13 billion years.

The 72 newly-discovered galaxies that the survey observed are known as Lyman-alpha emitters, a class of galaxy that is extremely distant and only detectable in Lyman-alpha light. This form of radiation is emitted by excited hydrogen atoms, and is thought to be the result of ongoing star formation. Our current understanding of star formation cannot fully explain these galaxies, and they were not visible in the original Hubble images.

Thanks to MUSE’s ability to disperse light into its component colors, these galaxies became more apparent. As Jarle Brinchmann – an astronomer at the University of Leiden and at the University of Porto’s Institute of Astrophysics and Space Sciences (CAUP), and the lead author of one of the papers – described the results of the survey:

“MUSE has the unique ability to extract information about some of the earliest galaxies in the Universe — even in a part of the sky that is already very well studied. We learn things about these galaxies that is only possible with spectroscopy, such as chemical content and internal motions — not galaxy by galaxy but all at once for all the galaxies!”

Another major finding of this survey was the systematic detection of luminous hydrogen halos around galaxies in the early Universe. This finding is expected to give astronomers a new and promising way to study how material flowed in and out of early galaxies, which was central to early star formation and galactic evolution. The series of studies produced by Bacon and his colleagues also indicate a range of other possibilities.

These include studying the role faint galaxies played during cosmic reionization, the period that took place between about 150 million and one billion years after the Big Bang. It was during this period, which followed the “dark ages” (roughly 380,000 to 150 million years after the Big Bang), that the first stars and quasars formed and sent ionizing radiation throughout the early Universe. And as Roland Bacon explained, the best may yet be to come:

“Remarkably, these data were all taken without the use of MUSE’s recent Adaptive Optics Facility upgrade. The activation of the AOF after a decade of intensive work by ESO’s astronomers and engineers promises yet more revolutionary data in the future.”

Even before Einstein proposed his groundbreaking Theory of General Relativity – which established that space and time are inextricably linked – scientists understood that probing deeper into the cosmos is also to probe farther back in time. The farther we are able to see, the more we are able to learn about how the Universe evolved over the course of billions of years.

Further Reading: ESO

There Could be Hundreds More Icy Worlds with Life Than on Rocky Planets Out There in the Galaxy

The moons Europa and Enceladus, as imaged by the Galileo and Cassini spacecraft. Credit: NASA/ESA/JPL-Caltech/SETI Institute

In the hunt for extra-terrestrial life, scientists tend to take what is known as the “low-hanging fruit” approach. This consists of looking for conditions similar to what we experience here on Earth, which include oxygen, organic molecules, and plenty of liquid water. Interestingly enough, some of the places where these ingredients are present in abundance include the interiors of icy moons like Europa, Ganymede, Enceladus and Titan.

Whereas there is only one terrestrial planet in our Solar System that is capable of supporting life (Earth), there are multiple “Ocean Worlds” like these moons. Taking this a step further, a team of researchers from the Harvard-Smithsonian Center for Astrophysics (CfA) conducted a study showing that potentially-habitable icy worlds with interior oceans are likely far more common in the Universe than habitable terrestrial planets.

The study, titled “Subsurface Exolife“, was performed by Manasvi Lingam and Abraham Loeb of the Harvard-Smithsonian Center for Astrophysics (CfA) and the Institute for Theory and Computation (ITC) at Harvard University. For the sake of their study, the authors consider what defines a circumstellar habitable zone (aka. “Goldilocks Zone”) and the likelihood of there being life inside moons with interior oceans.

Cutaway showing the interior of Saturn’s moon Enceladus. Credit: ESA

To begin, Lingam and Loeb address the tendency to confuse habitable zones (HZs) with habitability, or to treat the two concepts as interchangeable. For instance, planets that are located within an HZ are not necessarily capable of supporting life – in this respect, Mars and Venus are perfect examples. Whereas Mars is too cold and its atmosphere too thin to support life, Venus suffered a runaway greenhouse effect that caused it to become a hot, hellish place.

On the other hand, bodies that are located beyond HZs have been found to be capable of having liquid water and the necessary ingredients to give rise to life. In this case, moons like Europa, Ganymede, Enceladus, Dione, Titan, and several others serve as perfect examples. Thanks to the prevalence of water and geothermal heating caused by tidal forces, these moons all have interior oceans that could very well support life.

As Lingam, a post-doctoral researcher at the ITC and CfA and the lead author on the study, told Universe Today via email:

“The conventional notion of planetary habitability is the habitable zone (HZ), namely the concept that the “planet” must be situated at the right distance from the star such that it may be capable of having liquid water on its surface. However, this definition assumes that life is: (a) surface-based, (b) on a planet orbiting a star, and (c) based on liquid water (as the solvent) and carbon compounds. In contrast, our work relaxes assumptions (a) and (b), although we still retain (c).”

As such, Lingam and Loeb widen their consideration of habitability to include worlds that could have subsurface biospheres. Such environments go beyond icy moons such as Europa and Enceladus, and could include many other types of deep subterranean environments. On top of that, it has also been speculated that life could exist in Titan’s methane lakes (i.e. methanogenic organisms). However, Lingam and Loeb chose to focus on icy moons instead.

A “true color” image of the surface of Jupiter’s moon Europa as seen by the Galileo spacecraft. Image credit: NASA/JPL-Caltech/SETI Institute

“Even though we consider life in subsurface oceans under ice/rock envelopes, life could also exist in hydrated rocks (i.e. with water) beneath the surface; the latter is sometimes referred to as subterranean life,” said Lingam. “We did not delve into the second possibility since many of the conclusions (but not all of them) for subsurface oceans are also applicable to these worlds. Similarly, as noted above, we do not consider lifeforms based on exotic chemistries and solvents, since it is not easy to predict their properties.”

Ultimately, Lingam and Loeb chose to focus on worlds that would orbit stars and likely contain subsurface life humanity would be capable of recognizing. They then went about assessing the likelihood that such bodies are habitable, what advantages and challenges life will have to deal with in these environments, and the likelihood of such worlds existing beyond our Solar System (compared to potentially-habitable terrestrial planets).

For starters, “Ocean Worlds” have several advantages when it comes to supporting life. Within the Jovian system (Jupiter and its moons), radiation is a major problem, the result of charged particles becoming trapped in the gas giant’s powerful magnetic field. Between that and the moons’ tenuous atmospheres, life would have a very hard time surviving on the surface, but life dwelling beneath the ice would fare far better.

“One major advantage that icy worlds have is that the subsurface oceans are mostly sealed off from the surface,” said Lingam. “Hence, UV radiation and cosmic rays (energetic particles), which are typically detrimental to surface-based life in high doses, are unlikely to affect putative life in these subsurface oceans.”

Artist rendering showing an interior cross-section of the crust of Enceladus, which shows how hydrothermal activity may be causing the plumes of water at the moon’s surface. Credits: NASA-GSFC/SVS, NASA/JPL-Caltech/Southwest Research Institute

“On the negative side,” he continued, “the absence of sunlight as a plentiful energy source could lead to a biosphere that has far less organisms (per unit volume) than Earth. In addition, most organisms in these biospheres are likely to be microbial, and the probability of complex life evolving may be low compared to Earth. Another issue is the potential availability of nutrients (e.g. phosphorus) necessary for life; we suggest that these nutrients might be available only in lower concentrations than Earth on these worlds.”

In the end, Lingam and Loeb determined that worlds with ice shells of moderate thickness may exist in a wide range of habitats throughout the cosmos. Based on how statistically likely such worlds are, they concluded that “Ocean Worlds” like Europa, Enceladus, and others like them are about 1000 times more common than rocky planets that exist within the HZs of stars.

These findings have drastic implications for the search for extra-terrestrial and extra-solar life, as well as for how life may be distributed throughout the Universe. As Lingam summarized:

“We conclude that life on these worlds will undoubtedly face noteworthy challenges. However, on the other hand, there is no definitive factor that prevents life (especially microbial life) from evolving on these planets and moons. In terms of panspermia, we considered the possibility that a free-floating planet containing subsurface exolife could be temporarily “captured” by a star, and that it may perhaps seed other planets (orbiting that star) with life. As there are many variables involved, not all of them can be quantified accurately.”

A new instrument called the Search for Extra-Terrestrial Genomes (SETG) is being developed to find evidence of life on other worlds. Credit: NASA/Jenny Mottor

Professor Loeb – the Frank B. Baird Jr. Professor of Science at Harvard University, the director of the ITC, and the study’s co-author – added that finding examples of this life presents its own share of challenges. As he told Universe Today via email:

“It is very difficult to detect sub-surface life remotely (from a large distance) using telescopes. One could search for excess heat but that can result from natural sources, such as volcanos. The most reliable way to find sub-surface life is to land on such a planet or moon and drill through the surface ice sheet. This is the approach contemplated for a future NASA mission to Europa in the solar system.”

Exploring the implications for panspermia further, Lingam and Loeb also considered what might happen if a planet like Earth were ever ejected from the Solar System. As they note in their study, previous research has indicated how planets with thick atmospheres or subsurface oceans could still support life while floating in interstellar space. As Loeb explained, they considered what would happen if Earth ever suffered this fate:

“An interesting question is what would happen to the Earth if it was ejected from the solar system into cold space without being warmed by the Sun. We have found that the oceans would freeze down to a depth of 4.4 kilometers but pockets of liquid water would survive in the deepest regions of the Earth’s ocean, such as the Mariana Trench, and life could survive in these remaining sub-surface lakes. This implies that sub-surface life could be transferred between planetary systems.”

The Drake Equation, a mathematical formula for the probability of finding life or advanced civilizations in the universe. Credit: University of Rochester

This study also serves as a reminder that as humanity explores more of the Solar System (largely for the sake of finding extra-terrestrial life), what we find also has implications in the hunt for life in the rest of the Universe. This is one of the benefits of the “low-hanging fruit” approach: what we don’t know is informed by what we do, and what we find helps inform our expectations of what else we might find.

And of course, it’s a very vast Universe out there. What we may find is likely to go far beyond what we are currently capable of recognizing!

Further Reading: arXiv

Juno Isn’t Exactly Where it’s Supposed To Be. The Flyby Anomaly is Back, But Why Does it Happen?

Jupiter’s south pole, captured by the JunoCam on Feb. 2, 2017, from an altitude of about 62,800 miles (101,000 kilometers) above the cloud tops. Credits: NASA/JPL-Caltech/SwRI/MSSS/John Landino

In the early 1960s, scientists developed the gravity-assist method, where a spacecraft would conduct a flyby of a major body in order to increase its speed. Many notable missions have used this technique, including the Pioneer, Voyager, Galileo, Cassini, and New Horizons missions. In the course of many of these flybys, scientists have noted an anomaly where the increase in the spacecraft’s speed did not accord with orbital models.

This has come to be known as the “flyby anomaly”, which has endured despite decades of study and resisted all previous attempts at explanation. To address this, a team of researchers from the University Institute of Multidisciplinary Mathematics at the Universitat Politecnica de Valencia have developed a new orbital model based on the maneuvers conducted by the Juno probe.

The study, which recently appeared online under the title “A Possible Flyby Anomaly for Juno at Jupiter“, was conducted by Luis Acedo, Pedro Piqueras and Jose A. Morano. Together, they examined the possible causes of the so-called “flyby anomaly” using the perijove orbit of the Juno probe. Based on Juno’s many pole-to-pole orbits, they not only determined that it too experienced an anomaly, but offered a possible explanation for this.

Artist’s impression of the Pioneer 10 probe, launched in 1972 and now making its way out towards the star Aldebaran. Credit: NASA

To break it down, the speed of a spacecraft is determined by measuring the Doppler shift of radio signals from the spacecraft to the antennas on the Deep Space Network (DSN). During the 1970s, when the Pioneer 10 and 11 probes were launched, visiting Jupiter and Saturn before heading off towards the edge of the Solar System, these probes both experienced something strange as they passed between 20 and 70 AU (Uranus to the Kuiper Belt) from the Sun.

Basically, the probes were both 386,000 km (240,000 mi) farther from where existing models predicted they would be. This came to be known as the “Pioneer anomaly“, which became common lore within the space physics community. While the Pioneer anomaly was resolved, the same phenomenon has occurred many times since then with subsequent missions. As Dr. Acedo told Universe Today via email:

“The “flyby anomaly” is a problem in astrodynamics discovered by a JPL’s team of researchers lead by John Anderson in the early 90s. When they tried to fit the whole trajectory of the Galileo spacecraft as it approached the Earth on December, 8th, 1990, they found that this only can be done by considering that the ingoing and outgoing pieces of the trajectory correspond to asymptotic velocities that differ in 3.92 mm/s from what is expected in theory.

“The effect appears both in the Doppler data and in the ranging data, so it is not a consequence of the measurement technique. Later on, it has also been found in several flybys performed by Galileo again in 1992, the NEAR [Near Earth Asteroid Rendezvous mission] in 1998, Cassini in 1999 or Rosetta and Messenger in 2005. The largest discrepancy was found for the NEAR (around 13 mm/s) and this is attributed to the very close distance of 532 Km to the surface of the Earth at the perigee.”
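
To give a sense of scale, the quoted velocity anomalies can be converted into the fractional frequency shifts a two-way Doppler link would register (to first order, twice the line-of-sight velocity divided by the speed of light). A minimal sketch using the figures from the quote above:

```python
C = 299_792_458.0   # speed of light in m/s

def two_way_doppler_shift(radial_velocity_m_s):
    """Fractional frequency shift (delta_f / f) of a two-way radio link, to first order in v/c."""
    return 2.0 * radial_velocity_m_s / C

# Velocity anomalies quoted above, converted from mm/s to m/s
for name, dv_mm_s in [("Galileo (1990)", 3.92), ("NEAR (1998)", 13.0)]:
    print(f"{name}: delta_f/f ~ {two_way_doppler_shift(dv_mm_s / 1000.0):.1e}")
# Galileo: ~2.6e-11, NEAR: ~8.7e-11 -- shifts of less than one part in ten billion
```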

NASA’s Juno spacecraft launched on August 5, 2011, and arrived at Jupiter on July 4, 2016. Credit: NASA / JPL

Another mystery is that while in some cases the anomaly was clear, in others it was on the threshold of detectability or simply absent – as was the case with Juno‘s flyby of Earth in October of 2013. The absence of any convincing explanation has led to a number of proposals, ranging from the influence of dark matter and tidal effects to extensions of General Relativity and the existence of new physics.

However, none of these have produced a substantive explanation that could account for flyby anomalies. To address this, Acedo and his colleagues sought to create a model that was optimized for the Juno mission while at perijove – i.e. the point in the probe’s orbit where it is closest to Jupiter’s center. As Acedo explained:

“After the arrival of Juno at Jupiter on July, 4th, 2016, we had the idea of developing our independent orbital model to compare with the fitted trajectories that were being calculated by the JPL team at NASA. After all, Juno is performing very close flybys of Jupiter because the altitude over the top clouds (around 4000 km) is a small fraction of the planet’s radius. So, we expected to find the anomaly here. This would be an interesting addition to our knowledge of this effect because it would prove that it is not only a particular problem with Earth flybys but that it is universal.”

Their model took into account the tidal forces exerted by the Sun and by Jupiter’s larger satellites – Io, Europa, Ganymede and Callisto – as well as the contributions of the known zonal harmonics. They also accounted for Jupiter’s multipolar fields, which are the result of the planet’s oblate shape, since these play a far more important role than tidal forces as Juno reaches perijove.

Illustration of NASA’s Juno spacecraft firing its main engine to slow down and go into orbit around Jupiter. Lockheed Martin built the Juno spacecraft for NASA’s Jet Propulsion Laboratory. Credit: NASA/Lockheed Martin

In the end, they determined that an anomaly could also be present during the Juno flybys of Jupiter. They also noted a significant radial component in this anomaly, one which decayed the farther the probe got from the center of Jupiter. As Acedo explained:

“Our conclusion is that an anomalous acceleration is also acting upon the Juno spacecraft in the vicinity of the perijove (in this case, the asymptotic velocity is not a useful concept because the trajectory is closed). This acceleration is almost one hundred times larger than the typical anomalous accelerations responsible for the anomaly in the case of the Earth flybys. This was already expected in connection with Anderson et al.’s initial intuition that the effect increases with the angular rotational velocity of the planet (a period of 9.8 hours for Jupiter vs the 24 hours of the Earth), the radius of the planet and probably its mass.”

They also determined that this anomaly appears to depend on the ratio between the spacecraft’s radial velocity and the speed of light, and that it decreases very rapidly with the craft’s altitude over Jupiter’s clouds. These effects are not predicted by General Relativity, so there is a chance that flyby anomalies are the result of novel gravitational phenomena – or perhaps a more conventional effect that has been overlooked.

In the end, the model that resulted from their calculations accorded closely with telemetry data provided by the Juno mission, though questions remain. “Further research is necessary because the pattern of the anomaly seems very complex and a single orbit (or a sequence of similar orbits as in the case of Juno) cannot map the whole field,” said Acedo. “A dedicated mission is required but financial cuts and limited interest in experimental gravity may prevent us to see this mission in the near future.”

It is a testament to the complexities of physics that even after sixty years of space exploration – and one hundred years since General Relativity was first proposed – we are still refining our models. Perhaps someday we will find there are no mysteries left to solve, and the Universe will make perfect sense to us. What a terrible day that will be!

Further Reading: Earth and Planetary Astrophysics

Oops, low energy LEDs are increasing light pollution

The city of Denver, Colorado, as seen from space. Credit: NASA

When it comes to technology and the environment, it often seems like it’s “one step forward, two steps back.” Basically, the new and innovative technologies that are intended to correct one set of problems sometimes lead to new ones. This appears to be the case with the transition to solid-state lighting technology, aka. the “lighting revolution”.

Basically, as nations transition from traditional lights to energy-saving Light-Emitting Diodes (LEDs), there is the potential for a rebound effect. According to an international study led by Christopher Kyba from the GFZ German Research Centre for Geosciences, the widespread use of LED lights could mean more usage and more light pollution, thus counteracting their economic and environmental benefits.

The study, titled “Artificially Lit Surface of Earth at Night Increasing in Radiance and Extent“, recently appeared in the journal Science Advances. Led by Christopher C. M. Kyba, the team also included members from the Leibniz Institute of Freshwater Ecology and Inland Fisheries, the Instituto de Astrofísica de Andalucía (CSIC), the Complutense University of Madrid, the University of Colorado, the University of Exeter, and the National Oceanic and Atmospheric Administration (NOAA).

Photograph of Calgary, Alberta, Canada, taken from the International Space Station on Nov. 27th, 2015. Credit: NASA’s Earth Observatory/Kyba, GFZ

To put it simply, the cost-saving effects of LED lights make them attractive from a consumer standpoint. From an environmental standpoint, they are also attractive because they reduce our carbon footprint. Unfortunately, as more people are using them for residential, commercial and industrial purposes, overall energy consumption appears to be going up instead of down, leading to an increased environmental impact.

For the sake of their study, the team relied on satellite radiometer data calibrated for nightlights collected by the Visible/Infrared Imager Radiometer Suite (VIIRS), an instrument on the NOAA’s Suomi-NPP satellite that has been monitoring Earth since October of 2011. After examining data obtained between 2012 and 2016, the team noted a discernible increase in power consumption associated with LED use. As they explain in their study:

“[F]rom 2012 to 2016, Earth’s artificially lit outdoor area grew by 2.2% per year, with a total radiance growth of 1.8% per year. Continuously lit areas brightened at a rate of 2.2% per year. Large differences in national growth rates were observed, with lighting remaining stable or decreasing in only a few countries.”

These data are not consistent with energy reductions on a global scale, but rather with an increase in light pollution. The increase corresponded to increases in the Gross Domestic Product (GDP) of the fastest-growing developing nations. Moreover, it was also found to be happening in developed nations. In all cases, increased power consumption and light pollution have real consequences for plants, animals, and human well-being.
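
Compounded over the four-year window of the study, those annual rates add up quickly; here is a minimal sketch (assuming constant year-over-year growth, which is a simplification):

```python
def total_growth(annual_rate, years):
    """Total fractional growth after compounding a constant annual rate for the given number of years."""
    return (1.0 + annual_rate) ** years - 1.0

# Growth rates quoted from the study, compounded over the four years from 2012 to 2016
print(f"Lit area: {total_growth(0.022, 4):.1%}")   # ~9.1%
print(f"Radiance: {total_growth(0.018, 4):.1%}")   # ~7.4%
```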

As Kevin Gaston – a professor from the Environment and Sustainability Institute at the University of Exeter and a co-author on the study – explained in a University of Exeter press release:

“The great hope was that LED lighting would lead to lower energy usage, but what we’re seeing is those savings being used for increased lighting. We’re not just seeing this in developing countries, but also in developed countries. For example, Britain is getting brighter. You now struggle to find anywhere in Europe with a natural night sky – without that sky glow we’re all familiar with.”

The team also compared the VIIRS data to photographs taken from the International Space Station (ISS), which showed that the Suomi-NPP satellite sometimes records a dimming of some cities. This is due to the fact that the sensor can’t pick up light at wavelengths below 500 nanometers (nm) – i.e. blue light. When cities replace orange lamps with white LEDs, they emit more radiation below 500 nm.

The effect of this is that cities that are at the same brightness or have experienced an increase in brightness may actually appear dimmer. In other words, even in cases where satellites are detecting less radiation coming from the surface, Earth’s night-time brightness is actually increasing. But before anyone gets to thinking that it’s all bad news, there is a ray of light (no pun!) to be found in this research.

In previous studies, Kyba has shown that light emissions per capita in the US are 3 to 5 times higher than in Germany. As he indicated, this could be seen as a sign that prosperity and conservative light use can coexist:

“Other studies and the experience of cities like Tucson, Arizona, show that well designed LED lamps allow a two-third or more decrease of light emission without any noticeable effect for human perception. There is a potential for the solid state lighting revolution to save energy and reduce light pollution, but only if we don’t spend the savings on new light”.

Reducing humanity’s impact on Earth’s natural environment is challenging work, and in the end, many of the technologies we depend upon to reduce our footprint can have the opposite effect. However, if there’s one thing that can prevent this from continually happening, it’s research that helps us identify our bad habits (and fix them!)

Further Reading: Eureka Alert!, University of Exeter, Science Advances

The Genesis Project: Using Robotic Gene Factories to Seed the Galaxy with Life

Project Genesis aims to seed "transiently habitable worlds" with life in order to create more life in the Universe. Credit: NASA/Serge Brunier

In the past decade, the rate at which extra-solar planets have been discovered and characterized has increased prodigiously. Because of this, the question of when we might explore these distant planets directly has repeatedly come up. In addition, the age-old question of what we might find once we get there – i.e. is humanity alone in the Universe or not? – has also come up with renewed vigor.

These questions have led to a number of interesting and ambitious proposals. These include Project Blue, a space telescope which would directly observe any planets orbiting Alpha Centauri, and Breakthrough Starshot – which aims to send a laser-driven nanocraft to Alpha Centauri in just 20 years. But perhaps the most daring proposal comes in the form of Project Genesis, which would attempt to seed distant planets with life.

This proposal was put forth by Dr. Claudius Gros, a theoretical physicist from the Institute for Theoretical Physics at Goethe University Frankfurt. In 2016, he published a paper that described how robotic missions equipped with gene factories (or cryogenic pods) could be used to distribute microbial life to “transiently habitable” exoplanets – i.e. planets capable of supporting life, but not likely to give rise to it on their own.

The purpose of Project Genesis would be to seed “transiently habitable” worlds with life, thus giving them a jump start on evolution. Credit: NASA/Jenny Mottor

Not long ago, Universe Today wrote about Dr. Gros’ recent study where he proposed using a magnetic sail to slow down an interstellar spacecraft. We were fortunate to catch up with Dr. Gros again and had a chance to ask him about Project Genesis. You can find our Q&A below, and be sure to check out his seminal paper that describes this project – “Developing Ecospheres on Transiently Habitable Planets: The Genesis Project“.

What is the purpose of Project Genesis?

Exoplanets come in all sizes, temperatures and compositions. The purpose of the Genesis project is to offer terrestrial life alternative evolutionary pathways on those exoplanets that are potentially habitable but yet lifeless. The basic philosophy of most scientists nowadays is that simple life is common in the universe and complex life is rare. We don’t know that for sure, but at the moment, that is the consensus.

If you had good conditions, simple life can develop very fast, but complex life will have a hard time. At least on Earth, it took a very long time for complex life to arrive. The Cambrian Explosion only happened about 500 million years ago, roughly 4 billion years after Earth was formed. If we give planets the opportunity to fast forward evolution, we can give them the chance to have their own Cambrian Explosions.

Early trilobite species (Eoredlichia takooensis) from the Lower Cambrian period, found in Emu Bay Shale, Kangaroo Island, Australia. Credit and ©: Royal Ontario Museum/David Rudkin

What worlds would be targeted?

The prime candidates are habitable “oxygen planets” around M-dwarfs like TRAPPIST-1. It is very likely that the oxygen-rich primordial atmosphere of these planets will have prevented abiogenesis, that is, the formation of life, in the first place. Our galaxy could potentially harbor billions of habitable but lifeless oxygen planets.

Nowadays, astronomers are looking for planets around M-stars. These are very different from planets around Sun-like stars. Once a star forms, it takes a certain amount of time to contract to the point where fusion begins, and it starts to produce energy. For the Sun, this took 10 million years, which is very fast. For stars like TRAPPIST-1, it would take 100 million to 1 billion years. Then they have to contract to dissipate their initial heat.

The planets around TRAPPIST-1 would have been very hot, because the star was very hot for a long time. All the water that was in their stratospheres, the UV radiation would have dissociated into hydrogen and oxygen – the hydrogen escaped, and the oxygen remained. All surveys have shown that they have oxygen atmospheres, but this is the product of chemical dissociation and not of plants (as with Earth).

There’s a good chance that oxygen planets are sterile, because oxygen planets eat up prebiotic conditions. We believe there may be billions of oxygen planets in our galaxy. They would have no life, and complex life needs oxygen. In science fiction, you have all these planets that look alike. We could imagine that in half a billion years, we could have this because we seeded oxygen planets (only we couldn’t travel there quickly since we have no FTL).

Illustration of what the TRAPPIST-1 system might look like from a vantage point near planet TRAPPIST-1f (at right). Credits: NASA/JPL-Caltech

What kind of organisms would be sent?

The first wave would consist of unicellular autotrophs. That is photo-synthesizing bacteria, like cyanobacteria, and eukaryotes (the cell type making up all complex life, that is animals and plants). Heterotrophs would follow in a second stage, organisms that feed on other organisms and can only exist after autotrophs exist and take root.

How would these organisms be sent?

That depends on the technology. If it advances enough, we can miniaturize a gene factory. In principle, nature is a miniature gene factory. Everything we want to produce is very small. If it’s possible, that would be the best option: send in a gene bank, and then select the most optimal organism to send down. If that is not possible, you would have to have frozen germs. In the end, it depends on what would be technically available.

You could also send in synthetic life. Synthetic biology is a very active research field, which involves reprogramming the genetic code. In science fiction, you have alien life with a different genetic code. Today, people are trying to produce this here on Earth. The end goal is to have new life forms that are based on a different code. This would be very dangerous on Earth, but on a far-distant planet, it would be beneficial.

What if these worlds are not sterile?

Genesis is all about life, not destroying life, so we’d want to avoid that. The probes would have to go into orbit, so we are pretty sure that from orbit, we could detect complex life on the surface. The Genesis Project was intended for planets that are not habitable for eternity. Earth is habitable for billions of years, but we are not sure about habitable exoplanets.

This illustration shows a star’s light illuminating the atmosphere of a planet. Credits: NASA Goddard Space Flight Center

Exoplanets come in all kinds of sizes, temperatures, and habitabilities. Many of these planets will only be habitable for some time, maybe 1 billion years. Life there will not have time to evolve into complex life forms. So you have a decision: leave them like they are, or take a chance at developing complex life there.

Some believe that all bacteria are worth saving. On Earth, there is no protection for bacteria. But bacteria living on different planets are treated differently. Planetary protection, why do we do that? So we can study the life, or for the sake of protecting life itself? Mars most likely had life at one time, but now not, except for maybe a few bacteria. Still, we plan manned missions to Mars, which means planetary protection is off. It’s a contradiction.

I am very enthusiastic about finding life, but what about the planets where we don’t find life? This offers the possibility about doing something about it.

Could humanity benefit from this someday (i.e. colonize “seeded” planets)?

Yes and no. Yes, because nothing would keep our descendants (or any other intelligence living on Earth by then) from visiting Genesis planets in 10-100 million years (the minimal time for the life initially seeded to fully unfold). No, because the involved time spans are so long that it is not rational to speak of a ‘benefit’.

Project Starshot, an initiative sponsored by the Breakthrough Foundation, is intended to be humanity’s first interstellar voyage. Credit: breakthroughinitiatives.org

How soon could such a mission be mounted?

Genesis probes could be launched by the same directed-energy launch system planned for the Breakthrough Starshot initiative. Breakthrough Starshot aims to send very fast, very small, very light probes of about 1 gram to another star system. The same laser technology could send something more massive, but slower. Slow is relative, of course. So in the end it depends on what is optimal.

The magnetic sail paper I recently wrote was a sample mission to show that it was possible. The probe would be about the size of a car (1 tonne) and would travel at a speed of about 1000 km/s – slow for interstellar travel relative to the speed of light, but fast by Earth standards. If you reduce the velocity by a factor of 100, the mass you can propel is 10,000 times heavier. You could accelerate a 1-tonne Genesis probe and it would still fit into the layout of Breakthrough Starshot.

Therefore, the launch facility could see dual use and you wouldn’t need to build something new. Once that is in place one would need to test the magnetic sail. A realistic time span would hence be in the 50-100 years window.
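
The mass-versus-speed trade-off described above follows from simple kinetic-energy scaling: if the launch system delivers a fixed amount of energy, the mass it can accelerate grows as the square of the factor by which the cruise velocity is reduced. A minimal sketch of that arithmetic (the fixed-energy budget is the key assumption):

```python
def mass_scaling(velocity_reduction_factor):
    """For a fixed launch-energy budget (E = 1/2 m v^2), the launchable mass grows as the
    square of the factor by which the cruise velocity is reduced."""
    return velocity_reduction_factor ** 2

print(mass_scaling(100))   # 10000 -> a hundredfold slower probe can be ~10,000 times heavier
```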

What counter-arguments are there against this?

There are three main lines of counter-arguments. The first is the religious counter-argument, which says that humanity should not play God. The Genesis project is however not about creating life, but to give life the possibility to further develop. Just not on Earth, but elsewhere in the cosmos.

Mars, according to multiple studies, could still support life, raising issues of “planetary protection”. Credit: YONHAP/EPA

The second is the planetary protection argument, which argues that we should not interfere. Some people objecting to the Genesis Project cite the ‘Prime Directive’ of the Star Trek TV series. The Genesis Project fully supports planetary protection of planets which harbor complex life and of planets on which complex life could potentially develop in the future. The Genesis Project will target only planets on which complex life could not develop on its own.

The third argument is about the lack of benefit to humanity. The Genesis Project is expressly not for human benefit. It is reasonable to argue, from the perspective of survival, that the ethical values of a species (like humanity) have to put the good of the species at the center. Ethical is therefore “what is good for our own species”. Spending a large amount of money on a project, like the Genesis Project, which is expressly not for the benefit of our own species, would then be unethical.

___

Our thanks go out to Dr. Gros for taking the time to talk to us! We hope to hear more from him in the future and wish him the best of luck with Project Genesis.

Astronomers Think They Know Why Hot Jupiters Get So Enormous

Artist's impression of the K2-132 system, along with schematics of the star during its main sequence and Red Giant Branch phases. Credit: Karen Teramura/UH IfA

The study of extra-solar planets has revealed some fantastic and fascinating things. For instance, of the thousands of planets discovered so far, many have been much larger than their Solar counterparts. In particular, most of the gas giants that have been observed orbiting close to their stars (aka. “Hot Jupiters”) have been similar in mass to Jupiter or Saturn, but significantly larger in size.

Ever since astronomers first placed constraints on the size of an extra-solar gas giant seven years ago, the mystery of why these planets are so large has endured. Thanks to the recent discovery of twin planets in the K2-132 and K2-97 systems – made by a team from the University of Hawaii’s Institute for Astronomy using data from the Kepler mission – scientists believe we are getting closer to the answer.

The study which details the discovery – “Seeing Double with K2: Testing Re-inflation with Two Remarkably Similar Planets around Red Giant Branch Stars” – recently appeared in The Astronomical Journal. The team was led by Samuel K. Grunblatt, a graduate student at the University of Hawaii, and included members from the Sydney Institute for Astronomy (SIfA), Caltech, the Harvard-Smithsonian Center for Astrophysics (CfA), NASA Goddard Space Flight Center, the SETI Institute, and multiple universities and research institutes.

Artist’s concept of a Jupiter-sized exoplanet that orbits relatively close to its star (aka. a “hot Jupiter”). Credit: NASA/JPL-Caltech

Because of the “hot” nature of these planets, their unusual sizes are believed to be related to heat flowing in and out of their atmospheres. Several theories have been developed to explain this process, but no means of testing them have been available. As Grunblatt explained, “since we don’t have millions of years to see how a particular planetary system evolves, planet inflation theories have been difficult to prove or disprove.”

To address this, Grunblatt and his colleagues searched through the data collected by NASA’s Kepler mission (specifically from its K2 mission) to look for “Hot Jupiters” orbiting red giant stars. These are stars that have exited the main sequence of their lifespans and entered the Red Giant Branch (RGB) phase, which is characterized by massive expansion and a decrease in surface temperature.

As a result, red giants may engulf planets that orbit close to them, while planets that were once distant will begin to orbit more closely. In accordance with a theory put forth by Eric Lopez – a member of NASA Goddard’s Science and Exploration Directorate – hot Jupiters that orbit red giants should become inflated if direct energy output from their host star is the dominant process inflating planets.

So far, their search has turned up two planets – K2-132b and K2-97b – which are almost identical in terms of their orbital periods (about 9 days), radii, and masses. Based on their observations, the team was able to precisely calculate the radii of both planets and determine that they are roughly 30% larger than Jupiter. Follow-up observations from the W.M. Keck Observatory on Maunakea, Hawaii, also showed that the planets are only half as massive as Jupiter.
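As a rough back-of-the-envelope check of why these planets count as “inflated” (our own numbers, using Jupiter’s mean density of roughly 1.33 g/cm³ rather than figures from the paper), the quoted radius and mass immediately imply a very low bulk density:

```python
# Back-of-the-envelope bulk density for a planet ~1.3x Jupiter's radius and ~0.5x its mass.
# Jupiter's mean density (~1.33 g/cm^3) is an assumed round number, used only for illustration.

JUPITER_MEAN_DENSITY = 1.33  # g/cm^3 (approximate)

radius_ratio = 1.3           # "30% larger than Jupiter"
mass_ratio = 0.5             # "half as massive as Jupiter"

bulk_density = JUPITER_MEAN_DENSITY * mass_ratio / radius_ratio ** 3
print(f"{bulk_density:.2f} g/cm^3")  # ~0.30 g/cm^3, far "puffier" than Jupiter
```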

The life-cycle of a Sun-like star from protostar (left side) to red giant (near the right side) to white dwarf (far right). Credit: ESO/M. Kornmesser

The team then used models to track the evolution of the planets and their stars over time, which allowed them to calculate how much heat the planets absorbed from their stars. As this heat was transferred from their outer layers to their deep interiors, the planets increased in size and decreased in density. Their results indicated that while the planets likely needed the increased radiation to inflate, the amount they received was lower than expected.

While limited in scope, the study is consistent with the theory that huge gas giants are inflated by the heat of their host stars. It is bolstered by other lines of evidence which hint that stellar radiation alone is enough to dramatically alter a gas giant’s size and density. This is certainly significant, given that our own Sun will someday exit the main sequence, which will have a drastic effect on our system of planets.

As such, studying distant red giant stars and what their planets are going through will help astronomers predict what our Solar System will experience, albeit in a few billion years. As Grunblatt explained in an IfA press statement:

“Studying how stellar evolution affects planets is a new frontier, both in other solar systems as well as our own. With a better idea of how planets respond to these changes, we can start to determine how the Sun’s evolution will affect the atmosphere, oceans, and life here on Earth.”

It is hoped that future surveys which are dedicated to the study of gas giants around red giant stars will help settle the debate between competing planet inflation theories. For their efforts, Grunblatt and his team were recently awarded time with NASA’s Spitzer Space Telescope, which they plan to use to conduct further observations of K2-132 and K2-97, and their respective gas giants.

The search for planets around red giant stars is also expected to intensify in the coming years with the deployment of NASA’s Transiting Exoplanet Survey Satellite (TESS) and the James Webb Space Telescope (JWST). These missions will be launching in 2018 and 2019, respectively, while the K2 mission is expected to last for at least another year.

Further Reading: IfA, The Astronomical Journal

Every Time Lightning Strikes, Matter-Antimatter Annihilation Happens too

A Kyoto University-based team has unraveled the mystery of gamma-ray emission cascades caused by lightning strikes. Credit: Kyoto University/Teruaki Enoto

Lightning has always been a source of awe and mystery for us lowly mortals. In ancient times, people associated it with gods like Zeus and Thor, the thunder-wielding deities of the Greek and Norse pantheons. With the birth of modern science and meteorology, lightning is no longer considered the province of the divine. However, this does not mean that the sense of mystery it carries has diminished one bit.

For example, scientists have found that lightning occurs in the atmospheres of other planets, like the gas giant Jupiter (appropriately!) and the hellish world of Venus. And according to a recent study from Kyoto University, gamma rays produced by lightning interact with air molecules, regularly producing radioisotopes and even positrons – the antimatter counterparts of electrons.

The study, titled “Photonuclear Reactions Triggered by Lightning Discharge“, recently appeared in the scientific journal Nature. The study was led by Teruaki Enoto, a researcher from The Hakubi Center for Advanced Research at Kyoto University, and included members from the University of Tokyo, Hokkaido University, Nagoya University, the RIKEN Nishina Center, the MAXI Team, and the Japan Atomic Energy Agency.

For some time, physicists have been aware that small bursts of high-energy gamma rays can be produced by lightning storms – what are known as “terrestrial gamma-ray flashes”. They are believed to be the result of strong electric fields accelerating electrons, which then emit gamma rays as they are slowed by the atmosphere. This phenomenon was first discovered by space-based observatories, and photons with energies of up to 100 million electron volts (100 MeV) have been observed.

Given the energy levels involved, the Japanese research team sought to examine how these bursts of gamma rays interact with air molecules. As Enoto, who leads the project, explained in a Kyoto University press release:

“We already knew that thunderclouds and lightning emit gamma rays, and hypothesized that they would react in some way with the nuclei of environmental elements in the atmosphere. In winter, Japan’s western coastal area is ideal for observing powerful lightning and thunderstorms. So, in 2015 we started building a series of small gamma-ray detectors, and placed them in various locations along the coast.”

Unfortunately, the team ran into funding problems along the way. As Enoto explained, they decided to reach out to the general public and established a crowdfunding campaign to fund their work. “We set up a crowdfunding campaign through the ‘academist’ site,” he said, “in which we explained our scientific method and aims for the project. Thanks to everybody’s support, we were able to make far more than our original funding goal.”

Thanks to the success of their campaign, the team built and installed particle detectors along the northwest coast of Honshu. In February of 2017, they installed four more detectors in Kashiwazaki City, in Niigata Prefecture. Almost immediately after the detectors were installed, a lightning strike took place just a few hundred meters away, and the team was able to study it.

What they found was something entirely new and unexpected. After analyzing the data, the team detected three distinct gamma-ray bursts of varying duration. The first was less than a millisecond long, the second was a gamma-ray afterglow that took several milliseconds to decay, and the last was a prolonged emission lasting about one minute. As Enoto explained:

“We could tell that the first burst was from the lightning strike. Through our analysis and calculations, we eventually determined the origins of the second and third emissions as well.”

They determined that the second emission – the afterglow – was produced when gamma rays from the lightning reacted with nitrogen in the atmosphere. Essentially, gamma rays are capable of knocking a neutron out of a nitrogen nucleus, and it was the reabsorption of these neutrons by other atmospheric nuclei that produced the gamma-ray afterglow. The final, prolonged emission was the result of the unstable nitrogen atoms breaking down.

It was here that things really got interesting. As the unstable nitrogen broke down, it released positrons, which then collided with electrons, causing matter-antimatter annihilations that released still more gamma rays. As Enoto explained, this demonstrated, for the first time, that antimatter is something that can occur in nature through common mechanisms.
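Schematically, the chain described above can be summarized in standard nuclear notation (the isotopes shown, nitrogen-14 and nitrogen-13, are those implied by the description; this is a summary sketch rather than a quotation from the paper):

```latex
\begin{align*}
{}^{14}\mathrm{N} + \gamma &\rightarrow {}^{13}\mathrm{N} + n
  && \text{(a lightning gamma ray knocks a neutron out of nitrogen-14)}\\
{}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu_{e}
  && \text{(the unstable nitrogen-13 decays, emitting a positron)}\\
e^{+} + e^{-} &\rightarrow 2\gamma
  && \text{(each positron annihilates with an electron, yielding more gamma rays)}
\end{align*}
```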

“We have this idea that antimatter is something that only exists in science fiction,” he said. “Who knew that it could be passing right above our heads on a stormy day? And we know all this thanks to our supporters who joined us through ‘academist’. We are truly grateful to all.”

If these results are indeed correct, then antimatter is not the extremely rare substance that we tend to think it is. In addition, the study could present new opportunities for high-energy physics and antimatter research, and could even lead to the development of new or refined techniques for producing antimatter.

Looking ahead, Enoto and his team hope to conduct more research using the ten detectors they still have operating along the coast of Japan. They also hope to continue involving the public in their research, a process that goes far beyond crowdfunding and includes the efforts of citizen scientists to help process and interpret data.

Further Reading: University of Kyoto, Nature, NASA Goddard Media Studios

The Earth Does Stop the Occasional Neutrino

This image shows a visual representation of one of the highest-energy neutrino detections superimposed on a view of the IceCube Lab at the South Pole. Credit: IceCube Collaboration

At the Amundsen–Scott South Pole Station in Antarctica lies the IceCube Neutrino Observatory – a facility dedicated to the study of the elementary particles known as neutrinos. The array consists of 5,160 spherical optical sensors – Digital Optical Modules (DOMs) – buried within a cubic kilometer of clear ice. At present, this observatory is the largest neutrino detector in the world, and it has spent the past seven years studying how these particles behave and interact.

The most recent study released by the IceCube collaboration, with the assistance of physicists from Pennsylvania State University, has measured the Earth’s ability to block neutrinos for the first time. Consistent with the Standard Model of Particle Physics, they determined that while trillions of neutrinos pass through Earth (and us) on a regular basis, some are occasionally stopped by it.

The study, titled “Measurement of the Multi-TeV Neutrino Interaction Cross-Section with IceCube Using Earth Absorption“, recently appeared in the scientific journal Nature. The study team’s results were based on the observation of 10,784 interactions made by high-energy, upward moving neutrinos, which were recorded over the course of a year at the observatory.

The IceCube Neutrino Observatory at the South Pole. Credit: Emanuel Jacobi/NSF

Back in 2013, the first detections of high-energy neutrinos were made by the IceCube collaboration. These neutrinos – which were believed to be astrophysical in origin – were in the peta-electron volt range, making them the highest-energy neutrinos discovered to date. IceCube searches for signs of these interactions by looking for Cherenkov radiation, which is produced when fast-moving charged particles travel through the ice faster than light does in that medium.

By detecting neutrinos that interact with the clear ice, the IceCube instruments were able to estimate the energy and direction of travel of the neutrinos. Despite these detections, however, the mystery remained as to whether or not any kind of matter could stop a neutrino as it journeyed through space. In accordance with the Standard Model of Particle Physics, this is something that should happen on occasion.
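The expected effect can be sketched with a simple exponential-absorption estimate (our own illustration with assumed, order-of-magnitude numbers; this is not the analysis used in the study): the chance that a neutrino survives a path through matter falls as exp(-σ·N_A·X), where σ is the interaction cross-section and X is the column density of matter along its path.

```python
import math

# Illustrative sketch (assumed numbers, not from the IceCube analysis):
# survival probability of a neutrino crossing a column of matter,
#   P = exp(-sigma * N_A * X),
# where sigma is the neutrino-nucleon cross-section, N_A converts grams to
# nucleons, and X is the column density along the path in g/cm^2.

N_A = 6.022e23  # nucleons per gram of matter (approximately)

def survival_probability(column_density_g_cm2: float, cross_section_cm2: float) -> float:
    return math.exp(-cross_section_cm2 * N_A * column_density_g_cm2)

# Assumed, order-of-magnitude inputs:
EARTH_DIAMETER_COLUMN = 1.1e10  # g/cm^2 along a path through the centre of the Earth (rough)
SIGMA_MULTI_TEV = 1e-34         # cm^2, a rough multi-TeV neutrino-nucleon cross-section

print(survival_probability(EARTH_DIAMETER_COLUMN, SIGMA_MULTI_TEV))
# ~0.5: with these numbers, roughly half of such neutrinos crossing the full Earth
# are absorbed, while at much lower energies (far smaller sigma) nearly all survive.
```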

After observing interactions at IceCube for a year, the science team found that the neutrinos that had to travel the farthest through Earth were less likely to reach the detector. As Doug Cowen, a professor of physics and astronomy/astrophysics at Penn State, explained in a Penn State press release:

“This achievement is important because it shows, for the first time, that very-high-energy neutrinos can be absorbed by something – in this case, the Earth. We knew that lower-energy neutrinos pass through just about anything, but although we had expected higher-energy neutrinos to be different, no previous experiments had been able to demonstrate convincingly that higher-energy neutrinos could be stopped by anything.”

An IceTop tank, part of the surface detector array of the IceCube Neutrino Observatory. Credit: Dan Hubert

The existence of neutrinos was first proposed in 1930 by theoretical physicist Wolfgang Pauli, who postulated them as a way of explaining beta decay in terms of the law of conservation of energy. They are so named because they are electrically neutral and only interact with matter very weakly – i.e. through the weak nuclear force and gravity. Because of this, neutrinos pass through normal matter on a regular basis.

While neutrinos are produced regularly by stars, and by nuclear reactors here on Earth, the first neutrinos were formed during the Big Bang. The study of their interaction with normal matter can therefore tell us much about how the Universe evolved over billions of years. Many scientists anticipate that the study of neutrinos will reveal new physics that goes beyond the Standard Model.

Because of this, the science team was somewhat surprised (and perhaps disappointed) with their results. As Francis Halzen – the principal investigator for the IceCube Neutrino Observatory and a professor of physics at the University of Wisconsin-Madison – explained:

“Understanding how neutrinos interact is key to the operation of IceCube. We were of course hoping for some new physics to appear, but we unfortunately find that the Standard Model, as usual, withstands the test.”

Looking down one of IceCube’s detector bore holes. Credit: IceCube Collaboration/NSF

For the most part, the neutrinos selected for this study were more than one million times more energetic than those that are produced by our Sun or nuclear power plants. The analysis also included some that were astrophysical in nature – i.e. produced beyond Earth’s atmosphere – and may have been accelerated towards Earth by supermassive black holes (SMBHs).

Darren Grant, a professor of physics at the University of Alberta, is also the spokesperson for the IceCube Collaboration. As he indicated, this latest interaction study opens doors for future neutrino research. “Neutrinos have quite a well-earned reputation of surprising us with their behavior,” he said. “It is incredibly exciting to see this first measurement and the potential it holds for future precision tests.”

This study not only provided the first measurement of the Earth’s absorption of neutrinos, it also offers opportunities for geophysical researchers who are hoping to use neutrinos to explore Earth’s interior. Given that Earth is capable of stopping some of the billions of high-energy particles that routinely pass through it, scientists could develop a method for studying the Earth’s inner and outer core, placing more accurate constraints on their sizes and densities.

It also shows that the IceCube Observatory is capable of reaching beyond its original purpose, which was particle physics research and the study of neutrinos. As this latest study clearly shows, it is capable of contributing to planetary science research and nuclear physics as well. Physicists also hope to use the full 86-string IceCube array to conduct a multi-year analysis, examining even higher ranges of neutrino energies.

This event display shows “Bert,” one of two neutrino events discovered at IceCube whose energies exceeded one petaelectronvolt (PeV). Credit: Berkeley Lab

As James Whitmore – the program director in the National Science Foundation’s (NSF) physics division (which provides support for IceCube) – indicated, this could allow them to truly search for physics that go beyond the Standard Model.

“IceCube was built to both explore the frontiers of physics and, in doing so, possibly challenge existing perceptions of the nature of the universe. This new finding and others yet to come are in that spirit of scientific discovery.”

Ever since the discovery of the Higgs boson in 2012, physicists have been secure in the knowledge that the long journey to confirm the Standard Model is complete. Since then, they have set their sights farther, hoping to find new physics that could resolve some of the deeper mysteries of the Universe – i.e. supersymmetry, a Theory of Everything (ToE), etc.

This, as well as studying how physics works at the highest energy levels (similar to those that existed during the Big Bang), is the current preoccupation of physicists. If they are successful, we might just come to understand how this massive thing known as the Universe works.

Further Reading: Penn State, Nature