In August of 2017, astronomers made another major breakthrough when the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected gravitational waves that were believed to be caused by the merger of two neutron stars. Since that time, scientists at multiple facilities around the world have conducted follow-up observations to determine the aftermath of this merger, as well as to test various cosmological theories.
For instance, in the past, some scientists have suggested that the inconsistencies between Einstein’s Theory of General Relativity and the nature of the Universe over large scales could be explained by the presence of extra dimensions. However, according to a new study by a team of American astrophysicists, last year’s kilonova event effectively rules out this hypothesis.
In 1915, Albert Einstein published his famous Theory of General Relativity, which provided a unified description of gravity as a geometric property of space and time. This theory gave rise to the modern theory of gravitation and revolutionized our understanding of physics. Even though a century has passed since then, scientists are still conducting experiments that confirm his theory’s predictions.
The new infrared observations collected by these instruments allowed the team to monitor one of the stars (S2) that orbits Sagittarius A* as it passed in front of the black hole – which took place in May of 2018. At the closest point in its orbit, the star was at a distance of less than 20 billion km (12.4 billion mi) from the black hole and was moving at a speed in excess of 25 million km/h (15 million mph) – almost three percent of the speed of light.
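As a quick sanity check on the figures above, the quoted orbital speed can be converted into a fraction of the speed of light. This is a back-of-the-envelope calculation, not part of the team's analysis:

```python
# Back-of-the-envelope check: S2's quoted periapsis speed as a fraction of c.
speed_kmh = 25e6             # quoted speed: 25 million km/h
c_kms = 299_792.458          # speed of light in km/s

speed_kms = speed_kmh / 3600.0   # convert km/h -> km/s
fraction = speed_kms / c_kms

print(f"S2 speed: {speed_kms:.0f} km/s ({fraction:.1%} of c)")
```

The quoted 25 million km/h works out to roughly 7,000 km/s, a couple of percent of the speed of light.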
Whereas the SINFONI instrument was used to measure the velocity of S2 towards and away from Earth, the GRAVITY instrument in the VLT Interferometer (VLTI) made extraordinarily precise measurements of the changing position of S2 in order to define the shape of its orbit. The GRAVITY instrument then created the sharp images that revealed the motion of the star as it passed close to the black hole.
The team compared the new position and velocity measurements with previous observations of S2 made using other instruments, then checked the combined results against the predictions of Newton’s Law of Universal Gravitation, General Relativity, and other theories of gravity. As expected, the new results were consistent with the predictions Einstein made over a century ago.
As Reinhard Genzel, who in addition to being the leader of the GRAVITY collaboration was a co-author on the paper, explained in a recent ESO press release:
“This is the second time that we have observed the close passage of S2 around the black hole in our galactic center. But this time, because of much improved instrumentation, we were able to observe the star with unprecedented resolution. We have been preparing intensely for this event over several years, as we wanted to make the most of this unique opportunity to observe general relativistic effects.”
When observed with the VLT’s new instruments, the team noted an effect called gravitational redshift, where the light coming from S2 changed color as it drew closer to the black hole. This was caused by the very strong gravitational field of the black hole, which stretched the wavelength of the star’s light, causing it to shift towards the red end of the spectrum.
The change in the wavelength of light from S2 agrees precisely with what Einstein’s field equations predicted. As Frank Eisenhauer – a researcher from the Max Planck Institute for Extraterrestrial Physics, the Principal Investigator of GRAVITY and the SINFONI spectrograph, and a co-author on the study – indicated:
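The size of this effect can be roughly estimated from the weak-field formula for gravitational redshift, z ≈ GM/(rc²). The sketch below uses an assumed mass of about 4 million solar masses for Sagittarius A* and the roughly 20 billion km periapsis distance quoted above – illustrative values, not the collaboration's actual fit:

```python
# Rough weak-field estimate of gravitational redshift: z ~ G M / (r c^2).
# Assumed values (illustrative): M ~ 4e6 solar masses, r ~ 20 billion km.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg

M = 4.0e6 * M_sun        # approximate mass of Sagittarius A*
r = 20e9 * 1e3           # periapsis distance in meters (20 billion km)

z = G * M / (r * c**2)
print(f"Gravitational redshift z ~ {z:.1e}")  # on the order of 1e-4
```

A shift of a few parts in ten thousand is tiny, which is why instruments as precise as GRAVITY and SINFONI were needed to detect it against the star's ordinary Doppler motion.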
“Our first observations of S2 with GRAVITY, about two years ago, already showed that we would have the ideal black hole laboratory. During the close passage, we could even detect the faint glow around the black hole on most of the images, which allowed us to precisely follow the star on its orbit, ultimately leading to the detection of the gravitational redshift in the spectrum of S2.”
Whereas other tests have been performed that have confirmed Einstein’s predictions, this is the first time that the effects of General Relativity have been observed in the motion of a star around a supermassive black hole. In this respect, Einstein has been proven right once again, using one of the most extreme laboratories to date! What’s more, it confirmed that tests involving relativistic effects can provide consistent results over time and space.
“Here in the Solar System we can only test the laws of physics now and under certain circumstances,” said Françoise Delplancke, head of the System Engineering Department at ESO. “So it’s very important in astronomy to also check that those laws are still valid where the gravitational fields are very much stronger.”
In the near future, another relativistic test will be possible as S2 moves away from the black hole. This is known as Schwarzschild precession, in which the star is expected to experience a small rotation of its orbit. The GRAVITY Collaboration will be monitoring S2 to observe this effect as well, once again relying on the VLT’s very precise and sensitive instruments.
As Xavier Barcons (the ESO’s Director General) indicated, this accomplishment was made possible thanks to the spirit of international cooperation represented by the GRAVITY collaboration and the instruments they helped the ESO develop:
“ESO has worked with Reinhard Genzel and his team and collaborators in the ESO Member States for over a quarter of a century. It was a huge challenge to develop the uniquely powerful instruments needed to make these very delicate measurements and to deploy them at the VLT in Paranal. The discovery announced today is the very exciting result of a remarkable partnership.”
And be sure to check out this video of the GRAVITY Collaboration’s successful test, courtesy of the ESO:
When looking to study the most distant objects in the Universe, astronomers often rely on a technique known as Gravitational Lensing. Based on the principles of Einstein’s Theory of General Relativity, this technique involves relying on a large distribution of matter (such as a galaxy cluster or star) to magnify the light coming from a distant object, thereby making it appear brighter and larger.
This technique has allowed for the study of individual stars in distant galaxies. In a recent study, an international team of astronomers used a galaxy cluster to study the farthest individual star ever seen in the Universe. Although it is normally too faint to observe, the presence of a foreground galaxy cluster allowed the team to study the star in order to test a theory about dark matter.
For the sake of their study, Prof. Kelly and his associates used the galaxy cluster known as MACS J1149+2223 as their lens. Located about 5 billion light-years from Earth, this galaxy cluster sits between the Solar System and the galaxy that contains Icarus. By combining Hubble’s resolution and sensitivity with the strength of this gravitational lens, the team was able to see and study Icarus, a blue supergiant.
Icarus, named after the Greek mythological figure who flew too close to the Sun, has had a rather interesting history. At a distance of roughly 9 billion light-years from Earth, the star appears to us as it did when the Universe was just 4.4 billion years old. In April of 2016, the star temporarily brightened to 2,000 times its normal luminosity thanks to the gravitational amplification of a star in MACS J1149+2223.
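To put that 2,000-fold brightening in observational terms, astronomers usually express a magnification factor μ as a change in apparent magnitude, Δm = −2.5 log₁₀(μ). A quick illustrative calculation:

```python
import math

# Convert a lensing magnification factor into a change in apparent magnitude:
# delta_m = -2.5 * log10(mu). A factor of 2000 is about 8.3 magnitudes.
mu = 2000.0
delta_m = -2.5 * math.log10(mu)
print(f"Magnification x{mu:.0f} -> brightening of {abs(delta_m):.1f} magnitudes")
```

A jump of more than eight magnitudes is the difference between a star being hopelessly below Hubble's detection threshold and one it can monitor directly.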
As Prof. Kelly explained in a recent UCLA press release, this temporarily allowed Icarus to become visible for the first time to astronomers:
“You can see individual galaxies out there, but this star is at least 100 times farther away than the next individual star we can study, except for supernova explosions.”
Kelly and a team of astronomers had been using Hubble and MACS J1149+2223 to magnify and monitor a supernova in the distant spiral galaxy at the time when they spotted the new point of light not far away. Given the position of the new source, they determined that it should be much more highly magnified than the supernova. What’s more, previous studies of this galaxy had not shown the light source, indicating that it was being lensed.
As Tommaso Treu, a professor of physics and astronomy in the UCLA College and a co-author of the study, indicated:
“The star is so compact that it acts as a pinhole and provides a very sharp beam of light. The beam shines through the foreground cluster of galaxies, acting as a cosmic magnifying glass… Finding more such events is very important to make progress in our understanding of the fundamental composition of the universe.”
In this case, the star’s light provided a unique opportunity to test a theory about the invisible mass (aka. “dark matter”) that permeates the Universe. Basically, the team used the pinpoint light source provided by the background star to probe the intervening galaxy cluster and see if it contained huge numbers of primordial black holes, which are considered to be a potential candidate for dark matter.
These black holes are believed to have formed during the birth of the Universe and have masses tens of times larger than the Sun. However, the results of this test showed that light fluctuations from the background star, which had been monitored by Hubble for thirteen years, disfavor this theory. If dark matter were indeed made up of tiny black holes, the light coming from Icarus would have looked much different.
Since it was discovered in 2016 using the gravitational lensing method, Icarus has provided a new way for astronomers to observe and study individual stars in distant galaxies. In so doing, astronomers are able to get a rare and detailed look at individual stars in the early Universe and see how they (and not just galaxies and clusters) evolved over time.
When the James Webb Space Telescope (JWST) is deployed in 2020, astronomers expect to get an even better look and learn so much more about this mysterious period in cosmic history.
The Multiverse Theory, which states that there may be multiple or even an infinite number of Universes, is a time-honored concept in cosmology and theoretical physics. While the term goes back to the late 19th century, the scientific basis of this theory arose from quantum physics and the study of cosmological forces like black holes, singularities, and problems arising out of the Big Bang Theory.
One of the most burning questions when it comes to this theory is whether or not life could exist in multiple Universes. If indeed the laws of physics change from one Universe to the next, what could this mean for life itself? According to a new series of studies by a team of international researchers, it is possible that life could be common throughout the Multiverse (if it actually exists).
Together, the research team sought to determine how the accelerated expansion of the cosmos could have affected the rate of star and galaxy formation in our Universe. This accelerated expansion, which is an integral part of the Lambda-Cold Dark Matter (Lambda-CDM) model of cosmology, arose out of problems posed by Einstein’s Theory of General Relativity.
As a consequence of Einstein’s field equations, physicists understood that the Universe could not be static – it would have to be either expanding or contracting. In 1917, Einstein responded by proposing the “Cosmological Constant” (represented by Lambda), a force that “held back” the effects of gravity and thus ensured that the Universe was static and unchanging.
Shortly thereafter, Einstein retracted this proposal when Edwin Hubble revealed (based on redshift measurements of other galaxies) that the Universe was indeed in a state of expansion. Einstein apparently went as far as to declare the Cosmological Constant “the biggest blunder” of his career as a result. However, research into cosmological expansion during the late 1990s caused his theory to be reevaluated.
In short, ongoing studies of the large-scale Universe revealed that during the past 5 billion years, cosmic expansion has accelerated. As such, astronomers began to hypothesize the existence of a mysterious, invisible force that was driving this acceleration. Popularly known as “Dark Energy”, this force is also referred to as the Cosmological Constant (CC) since it counteracts the effects of gravity.
Since that time, astrophysicists and cosmologists have sought to understand how Dark Energy could have affected cosmic evolution. This is an issue since our current cosmological models predict that there must be more Dark Energy in our Universe than has been observed. However, accounting for larger amounts of Dark Energy would cause such a rapid expansion that it would dilute matter before any stars, planets or life could form.
For the first study, Salcido and the team therefore sought to determine how the presence of more Dark Energy could affect the rate of star formation in our Universe. To do this, they conducted hydrodynamical simulations using the EAGLE (Evolution and Assembly of GaLaxies and their Environments) project – one of the most realistic simulations of the observed Universe.
Using these simulations, the team considered the effects that Dark Energy (at its observed value) would have on star formation over the past 13.8 billion years, and an additional 13.8 billion years into the future. From this, the team developed a simple analytic model that indicated that Dark Energy – despite the difference in the rate of cosmic expansion – would have a negligible impact on star formation in the Universe.
They further showed that the impact of Lambda only becomes significant when the Universe has already produced most of its stellar mass, and that it reduces the total density of star formation by only about 15%. As Salcido explained in a Durham University press release:
“For many physicists, the unexplained but seemingly special amount of dark energy in our Universe is a frustrating puzzle. Our simulations show that even if there was much more dark energy or even very little in the Universe then it would only have a minimal effect on star and planet formation, raising the prospect that life could exist throughout the Multiverse.”
For the second study, the team used the same simulation from the EAGLE collaboration to investigate the effect of varying degrees of the CC on the formation on galaxies and stars. This consisted of simulating Universes that had Lambda values ranging from 0 to 300 times the current value observed in our Universe.
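The basic idea behind varying Lambda can be illustrated with the flat-Universe Friedmann equation, H² = H₀²(Ω_m a⁻³ + Ω_Λ), integrated forward in time. The sketch below is not the EAGLE code – just a toy integration with assumed density parameters – but it shows how scaling the dark-energy density changes the expansion history:

```python
# Toy Friedmann integration: H(a)^2 = H0^2 * (Om / a^3 + OL).
# Not the EAGLE simulation -- just an illustration of how scaling Lambda
# (here via omega_l) changes the expansion history of a model universe.

def age_at_scale_factor(a_target, omega_m=0.3, omega_l=0.7, h0=0.07, da=1e-4):
    """Integrate dt = da / (a * H(a)) from a ~ 0 up to a_target.
    h0 is in 1/Gyr (H0 ~ 70 km/s/Mpc is roughly 0.07 per Gyr)."""
    t, a = 0.0, da
    while a < a_target:
        H = h0 * (omega_m / a**3 + omega_l) ** 0.5
        t += da / (a * H)
        a += da
    return t  # age in Gyr when the scale factor reaches a_target

# Age of a model universe today (a = 1) with the observed Lambda...
t_observed = age_at_scale_factor(1.0, omega_l=0.7)
# ...versus a universe with 10x the dark-energy density.
t_big_lambda = age_at_scale_factor(1.0, omega_l=7.0)

print(f"Age with observed Lambda: {t_observed:.1f} Gyr")
print(f"Age with 10x Lambda:      {t_big_lambda:.1f} Gyr")
```

A larger Lambda makes the universe reach any given size sooner, giving matter less time to clump before expansion dilutes it – which is exactly the effect the EAGLE runs quantified for star formation.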
However, since the Universe’s rate of star formation peaked at around 3.5 billion years before the onset of accelerating expansion (ca. 8.5 billion years ago and 5.3 billion years after the Big Bang), increases in the CC had only a small effect on the rate of star formation.
Taken together, these simulations indicated that in a Multiverse, where the laws of physics may differ widely, neither more dark energy nor the accelerated cosmic expansion it drives would have a significant impact on the rates of star or galaxy formation. This, in turn, indicates that other Universes in the Multiverse would be just about as habitable as our own, at least in theory. As Dr. Barnes explained:
“The Multiverse was previously thought to explain the observed value of dark energy as a lottery – we have a lucky ticket and live in the Universe that forms beautiful galaxies which permit life as we know it. Our work shows that our ticket seems a little too lucky, so to speak. It’s more special than it needs to be for life. This is a problem for the Multiverse; a puzzle remains.”
However, the team’s studies also cast doubt on the ability of Multiverse Theory to explain the observed value of Dark Energy in our Universe. According to their research, if we do live in a Multiverse, we would expect to observe as much as 50 times more Dark Energy than we actually do. Although their results do not rule out the possibility of the Multiverse, the tiny amount of Dark Energy we’ve observed would be better explained by an as-yet-undiscovered law of nature.
As Professor Richard Bower, a member of Durham University’s Institute for Computational Cosmology and a co-author on the paper, explained:
“The formation of stars in a universe is a battle between the attraction of gravity, and the repulsion of dark energy. We have found in our simulations that Universes with much more dark energy than ours can happily form stars. So why such a paltry amount of dark energy in our Universe? I think we should be looking for a new law of physics to explain this strange property of our Universe, and the Multiverse theory does little to rescue physicists’ discomfort.”
These studies are timely since they come on the heels of Stephen Hawking’s final theory, which cast doubt on the existence of the Multiverse and proposed a finite and reasonably smooth Universe instead. Basically, all three studies indicate that the debate about whether or not we live in a Multiverse and the role of Dark Energy in cosmic evolution is far from over. But we can look forward to next-generation missions providing some helpful clues in the future.
What’s more, all of these missions are expected to be gathering their first light sometime in the 2020s. So stay tuned, because more information – with cosmological implications – will be arriving in just a few years time!
Stephen Hawking is rightly seen as one of the most influential scientists of our time. In his time on this planet, the famed physicist, science communicator, author and luminary became a household name, synonymous with the likes of Einstein, Newton and Galileo. What is even more impressive is the fact that he managed to maintain his commitment to science, education and humanitarian efforts despite suffering from a slow, degenerative disease.
Even though Hawking recently passed away, his influence is still being felt. Shortly before his death, Hawking submitted a paper offering his final theory on the origins of the Universe. The paper, which was published earlier this week (on Wednesday, May 2nd), offers a new take on the Big Bang Theory that could revolutionize the way we think of the Universe, how it was created, and how it evolved.
The paper, titled “A smooth exit from eternal inflation?”, was published in the Journal of High Energy Physics. The theory was first announced at a conference at the University of Cambridge in July of last year, where Professor Thomas Hertog (a Belgian physicist at KU Leuven University) shared Hawking’s paper (which Hertog co-authored) on the occasion of his 75th birthday.
According to the current scientific consensus, all of the current and past matter in the Universe came into existence at the same time – roughly 13.8 billion years ago. At this time, all matter was compacted into a very small ball with infinite density and intense heat. Suddenly, this ball started to inflate at an exponential rate, and the Universe as we know it began.
However, it is widely believed that since this inflation started, quantum effects will keep it going forever in some regions of the Universe. This means that globally, the Universe’s inflation is eternal. In this respect, the observable part of our Universe (measuring 13.8 billion light-years in any direction) is just a region in which inflation has ended and stars and galaxies formed.
“The usual theory of eternal inflation predicts that globally our universe is like an infinite fractal, with a mosaic of different pocket universes, separated by an inflating ocean. The local laws of physics and chemistry can differ from one pocket universe to another, which together would form a multiverse. But I have never been a fan of the multiverse. If the scale of different universes in the multiverse is large or infinite the theory can’t be tested.”
In their new paper, Hawking and Hertog offer a new theory that predicts that the Universe is not an infinite fractal-like multiverse, but is finite and reasonably smooth. In short, they theorize that the eternal inflation, as part of the theory of the Big Bang, is wrong. As Hertog explained:
“The problem with the usual account of eternal inflation is that it assumes an existing background universe that evolves according to Einstein’s theory of general relativity and treats the quantum effects as small fluctuations around this. However, the dynamics of eternal inflation wipes out the separation between classical and quantum physics. As a consequence, Einstein’s theory breaks down in eternal inflation.”
In contrast to this, Hawking and Hertog offer an explanation based on String Theory, a branch of theoretical physics that attempts to unify General Relativity with quantum physics. This theory was proposed to explain how gravity interacts with the three other fundamental forces of the Universe (weak and strong nuclear forces and electromagnetism), thus producing a Theory of Everything (ToE).
To put it simply, this theory describes the fundamental constituents of the Universe as tiny, one-dimensional vibrating strings. Hawking and Hertog’s approach uses the holography concept of string theory, which postulates that the Universe is a large and complex hologram. In this theory, physical reality in certain 3D spaces can be mathematically reduced to 2D projections on a surface.
Together, Hawking and Hertog developed a variation of this concept to project out the dimension of time in eternal inflation. This enabled them to describe eternal inflation without having to rely on General Relativity, thus reducing inflation to a timeless state defined on a spatial surface at the beginning of time. In this respect, the new theory represents a change from Hawking’s earlier work on “no boundary theory”.
Also known as the Hartle and Hawking No Boundary Proposal, this theory viewed the Universe like a quantum particle – assigning it a wave function that described all possible Universes. This theory also predicted that if you go back in time to the beginning of the Universe, it would shrink and close off like a sphere. Lastly, it predicted that the Universe would eventually stop expanding and collapse in on itself.
As Hertog explains, this new theory is a departure from that earlier work:
“When we trace the evolution of our universe backwards in time, at some point we arrive at the threshold of eternal inflation, where our familiar notion of time ceases to have any meaning. Now we’re saying that there is a boundary in our past.”
Using this theory, Hawking and Hertog were able to derive more reliable predictions about the global structure of the Universe. In addition, a Universe predicted to emerge from eternal inflation on the past boundary is also finite and much simpler. Last, but not least, the theory is more predictive and testable than the infinite Multiverse predicted by the old theory of eternal inflation.
“We are not down to a single, unique universe, but our findings imply a significant reduction of the multiverse, to a much smaller range of possible universes,” said Hawking. In theory, a finite and smooth Universe is one we can observe (at least locally) and will be governed by physical laws that we are already familiar with. Compared to an infinite number of Universes governed by different physical laws, it certainly simplifies the math!
Looking ahead, Hertog plans to study the implications of this theory on smaller scales using data obtained by space telescopes about the local Universe. In addition, he hopes to take advantage of recent studies concerning gravitational waves (GWs) and the many events that have been detected. Essentially, Hertog believes that primordial GWs generated at the exit from eternal inflation are the most promising means to test the model.
Even though he is no longer with us, Hawking’s final theory could be his most profound contribution to science. If future research should prove him correct, then Hawking will have resolved one of the most daunting problems in modern astrophysics and cosmology. Just one more achievement from a man who spent his life changing how people think about the Universe!
And now, an international team led by MIT astrophysicist Carl Rodriguez has produced a study that suggests that black holes may merge multiple times. According to their study, these “second-generation mergers” likely occur within globular clusters, the large and compact star clusters that typically orbit at the edges of galaxies – and which are densely-packed with hundreds of thousands to millions of stars.
“We think these clusters formed with hundreds to thousands of black holes that rapidly sank down in the center. These kinds of clusters are essentially factories for black hole binaries, where you’ve got so many black holes hanging out in a small region of space that two black holes could merge and produce a more massive black hole. Then that new black hole can find another companion and merge again.”
Globular clusters have been a source of fascination ever since astronomers first observed them in the 17th century. These spherical collections are home to some of the oldest known stars in the Universe, and can be found in most galaxies. Depending on the size and type of galaxy they orbit, the number of clusters varies, with elliptical galaxies hosting tens of thousands while galaxies like the Milky Way have over 150.
For years, Rodriguez has been investigating the behavior of black holes within globular clusters to see if they interact with their stars differently from black holes that occupy less densely-populated regions in space. To test this hypothesis, Rodriguez and his colleagues used the Quest supercomputer at Northwestern University to conduct simulations on 24 stellar clusters.
These clusters ranged in size from 200,000 to 2 million stars and covered a range of different densities and metallic compositions. The simulations modeled the evolution of individual stars within these clusters over the course of 12 billion years. This span of time was enough to follow these stars as they interacted with each other, and eventually formed black holes.
The simulations also modeled the evolution and trajectories of black holes once they formed. As Rodriguez explained:
“The neat thing is, because black holes are the most massive objects in these clusters, they sink to the center, where you get a high enough density of black holes to form binaries. Binary black holes are basically like giant targets hanging out in the cluster, and as you throw other black holes or stars at them, they undergo these crazy chaotic encounters.”
Whereas previous simulations were based on Newton’s physics, the team decided to add Einstein’s relativistic effects into their simulations of globular clusters. This was due to the fact that gravitational waves were not predicted by Newton’s theories, but by Einstein’s Theory of General Relativity. As Rodriguez indicated, this allowed for them to see how gravitational waves played a role:
“What people had done in the past was to treat this as a purely Newtonian problem. Newton’s theory of gravity works in 99.9 percent of all cases. The few cases in which it doesn’t work might be when you have two black holes whizzing by each other very closely, which normally doesn’t happen in most galaxies… In Einstein’s theory of general relativity, where I can emit gravitational waves, then when one black hole passes near another, it can actually emit a tiny pulse of gravitational waves. This can subtract enough energy from the system that the two black holes actually become bound, and then they will rapidly merge.”
What they observed was that inside the stellar clusters, black holes merge with each other to create new black holes. In previous simulations, Newtonian gravity predicted that most binary black holes would be kicked out of the cluster before they could merge. But by taking relativistic effects into account, Rodriguez and his team found that nearly half of the binary black holes merged to form more massive ones.
As Rodriguez explained, the difference between those that merged and those that were kicked out came down to spin:
“If the two black holes are spinning when they merge, the black hole they create will emit gravitational waves in a single preferred direction, like a rocket, creating a new black hole that can shoot out as fast as 5,000 kilometers per second — so, insanely fast. It only takes a kick of maybe a few tens to a hundred kilometers per second to escape one of these clusters.”
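The “few tens to a hundred kilometers per second” escape speed Rodriguez mentions can be checked with the standard formula v_esc = √(2GM/R). A rough sketch using representative cluster values (assumed here purely for illustration):

```python
# Rough escape speed from a globular cluster: v_esc = sqrt(2 G M / R).
# Assumed representative values: M ~ 5e5 solar masses, R ~ 3 parsecs.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
parsec = 3.086e16      # meters per parsec

M = 5e5 * M_sun        # cluster mass
R = 3 * parsec         # characteristic cluster radius

v_esc = (2 * G * M / R) ** 0.5 / 1e3   # convert m/s -> km/s
print(f"Escape speed: ~{v_esc:.0f} km/s")
```

With these numbers the escape speed comes out to a few tens of km/s – tiny compared with the thousands of km/s a strongly recoiling merger remnant can reach, which is why rapidly spinning remnants were expected to be ejected.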
This raised another interesting point about previous simulations, in which astronomers assumed that the product of any black hole merger would be kicked out of the cluster, since most black holes are assumed to be rapidly spinning. This assumption, however, seems to contradict the measurements from LIGO, which has so far only detected mergers of binary black holes with low spins. To test the implications of this, Rodriguez and his colleagues reduced the spin rates of the black holes in their simulations. What they found was that nearly 20% of the binary black holes from clusters had at least one black hole with a mass in the range of 50 to 130 solar masses.
Essentially, this indicated that these were “second generation” black holes, since scientists believe that this mass cannot be achieved by a black hole that formed from a single star. Looking ahead, Rodriguez and his team anticipate that if LIGO detects an object with a mass within this range, it is likely the result of black holes merging within dense stellar clusters, rather than from a single star.
“If we wait long enough, then eventually LIGO will see something that could only have come from these star clusters, because it would be bigger than anything you could get from a single star,” Rodriguez says. “My co-authors and I have a bet against a couple people studying binary star formation that within the first 100 LIGO detections, LIGO will detect something within this upper mass gap. I get a nice bottle of wine if that happens to be true.”
The detection of gravitational waves was a historic accomplishment, and one that has enabled astronomers to conduct new and exciting research. Already, scientists are gaining new insight into black holes by studying the byproduct of their mergers. In the coming years, we can expect to learn a great deal more thanks to improved methods and increased cooperation between observatories.
Black holes have been an endless source of fascination ever since Einstein’s Theory of General Relativity predicted their existence. In the past 100 years, the study of black holes has advanced considerably, but the awe and mystery of these objects remains. For instance, scientists have noted that in some cases, black holes have massive jets of charged particles emanating from them that extend for millions of light years.
These “relativistic jets” – so-named because they propel charged particles at a fraction of the speed of light – have puzzled astronomers for years. But thanks to a recent study conducted by an international team of researchers, new insight has been gained into these jets. Consistent with General Relativity, the researchers showed that these jets gradually precess (i.e. change direction) as a result of space-time being dragged around by the black hole’s rotation.
For the sake of their study, the team conducted simulations using the Blue Waters supercomputer at the University of Illinois. The simulations they conducted were the first ever to model the behavior of relativistic jets coming from Supermassive Black Holes (SMBHs). With close to a billion computational cells, it was also the highest-resolution simulation of an accreting black hole ever achieved.
“Understanding how rotating black holes drag the space-time around them and how this process affects what we see through the telescopes remains a crucial, difficult-to-crack puzzle. Fortunately, the breakthroughs in code development and leaps in supercomputer architecture are bringing us ever closer to finding the answers.”
Like all black holes, rapidly spinning SMBHs regularly engulf (i.e. accrete) matter. However, rapidly spinning black holes are also known for the way they emit energy in the form of relativistic jets. The matter that feeds these black holes forms a rotating disk around them – aka. an accretion disk – which is characterized by hot, energized gas and magnetic field lines.
It is the presence of these field lines that allows black holes to propel energy in the form of these jets. Because these jets are so large, they are easier to study than the black holes themselves. In so doing, astronomers are able to determine how quickly the direction of these jets changes, which reveals things about the rotation of the black holes themselves – such as the orientation and size of their rotating disks.
Advanced computer simulations are necessary when it comes to the study of black holes, largely because they are not observable in visible light and are typically very far away. For instance, the closest SMBH to Earth is Sagittarius A*, which is located about 26,000 light-years away at the center of our galaxy. As such, simulations are the only way to determine how a highly complex system like a black hole operates.
In previous simulations, scientists operated under the assumption that black hole disks were aligned. However, most SMBHs have been found to have tilted disks – i.e. the disks rotate around a different axis than the black hole itself. This study was therefore seminal in that it showed how disks can change direction relative to their black hole, leading to precessing jets that periodically change their direction.
This was previously unknown because of the incredible amount of computing power needed to construct 3-D simulations of the region surrounding a rapidly spinning black hole. With the support of a National Science Foundation (NSF) grant, the team was able to achieve this by using Blue Waters, one of the largest supercomputers in the world.
With this supercomputer at their disposal, the team was able to construct the first black hole simulation code accelerated using graphical processing units (GPUs). Thanks to this combination, the team was able to carry out simulations with the highest level of resolution ever achieved – i.e. close to a billion computational cells. As Tchekhovskoy explained:
“The high resolution allowed us, for the first time, to ensure that small-scale turbulent disk motions are accurately captured in our models. To our surprise, these motions turned out to be so strong that they caused the disk to fatten up and the disk precession to stop. This suggests that precession can come about in bursts.”
The precession of relativistic jets could explain why light fluctuations – known as quasi-periodic oscillations (QPOs) – have been observed coming from around black holes in the past. These oscillations, first discovered by Michiel van der Klis (one of the co-authors on the study), cause light to vary in much the same way as a pulsar’s beams, which appear to have a strobing effect.
This study is one of many being conducted on rotating black holes around the world, the purpose of which is to gain a better understanding of recent discoveries like gravitational waves, which are caused by the merger of black holes. These studies are also being applied to observations from the Event Horizon Telescope, which aims to capture the first images of Sagittarius A*’s shadow. What they will reveal is sure to excite and amaze, and potentially deepen the mystery of black holes.
In the past century, the study of black holes has advanced considerably – from the purely theoretical, to indirect studies of the effects they have on surrounding matter, to the study of gravitational waves themselves. Perhaps one day, we might actually be able to study them directly or (if it’s not too much to hope for) peer directly inside them!
Welcome back to our series on Exoplanet-Hunting methods! Today, we look at the curious and unique method known as Gravitational Microlensing.
The hunt for extra-solar planets sure has heated up in the past decade. Thanks to improvements made in technology and methodology, the number of exoplanets that have been observed (as of December 1st, 2017) has reached 3,710 planets in 2,780 star systems, with 621 systems boasting multiple planets. Unfortunately, due to various limits astronomers are forced to contend with, the vast majority have been discovered using indirect methods.
One of the more commonly-used methods for indirectly detecting exoplanets is known as Gravitational Microlensing. Essentially, this method relies on the gravitational force of a star to bend and magnify the light coming from a more distant star behind it. As the foreground star passes in front of the background star relative to the observer, its gravity briefly brightens the background star’s light; a planet orbiting the foreground star produces an additional, short-lived spike in brightness, which can then be used to determine the presence of a planet.
In this respect, Gravitational Microlensing is a scaled-down version of Gravitational Lensing, where an intervening object (like a galaxy cluster) is used to focus light coming from a galaxy or other object located beyond it. It also incorporates a key element of the highly-effective Transit Method, where stars are monitored for changes in brightness that indicate the presence of an exoplanet.
In accordance with Einstein’s Theory of General Relativity, gravity causes the fabric of spacetime to bend. This effect can cause light affected by an object’s gravity to become distorted or bent. It can also act as a lens, causing light to become more focused and making distant objects (like stars) appear brighter to an observer. This effect occurs only when the two stars are almost exactly aligned relative to the observer (i.e. one positioned in front of the other).
These “lensing events” are brief, but plentiful, as Earth and stars in our galaxy are always moving relative to each other. In the past decade, over one thousand such events have been observed, typically lasting for a few days or weeks at a time. In fact, this effect was used by Sir Arthur Eddington in 1919 to provide the first empirical evidence for General Relativity.
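To get a feel for the numbers involved, here is a minimal Python sketch (the function names and the example lens are our own, for illustration) of the two standard point-lens quantities: the angular Einstein radius, which sets the scale of the alignment, and the magnification of the background star for a given alignment:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # one parsec, in meters

def einstein_radius_rad(lens_mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius (radians) of a point-mass lens."""
    return math.sqrt(4 * G * lens_mass_kg / C**2
                     * (d_source_m - d_lens_m) / (d_lens_m * d_source_m))

def magnification(u):
    """Point-source magnification for impact parameter u (in units of the Einstein radius)."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

# Example: a solar-mass lens star halfway to the galactic bulge (~8 kpc away)
theta_e = einstein_radius_rad(M_SUN, 4000 * PC, 8000 * PC)
print(theta_e * 206265 * 1000)   # Einstein radius in milliarcseconds, ~1 mas
print(magnification(0.1))        # a close alignment brightens the star ~10x
```

For a solar-mass lens halfway to the galactic bulge, the Einstein radius works out to roughly a milliarcsecond – far too small for any telescope to resolve, which is why the effect is detected as a temporary brightening of the background star rather than as a distorted image.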
This took place during the solar eclipse of May 29th, 1919, where Eddington and a scientific expedition traveled to the island of Principe off the coast of West Africa to take pictures of the stars that were now visible in the region around the Sun. The pictures confirmed Einstein’s prediction by showing how light from these stars was shifted slightly in response to the Sun’s gravitational field.
The technique was originally proposed by astronomers Shude Mao and Bohdan Paczynski in 1991 as a means of looking for binary companions to stars. Their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method of detecting exoplanets. This method is most effective when looking for planets towards the center of the galaxy, as the galactic bulge provides a large number of background stars.
Microlensing is the only known method capable of discovering planets at truly great distances from the Earth and is capable of finding the smallest of exoplanets. Whereas the Radial Velocity Method is effective when looking for planets up to 100 light years from Earth and Transit Photometry can detect planets hundreds of light-years away, microlensing can find planets that are thousands of light-years away.
While most other methods have a detection bias towards larger planets, the microlensing method is the most sensitive means of detecting planets that are around 1-10 astronomical units (AU) away from Sun-like stars. This makes it especially effective when paired with the Radial Velocity and Transit Methods, which can confirm the existence of exoplanets as well as yield accurate estimates of a planet’s radius and mass.
Taken together, these benefits make microlensing the most effective method for finding Earth-like planets around Sun-like stars (alone or in combination with other methods). In addition, microlensing surveys can be effectively mounted using ground-based facilities. Like Transit Photometry, the Microlensing Method benefits from the fact that it can be used to survey tens of thousands of stars simultaneously.
Because microlensing events are unique and do not repeat, any planets detected using this method will not be observable again. In addition, those planets that are detected tend to be very far away, which makes follow-up investigations virtually impossible. This makes microlensing a good means for detecting exoplanet candidates, but a very poor means for confirming them.
Another problem with microlensing is that it is subject to a considerable margin of error when placing constraints on a planet’s characteristics. For example, microlensing surveys can only produce rough estimations of a planet’s distance, leaving large margins for error. This means that planets that are tens of thousands of light-years from Earth would produce distance estimates with a margin of several thousand light-years.
Microlensing is also unable to yield accurate estimates of a planet’s size, and mass estimates are subject to loose constraints. Orbital properties are also difficult to determine, since the only orbital characteristic that can be directly determined with this method is the planet’s current semi-major axis. As such, planets with eccentric orbits will only be detectable for a tiny portion of their orbit (when they are far away from their star).
The gravitational microlensing signal scales with the planet-to-star mass ratio, which means that it is easiest to detect planets around low-mass stars. This makes microlensing effective in the search for rocky planets around low mass, M-type (red dwarf) stars, but limits its effectiveness with more massive stars. Finally, microlensing is dependent on rare and random events – the passage of one star precisely in front of another, as seen from Earth – which makes detections both rare and unpredictable.
Examples of Gravitational Microlensing Surveys:
Surveys that rely on the Microlensing Method include the Optical Gravitational Lensing Experiment (OGLE) at the University of Warsaw. Led by Andrzej Udalski, the director of the University’s Astronomical Observatory, this international project uses the 1.3 meter “Warsaw” telescope at Las Campanas, Chile, to search for microlensing events in a field of some 100 million stars around the galactic bulge.
There is also the Microlensing Observations in Astrophysics (MOA) group, a collaborative effort between researchers in New Zealand and Japan. Led by Professor Yasushi Muraki of Nagoya University, this group uses the Microlensing Method to conduct surveys for dark matter, extra-solar planets, and stellar atmospheres from the southern hemisphere.
And then there’s the Probing Lensing Anomalies NETwork (PLANET), which consists of five 1-meter telescopes distributed around the southern hemisphere. In collaboration with RoboNet, this project is able to provide near-continuous observations of microlensing events caused by planets with masses as low as Earth’s.
Not only was the first-time detection of gravitational waves an historic accomplishment, it ushered in a new era of astrophysics. It is little wonder then why the three researchers who were central to the first detection have been awarded the 2017 Nobel Prize in Physics. The prize was awarded jointly to Caltech professors emeritus Kip S. Thorne and Barry C. Barish, along with MIT professor emeritus Rainer Weiss.
To put it simply, gravitational waves are ripples in space-time that are formed by major astronomical events – such as the merger of a binary black hole pair. They were first predicted over a century ago by Einstein’s Theory of General Relativity, which indicated that massive perturbations would alter the structure of space-time. However, it was not until recent years that evidence of these waves was observed for the first time.
The first signal was detected by LIGO’s twin observatories – in Hanford, Washington, and Livingston, Louisiana, respectively – and traced to a black hole merger 1.3 billion light-years away. To date, four detections have been made, all of which were due to the mergers of black-hole pairs. These took place on September 14, 2015, December 26, 2015, January 4, 2017, and August 14, 2017, the last being detected by both LIGO and the European Virgo gravitational-wave detector.
For the role they played in this accomplishment, one half of the prize was awarded jointly to Caltech’s Barry C. Barish – the Ronald and Maxine Linde Professor of Physics, Emeritus – and Kip S. Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus. The other half was awarded to Rainer Weiss, Professor of Physics, Emeritus, at the Massachusetts Institute of Technology (MIT).
As Caltech president Thomas F. Rosenbaum – the Sonja and William Davidow Presidential Chair and Professor of Physics – said in a recent Caltech press statement:
“I am delighted and honored to congratulate Kip and Barry, as well as Rai Weiss of MIT, on the award this morning of the 2017 Nobel Prize in Physics. The first direct observation of gravitational waves by LIGO is an extraordinary demonstration of scientific vision and persistence. Through four decades of development of exquisitely sensitive instrumentation—pushing the capacity of our imaginations—we are now able to glimpse cosmic processes that were previously undetectable. It is truly the start of a new era in astrophysics.”
This accomplishment was all the more impressive considering that Albert Einstein, who first predicted their existence, believed gravitational waves would be too weak to study. However, by the 1960s, advances in laser technology and new insights into possible astrophysical sources led scientists to conclude that these waves might actually be detectable.
The first gravitational-wave detectors were built by Joseph Weber, an astrophysicist at the University of Maryland. His detectors, which were built in the 1960s, consisted of large aluminum cylinders that would be driven to vibrate by passing gravitational waves. Other attempts followed, but all proved unsuccessful, prompting a shift towards a new type of detector involving interferometry.
One such instrument was developed by Weiss at MIT, relying on a technique known as laser interferometry. In this kind of instrument, gravitational waves are measured using widely separated mirrors that reflect lasers over long distances. When gravitational waves cause space to stretch and squeeze by infinitesimal amounts, the reflected light inside the detector shifts minutely.
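Just how infinitesimal those shifts are can be estimated with a quick back-of-the-envelope calculation in Python (the strain value below is a typical order of magnitude for a detected merger, not a figure from any one event):

```python
# How tiny is the length change a LIGO-style interferometer must sense?
arm_length_m = 4000.0      # each LIGO arm is 4 km long
strain = 1e-21             # typical peak strain from a distant merger

delta_l = strain * arm_length_m      # change in arm length, in meters
proton_diameter_m = 1.7e-15          # for comparison

print(delta_l)                       # ~4e-18 m
print(delta_l / proton_diameter_m)   # a few thousandths of a proton width
```

The arms change length by only a few thousandths of the diameter of a proton, which is why decades of work on vibration isolation and laser stability were needed before a detection became possible.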
At the same time, Thorne – along with his students and postdocs at Caltech – began working to improve the theory of gravitational waves. This included new estimates on the strength and frequency of waves produced by objects like black holes, neutron stars and supernovae. This culminated in a 1972 paper which Thorne co-published with his student, Bill Press, which summarized their vision of how gravitational waves could be studied.
That same year, Weiss also published a detailed analysis of interferometers and their potential for astrophysical research. In this paper, he stated that larger-scale operations – measuring several km or more in size – might have a shot at detecting gravitational waves. He also identified the major challenges to detection (such as vibrations from the Earth) and proposed possible solutions for countering them.
In 1975, Weiss invited Thorne to speak at a NASA committee meeting in Washington, D.C., and the two spent an entire night talking about gravitational experiments. As a result of their conversation, Thorne went back to Caltech and proposed creating an experimental gravity group, which would work on interferometers in parallel with researchers at MIT, the University of Glasgow and the University of Garching (where similar experiments were being conducted).
Development on the first interferometer began shortly thereafter at Caltech, which led to the creation of a 40-meter (130-foot) prototype to test Weiss’ theories about gravitational waves. In 1984, all of the work being conducted by these respective institutions came together. Caltech and MIT, with the support of the National Science Foundation (NSF), formed the LIGO collaboration and began work on its two interferometers in Hanford and Livingston.
The construction of LIGO was a major challenge, both logistically and technically. However, things were helped immensely when Barry Barish (then a Caltech particle physicist) became the Principal Investigator (PI) of LIGO in 1994. After a decade of stalled attempts, he was also made the director of LIGO and put its construction back on track. He also expanded the research team and developed a detailed work plan for the NSF.
As Barish indicated, the work he did with LIGO was something of a dream come true:
“I always wanted to be an experimental physicist and was attracted to the idea of using continuing advances in technology to carry out fundamental science experiments that could not be done otherwise. LIGO is a prime example of what couldn’t be done before. Although it was a very large-scale project, the challenges were very different from the way we build a bridge or carry out other large engineering projects. For LIGO, the challenge was and is how to develop and design advanced instrumentation on a large scale, even as the project evolves.”
By 1999, construction had wrapped up on the LIGO observatories, and by 2002, LIGO began to obtain data. In 2008, work began on improving its original detectors, a program known as the Advanced LIGO Project. The process of scaling up from the 40-m prototype to LIGO’s current 4-km (2.5 mi) interferometers was a massive undertaking, and therefore needed to be broken down into steps.
The first step took place between 2002 and 2010, when the team built and tested the initial interferometers. While this did not result in any detections, it did demonstrate the observatory’s basic concepts and solve many of the technical obstacles. The next phase – called Advanced LIGO, which took place between 2010 and 2015 – allowed the detectors to achieve new levels of sensitivity.
These upgrades, which also happened under Barish’s leadership, allowed for the development of several key technologies which ultimately made the first detection possible. As Barish explained:
“In the initial phase of LIGO, in order to isolate the detectors from the earth’s motion, we used a suspension system that consisted of test-mass mirrors hung by piano wire and used a multiple-stage set of passive shock absorbers, similar to those in your car. We knew this probably would not be good enough to detect gravitational waves, so we, in the LIGO Laboratory, developed an ambitious program for Advanced LIGO that incorporated a new suspension system to stabilize the mirrors and an active seismic isolation system to sense and correct for ground motions.”
Given how central Thorne, Weiss and Barish were to the study of gravitational waves, all three were rightly recognized as this year’s recipients of the Nobel Prize in Physics. Both Thorne and Barish were notified that they had won in the early morning hours on October 3rd, 2017. In response to the news, both scientists were sure to acknowledge the ongoing efforts of LIGO, the science teams that have contributed to it, and the efforts of Caltech and MIT in creating and maintaining the observatories.
“The prize rightfully belongs to the hundreds of LIGO scientists and engineers who built and perfected our complex gravitational-wave interferometers, and the hundreds of LIGO and Virgo scientists who found the gravitational-wave signals in LIGO’s noisy data and extracted the waves’ information,” said Thorne. “It is unfortunate that, due to the statutes of the Nobel Foundation, the prize has to go to no more than three people, when our marvelous discovery is the work of more than a thousand.”
“I am humbled and honored to receive this award,” said Barish. “The detection of gravitational waves is truly a triumph of modern large-scale experimental physics. Over several decades, our teams at Caltech and MIT developed LIGO into the incredibly sensitive device that made the discovery. When the signal reached LIGO from a collision of two stellar black holes that occurred 1.3 billion years ago, the 1,000-scientist-strong LIGO Scientific Collaboration was able to both identify the candidate event within minutes and perform the detailed analysis that convincingly demonstrated that gravitational waves exist.”
Looking ahead, it is also pretty clear that Advanced LIGO, Advanced Virgo and other gravitational wave observatories around the world are just getting started. In addition to having detected four separate events, recent studies have indicated that gravitational wave detection could also open up new frontiers for astronomical and cosmological research.
For instance, a recent study by a team of researchers from the Monash Center for Astrophysics proposed a theoretical concept known as ‘orphan memory’. According to their research, gravitational waves not only cause waves in space-time, but leave permanent ripples in its structure. By studying the “orphans” of past events, gravitational waves can be studied both as they reach Earth and long after they pass.
In addition, a study was released in August by a team of astronomers from the Center for Cosmology at the University of California, Irvine, that indicated that black hole mergers are far more common than we thought. After conducting a survey of the cosmos intended to calculate and categorize black holes, the UCI team determined that there could be as many as 100 million black holes in the galaxy.
Another recent study indicated that the Advanced LIGO, GEO 600, and Virgo gravitational-wave detector network could also be used to detect the gravitational waves created by supernovae. By detecting the waves created by stars that explode near the end of their lifespans, astronomers would be able to see inside the hearts of collapsing stars for the first time and probe the mechanics of black hole formation.
The Nobel Prize in Physics is one of the highest honors that can be bestowed upon a scientist. But even greater than that is the knowledge that great things resulted from one’s own work. Decades after Thorne, Weiss and Barish began proposing gravitational wave studies and working towards the creation of detectors, scientists from all over the world are making profound discoveries that are revolutionizing the way we think of the Universe.
And as these scientists will surely attest, what we’ve seen so far is just the tip of the iceberg. One can imagine that somewhere, Einstein is also beaming with pride. As with other research pertaining to his theory of General Relativity, the study of gravitational waves is demonstrating that even after a century, his predictions were still bang on!
And be sure to check out this video of the Caltech Press Conference where Barish and Thorne were honored for their accomplishments:
The latest detection took place on August 14th, 2017, when three observatories – the two Advanced LIGO detectors and the Advanced Virgo detector – simultaneously detected the gravitational waves created by merging black holes. This was the first time that gravitational waves were detected by three different facilities around the world, thus ushering in a new era of globally-networked research into this cosmic phenomenon.
Though not the first instance of gravitational waves being detected, this was the first time that an event was detected by three observatories simultaneously. As France Córdova, the director of the NSF, said in a recent LIGO press release:
“Little more than a year and a half ago, NSF announced that its Laser Interferometer Gravitational Wave Observatory had made the first-ever detection of gravitational waves, which resulted from the collision of two black holes in a galaxy a billion light-years away. Today, we are delighted to announce the first discovery made in partnership between the Virgo gravitational-wave observatory and the LIGO Scientific Collaboration, the first time a gravitational wave detection was observed by these observatories, located thousands of miles apart. This is an exciting milestone in the growing international scientific effort to unlock the extraordinary mysteries of our universe.”
Based on the waves detected, the LIGO Scientific Collaboration (LSC) and Virgo collaboration were able to determine the type of event, as well as the mass of the objects involved. According to their study, the event was triggered by the merger of two black holes – which were 31 and 25 Solar Masses, respectively. The event took place about 1.8 billion light years from Earth, and resulted in the formation of a spinning black hole with about 53 Solar Masses.
What this means is that about three Solar Masses were converted into gravitational-wave energy during the merger, which was then detected by LIGO and Virgo. While impressive on its own, this latest detection is merely a taste of what gravitational wave detectors like the LIGO and Virgo collaborations can do now that they have entered their advanced stages and begun cooperating with each other.
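That energy figure follows directly from E = mc². A short Python sketch using the masses reported for this event makes the arithmetic explicit:

```python
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s

m1, m2 = 31, 25    # component masses in solar masses (from the study)
m_final = 53       # mass of the resulting black hole, in solar masses

m_radiated = m1 + m2 - m_final       # mass converted to gravitational waves
energy_j = m_radiated * M_SUN * C**2 # E = mc^2

print(m_radiated)  # 3 solar masses
print(energy_j)    # ~5.4e47 joules
```

Roughly 5 x 10^47 joules were radiated away in a fraction of a second – briefly outshining, in gravitational waves, the light output of all the stars in the observable Universe combined.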
Both Advanced LIGO and Advanced Virgo are second-generation gravitational-wave detectors that have taken over from their predecessors. The LIGO facilities, which were conceived, built, and are operated by Caltech and MIT, collected data between 2002 and 2010 without making a detection. However, as of September of 2015, Advanced LIGO went online and has since conducted two observing runs – O1 and O2.
Meanwhile, the original Virgo detector conducted observations between 2003 and October of 2011, once again without success. By February of 2017, the integration of the Advanced Virgo detector began, and the instruments went online by the following April. In 2007, Virgo and LIGO also partnered to share and jointly analyze the data recorded by their respective detectors.
In August of 2017, the Virgo detector joined the O2 run, and the first-ever simultaneous detection took place on August 14th, with data being gathered by all three LIGO and Virgo instruments. As LSC spokesperson David Shoemaker – a researcher with the Massachusetts Institute of Technology (MIT) – indicated, this detection is just the first of many anticipated events.
“This is just the beginning of observations with the network enabled by Virgo and LIGO working together,” he said. “With the next observing run planned for fall 2018, we can expect such detections weekly or even more often.”
Not only will this mean that scientists have a better shot of detecting future events, but they will also be able to pinpoint them with far greater accuracy. In fact, the transition from a two- to a three-detector network is expected to improve the accuracy with which sources like GW170814 can be pinpointed by a factor of 20. The sky region for GW170814 is just 60 square degrees – more than 10 times smaller than with data from LIGO’s interferometers alone.
In addition, the accuracy with which the distance to the source is measured has also benefited from this partnership. As Laura Cadonati, a Georgia Tech professor and the deputy spokesperson of the LSC, explained:
“This increased precision will allow the entire astrophysical community to eventually make even more exciting discoveries, including multi-messenger observations. A smaller search area enables follow-up observations with telescopes and satellites for cosmic events that produce gravitational waves and emissions of light, such as the collision of neutron stars.”
In the end, bringing more detectors into the gravitational-wave network will also allow for more detailed tests of Einstein’s theory of General Relativity. Caltech’s David H. Reitze, the executive director of the LIGO Laboratory, also praised the new partnership and what it will allow for.
“With this first joint detection by the Advanced LIGO and Virgo detectors, we have taken one step further into the gravitational-wave cosmos,” he said. “Virgo brings a powerful new capability to detect and better locate gravitational-wave sources, one that will undoubtedly lead to exciting and unanticipated results in the future.”
The study of gravitational waves is a testament to the growing capability of the world’s science teams and the science of interferometry. For decades, the existence of gravitational waves was merely a theory; and by the turn of the century, all attempts to detect them had yielded nothing. But in just the past eighteen months, multiple detections have been made, and dozens more are expected in the coming years.
What’s more, thanks to the new global network and the improved instruments and methods, these events are sure to tell us volumes about our Universe and the physics that govern it.