Since the 1970s, astronomers have theorized that at the center of our galaxy, about 26,000 light-years from Earth, there exists a supermassive black hole (SMBH) known as Sagittarius A*. Measuring an estimated 44 million km (27.3 million mi) in diameter and weighing in at roughly 4 million Solar masses, this black hole is believed to have had a profound influence on the formation and evolution of our galaxy.
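As a rough sanity check on these figures (a sketch using standard constants, not a calculation from any study), the Schwarzschild radius r_s = 2GM/c² of a 4-million-solar-mass black hole can be worked out in a few lines:

```python
# A rough sanity check: the Schwarzschild radius r_s = 2GM/c^2 of a
# 4-million-solar-mass black hole, using standard physical constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def schwarzschild_radius_m(mass_solar):
    """Schwarzschild radius in metres for a mass given in solar masses."""
    return 2 * G * mass_solar * M_SUN / C**2

r_s = schwarzschild_radius_m(4e6)     # ~1.2e10 m
diameter_km = 2 * r_s / 1e3           # ~24 million km across
```

The bare event horizon works out to roughly 24 million km across, so the 44 million km quoted above is presumably an observationally estimated size, somewhat larger than the horizon itself.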
The discovery of Sagittarius A* not only opened up an exciting new field of research, but also the door to many intriguing possibilities. One such possibility, according to a new study by a team of Russian scientists, is that gravitational waves could be used to transmit information. In much the same way as electromagnetic waves are used to communicate via antennas and satellites, the future of communications could be gravitationally based.
In August of 2017, astronomers made another major breakthrough when the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected gravitational waves that were believed to be caused by the merger of two neutron stars. Since that time, scientists at multiple facilities around the world have conducted follow-up observations to determine the aftermath of this merger, as well as to test various cosmological theories.
For instance, in the past, some scientists have suggested that the inconsistencies between Einstein’s Theory of General Relativity and the nature of the Universe over large scales could be explained by the presence of extra dimensions. However, according to a new study by a team of American astrophysicists, last year’s kilonova event effectively rules out this hypothesis.
In 1915, Albert Einstein published his famous Theory of General Relativity, which provided a unified description of gravity as a geometric property of space and time. This theory gave rise to the modern theory of gravitation and revolutionized our understanding of physics. Even though a century has passed since then, scientists are still conducting experiments that confirm his theory’s predictions.
The new infrared observations collected by the VLT’s instruments allowed the team to monitor one of the stars (S2) that orbits Sagittarius A* as it made its closest approach to the black hole – which took place in May of 2018. At the closest point in its orbit, the star was at a distance of less than 20 billion km (12.4 billion mi) from the black hole and was moving at a speed in excess of 25 million km/h (15 million mph) – almost three percent of the speed of light.
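For readers who want to check the unit conversion, a quick sketch (standard constants only, not figures from the paper):

```python
# Convert the quoted orbital speed of S2 into a fraction of the speed of light.
C_KM_S = 299_792.458  # speed of light, km/s

def kmh_to_fraction_of_c(speed_kmh):
    """Convert a speed in km/h to a fraction of the speed of light."""
    return (speed_kmh / 3600.0) / C_KM_S

frac = kmh_to_fraction_of_c(25e6)   # "in excess of 25 million km/h"
# 25 million km/h is ~6,900 km/s, i.e. a bit over 2% of c; the peak speed
# at closest approach is what pushes the figure toward three percent.
```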
Whereas the SINFONI instrument was used to measure the velocity of S2 towards and away from Earth, the GRAVITY instrument in the VLT Interferometer (VLTI) made extraordinarily precise measurements of the changing position of S2 in order to define the shape of its orbit. The GRAVITY instrument then created the sharp images that revealed the motion of the star as it passed close to the black hole.
The team then compared the position and velocity measurements to previous observations of S2 using other instruments. They then compared these results with predictions made by Newton’s Law of Universal Gravitation, General Relativity, and other theories of gravity. As expected, the new results were consistent with the predictions made by Einstein over a century ago.
As Reinhard Genzel, who in addition to being the leader of the GRAVITY collaboration was a co-author on the paper, explained in a recent ESO press release:
“This is the second time that we have observed the close passage of S2 around the black hole in our galactic center. But this time, because of much improved instrumentation, we were able to observe the star with unprecedented resolution. We have been preparing intensely for this event over several years, as we wanted to make the most of this unique opportunity to observe general relativistic effects.”
When observed with the VLT’s new instruments, the team noted an effect called gravitational redshift, where the light coming from S2 changed color as it drew closer to the black hole. This was caused by the very strong gravitational field of the black hole, which stretched the wavelength of the star’s light, causing it to shift towards the red end of the spectrum.
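The size of this effect can be estimated from first principles. To leading order the gravitational redshift is z ≈ GM/(rc²), and it comes bundled with the special-relativistic transverse Doppler shift v²/2c². Below is a hedged sketch using the pericentre distance quoted above and an assumed speed of ~7,650 km/s (close to the reported peak; all values illustrative, not the collaboration's fit):

```python
# Order-of-magnitude sketch of the redshift of S2 at closest approach:
# gravitational term plus special-relativistic transverse Doppler term.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def gravitational_redshift(mass_solar, r_m):
    """Leading-order gravitational redshift z ~ GM / (r c^2)."""
    return G * mass_solar * M_SUN / (r_m * C**2)

def transverse_doppler(v_m_s):
    """Leading-order special-relativistic shift z ~ v^2 / (2 c^2)."""
    return v_m_s**2 / (2 * C**2)

z_grav = gravitational_redshift(4e6, 20e9 * 1e3)   # 20 billion km, in metres
z_sr = transverse_doppler(7.65e6)                  # assumed ~7,650 km/s speed
equiv_velocity_km_s = (z_grav + z_sr) * C / 1e3
```

Each term is a few parts in ten thousand; together they mimic an extra recession velocity of order 200 km/s – tiny, but well within reach of GRAVITY and SINFONI.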
The change in the wavelength of light from S2 agrees precisely with what Einstein’s field equations predicted. As Frank Eisenhauer – a researcher from the Max Planck Institute for Extraterrestrial Physics, the Principal Investigator of GRAVITY and the SINFONI spectrograph, and a co-author on the study – indicated:
“Our first observations of S2 with GRAVITY, about two years ago, already showed that we would have the ideal black hole laboratory. During the close passage, we could even detect the faint glow around the black hole on most of the images, which allowed us to precisely follow the star on its orbit, ultimately leading to the detection of the gravitational redshift in the spectrum of S2.”
Whereas other tests have been performed that have confirmed Einstein’s predictions, this is the first time that the effects of General Relativity have been observed in the motion of a star around a supermassive black hole. In this respect, Einstein has been proven right once again, using one of the most extreme laboratories to date! What’s more, it confirmed that tests involving relativistic effects can provide consistent results over time and space.
“Here in the Solar System we can only test the laws of physics now and under certain circumstances,” said Françoise Delplancke, head of the System Engineering Department at ESO. “So it’s very important in astronomy to also check that those laws are still valid where the gravitational fields are very much stronger.”
In the near future, another relativistic test will be possible as S2 moves away from the black hole. This is known as Schwarzschild precession, in which the star’s orbit itself is expected to rotate slightly over time. The GRAVITY Collaboration will be monitoring S2 to observe this effect as well, once again relying on the VLT’s very precise and sensitive instruments.
As Xavier Barcons (the ESO’s Director General) indicated, this accomplishment was made possible thanks to the spirit of international cooperation represented by the GRAVITY collaboration and the instruments they helped the ESO develop:
“ESO has worked with Reinhard Genzel and his team and collaborators in the ESO Member States for over a quarter of a century. It was a huge challenge to develop the uniquely powerful instruments needed to make these very delicate measurements and to deploy them at the VLT in Paranal. The discovery announced today is the very exciting result of a remarkable partnership.”
And be sure to check out this video of the GRAVITY Collaboration’s successful test, courtesy of the ESO:
When looking to study the most distant objects in the Universe, astronomers often rely on a technique known as Gravitational Lensing. Based on the principles of Einstein’s Theory of General Relativity, this technique uses a large distribution of matter (such as a galaxy cluster or star) to magnify the light coming from a distant object, thereby making it appear brighter and larger.
This technique has allowed for the study of individual stars in distant galaxies. In a recent study, an international team of astronomers used a galaxy cluster to study the farthest individual star ever seen in the Universe. Although it is normally too faint to observe, the presence of a foreground galaxy cluster allowed the team to study the star in order to test a theory about dark matter.
For the sake of their study, Prof. Kelly and his associates used the galaxy cluster known as MACS J1149+2223 as their lens. Located about 5 billion light-years from Earth, this galaxy cluster sits between the Solar System and the galaxy that contains Icarus. By combining Hubble’s resolution and sensitivity with the strength of this gravitational lens, the team was able to see and study Icarus, a blue supergiant.
Icarus, named after the Greek mythological figure who flew too close to the Sun, has had a rather interesting history. At a distance of roughly 9 billion light-years from Earth, the star appears to us as it did when the Universe was just 4.4 billion years old. In April of 2016, the star temporarily brightened to 2,000 times its normal luminosity thanks to the gravitational amplification of a star in MACS J1149+2223.
As Prof. Kelly explained in a recent UCLA press release, this temporarily allowed Icarus to become visible for the first time to astronomers:
“You can see individual galaxies out there, but this star is at least 100 times farther away than the next individual star we can study, except for supernova explosions.”
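The distance and age quoted above fit together: in light-travel terms, the distance is simply the Universe's current age minus its age when the star's light was emitted (a quick illustrative check):

```python
# Light-travel-time check: the star's light left when the Universe was
# ~4.4 billion years old, and arrives now at ~13.8 billion years.
AGE_NOW_GYR = 13.8
AGE_AT_EMISSION_GYR = 4.4

light_travel_gyr = AGE_NOW_GYR - AGE_AT_EMISSION_GYR
```

This gives ~9.4 billion years of light travel, consistent with the "roughly 9 billion light-years" quoted above.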
Kelly and a team of astronomers had been using Hubble and MACS J1149+2223 to magnify and monitor a supernova in a distant spiral galaxy when they spotted a new point of light not far away. Given the position of the new source, they determined that it should be much more highly magnified than the supernova. What’s more, previous studies of this galaxy had not shown the light source, indicating that it was being lensed.
As Tommaso Treu, a professor of physics and astronomy in the UCLA College and a co-author of the study, indicated:
“The star is so compact that it acts as a pinhole and provides a very sharp beam of light. The beam shines through the foreground cluster of galaxies, acting as a cosmic magnifying glass… Finding more such events is very important to make progress in our understanding of the fundamental composition of the universe.”
In this case, the star’s light provided a unique opportunity to test a theory about the invisible mass (aka. “dark matter”) that permeates the Universe. Basically, the team used the pinpoint light source provided by the background star to probe the intervening galaxy cluster and see if it contained huge numbers of primordial black holes, which are considered to be a potential candidate for dark matter.
These black holes are believed to have formed during the birth of the Universe and have masses tens of times larger than the Sun. However, the results of this test showed that light fluctuations from the background star, which had been monitored by Hubble for thirteen years, disfavor this theory. If dark matter were indeed made up of tiny black holes, the light coming from Icarus would have looked much different.
Since it was discovered in 2016 using the gravitational lensing method, Icarus has provided a new way for astronomers to observe and study individual stars in distant galaxies. In so doing, astronomers are able to get a rare and detailed look at individual stars in the early Universe and see how they (and not just galaxies and clusters) evolved over time.
When the James Webb Space Telescope (JWST) is deployed in 2020, astronomers expect to get an even better look and learn so much more about this mysterious period in cosmic history.
The Multiverse Theory, which states that there may be multiple or even an infinite number of Universes, is a time-honored concept in cosmology and theoretical physics. While the term goes back to the late 19th century, the scientific basis of this theory arose from quantum physics and the study of cosmological forces like black holes, singularities, and problems arising out of the Big Bang Theory.
One of the most burning questions when it comes to this theory is whether or not life could exist in multiple Universes. If indeed the laws of physics change from one Universe to the next, what could this mean for life itself? According to a new series of studies by a team of international researchers, it is possible that life could be common throughout the Multiverse (if it actually exists).
Together, the research team sought to determine how the accelerated expansion of the cosmos could have affected the rate of star and galaxy formation in our Universe. This accelerated rate of expansion, which is an integral part of the Lambda-Cold Dark Matter (Lambda-CDM) model of cosmology, arose out of problems posed by Einstein’s Theory of General Relativity.
As a consequence of Einstein’s field equations, physicists understood that the Universe must be either expanding or contracting. In 1917, Einstein responded by proposing the “Cosmological Constant” (represented by Lambda), a force that “held back” the effects of gravity and thus ensured that the Universe was static and unchanging.
Shortly thereafter, Einstein retracted this proposal when Edwin Hubble revealed (based on redshift measurements of other galaxies) that the Universe was indeed in a state of expansion. Einstein apparently went as far as to declare the Cosmological Constant “the biggest blunder” of his career as a result. However, research into cosmological expansion during the late 1990s caused his theory to be reevaluated.
In short, ongoing studies of the large-scale Universe revealed that during the past 5 billion years, cosmic expansion has accelerated. As such, astronomers began to hypothesize the existence of a mysterious, invisible force that was driving this acceleration. Popularly known as “Dark Energy”, this force is also referred to as the Cosmological Constant (CC), since it is responsible for counteracting the effects of gravity.
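In a flat Lambda-CDM model, the onset of this acceleration can be estimated directly: expansion begins to accelerate once the dark-energy density term exceeds half the (diluting) matter term. The sketch below assumes round-number parameters (Ω_m ≈ 0.3, Ω_Λ ≈ 0.7, H0 ≈ 70 km/s/Mpc) purely for illustration:

```python
import math

# Assumed flat Lambda-CDM parameters (illustrative round numbers).
H0 = 70.0 / 3.086e19          # Hubble constant in s^-1 (70 km/s/Mpc; Mpc = 3.086e19 km)
OMEGA_M, OMEGA_L = 0.3, 0.7   # matter and dark-energy density parameters
GYR = 3.156e16                # seconds in a billion years

def age_at_scale_factor(a):
    """Cosmic age (s) at scale factor a, closed form for flat Lambda-CDM."""
    x = math.sqrt(OMEGA_L / OMEGA_M) * a**1.5
    return (2.0 / (3.0 * H0 * math.sqrt(OMEGA_L))) * math.asinh(x)

# Acceleration begins when Omega_L equals half the matter term Omega_m * a^-3:
a_acc = (OMEGA_M / (2 * OMEGA_L)) ** (1.0 / 3.0)
lookback_gyr = (age_at_scale_factor(1.0) - age_at_scale_factor(a_acc)) / GYR
```

With these parameters the switch-over lands roughly 6 billion years ago, in line with the "past 5 billion years" figure quoted above.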
Since that time, astrophysicists and cosmologists have sought to understand how Dark Energy could have affected cosmic evolution. This is an issue because our current cosmological models predict that there should be more Dark Energy in our Universe than has been observed. However, accounting for larger amounts of Dark Energy would cause such a rapid expansion that it would dilute matter before any stars, planets or life could form.
For the first study, Salcido and the team therefore sought to determine how the presence of more Dark Energy could affect the rate of star formation in our Universe. To do this, they conducted hydrodynamical simulations using the EAGLE (Evolution and Assembly of GaLaxies and their Environments) project – one of the most realistic simulations of the observed Universe.
Using these simulations, the team considered the effects that Dark Energy (at its observed value) would have on star formation over the past 13.8 billion years, and an additional 13.8 billion years into the future. From this, the team developed a simple analytic model that indicated that Dark Energy – despite the difference in the rate of cosmic expansion – would have a negligible impact on star formation in the Universe.
They further showed that the impact of Lambda only becomes significant once the Universe has already produced most of its stellar mass, and that it reduces the total density of star formation by only about 15%. As Salcido explained in a Durham University press release:
“For many physicists, the unexplained but seemingly special amount of dark energy in our Universe is a frustrating puzzle. Our simulations show that even if there was much more dark energy or even very little in the Universe then it would only have a minimal effect on star and planet formation, raising the prospect that life could exist throughout the Multiverse.”
For the second study, the team used the same simulation from the EAGLE collaboration to investigate the effect of varying degrees of the CC on the formation of galaxies and stars. This consisted of simulating Universes that had Lambda values ranging from 0 to 300 times the current value observed in our Universe.
However, since the Universe’s rate of star formation peaked at around 3.5 billion years before the onset of accelerating expansion (ca. 8.5 billion years ago and 5.3 billion years after the Big Bang), increases in the CC had only a small effect on the rate of star formation.
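The three numbers in that sentence can be checked against each other in a couple of lines (illustrative arithmetic only):

```python
# Consistency check of the timeline quoted above.
AGE_OF_UNIVERSE_GYR = 13.8

peak_lookback_gyr = 8.5                                  # star formation peaked ~8.5 Gyr ago...
peak_age_gyr = AGE_OF_UNIVERSE_GYR - peak_lookback_gyr   # ...i.e. ~5.3 Gyr after the Big Bang
onset_lookback_gyr = peak_lookback_gyr - 3.5             # acceleration set in ~3.5 Gyr after the peak
```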
Taken together, these simulations indicated that in a Multiverse, where the laws of physics may differ widely, the effects of more dark energy and the accelerated cosmic expansion it drives would not have a significant impact on the rates of star or galaxy formation. This, in turn, indicates that other Universes in the Multiverse would be just about as habitable as our own, at least in theory. As Dr. Barnes explained:
“The Multiverse was previously thought to explain the observed value of dark energy as a lottery – we have a lucky ticket and live in the Universe that forms beautiful galaxies which permit life as we know it. Our work shows that our ticket seems a little too lucky, so to speak. It’s more special than it needs to be for life. This is a problem for the Multiverse; a puzzle remains.”
However, the team’s studies also cast doubt on the ability of Multiverse Theory to explain the observed value of Dark Energy in our Universe. According to their research, if we do live in a Multiverse, we should expect to observe as much as 50 times more Dark Energy than we do. Although their results do not rule out the possibility of the Multiverse, the tiny amount of Dark Energy we’ve observed would be better explained by an as-yet undiscovered law of nature.
As Professor Richard Bower, a member of Durham University’s Institute for Computational Cosmology and a co-author on the paper, explained:
“The formation of stars in a universe is a battle between the attraction of gravity, and the repulsion of dark energy. We have found in our simulations that Universes with much more dark energy than ours can happily form stars. So why such a paltry amount of dark energy in our Universe? I think we should be looking for a new law of physics to explain this strange property of our Universe, and the Multiverse theory does little to rescue physicists’ discomfort.”
These studies are timely since they come on the heels of Stephen Hawking’s final theory, which cast doubt on the existence of the Multiverse and proposed a finite and reasonably smooth Universe instead. Basically, all three studies indicate that the debate about whether or not we live in a Multiverse and the role of Dark Energy in cosmic evolution is far from over. But we can look forward to next-generation missions providing some helpful clues in the future.
What’s more, all of these missions are expected to be gathering their first light sometime in the 2020s. So stay tuned, because more information – with cosmological implications – will be arriving in just a few years time!
Stephen Hawking is rightly seen as one of the most influential scientists of our time. In his time on this planet, the famed physicist, science communicator, author and luminary became a household name, synonymous with the likes of Einstein, Newton and Galileo. What is even more impressive is the fact that he managed to maintain his commitment to science, education and humanitarian efforts despite suffering from a slow, degenerative disease.
Even though Hawking recently passed away, his influence is still being felt. Shortly before his death, Hawking submitted a paper offering his final theory on the origins of the Universe. The paper, which was published earlier this week (on Wednesday, May 2nd), offers a new take on the Big Bang Theory that could revolutionize the way we think of the Universe, how it was created, and how it evolved.
The paper, titled “A smooth exit from eternal inflation?”, was published in the Journal of High Energy Physics. The theory was first announced at a conference at the University of Cambridge in July of last year, where Professor Thomas Hertog (a Belgian physicist at KU Leuven University) shared Hawking’s paper (which Hertog co-authored) on the occasion of his 75th birthday.
According to the current scientific consensus, all of the current and past matter in the Universe came into existence at the same time – roughly 13.8 billion years ago. At this time, all matter was compacted into a very small ball with infinite density and intense heat. Suddenly, this ball started to inflate at an exponential rate, and the Universe as we know it began.
However, it is widely believed that since this inflation started, quantum effects will keep it going forever in some regions of the Universe. This means that globally, the Universe’s inflation is eternal. In this respect, the observable part of our Universe (measuring 13.8 billion light-years in any direction) is just a region in which inflation has ended and stars and galaxies formed.
“The usual theory of eternal inflation predicts that globally our universe is like an infinite fractal, with a mosaic of different pocket universes, separated by an inflating ocean. The local laws of physics and chemistry can differ from one pocket universe to another, which together would form a multiverse. But I have never been a fan of the multiverse. If the scale of different universes in the multiverse is large or infinite the theory can’t be tested.”
In their new paper, Hawking and Hertog offer a new theory that predicts that the Universe is not an infinite fractal-like multiverse, but is finite and reasonably smooth. In short, they theorize that the eternal inflation, as part of the theory of the Big Bang, is wrong. As Hertog explained:
“The problem with the usual account of eternal inflation is that it assumes an existing background universe that evolves according to Einstein’s theory of general relativity and treats the quantum effects as small fluctuations around this. However, the dynamics of eternal inflation wipes out the separation between classical and quantum physics. As a consequence, Einstein’s theory breaks down in eternal inflation.”
In contrast to this, Hawking and Hertog offer an explanation based on String Theory, a branch of theoretical physics that attempts to unify General Relativity with quantum physics. This theory was proposed to explain how gravity interacts with the three other fundamental forces of the Universe (weak and strong nuclear forces and electromagnetism), thus producing a Theory of Everything (ToE).
To put it simply, this theory describes the fundamental constituents of the Universe as tiny, one-dimensional vibrating strings. Hawking and Hertog’s approach uses the holography concept of string theory, which postulates that the Universe is a large and complex hologram. In this theory, physical reality in certain 3D spaces can be mathematically reduced to 2D projections on a surface.
Together, Hawking and Hertog developed a variation of this concept to project out the dimension of time in eternal inflation. This enabled them to describe eternal inflation without having to rely on General Relativity, thus reducing inflation to a timeless state defined on a spatial surface at the beginning of time. In this respect, the new theory represents a change from Hawking’s earlier work on “no boundary theory”.
Also known as the Hartle–Hawking No Boundary Proposal, this theory viewed the Universe like a quantum particle – assigning it a wave function that described all possible Universes. This theory also predicted that if you go back in time to the beginning of the Universe, it would shrink and close off like a sphere. Lastly, it predicted that the Universe would eventually stop expanding and collapse in on itself.
As Hertog explains, this new theory is a departure from that earlier work:
“When we trace the evolution of our universe backwards in time, at some point we arrive at the threshold of eternal inflation, where our familiar notion of time ceases to have any meaning. Now we’re saying that there is a boundary in our past.”
Using this theory, Hawking and Hertog were able to derive more reliable predictions about the global structure of the Universe. In addition, a Universe predicted to emerge from eternal inflation on the past boundary is also finite and much simpler. Last, but not least, the theory is more predictive and testable than the infinite Multiverse predicted by the old theory of eternal inflation.
“We are not down to a single, unique universe, but our findings imply a significant reduction of the multiverse, to a much smaller range of possible universes,” said Hawking. In theory, a finite and smooth Universe is one we can observe (at least locally) and will be governed by physical laws that we are already familiar with. Compared to an infinite number of Universes governed by different physical laws, it certainly simplifies the math!
Looking ahead, Hertog plans to study the implications of this theory on smaller scales using data obtained by space telescopes about the local Universe. In addition, he hopes to take advantage of recent studies concerning gravitational waves (GWs) and the many events that have been detected. Essentially, Hertog believes that primordial GWs generated at the exit from eternal inflation are the most promising means to test the model.
Even though he is no longer with us, Hawking’s final theory could be his most profound contribution to science. If future research should prove him correct, then Hawking will have resolved one of the most daunting problems in modern astrophysics and cosmology. Just one more achievement from a man who spent his life changing how people think about the Universe!
And now, an international team led by MIT astrophysicist Carl Rodriguez has produced a study that suggests that black holes may merge multiple times. According to their study, these “second-generation mergers” likely occur within globular clusters, the large and compact star clusters that typically orbit at the edges of galaxies – and which are densely-packed with hundreds of thousands to millions of stars.
“We think these clusters formed with hundreds to thousands of black holes that rapidly sank down in the center. These kinds of clusters are essentially factories for black hole binaries, where you’ve got so many black holes hanging out in a small region of space that two black holes could merge and produce a more massive black hole. Then that new black hole can find another companion and merge again.”
Globular clusters have been a source of fascination ever since astronomers first observed them in the 17th century. These spherical collections contain some of the oldest known stars in the Universe, and can be found in most galaxies. Depending on the size and type of galaxy they orbit, the number of clusters varies, with elliptical galaxies hosting tens of thousands while galaxies like the Milky Way have over 150.
For years, Rodriguez has been investigating the behavior of black holes within globular clusters to see if they interact with their stars differently from black holes that occupy less densely-populated regions in space. To test this hypothesis, Rodriguez and his colleagues used the Quest supercomputer at Northwestern University to conduct simulations on 24 stellar clusters.
These clusters ranged in size from 200,000 to 2 million stars and covered a range of different densities and metallic compositions. The simulations modeled the evolution of individual stars within these clusters over the course of 12 billion years. This span of time was enough to follow these stars as they interacted with each other, and eventually formed black holes.
The simulations also modeled the evolution and trajectories of black holes once they formed. As Rodriguez explained:
“The neat thing is, because black holes are the most massive objects in these clusters, they sink to the center, where you get a high enough density of black holes to form binaries. Binary black holes are basically like giant targets hanging out in the cluster, and as you throw other black holes or stars at them, they undergo these crazy chaotic encounters.”
Whereas previous simulations were based on Newton’s physics, the team decided to add Einstein’s relativistic effects into their simulations of globular clusters. This was due to the fact that gravitational waves were not predicted by Newton’s theories, but by Einstein’s Theory of General Relativity. As Rodriguez indicated, this allowed them to see how gravitational waves played a role:
“What people had done in the past was to treat this as a purely Newtonian problem. Newton’s theory of gravity works in 99.9 percent of all cases. The few cases in which it doesn’t work might be when you have two black holes whizzing by each other very closely, which normally doesn’t happen in most galaxies… In Einstein’s theory of general relativity, where I can emit gravitational waves, then when one black hole passes near another, it can actually emit a tiny pulse of gravitational waves. This can subtract enough energy from the system that the two black holes actually become bound, and then they will rapidly merge.”
What they observed was that inside the stellar clusters, black holes merge with each other to create new black holes. In previous simulations, Newtonian gravity predicted that most binary black holes would be kicked out of the cluster before they could merge. But by taking relativistic effects into account, Rodriguez and his team found that nearly half of the binary black holes merged to form more massive ones.
As Rodriguez explained, the difference between those that merged and those that were kicked out came down to spin:
“If the two black holes are spinning when they merge, the black hole they create will emit gravitational waves in a single preferred direction, like a rocket, creating a new black hole that can shoot out as fast as 5,000 kilometers per second — so, insanely fast. It only takes a kick of maybe a few tens to a hundred kilometers per second to escape one of these clusters.”
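The comparison in that quote is easy to verify with the Newtonian escape velocity v_esc = √(2GM/r). The cluster mass and radius below (5 × 10⁵ solar masses within ~5 parsecs) are assumed, typical-globular-cluster values, not figures from the study:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30 # one solar mass, kg
PC = 3.086e16    # metres per parsec

def escape_velocity_km_s(mass_solar, radius_pc):
    """Newtonian escape velocity sqrt(2GM/r), returned in km/s."""
    return math.sqrt(2 * G * mass_solar * M_SUN / (radius_pc * PC)) / 1e3

v_esc = escape_velocity_km_s(5e5, 5.0)   # assumed typical globular cluster
# ~30 km/s -- so a 5,000 km/s gravitational-wave "rocket" kick ejects the
# merged black hole with enormous margin; even modest kicks suffice.
```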
This raised an interesting point about previous simulations, which assumed that the product of any black hole merger would be kicked out of the cluster, since most black holes are believed to be rapidly spinning. That assumption, however, seems to contradict the measurements from LIGO, which has so far only detected binary black holes with low spins. To test the implications of this, Rodriguez and his colleagues reduced the spin rates of the black holes in their simulations. What they found was that nearly 20% of the binary black holes from clusters had at least one black hole that ranged from 50 to 130 solar masses.
Essentially, this indicated that these were “second generation” black holes, since scientists believe that this mass cannot be achieved by a black hole that formed from a single star. Looking ahead, Rodriguez and his team anticipate that if LIGO detects an object with a mass in this range, it is likely the result of black holes merging within a dense stellar cluster, rather than forming from a single star.
“If we wait long enough, then eventually LIGO will see something that could only have come from these star clusters, because it would be bigger than anything you could get from a single star,” Rodriguez says. “My co-authors and I have a bet against a couple people studying binary star formation that within the first 100 LIGO detections, LIGO will detect something within this upper mass gap. I get a nice bottle of wine if that happens to be true.”
The detection of gravitational waves was a historic accomplishment, and one that has enabled astronomers to conduct new and exciting research. Already, scientists are gaining new insight into black holes by studying the byproduct of their mergers. In the coming years, we can expect to learn a great deal more thanks to improved methods and increased cooperation between observatories.
Black holes have been an endless source of fascination ever since Einstein’s Theory of General Relativity predicted their existence. In the past 100 years, the study of black holes has advanced considerably, but the awe and mystery of these objects remains. For instance, scientists have noted that in some cases, black holes have massive jets of charged particles emanating from them that extend for millions of light years.
These “relativistic jets” – so-named because they propel charged particles at a fraction of the speed of light – have puzzled astronomers for years. But thanks to a recent study conducted by an international team of researchers, new insight has been gained into these jets. Consistent with General Relativity, the researchers showed that these jets gradually precess (i.e. change direction) as a result of space-time being dragged into the rotation of the black hole.
For the sake of their study, the team conducted simulations using the Blue Waters supercomputer at the University of Illinois. The simulations they conducted were the first ever to model the behavior of relativistic jets coming from Supermassive Black Holes (SMBHs). With close to a billion computational cells, it was also the highest-resolution simulation of an accreting black hole ever achieved.
“Understanding how rotating black holes drag the space-time around them and how this process affects what we see through the telescopes remains a crucial, difficult-to-crack puzzle. Fortunately, the breakthroughs in code development and leaps in supercomputer architecture are bringing us ever closer to finding the answers.”
Like all supermassive black holes, rapidly spinning SMBHs regularly engulf (i.e. accrete) matter. However, rapidly spinning black holes are also known for the way they emit energy in the form of relativistic jets. The matter that feeds these black holes forms a rotating disk around them – an accretion disk – which is characterized by hot, energized gas and magnetic field lines.
It is the presence of these field lines that allows black holes to propel energy in the form of jets. Because these jets are so large, they are easier to study than the black holes themselves. By observing how quickly the direction of these jets changes, astronomers can learn about the rotation of the black holes themselves – such as the orientation and size of their rotating disks.
Advanced computer simulations are necessary when it comes to the study of black holes, largely because black holes themselves emit no light and are typically very far away. For instance, the closest SMBH to Earth is Sagittarius A*, which is located about 26,000 light-years away at the center of our galaxy. As such, simulations are the only way to determine how a highly complex system like a black hole operates.
In previous simulations, scientists operated under the assumption that black hole disks were aligned. However, most SMBHs have been found to have tilted disks – i.e. the disks rotate around a different axis than the black hole itself. This study was therefore seminal in that it showed how tilted disks can change direction relative to their black hole, leading to precessing jets that periodically change their direction.
This was previously unknown because of the incredible amount of computing power needed to construct 3-D simulations of the region surrounding a rapidly spinning black hole. With the support of a National Science Foundation (NSF) grant, the team was able to achieve this by using Blue Waters, one of the largest supercomputers in the world.
With this supercomputer at their disposal, the team constructed a black hole simulation code that they accelerated using graphical processing units (GPUs). Thanks to this combination, the team was able to carry out simulations with the highest level of resolution ever achieved – i.e. close to a billion computational cells. As Tchekhovskoy explained:
“The high resolution allowed us, for the first time, to ensure that small-scale turbulent disk motions are accurately captured in our models. To our surprise, these motions turned out to be so strong that they caused the disk to fatten up and the disk precession to stop. This suggests that precession can come about in bursts.”
The precession of relativistic jets could explain the light fluctuations that have been observed coming from around black holes in the past – known as quasi-periodic oscillations (QPOs). These oscillations, which were first discovered by Michiel van der Klis (one of the co-authors on the study), operate in much the same way as a quasar’s beams, which appear to have a strobing effect.
This study is one of many being conducted on rotating black holes around the world, the purpose of which is to gain a better understanding of recent discoveries like gravitational waves, which are caused by the merger of black holes. These studies are also being applied to observations from the Event Horizon Telescope, which aims to capture the first images of Sagittarius A*’s shadow. What they reveal is sure to excite and amaze, and potentially deepen the mystery of black holes.
In the past century, the study of black holes has advanced considerably – from the purely theoretical, to indirect studies of the effects they have on surrounding matter, to the study of gravitational waves themselves. Perhaps one day, we might actually be able to study them directly or (if it’s not too much to hope for) peer directly inside them!
Welcome back to our series on Exoplanet-Hunting methods! Today, we look at the curious and unique method known as Gravitational Microlensing.
The hunt for extra-solar planets sure has heated up in the past decade. Thanks to improvements made in technology and methodology, the number of exoplanets that have been observed (as of December 1st, 2017) has reached 3,710 planets in 2,780 star systems, with 621 systems boasting multiple planets. Unfortunately, due to the various limits astronomers are forced to contend with, the vast majority have been discovered using indirect methods.
One of the more commonly-used methods for indirectly detecting exoplanets is known as Gravitational Microlensing. Essentially, this method relies on the gravity of a foreground star to bend and focus the light coming from a more distant background star. As the foreground star passes in front of the background star relative to the observer, the background star brightens measurably; if the foreground star hosts a planet, the planet adds a brief secondary spike to this brightening, which can then be used to infer its presence.
In this respect, Gravitational Microlensing is a scaled-down version of Gravitational Lensing, where an intervening object (like a galaxy cluster) is used to focus light coming from a galaxy or other object located beyond it. It also shares a key element with the highly-effective Transit Method: stars are monitored for changes in brightness that indicate the presence of an exoplanet.
In accordance with Einstein’s Theory of General Relativity, gravity causes the fabric of spacetime to bend. This effect can cause light affected by an object’s gravity to become distorted or bent. It can also act as a lens, causing light to become more focused and making distant objects (like stars) appear brighter to an observer. This effect occurs only when the two stars are almost exactly aligned relative to the observer (i.e. one positioned in front of the other).
These “lensing events” are brief, but plentiful, as Earth and stars in our galaxy are always moving relative to each other. In the past decade, over one thousand such events have been observed, and typically lasted for a few days or weeks at a time. In fact, this effect was used by Sir Arthur Eddington in 1919 to provide the first empirical evidence for General Relativity.
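The brightening during such an event follows a simple analytic form, often called the Paczynski light curve. Below is a minimal sketch; the event parameters (peak time, minimum separation, Einstein-crossing time) are chosen purely for illustration:

```python
import math

def magnification(u):
    """Point-source, point-lens magnification for a lens-source
    separation u, measured in units of the Einstein radius."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def paczynski_light_curve(t, t0=0.0, u0=0.1, tE=20.0):
    """Magnification at time t (days) for an event peaking at t0,
    with minimum separation u0 and Einstein-crossing time tE (days)."""
    u = math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return magnification(u)

# The source brightens smoothly as the lens approaches, peaks at
# closest alignment, then fades symmetrically over days to weeks.
for t in (-40, -20, 0, 20, 40):
    print(f"t = {t:+3d} d  ->  A = {paczynski_light_curve(t):.2f}")
```

A planet orbiting the lens star briefly distorts this smooth curve, producing the short-lived spike that microlensing surveys search for.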
This took place during the solar eclipse of May 29th, 1919, where Eddington and a scientific expedition traveled to the island of Principe off the coast of West Africa to take pictures of the stars that were now visible in the region around the Sun. The pictures confirmed Einstein’s prediction by showing how light from these stars was shifted slightly in response to the Sun’s gravitational field.
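The deflection Eddington set out to measure can be reproduced from first principles. A minimal sketch, using the general-relativistic deflection angle for a light ray grazing the Sun's limb, alpha = 4GM/(c^2 R), with standard values of the physical constants in SI units:

```python
import math

# General-relativistic deflection of light passing a mass M at
# impact parameter R:  alpha = 4 * G * M / (c^2 * R)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Deflection angle, in arcseconds, for light grazing a mass."""
    alpha_rad = 4 * G * mass_kg / (c ** 2 * impact_parameter_m)
    return math.degrees(alpha_rad) * 3600

# Light grazing the solar limb is bent by about 1.75 arcseconds --
# the tiny shift the 1919 eclipse expedition measured.
print(f"{deflection_arcsec(M_SUN, R_SUN):.2f} arcsec")
```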
The technique was originally proposed by astronomers Shude Mao and Bohdan Paczynski in 1991 as a means of looking for binary companions to stars. Their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method of detecting exoplanets. This method is most effective when looking for planets towards the center of the galaxy, as the galactic bulge provides a large number of background stars.
Microlensing is the only known method capable of discovering planets at truly great distances from the Earth and is capable of finding the smallest of exoplanets. Whereas the Radial Velocity Method is effective when looking for planets up to 100 light years from Earth and Transit Photometry can detect planets hundreds of light-years away, microlensing can find planets that are thousands of light-years away.
While most other methods have a detection bias towards planets with small orbits, the microlensing method is the most sensitive means of detecting planets that are around 1-10 astronomical units (AU) away from Sun-like stars. This makes it especially effective when paired with the Radial Velocity and Transit Methods, which can confirm the existence of exoplanets as well as yield accurate estimates of a planet’s radius and mass.
Taken together, these benefits make microlensing the most effective method for finding Earth-like planets around Sun-like stars (alone or in combination with other methods). In addition, microlensing surveys can be effectively mounted using ground-based facilities. Like Transit Photometry, the Microlensing Method benefits from the fact that it can be used to survey tens of thousands of stars simultaneously.
Because microlensing events are unique and never repeat, any planet detected using this method cannot be observed again. In addition, planets detected this way tend to be very far away, which makes follow-up investigations virtually impossible. This makes microlensing a good means for detecting exoplanet candidates, but a very poor one for confirming them.
Another problem with microlensing is that it is subject to a considerable margin of error when placing constraints on a planet’s characteristics. For example, microlensing surveys can only produce rough estimates of a planet’s distance: a planet tens of thousands of light-years from Earth might come with a distance uncertainty of several thousand light-years.
Microlensing is also unable to yield accurate estimates of a planet’s size, and mass estimates are subject to loose constraints. Orbital properties are likewise difficult to determine, since the only orbital characteristic that can be directly measured with this method is the planet’s current projected separation from its star. As such, planets with eccentric orbits will only be detectable for a small portion of their orbits (when they are far away from their stars).
The strength of the planetary microlensing signal scales with the planet-to-star mass ratio, which means that it is easiest to detect planets around low-mass stars. This makes microlensing effective in the search for rocky planets around low-mass, M-type (red dwarf) stars, but limits its effectiveness with more massive stars. Finally, microlensing is dependent on rare and random events – the passage of one star precisely in front of another, as seen from Earth – which makes detections both rare and unpredictable.
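The dependence on the planet-to-star mass ratio can be made concrete. A minimal sketch below compares an Earth-mass planet around a Sun-like host with the same planet around a red dwarf; the 0.3-solar-mass dwarf is an illustrative choice, and the planetary perturbation scale is commonly taken to grow roughly as the square root of the mass ratio:

```python
M_EARTH = 5.972e24   # Earth mass, kg
M_SUN = 1.989e30     # solar mass, kg

def mass_ratio(m_planet_kg, m_star_kg):
    """Planet-to-star mass ratio q, the quantity that sets the
    scale of the planetary microlensing perturbation (~ sqrt(q))."""
    return m_planet_kg / m_star_kg

# Earth-mass planet around a Sun-like star vs. a 0.3-solar-mass red dwarf:
q_sun = mass_ratio(M_EARTH, M_SUN)
q_dwarf = mass_ratio(M_EARTH, 0.3 * M_SUN)

# The same planet yields a larger relative signal around the lighter
# star, which is why microlensing favors red-dwarf hosts.
print(f"q (Sun-like host):  {q_sun:.1e}")
print(f"q (red-dwarf host): {q_dwarf:.1e}")
print(f"signal scale ratio (sqrt q): {(q_dwarf / q_sun) ** 0.5:.2f}")
```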
Examples of Gravitational Microlensing Surveys:
Surveys that rely on the Microlensing Method include the Optical Gravitational Lensing Experiment (OGLE) at the University of Warsaw. Led by Andrzej Udalski, the director of the University’s Astronomical Observatory, this international project uses the 1.3 meter “Warsaw” telescope at Las Campanas, Chile, to search for microlensing events in dense star fields toward the galactic bulge.
There is also the Microlensing Observations in Astrophysics (MOA) group, a collaborative effort between researchers in New Zealand and Japan. Led by Professor Yasushi Muraki of Nagoya University, this group uses the Microlensing Method to conduct surveys for dark matter, extra-solar planets, and stellar atmospheres from the southern hemisphere.
And then there’s the Probing Lensing Anomalies NETwork (PLANET), which consists of five 1-meter telescopes distributed around the southern hemisphere. In collaboration with RoboNet, this project is able to provide near-continuous observations of microlensing events caused by planets with masses as low as Earth’s.