For astronomers, one of the greatest challenges is capturing images of objects and phenomena that are difficult to see with optical (visible-light) telescopes. This problem has been largely addressed by interferometry, a technique in which multiple telescopes gather signals that are then combined to create a more complete picture. A prominent example is the Event Horizon Telescope, which relied on observatories around the world to capture the first images of the supermassive black hole (SMBH) at the center of the galaxy M87 and of Sagittarius A* at the center of the Milky Way.
That said, classical interferometry requires that optical links be maintained between observatories, which imposes limitations and can drastically increase costs. In a recent study, a team of astrophysicists and theoretical physicists proposed overcoming these limitations with quantum mechanics: rather than relying on optical links, they describe how quantum entanglement could be used to share photons between observatories. The technique is part of a growing field of research that could someday lead to “quantum telescopes.”
An almost unimaginably enormous black hole sits at the heart of the Milky Way. It’s a supermassive black hole (SMBH), and astronomers think that almost all massive galaxies have one at their center. But of course, nobody’s ever seen one (sort of, more on that later): it’s all based on evidence other than direct observation.
The Milky Way’s SMBH is called Sagittarius A* (Sgr A*), and it’s about 4 million times more massive than the Sun. Scientists know it’s there because they can observe the effect it has on matter that gets too close. Now we have one of our best views yet of Sgr A*, thanks to a team of scientists using a technique called interferometry.
Fall will soon be at our doorstep. But before the leaves change colors and the smell of pumpkin fills our coffee shops, the Pleiades star cluster will mark the new season with its earlier presence in the night sky.
The delicate grouping of blue stars has been a prominent sight since antiquity. But in recent years, the cluster has also been the subject of an intense debate, marking a controversy that has troubled astronomers for more than a decade.
Now, a new measurement argues that the distance to the Pleiades star cluster measured by ESA’s Hipparcos satellite is decidedly wrong and that previous measurements from ground-based telescopes had it right all along.
The Pleiades star cluster is a perfect laboratory to study stellar evolution. Born from the same cloud of gas, all stars exhibit nearly identical ages and compositions, but vary in their mass. Accurate models, however, depend greatly on distance. So it’s critical that astronomers know the cluster’s distance precisely.
A well-pinned-down distance is also a perfect stepping stone on the cosmic distance ladder. In other words, accurate distances to the Pleiades will help produce accurate distances to the farthest galaxies.
But accurately measuring the vast distances in space is tricky. A star’s trigonometric parallax (its tiny apparent shift against background stars caused by our moving vantage point as Earth orbits the Sun) gives its distance more directly than any other method.
Originally the consensus was that the Pleiades are about 435 light-years from Earth. However, ESA’s Hipparcos satellite, launched in 1989 to precisely measure the positions and distances of thousands of stars using parallax, produced a distance measurement of only about 392 light-years, with an error of less than 1%.
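The relationship between parallax and distance is simple: the distance in parsecs is the reciprocal of the parallax angle in arcseconds. A minimal sketch of what the two competing distances imply for the measured angle (the only constant assumed is the light-year-to-parsec conversion):

```python
# Distance in parsecs = 1 / parallax in arcseconds.
LY_PER_PC = 3.2616  # light-years per parsec

def parallax_mas(distance_ly):
    """Parallax angle (milliarcseconds) implied by a distance in light-years."""
    distance_pc = distance_ly / LY_PER_PC
    return 1000.0 / distance_pc  # 1/pc gives arcseconds; x1000 for mas

ground = parallax_mas(435)     # ground-based consensus: ~7.50 mas
hipparcos = parallax_mas(392)  # Hipparcos result: ~8.32 mas
```

The dispute therefore comes down to a difference of less than a milliarcsecond in the measured angle, which is why instrument precision matters so much here.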
“That may not seem like a huge difference, but, in order to fit the physical characteristics of the Pleiades stars, it challenged our general understanding of how stars form and evolve,” said lead author Carl Melis, of the University of California, San Diego, in a press release. “To fit the Hipparcos distance measurement, some astronomers even suggested that some type of new and unknown physics had to be at work in such young stars.”
If the cluster really was 10% closer than everyone had thought, then the stars must be intrinsically dimmer than stellar models suggested. A debate ensued as to whether the spacecraft or the models were at fault.
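The “intrinsically dimmer” conclusion follows from the inverse-square law: if the stars are closer but look just as bright, their true luminosity must be lower by the square of the distance ratio. A quick check of the numbers:

```python
# Apparent brightness ~ luminosity / distance^2, so at fixed apparent
# brightness, intrinsic luminosity scales with distance squared.
def implied_luminosity_ratio(distance_ratio):
    """Factor by which intrinsic luminosity changes if the true distance
    is distance_ratio times the assumed one, at fixed apparent brightness."""
    return distance_ratio ** 2

ratio = implied_luminosity_ratio(392 / 435)  # Hipparcos vs. ground distance
dimming_percent = (1 - ratio) * 100          # roughly 19% intrinsically dimmer
```

A nearly 20% shortfall in luminosity is far too large for stellar models to absorb quietly, which is what made the discrepancy so contentious.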
To resolve the discrepancy, Melis and his colleagues used a technique known as very-long-baseline radio interferometry. By linking widely separated telescopes, astronomers create a virtual telescope whose resolving power matches that of a single dish as large as the greatest distance between them.
The network included the Very Long Baseline Array (a system of 10 radio telescopes ranging from Hawaii to the Virgin Islands), the Green Bank Telescope in West Virginia, the William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico, and the Effelsberg Radio Telescope in Germany.
“Using these telescopes working together, we had the equivalent of a telescope the size of the Earth,” said Amy Mioduszewski, of the National Radio Astronomy Observatory (NRAO). “That gave us the ability to make extremely accurate position measurements — the equivalent of measuring the thickness of a quarter in Los Angeles as seen from New York.”
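The resolving power of such a network is set by the diffraction limit, roughly wavelength divided by baseline. A sketch with assumed numbers (a 1.3 cm observing wavelength and an Earth-diameter baseline are illustrative choices here, not figures quoted by the study), alongside the quarter analogy from the quote:

```python
import math

RAD_TO_UAS = 180 / math.pi * 3600 * 1e6  # radians -> microarcseconds

def resolution_uas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution, theta ~ lambda / baseline."""
    return wavelength_m / baseline_m * RAD_TO_UAS

# Assumed numbers: 1.3 cm radio wavelength, baseline ~ Earth's diameter.
vlbi = resolution_uas(0.013, 12.74e6)    # a couple hundred microarcseconds

# The quarter analogy: a ~1.75 mm thick coin seen from ~3,900 km away.
quarter = 1.75e-3 / 3.9e6 * RAD_TO_UAS   # a similar order of magnitude
```

Both angles land in the same regime of tens to hundreds of microarcseconds, which is exactly the precision needed to nail down a sub-milliarcsecond parallax.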
After a year and a half of observations, the team determined a distance of 444.0 light-years to within 1% — matching the results from previous ground-based observations and not the Hipparcos satellite.
“The question now is what happened to Hipparcos?” Melis said.
The spacecraft measured the position of roughly 120,000 nearby stars and — in principle — calculated distances that were far more precise than possible with ground-based telescopes. If this result holds up, astronomers will grapple with why the Hipparcos observations misjudged the distances so badly.
ESA’s long-awaited Gaia observatory, launched on Dec. 19, 2013, will use a similar parallax technique to measure the distances of about one billion stars. Although it’s now ready to begin its science mission, the mission team will have to take special care, cross-checking against ground-based radio telescopes to ensure its measurements are accurate.
The findings were published in the Aug. 29 issue of Science and are available online.
When it comes to planet Earth, it’s very important to know whether we’re growing or shrinking. While plate tectonics is responsible for major changes in our planet’s outer crust, we also need accurate measurements of our atmosphere and magnetic fields. To make these appraisals accurate, the global science community established the International Terrestrial Reference Frame (ITRF).
At one time, scientists theorized that Earth might be expanding or contracting. After all, major events like volcanoes, landslides, and ice sheets are at the root of significant elevation changes, and even sizable climate events like El Niño and La Niña redistribute large amounts of water. Now a new NASA study, published recently in Geophysical Research Letters, has used space measurement tools and new data calculation techniques to show no significant change in the size of our planet.
Why is monitoring our size so important? The International Terrestrial Reference Frame is important not only for ground navigation but for satellite tracking as well. NASA says to think of it this way: “If all of Earth’s GPS stations were located in Norway, their data would indicate that Earth is growing, because high-latitude countries like Norway are still rising in elevation in response to the removal of the weight of Ice Age ice sheets.” So, for all intents and purposes, the ITRF uses the average center of mass of the total Earth, computed from a quarter century of satellite data. High-precision space geodesy includes:
Satellite Laser Ranging — a global observation station network that measures, with millimeter-level precision, the time it takes for ultrashort pulses of light to travel from the ground stations to satellites specially equipped with retroreflectors and back again.
Very-Long Baseline Interferometry — a radio astronomy technology that combines observations of an object made simultaneously by many telescopes to simulate a telescope as big as the maximum distance between the telescopes.
Global Positioning System — the U.S.-built space-based global navigation system that provides users around the world with precise location and time information.
Doppler Orbitography and Radiopositioning Integrated by Satellite — a French satellite system used to determine satellite orbits and positioning. Beacons on the ground emit radio signals that are received by satellites. The movement of the satellites causes a frequency shift of the signal that can be observed to determine ground positions and other information.
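The millimeter-level precision quoted for satellite laser ranging puts a striking demand on timing. The range is just the speed of light times half the round-trip time, so a millimeter of range corresponds to only a few picoseconds of flight time. A minimal sketch of that arithmetic (the formula is standard; the numbers below are illustrative):

```python
# Satellite Laser Ranging: range = speed_of_light * round_trip_time / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(seconds):
    """One-way station-to-satellite distance from a laser pulse's round trip."""
    return C * seconds / 2

# Round-trip time difference corresponding to 1 mm of extra range:
dt_per_mm = 2 * 1e-3 / C  # about 6.7e-12 s, i.e. ~6.7 picoseconds
```

In other words, resolving millimeters requires clocks and pulse detectors good to a handful of picoseconds, which is why the ranging stations use ultrashort laser pulses.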
A team of scientists led by Xiaoping Wu of NASA’s Jet Propulsion Laboratory, Pasadena, Calif., and including participants from the Institut Geographique National, Champs-sur-Marne in France, and Delft University of Technology in The Netherlands, is currently assessing the accuracy of the International Terrestrial Reference Frame. Using the new data and calculation techniques, combined with measurements of Earth’s gravity from NASA’s Gravity Recovery and Climate Experiment (GRACE) spacecraft and models of ocean-bottom pressure, the team can account for minute changes in Earth’s gravity. The results show Earth’s radius varying by only about 0.004 inches (0.1 millimeters) – less than the thickness of a human hair.
“Our study provides an independent confirmation that the solid Earth is not getting larger at present, within current measurement uncertainties,” said Wu.