Go to a public place where people gather, such as a rush-hour sidewalk downtown or a weekend shopping mall, and you'll quickly notice that each person is an individual with distinctive characteristics: height, weight and countenance, for example. The same is true of the stars that congregate above. Each is distinct in size, shape, age and color. There's also one other trait that's immediately noticeable at first glance: each star has a unique brilliance.
As early as 120 BC, Greek astronomers ranked the stars into categories according to their splendor, and the first to do this was Hipparchus. Although we know very little about his life, he is nonetheless considered one of the most influential astronomers of Antiquity. Over two thousand years ago, he calculated the length of a year to within 6.5 minutes, discovered the precession of the equinoxes, predicted the where and when of both lunar and solar eclipses, and accurately measured the distance from Earth to the Moon. Hipparchus was also the father of trigonometry, and his catalogue charted between 850 and 1,100 stars, identifying each by position and ranking it according to brightness on a scale running from one through six. The most dazzling stars were described as first magnitude, and those that appeared faintest to the unaided eye were designated sixth. Because his classification was based on naked-eye observations, it was simple, but it was later incorporated and enlarged upon in Ptolemy's Almagest, which became the standard for the next 1,400 years. Copernicus, Kepler, Galileo, Newton and Halley, for example, were all familiar with the scheme and accepted it.
Of course, there were no binoculars or telescopes in the time of Hipparchus, and it takes keen eyesight and good observing conditions to discern stars at the sixth magnitude. Today, the light pollution that pervades most major cities and their surrounding metropolitan areas limits our view of faint objects in the night sky. For example, observers in many suburban locations can only see third- to fourth-magnitude stars; on the very best nights, fifth magnitude may be visible. Although the loss of one or two magnitudes may not seem like much, consider that the number of visible stars increases rapidly with each additional magnitude of depth. The difference between a light-polluted sky and a dark sky is breathtaking!
By the mid-19th century, technology had reached a point of precision at which the old method of gauging star brightness by approximation had become an impediment to research. By this time, the array of instruments used to study the heavens included not only the telescope but also the spectroscope and the camera. These devices were a huge improvement over handwritten notes, eyepiece sketches and inferences drawn from recollections of previous visual observations. Additionally, since telescopes gather far more light than the human eye can muster, science had known since Galileo's first telescopic observations that there were stars much fainter than anyone had suspected when the magnitude scale was invented. It therefore became increasingly accepted that the brightness assignments handed down from Antiquity were too subjective. But instead of abandoning the scale, astronomers chose to refine it by defining star brightness mathematically.
Norman Robert Pogson was a British astronomer born in Nottingham, England on March 23, 1829. Pogson exhibited his prowess with complex calculations at an early age, computing the orbits of two comets by the time he was only 18. During his career as an astronomer in Oxford and later in India, he discovered eight asteroids and twenty-one variable stars. But his most memorable contribution to science was a system for assigning stellar brightness quantifiably. Pogson noticed that stars of the first magnitude were about a hundred times as bright as stars of the sixth magnitude. In 1856, he proposed that this be adopted as a new standard, so that each step of one magnitude would correspond to a change in brightness equal to the fifth root of 100, or about 2.512. Pogson designated Polaris, Aldebaran and Altair as magnitude 2.0, and all other stars were compared to these in his system; of the three, Polaris was the reference star. Unfortunately, astronomers later discovered that Polaris is slightly variable, so they substituted Vega's brilliance as the baseline for brightness. It should be noted that Vega has since been replaced with a more complicated mathematical zero point.
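To make the arithmetic concrete, here is a small Python sketch of the relationship Pogson proposed (the constant is the scale's defining value, not a measurement):

```python
# A minimal sketch of the arithmetic behind Pogson's proposal.
POGSON_RATIO = 100 ** (1 / 5)  # the fifth root of 100, about 2.512

def brightness_ratio(fainter_mag, brighter_mag):
    """How many times brighter the brighter star appears."""
    return POGSON_RATIO ** (fainter_mag - brighter_mag)

print(f"One magnitude step: {POGSON_RATIO:.3f}x")            # ~2.512
print(f"Magnitude 6 vs. 1:  {brightness_ratio(6, 1):.0f}x")  # 100, by design
```

Five steps of 2.512 multiply out to exactly 100, which is the whole point of the definition.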
Assigning an intensity value to stars between the first and sixth magnitude levels was based on the then-prevalent belief that the eye senses differences in brightness on a logarithmic scale; scientists at that time believed a star's perceived magnitude was not directly proportional to the actual amount of energy the eye received. They assumed a star of magnitude 4 would appear to be halfway between the brightness of a star at magnitude 3 and one at magnitude 5. We now know that this is not quite true: the eye's sensitivity is not exactly logarithmic but instead follows Stevens' power law.
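To see what that "halfway" assumption means in terms of light actually received, here is a tiny Python check; it illustrates the logarithmic model described above, not real human vision:

```python
import math

# Under the logarithmic assumption, magnitude is proportional to the log
# of flux (m = -2.5 * log10(F) + constant). A magnitude-4 star's flux is
# then the geometric mean of the fluxes at magnitudes 3 and 5, which is
# what "appearing halfway between them" meant in this model of the eye.
def flux(mag):
    return 10 ** (-0.4 * mag)

f3, f4, f5 = flux(3), flux(4), flux(5)
print(math.isclose(f4, math.sqrt(f3 * f5)))  # True: geometric mean
```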
Regardless, the Pogson ratio became the standard method of assigning magnitudes based on the apparent brightness of stars seen from Earth, and over time, as instruments improved, astronomers were able to refine their designations so that fractional magnitudes also became possible.
As previously mentioned, it had been known since the time of Galileo that the Universe is filled with stars fainter than the eye alone can perceive. The great astronomer's notebooks are full of references to seventh- and eighth-magnitude stars that he discovered. So the Pogson ratio was extended to encompass objects dimmer than sixth magnitude, too. For example, the unaided eye has access to about 6,000 stars (though few people ever see this many, due to night-sky glow and the need to observe over a period of months from the equator). Common 10x50 binoculars increase the eye's light grasp by about fifty times, expand the number of viewable stars to around 50,000 and enable the observer to spot ninth-magnitude objects. A modest six-inch telescope extends vision even further, gathering about 475 times more light than the unaided eye and revealing stars down to the twelfth magnitude. Approximately 60,000 celestial targets are observable with an instrument like this.
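As a rough illustration of where those light-grasp figures come from, the following Python sketch compares apertures to a dark-adapted pupil of about 7 mm (a common textbook assumption). Light grasp scales with collecting area, and each factor of ~2.512 in gathered light adds one magnitude of depth:

```python
import math

# Light grasp relative to a ~7 mm dark-adapted pupil, and the
# corresponding gain in limiting magnitude.
PUPIL_MM = 7.0

def light_grasp(aperture_mm):
    return (aperture_mm / PUPIL_MM) ** 2  # ratio of collecting areas

def magnitude_gain(aperture_mm):
    return 2.5 * math.log10(light_grasp(aperture_mm))

print(f"10x50 binoculars:      {light_grasp(50):.0f}x light, +{magnitude_gain(50):.1f} mag")
print(f"6-inch (152 mm) scope: {light_grasp(152):.0f}x light, +{magnitude_gain(152):.1f} mag")
```

Adding the six-inch telescope's roughly 6.7 magnitudes of gain to a naked-eye limit near 6 lands at the twelfth-magnitude reach mentioned above.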
The great 200-inch Hale Telescope on Mount Palomar, long the largest telescope on Earth until newer instruments surpassed it in recent decades, could reach down to about the twentieth magnitude; that's roughly a million times fainter than unassisted vision. This telescope is not equipped for direct observation, however: it has no eyepiece holder and, like every other large telescope today, it is essentially a gigantic camera lens. The Hubble Space Telescope, in low Earth orbit, can photograph stars at the twenty-ninth magnitude. This represents humankind's current edge of the visible Universe, billions of times fainter than normal human perception! Incredibly, enormous telescopes are on the drawing board and being funded, with light-gathering mirrors approaching the size of football fields, which will enable the sighting of objects at the thirty-eighth magnitude. It is speculated that this may take us to the very dawn of creation!
With Vega representing the starting point for determining magnitudes, something had to be done with objects that are brighter, too; eight stars, several planets, the Moon and the Sun all outshine Vega, for instance. Since higher numbers accounted for fainter-than-naked-eye objects, it seemed appropriate that zero and negative numbers could take in those brighter than Vega. Therefore, the Sun is said to shine at magnitude -26.8 and the full Moon at -12, while Sirius, the brightest star seen from our planet, was given a magnitude of -1.5.
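Because every five-magnitude step is a factor of exactly 100, these negative numbers translate into enormous brightness ratios. A quick Python sketch using the magnitudes quoted above:

```python
# Converting magnitude differences into brightness ratios.
def times_brighter(bright_mag, faint_mag):
    return 100 ** ((faint_mag - bright_mag) / 5)

print(f"Sun vs. full Moon: {times_brighter(-26.8, -12.0):,.0f}x")
print(f"Sun vs. Sirius:    {times_brighter(-26.8, -1.5):,.0f}x")
```

By this reckoning, the Sun appears nearly a million times brighter than the full Moon and more than ten billion times brighter than Sirius.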
This arrangement has persisted because it is both accurate and flexible enough to describe the apparent brightness of everything we can see in the heavens.
However, the brilliance of stars can be deceiving. Some stars appear brighter because they are closer to Earth, release unusually large amounts of energy or shine with a color that our eyes perceive with greater or lesser sensitivity. Therefore, astronomers also have a separate system, called absolute magnitude, that describes the sparkle of stars based on how they would appear from a standard distance of about 33 light-years (10 parsecs). This removes the effect of a star's distance from our planet, so that what remains reflects its intrinsic brightness.
To deduce a star's absolute magnitude, astronomers must first determine its actual distance. Several methods have proven useful; of these, parallax is the most frequently used. If you hold a finger upward at arm's length and move your head from side to side, you will notice that the finger appears to shift its position relative to objects in the background. This shift is a simple example of parallax. Astronomers use it to measure stellar distances by recording the position of an object against the background stars when the Earth is on one side of its orbit versus the other. By applying trigonometry, they can then calculate the object's distance. Once the distance is known, another calculation reveals how bright the star would appear at 33 light-years.
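Here is a minimal Python sketch of that two-step calculation, using the standard relations that distance in parsecs is the reciprocal of the parallax angle in arcseconds and that absolute magnitude follows from the distance modulus (the Sirius figures below are approximate):

```python
import math

# Step 1: distance from parallax. Step 2: the distance modulus converts
# apparent magnitude m into absolute magnitude M: M = m - 5 * log10(d / 10).
def distance_parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

def absolute_magnitude(apparent_mag, parallax_arcsec):
    d = distance_parsecs(parallax_arcsec)
    return apparent_mag - 5 * math.log10(d / 10)

# Sirius: m = -1.5, parallax roughly 0.38 arcseconds.
print(f"M = {absolute_magnitude(-1.5, 0.38):+.1f}")  # about +1.4
```

Seen from the standard 10-parsec distance, dazzling Sirius would be an unremarkable star of roughly magnitude 1.4.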
Curious changes to magnitude assignments result. For example, our Sun rates an absolute magnitude of only 4.83. Alpha Centauri, one of our closest stellar neighbors, is similar, with an absolute magnitude of 4.1. Interestingly, Rigel, the bright blue-white star that marks the hunter's foot in the constellation of Orion, shines with an apparent magnitude of about zero but an absolute magnitude of -7. That means Rigel is actually tens of thousands of times brighter than our Sun.
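That claim is easy to check with the Pogson ratio:

```python
# The gap between the Sun's absolute magnitude (+4.83) and Rigel's
# (about -7) is ~11.8 magnitudes, each worth a factor of ~2.512.
ratio = 100 ** ((4.83 - (-7.0)) / 5)
print(f"Rigel outshines the Sun by roughly {ratio:,.0f} times")  # ~54,000
```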
This is one way astronomers have learned about the true nature of stars even though they are very remote!
Galileo was not the last great Italian astronomer. Though he is arguably the most famous, modern Italy is bustling with thousands of world-class professional and gifted amateur astronomers who are involved in researching and photographing the Universe. For example, the magnificent picture that accompanies this discussion was produced by Giovanni Benintende with a ten-inch Ritchey-Chretien telescope and a 3.5-megapixel astronomical camera from his observing site in Sicily on September 23, 2006. The image portrays an ethereal nebula, designated Van den Bergh 152, in the direction of the constellation Cepheus, about 1,400 light-years from Earth. Because it shines at a feeble magnitude 20 (which you should now appreciate as being extremely faint!), it took Giovanni 3.5 hours of exposure to capture this marvelous scene.
The cloud's beautiful hue is produced by the brilliant star near the top. Microscopic dust grains within the nebula are small enough to scatter the shorter wavelengths of starlight, which tend toward the blue part of the color spectrum; longer wavelengths, which tend toward red, simply pass through. This is analogous to the reason our earthly skies are blue. The striking backlight effect is very real and comes from the combined starlight of our Galaxy!
Do you have photos you’d like to share? Post them to the Universe Today astrophotography forum or email them, and we might feature one in Universe Today.
Written by R. Jay GaBany