Space junk is a growing problem. For decades we have been sending satellites into orbit around Earth. Some of them de-orbit and burn up in Earth’s atmosphere, or crash into the surface. But most of the stuff we send into orbit is still up there.
This is becoming an acute problem as years go by and we launch more and more hardware into orbit. Since the very first satellite—Sputnik 1—was launched into orbit in 1957, over 8,000 satellites have been placed in orbit. As of 2018, an estimated 4,900 are still in orbit. About 3,000 of those are not operational. They’re space junk. The risk of collision is growing, and scientists are working on solutions. The problem will compound itself over time, as collisions between objects create more pieces of debris that have to be dealt with.
What is the most wonderful time of the year? In my opinion, it is when the new Year In Space Calendars come out! This is our most-recommended holiday gift every year and whether it’s the gigantic wall calendar or the spiral-bound desk calendar, the 2018 versions don’t disappoint. They are full of wonderful color images, daily space facts, and historical references. These calendars even show you where you can look in the sky for all the best astronomical sights.
These calendars are the perfect gift every space enthusiast will enjoy all year.
The gorgeous wall calendar has over 120 crisp color images and is larger, more lavishly illustrated, and packed with more information than any other space-themed wall calendar. It’s a huge 16 x 22 inches when hanging up.
The Year In Space calendars take you on a year-long guided tour of the Universe, providing in-depth info on human space flight, planetary exploration, and deep sky wonders. You’ll even see Universe Today featured in these calendars 🙂
The Year in Space calendars normally sell for $19.95, but Universe Today readers can buy the calendar for only $14.95 or less, with additional discounts that appear during checkout if you buy more than 1 copy at a time. Check out all the details here.
Other features of the Year In Space calendar:
– Background info and fun facts
– A sky summary of where to find naked-eye planets
– Space history dates
– Major holidays (U.S. and Canada)
– Daily Moon phases
– A mini-biography of a famous astronomer, scientist, or astronaut each month
The 136-page desk calendar is available at a similar discount. The desk calendar also includes a Monthly Sky Summary, which is a handy month-by-month list of what’s visible in the night sky, such as conjunctions, meteor showers, eclipses, planet visibility, and more. Plus there’s information on planetary exploration, including a comprehensive look at what to expect from the many planetary missions taking place in the year ahead.
The Solar System is a beautiful thing to behold. Between its four terrestrial planets, four gas giants, multiple minor planets composed of ice and rock, and countless moons and smaller objects, there is simply no shortage of things to study and be captivated by. Add to that our Sun, an Asteroid Belt, the Kuiper Belt, and many comets, and you’ve got enough to keep you busy for the rest of your life.
But why exactly is it that the larger bodies in the Solar System are round? Whether we are talking about moons like Titan, or the largest planet in the Solar System (Jupiter), large astronomical bodies seem to favor the shape of a sphere (though not a perfect one). The answer to this question has to do with how gravity works, not to mention how the Solar System came to be.
According to the most widely-accepted model of star and planet formation – aka the Nebular Hypothesis – our Solar System began as a cloud of swirling dust and gas (i.e. a nebula). According to this theory, about 4.57 billion years ago, something happened that caused the cloud to collapse. This could have been the result of a passing star, or shock waves from a supernova, but the end result was a gravitational collapse at the center of the cloud.
Due to this collapse, pockets of dust and gas began to collect into denser regions. As the denser regions pulled in more matter, conservation of momentum caused them to begin rotating, while increasing pressure caused them to heat up. Most of the material ended up in a ball at the center to form the Sun, while the rest of the matter flattened out into a disk that circled around it – i.e. a protoplanetary disc.
The planets formed by accretion from this disc, in which dust and gas gravitated together and coalesced to form ever larger bodies. Due to their higher boiling points, only metals and silicates could exist in solid form closer to the Sun, and these would eventually form the terrestrial planets of Mercury, Venus, Earth, and Mars. Because metallic elements only comprised a very small fraction of the solar nebula, the terrestrial planets could not grow very large.
In contrast, the giant planets (Jupiter, Saturn, Uranus, and Neptune) formed beyond the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid (i.e. the Frost Line). The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium.
The leftover debris that never became planets congregated in regions such as the Asteroid Belt, the Kuiper Belt, and the Oort Cloud. So this is how and why the Solar System formed in the first place. Why is it that the larger objects formed as spheres instead of, say, cubes? The answer to this has to do with a concept known as hydrostatic equilibrium.
In astrophysical terms, hydrostatic equilibrium refers to the state where there is a balance between the outward thermal pressure from inside a planet and the weight of the material pressing inward. This state occurs once an object (a star, planet, or planetoid) becomes so massive that the force of its own gravity causes it to collapse into the most efficient shape – a sphere.
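You can get a feel for where that threshold lies with a back-of-the-envelope sketch. The idea: gravity wins once the pressure at a body’s center exceeds the strength of the rock resisting it. The density and rock-strength values below are illustrative assumptions, not measured figures, so treat the result as an order-of-magnitude estimate only.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_pressure(radius_m, density=3000.0):
    """Central pressure (Pa) of a uniform-density sphere of rock."""
    return (2.0 * math.pi / 3.0) * G * density**2 * radius_m**2

def rounding_radius(strength_pa=1e8, density=3000.0):
    """Radius (m) at which self-gravity overwhelms an assumed rock strength."""
    return math.sqrt(strength_pa / ((2.0 * math.pi / 3.0) * G * density**2))

print(f"Rounding diameter: ~{2 * rounding_radius() / 1000:.0f} km")
```

With these assumed values the estimate comes out at a few hundred kilometers in diameter – the right order of magnitude for what we see in the Solar System. Icy bodies, being weaker and less dense, round out at somewhat different sizes, which is why density matters as much as diameter.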
Typically, objects reach this point once they exceed a diameter of 1,000 km (621 mi), though this depends on their density as well. This concept has also become an important factor in determining whether an astronomical object will be designated as a planet. This was based on the resolution adopted in 2006 by the 26th General Assembly of the International Astronomical Union.
In accordance with Resolution 5A, the definition of a planet is:
A “planet” is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighborhood around its orbit.
A “dwarf planet” is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, (c) has not cleared the neighborhood around its orbit, and (d) is not a satellite.
All other objects, except satellites, orbiting the Sun shall be referred to collectively as “Small Solar-System Bodies”.
So why are planets round? Well, part of it is because when objects get particularly massive, nature favors that they assume the most efficient shape. On the other hand, we could say that planets are round because that is how we choose to define the word “planet”. But then again, “a rose by any other name”, right?
With proposed missions to Mars and plans to establish outposts on the Moon in the coming decades, there are several questions about what effects time spent in space or on other planets could have on the human body. Beyond the normal range of questions concerning the effects of radiation and lower-g on our muscles, bones, and organs, there is also the question of how space travel could impact our ability to reproduce.
Earlier this week – on Monday, May 22nd – a team of Japanese researchers announced findings that could shed light on this question. Using a sample of freeze-dried mouse sperm, the team was able to produce a litter of healthy baby mice. As part of a fertility study, the mouse sperm had spent nine months aboard the International Space Station (between 2013 and 2014). The real question now is, can the same be done for human babies?
The study was led by Sayaka Wakayama, a student researcher at the University of Yamanashi‘s Advanced Biotechnology Center. As she and her colleagues explain in their study – which was recently published in the Proceedings of the National Academy of Sciences – assisted reproductive technology will be needed if humanity ever intends to live in space long-term.
As such, studies that address the effect that living in space could have on human reproduction are needed first. These need to address the impact microgravity (or low-gravity) could have on fertility, human abilities to conceive, and the development of children. And more importantly, they need to deal with one of the greatest hazards of spending time in space – which is the threat posed by solar and cosmic radiation.
To be fair, one need not go far to feel the effects of space radiation. The ISS regularly receives more than 100 times the amount of radiation that Earth’s surface does, which can result in genetic damage if sufficient safeguards are not in place. On other Solar System bodies – like Mars and the Moon, which do not have a protective magnetosphere – the situation is similar.
And while the effects of radiation on adults has been studied extensively, the potential damage that could be caused to our offspring has not. How might solar and cosmic radiation affect our ability to reproduce, and how might this radiation affect children when they are still in the womb, and once they are born? Hoping to take the first steps in addressing these questions, Wakayama and her colleagues selected the spermatozoa of mice.
They specifically chose mice since they are a mammalian species that reproduces sexually. As Sayaka Wakayama explained to Universe Today via email:
“So far, only fish or salamanders were examined for reproduction in space. However, mammalian species are very different compared to those species, such as being born from a mother (viviparity). To know whether mammalian reproduction is possible or not, we must use mammalian species for experiments. However, mammalian species such as mice or rats are very sensitive and difficult to take care of by astronauts aboard the ISS, especially for a reproduction study. Therefore, we [have not conducted these studies] until now. We are planning to do more experiments such as the effect of microgravity for embryo development.”
The samples spent nine months aboard the ISS, during which time they were kept at a constant temperature of -95 °C (-139 °F). During launch and recovery, however, they were at room temperature. After retrieval, Wakayama and her team found that the samples had suffered some minor damage.
“Sperm preserved in space had DNA damage even after only 9 months by space radiation,” said Wakayama. “However, that damage was not strong and could be repaired when fertilized by oocytes capacity. Therefore, we could obtain normal, healthy offspring. This suggests to me that we must examine the effect when sperm are preserved for longer periods.”
In addition to being reparable, the sperm samples were still able to fertilize mouse oocytes (once they were brought back to Earth) and produce mouse offspring, all of which grew to maturity and showed normal fertility levels. They also noted that the fertilization and birth rates were similar to those of control groups, and that only minor genomic differences existed between those and the mice created using the test sperm.
From all this, they demonstrated that while exposure to space radiation can damage DNA, it need not affect the production of viable offspring (at least within a nine-month period). Moreover, the results indicate that offspring of humans and domestic animals could be produced from space-preserved spermatozoa, which could be mighty useful when it comes to colonizing space and other planets.
As Wakayama put it, this research builds on fertilization practices already established on Earth, and demonstrated that these same practices could be used in space:
“Our main subject is domestic animal reproduction. In the current situation on the ground, many animals are born from preserves spermatozoa. Especially in Japan, 100% of milk cows were born from preserved sperm due to economic and breeding reasons. Sometimes, sperm that has been stored for more than 10 years was used to produce cows. If humans live in space for many years, then, our results showed that we can eat beefsteak in the space. For that purpose, we did this study. For humans, our finding will probably help infertile couples.”
This research also paves the way for additional tests that would seek to measure the effects of space radiation on ova and the female reproduction system. Not only could these tests tell us a great deal about how time in space could affect female fertility, it could also have serious implications for astronaut safety. As Ulrike Luderer, a professor of medicine at the University of California and one of the co-authors on the paper said in a statement to the AFP:
“These types of exposures can cause early ovarian failure and ovarian cancer, as well as osteoporosis, cardiovascular disease and neurocognitive diseases like Alzheimer’s. Half the astronauts in NASA’s new astronaut classes are women. So it is really important to know what chronic health effects there could be for women exposed to long-term deep space radiation.”
However, a lingering issue with these sorts of tests is being able to differentiate between the effects of microgravity and radiation. In the past, research has been conducted that showed how exposure to simulated microgravity can reduce DNA repair capacity and induce DNA damage in humans. Other studies have raised the issue of the interplay between the two, and how further experiments are needed to address the precise impact of each.
In the future, it may be possible to differentiate between the two by placing samples of spermatozoa and ova in a torus that is capable of simulating Earth gravity (1 g). Similarly, shielded modules could be used to isolate the effects of low or even micro-gravity. Beyond that, there will likely be lingering uncertainties until such time as babies are actually born in space, or in a lunar or Martian environment.
And of course, the long-term impact of reduced gravity and radiation on human evolution remains to be seen. In all likelihood, that won’t become clear for generations to come, and will require multi-generational studies of children born away from Earth to see how they and their progeny differ.
On March 30, 2017, SpaceX performed a pretty routine rocket launch. The payload was a communications satellite called SES-10, owned by a company in Luxembourg. And if all goes well, the satellite will eventually make its way to a high orbit of 35,000 km (22,000 miles) and deliver broadcasting and television services to Latin America.
For all intents and purposes, this is an absolutely normal, routine, and maybe even boring event in the space industry. Another chemical rocket blasted off another communications satellite to join the thousands of satellites that have come before.
Of course, as you probably know, this wasn’t a routine launch. It was the first step in one of the most important achievements in space flight – launch reusability. This was the second time the 14-story Falcon 9 rocket had lifted off and pushed a payload into orbit. Not Falcon 9s in general, but this specific rocket was reused.
In a previous life, this booster blasted off on April 8, 2016 carrying CRS-8, SpaceX’s 8th resupply mission to the International Space Station. The rocket launched from Florida’s Cape Canaveral, released its payload, re-entered the atmosphere and returned to a floating robotic barge in the Atlantic Ocean called Of Course I Still Love You. That’s a reference to an amazing series of books by Iain M. Banks.
Why is this such an amazing accomplishment? What does the future hold for reusability? And who else is working on this?
Developing a rocket that could be reused has been one of the holy grails of the space industry, and yet, many considered it an engineering accomplishment that could never be achieved. Trust me, people have tried in the past.
Portions of the space shuttle were reused – the orbiter and the solid rocket boosters. And a few decades ago, NASA tried to develop the X-33 as a single stage reusable rocket, but ultimately canceled the program.
To reuse a rocket makes total sense. It’s not like you throw out your car when you return from a road trip. You don’t destroy your transatlantic airliner when you arrive in Europe. You check it out, do a little maintenance, refuel it, fill it with passengers and then fly it again.
According to SpaceX founder Elon Musk, a brand new Falcon 9 first stage costs about $30 million. If you could perform maintenance, and then refill it with fuel, you’d bring down subsequent launches to a few hundred thousand dollars.
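Here’s a sketch of that economics argument. The $30 million build cost comes from the article; the per-flight propellant and refurbishment figures are hypothetical placeholders, since SpaceX hasn’t published them:

```python
def launch_cost(flights, build_cost=30e6, fuel=0.3e6, refurb=1.0e6):
    """Average cost per launch when one booster flies `flights` times.

    build_cost: one-time cost of a new Falcon 9 first stage (from the article).
    fuel, refurb: assumed per-flight propellant and refurbishment costs.
    """
    return build_cost / flights + fuel + refurb

for n in (1, 10, 100):
    print(f"{n:>3} flights: ${launch_cost(n) / 1e6:.1f}M per launch")
```

Even with a generous refurbishment allowance, spreading the airframe cost over ten flights cuts the per-launch price by a large factor, and the marginal cost of each additional flight shrinks toward just fuel plus maintenance.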
SpaceX is still working out what a “flight-tested” launch on a reused Falcon 9 will cost, but it should turn into a significant discount on SpaceX’s already aggressive prices. If other launch providers think they’re getting undercut today, just wait until SpaceX really gets cranking with these reused rockets.
For most kinds of equipment, you want it to have been re-used many times. Cars need to be taken to the test track, and airplanes are flown on many flights before passengers ever climb inside. SpaceX will have an opportunity to test out each rocket many times, figuring out where they fail, and then re-engineering those components. This makes for more durable and safer launch hardware, which I suspect is the actual goal here – safety, not cost.
In addition to the first stage, SpaceX also re-used the satellite fairing. This is the covering that makes the payload more aerodynamic while the rocket moves through the lower atmosphere. The fairing is usually ejected and burns up on re-entry, but SpaceX has figured out how to recover that too, saving a few more million.
SpaceX’s goals are even more ambitious. In addition to the first stage booster and launch fairing, SpaceX is looking to reuse the second stage booster. This is a much more complicated challenge, because the second stage is going much faster and needs to lose a lot more velocity. In late 2014, however, they put their plans for second stage reuse on hold.
SpaceX’s next big milestone will be to decrease the reuse time. From almost a year to under 24 hours.
Sometime this year, SpaceX is expected to do the first launch of the Falcon Heavy, a launch system that looks like it’s made up of 3 Falcon 9 rockets bolted together. Since that’s basically what it is.
The center booster is a reinforced Falcon 9, with two additional Falcon 9s as strap-on boosters. Once the Falcon Heavy lifts off, the three boosters will detach and will individually land back on Earth, ready for reassembly and reuse. This system will be capable of carrying 54,000 kilograms into low Earth orbit. In addition, SpaceX is hoping to take the technology one more step and have the upper stage return to Earth.
Imagine it. Three boosters and upper stage and payload fairing all returning to Earth and getting reused.
And waiting in the wings, of course, is SpaceX’s huge Interplanetary Transport System, announced by Elon Musk in September of 2016. The super-heavy lift vehicle will be capable of carrying 300,000 kilograms into low Earth orbit.
For comparison, the Apollo era Saturn V could carry 140,000 kg into low Earth orbit, so this thing will be much much bigger. But unlike the Saturn V, it’ll be capable of returning to Earth, and landing on its launch pad, ready for reuse.
SpaceX just crossed a milestone, but they’re not the only player in this field.
Perhaps the biggest competitor to SpaceX comes from another internet entrepreneur: Amazon’s Jeff Bezos, the 2nd richest man in the world after Bill Gates. Bezos founded his own rocket company, Blue Origin in Seattle, which had been working in relative obscurity for the last decade. But in the last few years, they demonstrated their technology for reusable rocket flight, and laid out their plans for competing with SpaceX.
In April 2015, Blue Origin launched their New Shepard rocket on a suborbital trajectory. It went up to an altitude of about 100 km, and then came back down and landed on its launch pad again. It made a second flight in November 2015, a third flight in April 2016, and a fourth flight in June 2016.
That does sound exciting, but keep in mind that reaching 100 km in altitude requires vastly less energy than a SpaceX Falcon 9 launch. Suborbital and orbital are two totally different milestones. The New Shepard will be used to carry paying tourists to the edge of space, where they can float around weightlessly in the vomit of the other passengers.
But Blue Origin isn’t done. In September 2016, they announced their plans for the follow-on New Glenn rocket. And this will compete head to head with SpaceX. Scheduled to launch by 2020, like, within 3 years or so, the New Glenn will be an absolute monster, capable of carrying 45,000 kilograms of cargo into low Earth orbit. This will be comparable to SpaceX’s Falcon Heavy or NASA’s Space Launch System.
Like the Falcon 9, the New Glenn will return to its launch pad, ready for a planned reuse of 100 flights.
A decade ago, the established United Launch Alliance – a consortium of Boeing and Lockheed-Martin – was firmly in the camp of disposable launch systems, but even they’re coming around to the competition from SpaceX. In 2014, they began an alliance with Blue Origin to develop the Vulcan rocket.
The Vulcan will be more of a traditional rocket, but some of its engines will detach in mid-flight, re-enter the Earth’s atmosphere, deploy parachutes and be recaptured by helicopters as they’re returning to the Earth. Since the engines are the most expensive part of the rocket, this will provide some cost savings.
There’s another level of reusability that’s still in the realm of science fiction: single stage to orbit. That’s where a rocket blasts off, flies to space, returns to Earth, refuels and does it all over again. There are some companies working on this, but it’ll be the topic for another episode.
Now that SpaceX has successfully launched a first stage booster for the second time, this is going to become the new normal. The rocket companies are going to be fine tuning their designs, focusing on efficiency, reliability, and turnaround time.
These changes will bring down the costs of launching payloads to orbit. That’ll mean it’s possible to launch satellites that were too expensive in the past. New scientific platforms, communications systems, and even human flights become more reasonable and commonplace.
Of course, we still need to take everything with a grain of salt. Most of what I talked about is still under development. That said, SpaceX just reused a rocket. They took a rocket that already launched a satellite, and used it to launch another satellite.
It’s a pretty exciting time, and I can’t wait to see what happens next.
Now you know how I feel about this accomplishment, I’d like to hear your thoughts. Do you think we’re at the edge of a whole new era in space exploration, or is this more of the same? Let me know your thoughts in the comments.
We’re always talking about Mars here on the Guide to Space. And with good reason. Mars is awesome, and there’s a fleet of spacecraft orbiting, probing and crawling around the surface of Mars.
The Red Planet is the focus of so much of our attention because it’s reasonably close and offers humanity a viable place for a second home. Well, not exactly viable, but with the right technology and techniques, we might be able to make a sustainable civilization there.
We have the surface of Mars mapped in great detail, and we know what it looks like from the surface.
But there’s another planet we need to keep in mind: Venus. It’s bigger, and closer than Mars. And sure, it’s a hellish deathscape that would kill you in moments if you ever set foot on it, but it’s still pretty interesting and mysterious to visit.
Would it surprise you to know that many spacecraft have actually made it down to the surface of Venus, and photographed the place from the ground? It was an amazing feat of Soviet engineering, and there are some new technologies in the works that might help us get back, and explore it longer.
Today, let’s talk about the Soviet Venera program. The first time humanity saw Venus from its surface.
Back in the 60s, in the height of the cold war, the Americans and the Soviets were racing to be the first to explore the Solar System. First satellite to orbit Earth (Soviets), first human to orbit Earth (Soviets), first flyby and landing on the Moon (Soviets), first flyby of Mars (Americans), first flyby of Venus (Americans), etc.
The Soviets set their sights on putting a lander down on the surface of Venus. But as we know, this planet has some unique challenges. Every place on the entire planet measures the same 462 degrees C (or 864 F).
Furthermore, the atmospheric pressure on the surface of Venus is 90 times greater than Earth. Being down at the bottom of that column of atmosphere is the same as being beneath a kilometer of ocean on Earth. Remember those submarine movies where they dive too deep and get crushed like a soda can?
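That ocean comparison is easy to check with the hydrostatic pressure formula P = ρgh. A quick sketch, using standard seawater density and the 90-atmosphere surface pressure from the text:

```python
ATM = 101325.0          # one standard atmosphere, in pascals
RHO_SEAWATER = 1025.0   # density of seawater, kg/m^3
G_EARTH = 9.81          # Earth's surface gravity, m/s^2

venus_surface_pressure = 90 * ATM  # ~9.1 MPa, per the text

# Solve P = rho * g * h for the ocean depth h with the same pressure
equivalent_depth = venus_surface_pressure / (RHO_SEAWATER * G_EARTH)
print(f"Equivalent ocean depth: {equivalent_depth:.0f} m")
```

That works out to just over 900 m, which is where the “beneath a kilometer of ocean” comparison comes from.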
Finally, it rains sulphuric acid. I mean, that’s really irritating.
Needless to say, figuring this out took the Soviets a few tries.
Their first attempt to even fly by Venus was Venera 1, on February 4, 1961, but it failed to escape Earth orbit. This was followed by Venera 2, launched on November 12, 1965, but it went off course just after launch.
Venera 3 blasted off on November 16, 1965, and was intended to land on the surface of Venus. The Soviets lost communication with the spacecraft, but it’s believed it did actually crash land on Venus. So I guess that was the first successful “landing” on Venus?
Before I continue, I’d like to talk a little bit about landing on planets. As we’ve discussed in the past, landing on Mars is really really hard. The atmosphere is thick enough that spacecraft will burn up if you aim directly for the surface, but it’s not thick enough to let you use parachutes to gently land on the surface.
Landing on the surface of Venus on the other hand, is super easy. The atmosphere is so thick that you can use parachutes no problem. If you can get on target and deploy a parachute capable of handling the terrible environment, your soft landing is pretty much assured. Surviving down there is another story, but we’ll get to that.
Venera 4 came next, launched on June 12, 1967. The Soviet scientists had few clues about what the surface of Venus was actually like. They didn’t know the atmospheric pressure, guessing it might be a little higher than Earth’s, or maybe hundreds of times our pressure. The probe was tested against high temperatures and brutal deceleration. They thought they’d built this thing plenty tough.
Venera 4 arrived at Venus on October 18, 1967, and tried to survive a landing. Temperatures on its heat shield were clocked at 11,000 C, and it experienced 300 Gs of deceleration.
The initial temperature at 52 km was a nice 33 °C, but then as it descended down towards the surface, temperatures increased to 262 °C. And then they lost contact with the probe, killed by the horrible temperatures.
We can assume it landed, though, and for the first time, scientists caught a glimpse of just how bad it is down there on the surface of Venus.
Venera 5 was launched on January 5, 1969, and was built tougher, learning from the lessons of Venera 4. It also made it into Venus’ atmosphere, returned some interesting science about the planet and then died before it reached the surface.
Venera 6 followed, same deal. Built tougher, died in the atmosphere, returned some useful science.
Venera 7 was built with a full understanding of how bad it was down there on Venus. It launched on August 17, 1970, and arrived in December. It’s believed that the parachutes on the spacecraft only partially deployed, allowing it to descend more quickly through the Venusian atmosphere than originally planned. It smacked into the surface going about 16.5 m/s, but amazingly, it survived, and continued to send back a weak signal to Earth for about 23 minutes.
For the first time ever, a spacecraft had made it down to the surface of Venus and communicated its status. I’m sure it was just 23 minutes of robotic screaming, but still, progress. Scientists got their first accurate measurement of the temperatures, and pressure down there.
Bottom line, humans could never survive on the surface of Venus.
Venera 8 blasted off for Venus on March 17, 1972, and the Soviet engineers built it to survive the descent and landing as long as possible. It made it through the atmosphere, landed on the surface, and returned data for about 50 minutes. It didn’t have a camera, but it did have a light sensor, which told scientists being on Venus was kind of like Earth on an overcast day. Enough light to take pictures… next time.
For their next missions, the Soviets went back to the drawing board and built entirely new landing craft. Built big, heavy and tough, designed to get to the surface of Venus and survive long enough to send back data and pictures.
Venera 9 was launched on June 8, 1975. It survived the atmospheric descent and landed on the surface of Venus. The lander was built like a liquid cooled reverse insulated pressure vessel, using circulating fluid to keep the electronics cooled as long as possible. In this case, that was 53 minutes. Venera 9 measured clouds of acid, bromine and other toxic chemicals, and sent back grainy black and white television pictures from the surface of Venus.
In fact, these were the first pictures ever taken from the surface of another planet.
Venera 10 lasted for 65 minutes and took pictures of the surface with one camera. The lens cap on a second camera didn’t release. The spacecraft saw lava rocks with layers of other rocks in between – environments similar to what you might see here on Earth.
Venera 11 was launched on September 9, 1978 and lasted for 95 minutes on the surface of Venus. In addition to confirming the horrible environment discovered by the other landers, Venera 11 detected lightning strikes in the vicinity. It was equipped with both a color camera and a black and white camera, but the lens caps failed to deploy on both. So it failed to send any pictures home.
Venera 12 was launched on September 14, 1978, and made it down to the surface of Venus. It lasted 110 minutes and returned detailed information about the chemical composition of the atmosphere. Unfortunately, both its camera lens caps failed to deploy, so no pictures were returned. And pictures are what we really care about, right?
Venera 13 was built on the same tougher, beefier design, and was blasted off to Venus on October 30, 1981, and this one was a tremendous success. It landed on Venus and survived for 127 minutes. It took pictures of its surroundings using two cameras peering through quartz windows, and saw a landscape of bedrock. It used spring-loaded arms to test out how compressible the soil was.
Venera 14 was identical and launched just 5 days after Venera 13. It also landed and survived for 57 minutes. Unfortunately, its experiment to test the compressibility of the soil was botched because one of its lens caps landed right under its spring-loaded arm. But apart from that, it sent back color pictures of the hellish landscape.
And with that, the Soviet Venus landing program ended. And since then, no additional spacecraft have ever returned to the surface of Venus.
It’s one thing for a lander to make it to the surface of Venus, last a few minutes and then die from the horrible environment. What we really want is some kind of rover, like Curiosity, which would last on the surface of Venus for weeks, months or even years and do more science.
And computers don’t like this kind of heat. The surface of Venus sits at around 860 degrees Fahrenheit (about 460 °C). Go ahead, put your computer in the oven and set it to 850. Oh, your oven doesn’t go to 850? That’s fine, because an oven that hot would be insane. Seriously, don’t do that, it would be bad.
Engineers at NASA’s Glenn Research Center have developed a new kind of electrical circuitry that might be able to handle those kinds of temperatures. Their new circuits were tested in the Glenn Extreme Environments Rig, which can simulate the surface of Venus. It can mimic the temperature, pressure and even the chemistry of Venus’ atmosphere.
The circuitry, originally designed for hot jet engines, functioned perfectly for 521 hours. If all goes well, future Venus rovers could be developed to survive on the surface of Venus without needing complex and short-lived cooling systems.
This discovery might unleash a whole new era of exploration of Venus, to confirm once and for all that it really does suck.
While the Soviets had a tough time with Mars, they really nailed it with Venus. You can see how they built and launched spacecraft after spacecraft, sticking with this challenge until they got the pictures and data they were looking for. I really think this series is one of the triumphs of robotic space exploration, and I look forward to future mission concepts to pick up where the Soviets left off.
Are you excited about the prospects of exploring Venus with rovers? Let me know your thoughts in the comments.
We may be living in the Golden Age of Mars Exploration. With multiple orbiters around Mars and two functioning rovers on the surface of the red planet, our knowledge of Mars is growing at an unprecedented rate. But it hasn’t always been this way. Getting a lander to Mars and safely onto the surface is a difficult challenge, and many landers sent to Mars have failed.
The joint ESA/Roscosmos ExoMars mission, and its Schiaparelli lander, is due at Mars in only 15 days. Now’s a good time to look at the challenges of getting a lander to Mars, and also to look back at the many failed attempts.
For now, NASA has the bragging rights as the only organization to successfully land probes on Mars. And they’ve done it several times. But they weren’t the first ones to try. The Soviet Union tried first.
The USSR sent several probes to Mars starting back in the 1960s. They made their first attempt in 1962, but that mission failed to launch. That failure illustrates the first challenge in getting a craft to land on Mars: rocketry. We’re a lot better at rocketry than we were in the 1960s, but mishaps still happen.
Then in 1971, the Soviets sent a pair of probes to Mars called Mars 2 and Mars 3. They were both orbiters with detachable landers destined for the Martian surface. The fate of Mars 2 and Mars 3 provides other illustrative examples of the challenges in getting to Mars.
Mars 2 separated from its orbiter successfully, but crashed into the surface and was destroyed. The crash was likely caused by its angle of descent, which was too steep. This interrupted the descent sequence, which meant the parachute failed to deploy. So Mars 2 has the dubious distinction of being the first man-made object to reach Mars.
Mars 3 was identical to Mars 2; the Soviets liked to fly missions in pairs back then, for redundancy. Mars 3 separated from its orbiter and headed for the Martian surface, and through a combination of aerodynamic braking, rockets, and parachutes, it became the first craft to make a soft landing on Mars. So it was a success, sort of.
But after only 14.5 seconds of data transmission, it went quiet and was never heard from again. The cause was likely an intense dust storm. In an odd turn of events, NASA’s Mariner 9 orbiter reached Mars only days before Mars 2 and 3, becoming the first spacecraft to orbit another planet. It captured images of the planet-concealing dust storms, above which only the volcanic Olympus Mons could be seen. These images provided an explanation for the failure of Mars 3.
In 1973, the Soviets tried again. They sent four craft to Mars, two of which were landers, named Mars 6 and Mars 7. Mars 6 failed on impact, but Mars 7’s fate was perhaps a little more tragic. It missed Mars completely, by about 1300 km, and is in a heliocentric orbit to this day. In our day and age, we just assume that our spacecraft will go where we want them to, but Mars 7 shows us that it can all go wrong. After all, Mars is a moving target.
In the 1970s, NASA was fresh off the success of its Apollo Program, and was setting its sights on Mars. It developed the Viking program, which sent two orbiter/lander pairs, Viking 1 and Viking 2, to Mars. Both landers touched down successfully on the surface of Mars, and sent back beautiful pictures that caused excitement around the world.
In 1997, NASA’s Mars Pathfinder made it to Mars and landed successfully. Pathfinder itself was stationary, but it brought a little rover called Sojourner with it. Sojourner explored the immediate landing area around Pathfinder, and became the first rover to operate on another planet.
Pathfinder was able to send back over 16,000 images of Mars, along with its scientific data. It was also a proof of concept mission for technologies such as automated obstacle avoidance and airbag mediated touchdown. Pathfinder helped lay the groundwork for the Mars Exploration Rover Mission. That means Spirit and Opportunity.
But after Pathfinder, and before Spirit and Opportunity, came a string of failed Martian landing attempts. Everybody took part, it seems: Russia, Japan, the USA, and the European Space Agency all experienced bitter failures, brought on by rocket malfunctions, engineering mistakes, and other terminal errors.
Japan’s Nozomi orbiter ran out of fuel before ever reaching Mars. NASA’s Mars Polar Lander failed its landing attempt. NASA’s Deep Space 2 probes, part of the Polar Lander mission, failed their parachute-less landings and were never heard from again. The ESA’s Beagle 2 lander made it to the surface, but two of its solar panels failed to deploy, ending its mission. Russia joined in the failure again with its Phobos-Grunt mission, which was actually headed for the Martian moon Phobos to retrieve a sample and send it back to Earth.
In one infamous failure, engineers mixed up English (imperial) units with metric units, causing NASA’s Mars Climate Orbiter to burn up in the Martian atmosphere. These failures show that getting to the surface of Mars is difficult and challenging, and failure is not rare.
After this period of failure, NASA’s Spirit and Opportunity rovers were both unprecedented successes. They landed on the Martian surface in January 2004. Both exceeded their planned mission length of three months, and Opportunity is still going strong now.
So where does that leave us now? NASA is the only one to have successfully landed a rover on Mars and have the rover complete its mission. But the ESA and Russia are determined to get there.
The Schiaparelli lander, as part of the ExoMars mission, is primarily a proof of technology mission. In fact, its full name is the Schiaparelli EDM lander, meaning Entry, Descent, and Landing Demonstrator Module.
It will have some small science capacity, but is really designed to demonstrate the ability to enter the Martian atmosphere, descend safely, and finally, to land on the surface. In fact, it has no solar panels or other power source, and will only carry enough battery power to survive for 2-8 days.
Schiaparelli faces the same challenges as other craft destined for Mars. Once launched successfully, which it was, it had to navigate its way to Mars. That took about 6 months, and since ExoMars is only 15 days away from arrival at Mars, it looks like it has successfully made its way there. But perhaps the trickiest part comes next: atmospheric entry.
Like most Martian craft, Schiaparelli will make a ballistic entry into the Martian atmosphere, and this has to be done right. There is no room for error. The angle of entry is the key here. If the angle is too steep, Schiaparelli may overheat and burn up on entry. If the angle is too shallow, it could skip off the atmosphere and bounce right back into space. There’ll be no second chance.
The entry and descent sequence is all pre-programmed. It will either work or it won’t. Radio signals take many minutes to travel between Earth and Mars, so it would take far too long to send any commands to Schiaparelli while it is entering and descending.
If the entry is successful, the landing comes next. The exact landing location is imprecise, because of wind speed, turbulence, and other factors. Like other craft sent to Mars, Schiaparelli’s landing site is defined as an ellipse.
The lander will be travelling at over 21,000 km/h when it reaches Mars, and will have only 6 or 7 minutes to descend. At that speed, Schiaparelli will have to withstand extreme heating for 2 or 3 minutes. Its heat shield will protect it, reaching temperatures of several thousand degrees Celsius.
It will decelerate rapidly, and at about 10 km altitude it will have slowed to approximately 1,700 km/h. At that point, a parachute will deploy, which will further slow the craft. After the parachute slows its descent, the heat shield will be jettisoned.
On Earth, a parachute would be enough to slow a descending craft. But with Mars’ less dense atmosphere, rockets are needed for the final descent. An onboard radar will monitor Schiaparelli’s altitude as it approaches the surface, and rockets will fire to slow it to a few meters per second in preparation for landing.
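As a rough sanity check on the profile described above, here is a back-of-envelope sketch of the average deceleration implied by those figures. The braking duration of about 2.5 minutes is my assumption, based on the quoted 2-to-3-minute heating phase, not an ESA figure.

```python
# Back-of-envelope deceleration for Schiaparelli's entry, using only
# the speeds quoted above. The braking time is an assumption.
v_entry = 21000 / 3.6   # entry speed in m/s (~5,800 m/s)
v_chute = 1700 / 3.6    # speed at parachute deploy in m/s (~470 m/s)
t_brake = 150.0         # assumed braking duration in seconds (~2.5 min)

a_avg = (v_entry - v_chute) / t_brake   # average deceleration, m/s^2
g_load = a_avg / 9.81                   # expressed in Earth gravities

print(f"average deceleration ~ {a_avg:.0f} m/s^2 ({g_load:.1f} g)")
```

The result, a few Earth gravities sustained for minutes, is mild compared to the thousands of degrees the heat shield must shed, which is why the entry angle matters so much more than the g-load.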
In the final moments, the rockets will stop firing, and a short free-fall will signal Schiaparelli’s arrival on Mars. If all goes according to plan, of course.
We won’t have much longer to wait. Soon we’ll know if the ESA and Russia will join NASA as the only agencies to successfully land a craft on Mars. Or, if they’ll add to the long list of failed attempts.
Look up at the night sky, and what do you see? Space, glittering and gleaming in all its glory. Astronomically speaking, space is really quite close, lingering just on the other side of that thin layer we call an atmosphere. And if you think about it, Earth is little more than a tiny island in a sea of space. So it is quite literally all around us.
Space is typically defined as the point at which the Earth’s atmosphere ends, and the vacuum of space begins. But exactly how far away is that? How high do you need to travel before you can actually touch space? As you can probably imagine, with such a subjective definition, people tend to disagree on exactly where space begins.
The first official definition of space came from the National Advisory Committee for Aeronautics (the predecessor to NASA), which decided on the point where atmospheric pressure was less than one pound per square foot. This was the altitude at which airplane control surfaces could no longer be used, corresponding to roughly 81 kilometers (50 miles) above the Earth’s surface.
Any NASA test pilot or astronaut who crosses this altitude is awarded their astronaut wings. Shortly after that definition was adopted, the aerospace engineer Theodore von Kármán calculated that above an altitude of 100 km, the atmosphere would be so thin that an aircraft would need to travel at orbital velocity to derive sufficient lift.
This altitude was later adopted as the Karman Line by the World Air Sports Federation (Fédération Aéronautique Internationale, FAI). And in 2012, when Felix Baumgartner broke the record for the highest freefall, he jumped from an altitude of 39 kilometers (24.23 mi), less than halfway to space (according to NASA’s definition).
Alternatively, space is often defined as beginning at the lowest altitude at which satellites can maintain orbits for a reasonable time – approximately 160 kilometers (100 miles) above the surface. These varying definitions are complicated further when one takes the definition of the word “atmosphere” into account.
When we talk about Earth’s atmosphere, we tend to think of the region where air pressure is still high enough to cause air resistance, or where the air is simply thick enough to breathe. But in truth, Earth’s atmosphere is made up of five main layers – the Troposphere, the Stratosphere, the Mesosphere, the Thermosphere, and the Exosphere – the last of which extends pretty far out into space.
The Thermosphere, the second-highest layer of the atmosphere, extends from an altitude of about 80 km (50 mi) up to the thermopause, at an altitude of 500–1,000 km (310–620 mi). The lower part of the thermosphere – from 80 to 550 kilometers (50 to 342 mi) – contains the ionosphere, so named because it is here that particles are ionized by solar radiation.
The outermost layer, known as the exosphere, extends out to an altitude of 10,000 km (6,214 mi) above the planet. This layer is mainly composed of extremely low densities of hydrogen, helium, and several heavier molecules (nitrogen, oxygen, and carbon dioxide). The atoms and molecules are so far apart that the exosphere no longer behaves like a gas, and particles constantly escape into space.
It is here that Earth’s atmosphere truly merges with the emptiness of outer space. Hence why the majority of Earth’s satellites orbit within this region. Sometimes the Aurora Borealis and Aurora Australis occur in the lower part of the exosphere, where it overlaps with the thermosphere. But beyond that, there are no meteorological phenomena in this region.
Interplanetary vs. Interstellar:
Another important distinction when discussing space is the difference between that which lies between planets (interplanetary space) and that which lies between star systems (interstellar space) in our galaxy. But of course, that’s just the tip of the iceberg when it comes to space.
If one were to cast the net wider, there is also the space which lies between galaxies in the Universe (intergalactic space). In all cases, the definition involves regions where the concentration of matter is significantly lower than in other places – i.e. a region occupied centrally by a planet, star or galaxy.
In addition, in all three definitions, the measurements involved are beyond anything that we humans are accustomed to dealing with on a regular basis. Some scientists believe that space extends infinitely in all directions, while others believe that space is finite, but is unbounded and continuous (i.e. has no beginning and end).
In other words, there’s a reason they call it space – there’s just so much of it!
The exploration of space (that is to say, that which lies immediately beyond Earth’s atmosphere) began in earnest with what is known as the “Space Age”. This newfound age of exploration began with the United States and Soviet Union setting their sights on placing satellites and crewed modules into orbit.
The first major event of the Space Age took place on October 4th, 1957, with the launch of Sputnik 1 by the Soviet Union – the first artificial satellite to be launched into orbit. In response, then-President Dwight D. Eisenhower signed the National Aeronautics and Space Act on July 29th, 1958, officially establishing NASA.
Immediately, NASA and the Soviet space program began taking the necessary steps towards creating manned spacecraft. By 1959, this competition resulted in the creation of the Soviet Vostok program and NASA’s Project Mercury. In the case of Vostok, this consisted of developing a space capsule that could be launched aboard an expendable carrier rocket.
Along with numerous unmanned tests, and a few using dogs, six Soviet pilots were selected by 1960 to be the first men to go into space. On April 12th, 1961, Soviet cosmonaut Yuri Gagarin was launched aboard the Vostok 1 spacecraft from the Baikonur Cosmodrome, and thus became the first man to go into space (beating American Alan Shepard by just a few weeks).
On June 16th, 1963, Valentina Tereshkova was sent into orbit aboard the Vostok 6 craft (which was the final Vostok mission), and thus became the first woman to go into space. Meanwhile, NASA took over Project Mercury from the US Air Force and began developing their own crewed mission concept.
Designed to send a man into space using existing rockets, the program quickly adopted the concept of launching ballistic capsules into orbit. The first seven astronauts, nicknamed the “Mercury Seven”, were selected from the Navy, Air Force, and Marine test pilot programs.
On May 5th, 1961, astronaut Alan Shepard became the first American in space aboard the Freedom 7 mission. Then, on February 20th, 1962, astronaut John Glenn became the first American to be launched into orbit by an Atlas launch vehicle as part of Friendship 7. Glenn completed three orbits of planet Earth, and three more orbital flights were made, culminating in L. Gordon Cooper’s 22-orbit flight aboard Faith 7, which flew on May 15th and 16th, 1963.
In the ensuing decades, both NASA and the Soviets began to develop more complex, long-range crewed spacecraft. Once the “Race to the Moon” ended with the successful landing of Apollo 11 (followed by several more Apollo missions), the focus began to shift to establishing a permanent presence in space.
For the Russians, this led to the continued development of space station technology as part of the Salyut program. Between 1972 and 1991, they attempted to orbit seven separate stations. However, technical failures and a failure in one rocket’s second stage boosters caused the first three attempts after Salyut 1 to fail or result in the station’s orbits decaying after a short period.
However, by 1974, the Russians managed to successfully deploy Salyut 4, followed by three more stations that would remain in orbit for periods of between one and nine years. While all of the Salyuts were presented to the public as non-military scientific laboratories, some of them were actually covers for the military Almaz reconnaissance stations.
NASA also pursued the development of space station technology, which culminated in May of 1973 with the launch of Skylab, which would remain America’s first and only independently-built space station. During deployment, Skylab suffered severe damage, losing its thermal protection and one of its solar panels.
This required the first crew to rendezvous with the station and conduct repairs. Two more crews followed, and the station was occupied for a total of 171 days during its history of service. This ended in 1979, when the station came down over the Indian Ocean and parts of southern Australia.
By 1986, the Soviets once again took the lead in the creation of space stations with the deployment of Mir. Authorized in February 1976 by a government decree, the station was originally intended to be an improved model of the Salyut space stations. In time, it evolved into a station consisting of multiple modules and several ports for crewed Soyuz spacecraft and Progress cargo spaceships.
The core module was launched into orbit on February 19th, 1986; and between 1987 and 1996, all of the other modules would be deployed and attached. During its 15 years of service, Mir was visited by a total of 28 long-duration crews. Through a series of collaborative programs with other nations, the station would also be visited by crews from other Eastern Bloc nations, the European Space Agency (ESA), and NASA.
After a series of technical and structural problems caught up with the station, the Russian government announced in 2000 that it would decommission the space station. This began on Jan. 24th, 2001, when a Russian Progress cargo ship docked with the station and pushed it out of orbit. The station then entered the atmosphere and crashed into the South Pacific.
With the retirement of the Space Shuttle Program in 2011, crew members have been delivered exclusively by Soyuz spacecraft in recent years. Since 2014, cooperation between NASA and Roscosmos has been suspended for most non-ISS activities due to tensions caused by the situation in Ukraine.
However, in the past few years, indigenous launch capability has been restored to the US thanks to companies like SpaceX, United Launch Alliance, and Blue Origin stepping in to fill the void with their private fleet of rockets.
The ISS has been continuously occupied for the past 15 years, having exceeded the previous record held by Mir; and has been visited by astronauts and cosmonauts from 15 different nations. The ISS program is expected to continue until at least 2020, but may be extended until 2028 or possibly longer, depending on the budget environment.
As you can clearly see, where our atmosphere ends and space begins is the subject of some debate. But thanks to decades of space exploration and launches, we have managed to come up with a working definition. But whatever the exact definition is, if you can get above 100 kilometers, you have definitely earned your astronaut wings!
Six million years ago, when our first human ancestors were doing their thing here on Earth, the black hole at the center of the Milky Way was a ferocious place. Our middle-aged, hibernating black hole only munches lazily on small amounts of hydrogen gas these days. But when the first hominins walked the Earth, Sagittarius A* was gobbling up matter and expelling gas at speeds reaching 1,000 km/s (2 million mph).
The evidence for this hyperactive phase in Sagittarius A*’s life, when it was an Active Galactic Nucleus (AGN), came while astronomers were searching for something else: the Milky Way’s missing mass.
There’s a funny problem in our understanding of our galactic environment. Well, it’s not that funny. It’s actually kind of serious, if you’re serious about understanding the universe. The problem is that we can calculate how much matter we should be able to see in our galaxy, but when we go looking for it, it’s not there. This isn’t just a problem in the Milky Way; it’s a problem in other galaxies, too. The entire universe, in fact.
Our measurements show that the Milky Way has a mass about 1-2 trillion times greater than the Sun. Dark matter, that mysterious and invisible hobgoblin that haunts cosmologists’ nightmares, makes up about five sixths of that mass. Regular, normal matter makes up the last sixth of the galaxy’s mass, about 150-300 billion solar masses. But we can only find about 65 billion solar masses of that normal matter, made up of the familiar protons, neutrons, and electrons. The rest is missing in action.
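The bookkeeping above can be laid out explicitly. This is a sketch using the mid-range figures quoted in the article, so the numbers are order-of-magnitude only:

```python
# Rough baryon bookkeeping for the Milky Way, using the article's figures.
total_mass = 1.5e12     # total galactic mass, solar masses (mid-range of 1-2 trillion)
dark_fraction = 5 / 6   # dark matter's share of the total

normal_mass = total_mass * (1 - dark_fraction)   # expected normal matter
observed = 6.5e10                                # normal matter actually found
missing = normal_mass - observed                 # the "missing" baryons

print(f"expected normal matter: {normal_mass:.2e} solar masses")
print(f"missing normal matter:  {missing:.2e} solar masses")
```

With these mid-range inputs, the shortfall comes out to roughly 185 billion solar masses, squarely inside the 85-to-235-billion range quoted later in the article.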
Astrophysicists at the Harvard-Smithsonian Center for Astrophysics have been looking for that mass, and have written up their results in a new paper.
“We played a cosmic game of hide-and-seek. And we asked ourselves, where could the missing mass be hiding?” says lead author Fabrizio Nicastro, a research associate at the Harvard-Smithsonian Center for Astrophysics (CfA) and astrophysicist at the Italian National Institute of Astrophysics (INAF).
“We analyzed archival X-ray observations from the XMM-Newton spacecraft and found that the missing mass is in the form of a million-degree gaseous fog permeating our galaxy. That fog absorbs X-rays from more distant background sources,” Nicastro continued.
Nicastro and the other scientists behind the paper analyzed how the X-rays were absorbed and were able to calculate the amount and distribution of normal matter in that fog. The team relied heavily on computer models, and on the XMM-Newton data. But their results did not match up with a uniform distribution of the gaseous fog. Instead, there is an empty “bubble”, where there is no gas. And that bubble extends from the center of the galaxy two-thirds of the way to Earth.
What can explain the bubble? Why would the gaseous fog not be spread more uniformly through the galaxy?
Clearing gas from an area that large would require an enormous amount of energy, and the authors point out that an active black hole would do it. They surmise that Sagittarius A* was very active at that time, both feeding on gas falling into it and pumping out streams of hot gas at up to 1,000 km/s.
Which brings us to present day, 6 million years later, when the shock-wave caused by that activity has travelled 20,000 light years, creating the bubble around the center of the galaxy.
Another piece of evidence corroborates all this. Near the galactic center is a population of 6 million year old stars, formed from the same material that at one time flowed toward the black hole.
“The different lines of evidence all tie together very well,” says Smithsonian co-author Martin Elvis (CfA). “This active phase lasted for 4 to 8 million years, which is reasonable for a quasar.”
The numbers all match up, too. The gas accounted for in the team’s models and observations adds up to 130 billion solar masses. That number wraps everything up pretty nicely, since the missing matter in the galaxy is thought to be between 85 billion and 235 billion solar masses.
This is intriguing stuff, though it’s certainly not the final word on the Milky Way’s missing mass. Two future missions, the European Space Agency’s Athena X-ray Observatory, planned for launch in 2028, and NASA’s proposed X-Ray Surveyor could provide more answers.
Who knows? Maybe not only will we learn more about the missing matter in the Milky Way and other galaxies, we may learn more about the activity at the center of the galaxy, and what ebbs and flows it has gone through, and how that has shaped galactic evolution.
When it comes to my style of photography, preparation is a key element in getting the shot I want.
On this specific day, we were actually planning on only shooting the low Atlantic clouds coming into the city of Cape Town. This in itself takes a lot of preparation as we had to keep a close eye on the weather forecasts for weeks using Yr.no, and the conditions are still unpredictable at best even with the latest weather forecasting technology.
We set out with cameras and camping gear with the purpose of setting up camp high up on Table Mountain so as to get a clear view over the city. The hike is extremely challenging at night, especially with a 15kg backpack on your back! We reached our campsite at about 11pm, and then started setting up our cameras for the low clouds predicted to move into the city at about 3am the next morning. For the next 2 hours or so we scouted for the best locations and compositions, and then tried to get a few hours of sleep in before the clouds arrived.
At about 3am I was woken up by fellow photographer Brendon Wainwright. I realised that he had been up all night shooting timelapses, and getting pretty impressive astro shots even though we were in the middle of the city. I noticed that the clouds had rolled in a bit earlier than predicted and had created a thick blanket over the city, which was acting as a natural light pollution filter.
I looked up at the sky and, for the first time in my life, I was able to see the core of the Milky Way in the middle of the city! This is when everything changed: the mission immediately became an astrophotography mission, as these kinds of conditions are extremely rare in the city.
After shooting the city and clouds for a while, I turned my focus to the Milky Way. I knew I was only going to have this one opportunity to capture an arching Milky Way over a city covered with clouds, so I had to work fast to get the perfect composition before the clouds changed or faded away.
I set my tripod on top of a large rock that gave me a bit of extra height so that I could get as much of the city lights in the shot as possible. The idea I had in my mind was to shoot a panorama from the center of the city to the Twelve Apostles Mountains in the southwest. This was a pretty large area to cover, plus the Milky Way was pretty much straight above us which meant I had to shoot a massive field of view in order to get both the city and the Milky Way.
The final hurdle was to get myself into the shot, which meant that I had to stand on a 200m high sheer cliff edge! Luckily this was only necessary for one frame in the entire panorama.
Gear and settings
I usually shoot with a Canon 70D with an 18mm f/3.5 lens and a Hahnel Triad 40Lite tripod. This particular night I forgot to bring a spare battery for my Canon and by the time I wanted to shoot this photo, my one battery had already died!
I think this photo is a testament to the fact that your gear is not nearly as important as your technique and knowledge of your surroundings and your camera.
I started off by shooting the first horizontal line of photos, in landscape orientation, to form the bottom edge of the final stitched photo. From there I ended up shooting 6 rows of 7 photos each in order to capture the whole view I wanted. This gave me 42 photos in total.
For the most part, my settings were 25 seconds, f/3.5, ISO 2000, with the ISO dropped on a few of the pictures where the city light was too bright. I shot all the photos in RAW so as to get as much data out of each frame as possible.
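The capture plan above is easy to tally up. Here is a quick sketch; the 5-second recompose time between frames is my assumption, not something stated by the photographer:

```python
# Tally of the panorama capture described above: 6 rows of 7 frames,
# 25-second exposures. The per-frame recompose overhead is an assumption.
rows, cols = 6, 7
exposure = 25.0   # seconds per frame
overhead = 5.0    # assumed seconds to recompose between frames

frames = rows * cols                                 # total frames shot
total_minutes = frames * (exposure + overhead) / 60  # total shooting time

print(f"{frames} frames, roughly {total_minutes:.0f} minutes of shooting")
```

Twenty-odd minutes is a long time for a sky full of fast-forming clouds, which is why the stitching described next had to cope with clouds changing between frames.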
Astrophotography is all about the editing techniques.
In this scenario I had to stitch 42 photos into one photo. Normally I would just use the built-in function in Lightroom, but in this case I had to use software called PTGui Pro, which is made for stitching difficult panoramas. This software enables me to choose control points on the overlapping images in order to line up the photos perfectly.
After creating the panorama in PTGui Pro, I exported it as a TIFF file and then imported that file into Lightroom again. Keep in mind that this one file is now 3GB as it is made up of 42 RAW files!
In Lightroom I went through my normal workflow to bring out the detail in the Milky Way by boosting the highlights a bit, adding contrast, a bit of clarity, and bringing out some shadows in the landscape. The most difficult part was to clear up the distortion that was caused by the faint clouds in the sky between individual images. Unfortunately it is almost impossible to blend so many images together perfectly when you have faint clouds in the sky that form and disappear within minutes, but I think I did the best job I could to even out the bad areas.
A special event
After the final touches were made and the photo was complete, I realized that I had captured something really unique. It’s not every day that you see low clouds hanging over the city, and you almost never see the Milky Way so bright above the city, and I managed to capture both in one image!
The response to the image after posting it to my Instagram account was overwhelming. People from all over the world wanted to purchase the image, and it got shared hundreds of times across all social media.
It just shows you that planning and dedication do pay off!