What is the Average Surface Temperature on Venus?

Venus is often referred to as our “sister planet,” due to the many geophysical similarities that exist between it and Earth. For starters, our two planets are close in mass, with Venus weighing in at 4.868 × 10²⁴ kg compared to Earth’s 5.9736 × 10²⁴ kg. In terms of size, the planets are almost identical, with Venus measuring 12,100 km in diameter and Earth 12,742 km.

In terms of density and gravity, the two are neck and neck – with Venus boasting 86.6% of the former and 90.7% of the latter. Venus also has a thick atmosphere, much like our own, and it is believed that both planets share a common origin, forming at the same time out of a condensing cloud of dust particles around 4.5 billion years ago.

However, for all the characteristics these two planets have in common, average temperature is not one of them. Whereas the Earth has an average surface temperature of 14 degrees Celsius, the average temperature of Venus is 460 degrees Celsius. That is roughly 410 degrees hotter than the hottest deserts on our planet.

In fact, at a searing 750 K (477 °C), the surface of Venus is the hottest in the solar system. Venus orbits closer to the Sun, at 108 million km (about 30% closer than the Earth), but its extreme heat is mainly due to the planet’s thick atmosphere. Unlike Earth’s, which is composed primarily of nitrogen and oxygen, Venus’ atmosphere is an incredibly dense cloud of carbon dioxide and sulfur dioxide gas.

The combination of these gases in high concentrations causes a catastrophic greenhouse effect that traps incident sunlight and prevents it from radiating back into space. This results in an estimated surface temperature boost of some 475 K (a difference of 475 °C, since the two scales share the same degree size), leaving the surface a molten, charred mess that nothing (that we know of) can live on. Atmospheric pressure also plays a role, being 91 times what it is here on Earth; and clouds of toxic vapor constantly rain sulfuric acid on the surface.
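Kelvin and Celsius share the same degree size, so an absolute temperature needs the 273.15 offset while a temperature difference carries over unchanged. A minimal sketch of the figures above:

```python
def kelvin_to_celsius(kelvin):
    """Convert an absolute temperature from Kelvin to Celsius."""
    return kelvin - 273.15

# Venus' absolute surface temperature needs the offset:
print(kelvin_to_celsius(750))  # ~476.85 °C

# But a temperature *difference* is identical in both scales,
# so a greenhouse boost of 475 K is a boost of 475 °C:
greenhouse_boost_k = 475
greenhouse_boost_c = greenhouse_boost_k  # no offset for differences
print(greenhouse_boost_c)  # 475
```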

In addition, the surface temperature on Venus does not vary like it does here on Earth. On our planet, temperatures vary wildly with the time of year, and even more so based on location. The hottest temperature ever recorded on Earth was 70.7 °C, in the Lut Desert of Iran in 2005. On the other end of the spectrum, the coldest temperature ever recorded on Earth was -89.2 °C, at Vostok Station in Antarctica.

But on Venus, the surface temperature is 460 degrees Celsius, day or night, at the poles or at the equator. Beyond its thick atmosphere, Venus’ axial tilt (aka. obliquity) plays a role in this temperature consistency. Earth’s axis is tilted 23.4° in relation to the Sun, whereas Venus’ is only tilted by about 3°.

The only respite from the heat on Venus is to be found around 50 km up into the atmosphere. It is at that point that temperature and atmospheric pressure are roughly equal to those at Earth’s surface. It is for this reason that some scientists believe floating habitats could be constructed there, using Venus’ thick clouds to buoy them high above the surface. Additionally, in 2014, a group of mission planners from NASA Langley proposed a mission to explore Venus’ atmosphere using airships.

These habitats could play an important role in the terraforming of Venus as well, acting as scientific research stations that could either fire excess atmosphere off into space, or introduce bacteria or chemicals that could convert all the CO₂ and SO₂ into a hospitable, breathable atmosphere.

Beyond the fact that it is a hot and hellish landscape, very little is known about Venus’ surface environment, thanks to its thick atmosphere. The clouds of sulfuric acid are highly reflective of visible light, which makes direct optical observation of the surface impossible. Probes have been sent to the surface in the past, but the volatile and corrosive environment means that anything that lands there can only survive for a few hours.

3-D perspective of the Venusian volcano Maat Mons, generated from radar data from NASA’s Magellan mission. Credit: Magellan Team/NASA/JPL

What little we know about the planet’s surface has come from years’ worth of radar imaging, the most recent of which was conducted by NASA’s Magellan spacecraft (aka. the Venus Radar Mapper). Using synthetic aperture radar, the robotic space probe spent four years (1990-1994) mapping the surface of Venus and measuring its gravitational field before its orbit decayed and it was “disposed of” in the planet’s atmosphere.

The images provided by this and other missions revealed a surface dominated by volcanoes. There are at least 1,000 volcanoes or volcanic centers larger than 20 km in diameter on Venus’ harsh landscape. Many scientists believe Venus was resurfaced by volcanic activity 300 to 500 million years ago. Lava flows, which appear to have produced channels of hardened magma extending for hundreds of kilometers in all directions, are a testament to this. The mixture of volcanic ash and the sulfuric acid clouds is also known to produce intense lightning and thunderstorms.

The temperature of Venus is not the only extreme on the planet. The atmosphere is constantly churned by hurricane-force winds reaching 360 km/h. Add to that the crushing air pressure and rainstorms of sulfuric acid, and it becomes easy to see why Venus is such a barren, lifeless rock that has been hard to explore.

We have written many articles about Venus for Universe Today. Here are some interesting facts about Venus, and here’s an article about the greenhouse effect on Venus. And here is an article about the many interesting pictures taken of Venus over the past few decades.

If you’d like more information on Venus, check out Hubblesite’s News Releases about Venus, and here’s a link to NASA’s Solar System Exploration Guide on Venus.

We’ve also recorded an entire episode of Astronomy Cast all about Venus. Listen here, Episode 50: Venus.

Reference:
NASA

The Milky Way’s New Neighbor May Tell Us Things About the Universe

As part of the Local Group, a collection of 54 galaxies and dwarf galaxies that measures 10 million light years in diameter, the Milky Way has no shortage of neighbors. However, refinements made in the field of astronomy in recent years are leading to the observation of neighbors that were previously unseen. This, in turn, is changing our view of the local universe to one where things are a lot more crowded.

For instance, scientists working out of the Special Astrophysical Observatory in Karachai-Cherkessia, Russia, recently found a previously undetected dwarf galaxy that lies some 7 million light-years away. The discovery of this galaxy, named KKs3, and of others like it is an exciting prospect for scientists, since such galaxies can tell us much about how stars are born in our universe.

The Russian team, led by Prof Igor Karachentsev of the Special Astrophysical Observatory (SAO), used the Hubble Space Telescope Advanced Camera for Surveys (ACS) to locate KKs3 in the southern sky near the constellation of Hydrus. The discovery occurred back in August 2014, when they finalized their observations of a series of stars that together have only one ten-thousandth the mass of the Milky Way.

Such dwarf galaxies are far more difficult to detect than others due to a number of distinct characteristics. KKs3 is what is known as a dwarf spheroidal (or dSph) galaxy, a type that has no spiral arms like the Milky Way’s and also suffers from an absence of raw materials (like dust and gas). Since they lack the materials to form new stars, such galaxies are generally composed of older, fainter stars.

Image of the KKR 25 dwarf spheroid galaxy obtained by the Special Astrophysical Observatory using the HST. Credit: SAO RAS/Hubble

In addition, these galaxies are typically found in close proximity to much larger galaxies, like Andromeda, which appear to have gobbled up their gas and dust long ago. Being faint in nature, and so close to far more luminous objects, is what makes them so tough to spot by direct observation.

Team member Prof Dimitry Makarov, also of the Special Astrophysical Observatory, described the process: “Finding objects like KKs3 is painstaking work, even with observatories like the Hubble Space Telescope. But with persistence, we’re slowly building up a map of our local neighborhood, which turns out to be less empty than we thought. It may be that there are a huge number of dwarf spheroidal galaxies out there, something that would have profound consequences for our ideas about the evolution of the cosmos.”

Painstaking is no exaggeration. Since they are devoid of materials like gas clouds and dust fields, scientists are forced to spot these galaxies by identifying individual stars. Because of this, only one other isolated dwarf spheroidal has been found in the Local Group: a dSph known as KKR 25, which was also discovered by the Russian research team back in 1999.

But despite the challenges of spotting them, astronomers are eager to find more examples of dSph galaxies. As it stands, it is believed that these isolated spheroids must have been born out of a period of rapid star formation, before the galaxies were stripped of their dust and gas or used them all up.

Studying more of these galaxies can therefore tell us much about the process of star formation in our universe. The Russian team expects that the task will become easier in the coming years as the James Webb Space Telescope and the European Extremely Large Telescope begin service.

Much like the Spitzer Space Telescope, these next-generation telescopes are optimized for infrared detection and will therefore prove very useful in picking out faint stars. This, in turn, will also give us a more complete understanding of our universe and all that it holds.

Further Reading: Royal Astronomical Society

Meteoric Evidence Suggests Mars May Have a Subsurface Reservoir

It is a scientific fact that water exists on Mars. Though most of it today consists of water ice in the polar regions or in subsurface areas near the temperate zones, the presence of H₂O has been confirmed many times over. It is evidenced by the sculpted channels and outflows that still mark the surface, as well as the presence of clay and mineral deposits that could only have been formed by water. Recent geological surveys provide more evidence that Mars’ surface was once home to warm, flowing water billions of years ago.

But where did the water go? And how and when did it disappear exactly? As it turns out, the answers may lie here on Earth, thanks to meteorites from Mars that indicate that it may have a global reservoir of ice that lies beneath the surface.

Together, researchers from the Tokyo Institute of Technology, the Lunar and Planetary Institute in Houston, the Carnegie Institution for Science in Washington and NASA’s Astromaterials Research and Exploration Science Division examined three Martian meteorites. What they found were samples of water whose hydrogen isotope ratios were distinct from those found in water from Mars’ mantle and atmosphere.

Mudstone formations in the Gale Crater show the flat bedding of sediments deposited at the bottom of a lakebed. Credit: NASA/JPL-Caltech/MSSS

This new study examined meteorites obtained from different periods in Mars’ past. What the researchers found seemed to indicate that water-ice may have existed intact beneath the crust over long periods of time.

As Professor Tomohiro Usui told Universe Today via email, the significance of this find is that “the new hydrogen reservoir (ground ice and/or hydrated crust) potentially accounts for the ‘missing’ surface water on Mars.”

Basically, there is a gap between what is thought to have existed in the past, and what is observed today in the form of water ice. The findings made by Usui and the international research team help to account for this.

“The total inventory of ‘observable’ current surface water (that mostly occurs as polar ice, ~10⁶ km³) is more than one order of magnitude smaller than the estimated volume of ancient surface water (~10⁷ to 10⁸ km³) that is thought to have covered the northern lowlands,” said Usui. “The lack of water at the surface today was problematic for advocates of such large paleo-ocean and -lake volumes.”

Meteorites from Mars, like NWA 7034 (shown here), contain evidence of Mars’ watery past. Credit: NASA

In their investigation, the researchers compared the water, hydrogen isotopes and other volatile elements within the meteorites. The results of these examinations forced them to consider two possibilities: in one, the newly identified hydrogen reservoir is evidence of near-surface ice interbedded with sediment. The second possibility, which seemed far more likely, was that the samples came from hydrated rock near the top of the Martian crust.

“The evidence is the ‘non-atmospheric’ hydrogen isotope composition of this reservoir,” Tomohiro said. “If this reservoir occurs near the surface, it should easily interact with the atmosphere, resulting in “isotopic equilibrium”.  The non-atmospheric signature indicates that this reservoir must be sequestered elsewhere of this red planet, i.e. ground-ice.”

While the issue of the “missing Martian water” remains controversial, this study may help to bridge the gap between Mars’ supposed warm, wet past and its cold and icy present. This work, along with other studies performed here on Earth and the massive amounts of data being transmitted by the many rovers and orbiters operating on and around the planet, is helping to pave the way towards a manned mission, which NASA plans to mount in the 2030s.

The team’s findings are reported in the journal Earth and Planetary Science Letters.

Further Reading: NASA

Compromises Lead to Climate Change Deal

Earlier this month, delegates from the various states that make up the UN met in Lima, Peru, to agree on a framework for the Climate Change Conference that is scheduled to take place in Paris next year. For over two weeks, representatives debated and discussed the issue, which at times became hotly contested and divisive.

In the end, a compromise was reached between rich and developing nations, which found themselves on opposite sides for much of the proceedings.

And while few member states walked away feeling they had received all they wanted, many expressed that the meeting was an important step on the road to the 2015 Climate Change Conference. It is hoped that this conference will, after 20 years of negotiations, create the first binding and universal agreement on climate change.

The 2015 Paris Conference will be the 21st session of the Conference of the Parties who signed the 1992 United Nations Framework Convention on Climate Change (UNFCCC) and the 11th session of the Meeting of the Parties who drafted the 1997 Kyoto Protocol.

The objective of the conference is to achieve a legally binding and universal agreement on Climate Change specifically aimed at curbing greenhouse gas emissions to limit global temperature increases to an average of 2 degrees Celsius above pre-industrial levels.

This map represents global temperature anomalies averaged from 2008 through 2012. Credit: NASA Goddard Institute for Space Studies/NASA Goddard’s Scientific Visualization Studio.

This temperature increase is being driven by carbon emissions that have been building steadily since the late 18th century, and rapidly in the 20th. According to NASA, CO₂ concentrations in the atmosphere had not exceeded 300 ppm for over 400,000 years, a span that encompasses the whole of human history.

However, in May of last year, the National Oceanic and Atmospheric Administration (NOAA) announced that these concentrations had reached 400 ppm, based on ongoing observations from the Mauna Loa Observatory in Hawaii.

Meanwhile, research conducted by the U.S. Global Change Research Program indicates that by the year 2100, atmospheric carbon dioxide could either level off at about 550 ppm or rise to as high as 800 ppm. This could mean the difference between a temperature increase of 2.5 °C, which may be sustainable, and an increase of 4.5 °C (a range of roughly 4.5 to 8 °F), which would make life untenable in many regions of the planet.
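The Fahrenheit figures in parentheses follow from the fact that a temperature difference converts at a factor of 9/5, with no 32° offset. A quick check (the function name here is ours):

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *difference* from °C to °F (9/5 factor, no 32° offset)."""
    return delta_c * 9 / 5

print(delta_c_to_f(2.5))  # 4.5 °F
print(delta_c_to_f(4.5))  # 8.1 °F
```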

Hence the importance of reaching, for the first time in over 20 years of UN negotiations, a binding and universal agreement on the climate that will involve all the nations of the world. And with the conclusion of the Lima Conference, the delegates have what they believe will be a sufficient framework for achieving that next year.

While many environmental groups see the framework as an ineffectual compromise, it was hailed by members of the EU as a step towards the long-awaited global climate deal, the culmination of a process that began in 1992.

“The decisions adopted in Lima pave the way for the adoption of a universal and meaningful agreement in 2015,” said UN Secretary-General Ban Ki-moon in a statement issued at the conclusion of the two-week meeting. In addition, Peru’s environment minister – Manuel Pulgar-Vidal, who chaired the summit – was quoted by the BBC as saying: “As a text it’s not perfect, but it includes the positions of the parties.”

Al Gore and UNEP Executive Director Achim Steiner at the China Pavilion at the Lima Conference. Credit: UNEP

Amongst the criticisms leveled by environmental groups is the fact that many important decisions were postponed, and that the draft agreement contained watered-down language.

For instance, on national pledges, it says that countries “may” include quantifiable information showing how they intend to meet their emissions targets, rather than “shall”. By making this optional, environmentalists believe that signatories will be entering into an agreement that is not binding and therefore has no teeth.

However, on the plus side, the agreement kept the 194 members together and on track for next year. Concerns over responsibilities between developed and developing nations were alleviated by changing the language in the agreement, stating that countries have “common but differentiated responsibilities”.

Other meaningful agreements were reached as well, which included boosted commitments to a Green Climate Fund (GCF), financial aid for “vulnerable nations”, new targets to be set for carbon emission reductions, a new process of Multilateral Assessment to achieve new levels of transparency for carbon-cutting initiatives, and new calls to raise awareness by putting climate change into school curricula.

In addition, the Lima Conference also led to the creation of The 1 Gigaton Coalition, a UN-coordinated group dedicated to promoting renewable energy. As stated by the UNEP, this group was created “to boost efforts to save billions of dollars and billions of tonnes of CO₂ emissions each year by measuring and reporting reductions of greenhouse gas emissions resulting from projects and programs that promote renewable energy and energy efficiency in developing countries.”

A massive, over 7-metre-high balloon representing one tonne of carbon dioxide (CO₂). Credit: UN Photo/Mark Garten

Coordinated by the United Nations Environment Programme (UNEP) with the support of the Government of Norway, the coalition will be responsible for measuring CO₂ reductions achieved through renewable energy projects. It was formed in light of the fact that while many nations have such initiatives in place, they are not measuring or reporting the resulting drop in greenhouse gases.

They believe that, if accurately measured, these drops in emissions would equal 1 Gigaton by the year 2020. This would not only be beneficial to the environment, but would result in a reduced financial burden for governments all across the world.

As UNEP Executive Director Achim Steiner stated in a press release: “Our global economy could be $18 trillion better off by 2035 if we adopted energy efficiency as a first choice, while various estimates put the potential from energy efficient improvements anywhere between 2.5 and 6.8 gigatons of carbon per year by 2030.”

Ultimately, the 1 Gigaton Coalition hopes to provide the information that demonstrates unequivocally that energy efficiency and renewables are helping to close the gap between current emissions levels and what they will need to come down to if we hope to limit the temperature increase to just 2 °C. This, as already stated, could mean the difference between life and death for many people, and ultimately for the environment as a whole.

The locations of UNFCCC talks are rotated by region throughout United Nations countries. The 2015 conference will be held at Le Bourget from 30 November to 11 December 2015.

Further Reading: UN, UNEP, UNFCCC

SpaceX Continues to Expand Facilities, Workforce in Quest for Space

SpaceX was founded by Elon Musk in 2002 with a dream of making commercial space exploration a reality. Since that time, Musk has seen his company become a major player in the aerospace industry, landing contracts with various governments, NASA, and other private space companies to put satellites in orbit and ferry supplies to the International Space Station.

But 2014 was undoubtedly their most lucrative year to date. In September, the company (along with Boeing) signed a contract with NASA for $6.8 billion to develop space vehicles that would bring astronauts to and from the ISS by 2017 and end the nation’s reliance on Russia.

And this past week, the company announced a plan to expand operations at its Rocket Development and Test Facility in McGregor, Texas. This move, which is costing the company a cool $46 million, is expected to create 300 new full-time jobs in the community and expand testing and development even further.

According to Mike Copeland of the Waco Tribune-Herald, an additional $1.5 million in funding could be allocated from McLennan County. This would give SpaceX a total of $3 million in funds from the Waco-McLennan County Economic Development Corporation, a fund which is used to attract and keep industry in the region.

A SuperDraco thruster being tested at the Rocket Development and Test Facility in McGregor, Texas. Credit: SpaceX

Copeland also indicates that a report prepared by the Waco City Council specified what types of jobs would be created. Apparently, SpaceX is in need of additional engineers, technicians and industry professionals. No doubt, this planned expansion has much to do with the company meeting its new contractual obligations with NASA.

Originally built in 2003, the Rocket Development and Test Facility has been the site of some exciting events over the years. Using rocket test stands, the company has conducted several low-altitude Vertical Takeoff and Vertical Landing (VTVL) test flights with the Falcon 9 Grasshopper rocket. In addition, the McGregor facility is used for post-flight disassembly and defueling of the Dragon spacecraft.

In the past ten years, SpaceX has also made numerous expansions and improvements to the facility, effectively doubling the size of the facility by purchasing several pieces of adjacent farmland. As of September 2013, the facility measured 900 acres (360 hectares). But by early 2014, the company had more than quadrupled its lease in McGregor, to a total of 4,280 acres.

Though far removed from the company’s rocket building facilities at their headquarters in Hawthorne, California, the facility plays an important role in the development of their space capsule and reusable rocket systems. According to SpaceX’s company website, “Every Merlin engine that powers the Falcon 9 rocket and every Draco thruster that controls the Dragon spacecraft is tested on one of 11 test stands.”

A Falcon 9 Grasshopper conducting VTVL testing. Credit: SpaceX

In short, the facility is the key testing ground for all SpaceX technology. And now that the company is actively collaborating with NASA to restore indigenous space-launch capability to the US, more testing will be needed. Much has been made of the company’s efforts with VTVL rocket systems, such as the Falcon 9 Grasshopper (pictured above), but the Dragon V2 takes things to another level.

As revealed by SpaceX in May of this year, the Dragon V2 capsule is designed to ferry crew members and supplies into orbit, and then land propulsively (i.e. under its own power) back to Earth before refueling and flying again. This is made possible thanks to the addition of eight side-mounted SuperDraco engines.

Compared to the standard Draco Engine, which is designed to give the Dragon Capsule (and the upper stages of the Falcon 9 rocket) attitude control in space, the SuperDraco is 100 times more powerful.

According to SpaceX, each SuperDraco is capable of producing 16,000 pounds of thrust and can be restarted multiple times if necessary. In addition, the engines have the ability to deep throttle, providing astronauts with precise control and enormous power.

With eight engines in total, that provides the Dragon V2 with 120,000 pounds of axial thrust, slightly less than the 128,000 pounds of combined thrust because the engines are mounted at an angle, giving it the ability to land almost anywhere without the need of a parachute (though it does come equipped with a backup chute).
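As a rough sanity check on those figures: eight engines at 16,000 pounds each is 128,000 pounds combined, and the lower 120,000-pound axial total is consistent with the engines being canted slightly outward. The ~20° angle below is inferred from these numbers, not a published SpaceX spec:

```python
import math

thrust_per_engine = 16_000  # pounds-force per SuperDraco (figure from the article)
n_engines = 8
axial_thrust = 120_000      # pounds-force, the quoted axial total

combined = thrust_per_engine * n_engines  # if all engines pointed straight down
print(combined)  # 128000

# Implied cant angle from vertical, assuming the shortfall is purely geometric:
cant = math.degrees(math.acos(axial_thrust / combined))
print(round(cant, 1))  # roughly 20 degrees
```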

Between this and ongoing developments with the Falcon 9 reusable rocket system, employees in McGregor are likely to have their hands full in the coming years. The expansion is expected to be complete by 2018.

Further Reading: NASA, SpaceX, Waco Tribune-Herald

What is the Average Surface Temperature of the Planets in our Solar System?

It is no secret that Earth is the only inhabited planet in our Solar System. Not only do all the other planets lack a breathable atmosphere for terrestrial beings; many of them are also too hot or too cold to sustain life. This is why astronomers speak of a “habitable zone,” which exists within every system of planets orbiting a star. Those planets that are too close to their sun are molten and toxic, while those that fall too far outside it are icy and frozen.

But at the same time, forces other than position relative to our Sun can affect surface temperatures. For example, some planets are tidally locked, which means that they have one of their sides constantly facing towards the Sun. Others are warmed by internal geological forces and achieve some warmth that does not depend on exposure to the Sun’s rays. So just how hot and cold are the worlds in our Solar System? What exactly are the surface temperatures on these rocky worlds and gas giants that make them inhospitable to life as we know it?

Mercury:

Of our eight planets, Mercury is closest to the Sun. As such, one would expect it to experience the hottest temperatures in our Solar System. However, since Mercury has no atmosphere and spins very slowly compared to the other planets, its surface temperature varies quite widely.

What this means is that the side exposed to the Sun remains exposed for some time, allowing surface temperatures to reach up to a molten 465 °C. Meanwhile, on the dark side, temperatures can drop off to a frigid -184°C. Hence, Mercury varies between extreme heat and extreme cold and is not the hottest planet in our Solar System.

Venus, as imaged by the Magellan spacecraft. Venus is an incredibly hot and hostile world, due to a combination of its thick atmosphere and proximity to the Sun. Image Credit: NASA/JPL

Venus:

That honor goes to Venus, the second closest planet to the Sun, which also has the highest average surface temperature, regularly reaching up to 460 °C. This is due in part to Venus’ proximity to the Sun, being just on the inner edge of the habitable zone, but also to Venus’ thick atmosphere, which is composed of heavy clouds of carbon dioxide and sulfur dioxide.

These gases create a strong greenhouse effect which traps a significant portion of the Sun’s heat in the atmosphere and turns the planet surface into a barren, molten landscape. The surface is also marked by extensive volcanoes and lava flows, and rained on by clouds of sulfuric acid. Not a hospitable place by any measure!

Earth:

Earth is the third planet from the Sun, and so far is the only planet that we know of that is capable of supporting life. The average surface temperature here is about 14 °C, but it varies due to a number of factors. For one, our world’s axis is tilted, which means that one hemisphere is slanted towards the Sun during certain times of the year while the other is slanted away.

This not only causes seasonal changes, but ensures that places located closer to the equator are hotter, while those located at the poles are colder. It’s little wonder, then, that the hottest temperature ever recorded on Earth was in the deserts of Iran (70.7 °C) while the lowest was recorded in Antarctica (-89.2 °C).

Mars’ thin atmosphere, visible on the horizon, is too weak to retain heat. Credit: NASA

Mars:

Mars’ average surface temperature is -55 °C, but the Red Planet also experiences some variability, with temperatures ranging as high as 20 °C at the equator during midday and as low as -153 °C at the poles. On average, though, it is much colder than Earth, both because it sits just on the outer edge of the habitable zone and because its thin atmosphere is not sufficient to retain heat.

In addition, its surface temperature can vary by as much as 20 °C due to Mars’ eccentric orbit around the Sun (meaning that it is closer to the Sun at certain points in its orbit than at others).

Jupiter:

Since Jupiter is a gas giant, it has no solid surface, so it has no surface temperature. But measurements taken from the top of Jupiter’s clouds indicate a temperature of approximately -145°C. Closer to the center, the planet’s temperature increases due to atmospheric pressure.

At the point where atmospheric pressure is ten times what it is on Earth, the temperature reaches 21°C, what we Earthlings consider a comfortable “room temperature”. At the core of the planet, the temperature is much higher, reaching as much as 35,700°C – hotter than even the surface of the Sun.

Saturn and its rings, as seen from above the planet by the Cassini spacecraft. Credit: NASA/JPL/Space Science Institute/Gordan Ugarkovic

Saturn:

Due to its distance from the Sun, Saturn is a rather cold gas giant, with an average temperature of -178 °C. But because of Saturn’s tilt, the southern and northern hemispheres are heated differently, causing seasonal temperature variation.

And much like Jupiter, the temperature in the upper atmosphere of Saturn is cold, but increases closer to the center of the planet. At the core of the planet, temperatures are believed to reach as high as 11,700 °C.

Uranus:

Uranus is the coldest planet in our Solar System, with a lowest recorded temperature of -224°C. Despite its distance from the Sun, the largest contributing factor to its frigid nature has to do with its core.

Much like the other gas giants in our Solar System, the core of Uranus gives off far more heat than is absorbed from the Sun. However, with a core temperature of approximately 4,737 °C, Uranus’ interior gives off only one-fifth the heat that Jupiter’s does and less than half that of Saturn’s.

Neptune photographed by Voyager 2. Image credit: NASA/JPL

Neptune:

With temperatures dropping to -218°C in Neptune’s upper atmosphere, the planet is one of the coldest in our Solar System. And like all of the gas giants, Neptune has a much hotter core, which is around 7,000°C.

In short, the Solar System runs the gamut from extreme cold to extreme hot, with plenty of variance and only a few places that are temperate enough to sustain life. And of all of those, it is only planet Earth that seems to strike the careful balance required to sustain it perpetually.
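As a rough illustration of why the Solar System runs from hot to cold, a planet's blackbody equilibrium temperature – ignoring albedo and any greenhouse effect – falls off as the inverse square root of its distance from the Sun. A simple sketch, assuming the Sun's surface temperature of about 5,772 K and nominal orbital distances:

```python
# Zero-albedo equilibrium temperature vs. distance from the Sun.
# T falls as 1/sqrt(d) -- which is why the outer planets are so cold,
# and why Venus' ~735 K surface must be a greenhouse effect rather
# than proximity alone (this formula gives only ~327 K at Venus).
import math

T_SUN = 5772.0                    # solar surface temperature, K
R_SUN_AU = 6.957e8 / 1.496e11     # solar radius expressed in astronomical units

def equilibrium_temp_k(d_au):
    """Zero-albedo blackbody equilibrium temperature at d_au AU from the Sun."""
    return T_SUN * math.sqrt(R_SUN_AU / (2 * d_au))

for name, d_au in [("Venus", 0.723), ("Earth", 1.0), ("Mars", 1.524), ("Neptune", 30.1)]:
    print(f"{name:8s} ~{equilibrium_temp_k(d_au):4.0f} K")
```

The gap between the ~327 K this simple model predicts for Venus and its actual 735 K surface is a direct measure of how powerful its runaway greenhouse is.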

Universe Today has many articles on the temperature of each planet, including the temperature of Mars and the temperature of Earth.

You may also want to check out these articles on facts about the planets and an overview of the planets.

NASA has a great graphic here that compares the temperatures of all the planets in our Solar System.

Astronomy Cast has episodes on all planets including Mercury.

Just in Time for the Holidays – Galactic Encounter Puts on Stunning Display

At this time of year, festive displays of light are to be expected. This tradition has clearly not been lost on the galaxies NGC 2207 and IC 2163. Just in time for the holidays, these colliding galaxies, which are located within the Canis Major constellation (some 130 million light-years from Earth), were seen putting on a spectacular light display for us folks here on Earth!

And while this galactic pair has been known to produce a lot of intense light over the years, the image above is especially luminous. A composite using data from the Chandra Observatory and the Hubble and Spitzer Space Telescopes, it shows the combination of visible, X-ray, and infrared light coming from the two galaxies.

In the past fifteen years, NGC 2207 and IC 2163 have hosted three supernova explosions and produced one of the largest collections of super bright X-ray lights in the known universe. These special objects – known as “ultraluminous X-ray sources” (ULXs) – have been found using data from NASA’s Chandra X-ray Observatory.

While the true nature of ULXs is still being debated, it is believed that they are a peculiar type of star X-ray binary. These consist of a star in a tight orbit around either a neutron star or a black hole. The strong gravity of the neutron star or black hole pulls matter from the companion star, and as this matter falls toward the neutron star or black hole, it is heated to millions of degrees and generates X-rays.

The core of galaxy Messier 82 (M82), where two ultraluminous X-ray sources, or ULXs, reside (X-1 and X-2). Credit: NASA

Data obtained from Chandra has determined that – much like the Milky Way Galaxy – NGC 2207 and IC 2163 are sprinkled with many star X-ray binaries. In the new Chandra image, this X-ray data is shown in pink, which shows the sheer prevalence of X-ray sources within both galaxies.

Meanwhile, optical light data from the Hubble Space Telescope is rendered in red, green, and blue (also appearing as blue, white, orange, and brown due to color combinations), and infrared data from the Spitzer Space Telescope is shown in red.

This Chandra study spent far more time observing these galaxies than any previous ULX study – roughly five times as much. As a result, the study team – which consisted of researchers from Harvard University, MIT, and Sam Houston State University – was able to confirm the existence of 28 ULXs between NGC 2207 and IC 2163, seven of which had never before been seen.

In addition, the Chandra data allowed the team of scientists to observe the correlation between X-ray sources in different regions of the galaxy and the rate at which stars are forming in those same regions.

The Mice galaxies, seen here well into the process of merging. Credit: Hubble Space Telescope

As the new Chandra image shows, the spiral arms of the galaxies – where large amounts of star formation are known to be occurring – show the heaviest concentrations of ULXs, optical light, and infrared emission. This correlation also suggests that the companion stars in the star X-ray binaries are young and massive.

This in turn presents another possibility which has to do with star formation during galactic mergers. When galaxies come together, they produce shock waves that cause clouds of gas within them to collapse, leading to periods of intense star formation and the creation of star clusters.

The fact that the ULXs and the companion stars are young (the researchers estimate that they are only 10 million years old) would seem to confirm that they are the result of NGC 2207 and IC 2163 coming together. This seems a likely explanation, since the merger between these two galaxies is still in its infancy – as attested to by the fact that the galaxies are still separate.

They are expected to collide soon, a process which will make them look more like the Mice Galaxies (pictured above). In about one billion years’ time, they are expected to finish the process, forming a spiral galaxy that would no doubt resemble our own.

A paper describing the study was recently published online in The Astrophysical Journal.

Further Reading: NASA/JPL, Chandra, arXiv Astrophysics

What Causes Day and Night?

For most of us here on planet Earth, sunrise, sunset, and the cycle of day and night (aka. the diurnal cycle) are just simple facts of life. As a result of seasonal changes that happen with every passing year, the length of day and night can vary – and be either longer or shorter – by just a few hours. But in some regions of the world (i.e. the poles) the Sun does not set during certain times of the year. And there are also seasonal periods where a single night can last many days.

Naturally, this gives rise to certain questions. Namely, what causes the cycle of day and night, and why don’t all places on the planet experience the same patterns? As with many other seasonal experiences, the answer has to do with two facts: One, the Earth rotates on its axis as it orbits the Sun. And two, the fact that Earth’s axis is tilted.

Earth’s Rotation:

Earth’s rotation occurs from west to east, which is why the Sun always appears to be rising on the eastern horizon and setting on the western. If you could view the Earth from above, looking down at the northern polar region, the planet would appear to be rotating counter-clockwise. However, viewed from the southern polar region, it appears to be rotating clockwise.

Earth’s axial tilt (or obliquity) and its relation to the rotation axis and plane of orbit as viewed from the Sun during the Northward equinox. Credit: NASA

The Earth rotates once in about 24 hours with respect to the Sun and once every 23 hours 56 minutes and 4 seconds with respect to the stars. What’s more, its central axis points toward two stars: the northern axis points to Polaris – hence why it is called “the North Star” – while the southern axis points toward Sigma Octantis.
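Those two rotation periods are easy to reconcile: in one day the Earth also moves about one degree along its orbit, so it must turn a little extra before the Sun returns to the same spot in the sky. A quick back-of-the-envelope sketch of the arithmetic:

```python
# One rotation relative to the stars (sidereal day) vs. one relative
# to the Sun (solar day). The ~1 degree of daily orbital motion must
# be made up by a few minutes of extra rotation.

sidereal_day_s = 23 * 3600 + 56 * 60 + 4    # 86,164 s: one spin vs. the stars
orbit_deg_per_day = 360.0 / 365.25          # ~0.9856 degrees of orbital motion per day
spin_deg_per_s = 360.0 / sidereal_day_s     # Earth's rotation rate

solar_day_s = sidereal_day_s + orbit_deg_per_day / spin_deg_per_s
print(f"solar day ~ {solar_day_s:.0f} s ({solar_day_s / 3600:.3f} h)")
```

The extra catch-up rotation works out to just under four minutes, which is exactly the gap between the two figures quoted above.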

Axial Tilt:

As already noted, due to the Earth’s axial tilt (or obliquity), day and night are not evenly divided. If the Earth’s axis were perpendicular to its orbital plane around the Sun, all places on Earth would experience equal amounts of day and night (i.e. 12 hours of day and night, respectively) every day during the year and there would be no seasonal variability.

Instead, at any given time of the year, one hemisphere is pointed slightly more towards the Sun, leaving the other pointed away. During this time, one hemisphere will be experiencing warmer temperatures and longer days while the other will experience colder temperatures and longer nights.

Seasonal Changes:

Of course, since the Earth is rotating around the Sun and not just on its axis, this process is reversed during the course of a year. Every six months, the Earth undergoes a half orbit and changes positions to the other side of the Sun, allowing the other hemisphere to experience longer days and warmer temperatures.

Artist’s rendition of the Earth’s rotation and the precession of the Equinoxes. Credit: NASA

Consequently, in extreme places like the North and South Poles, daylight or darkness can last for months. Those times of the year when the northern and southern hemispheres experience their longest days and nights are called solstices, which occur twice a year for the northern and southern hemispheres.

The Summer Solstice takes place between June 20th and 22nd in the northern hemisphere and between December 20th and 23rd each year in the southern hemisphere. The Winter Solstice occurs at the same time but in reverse – between Dec. 20th and 23rd for the northern hemisphere and June 20th and 22nd for the southern hemisphere.

According to NOAA, around the Winter Solstice at the North Pole there will be no sunlight or even twilight beginning in early October, and the darkness lasts until the beginning of dawn in early March. Conversely, around the Summer Solstice, the North Pole stays in full sunlight all day long throughout the entire summer (unless there are clouds). After the Summer Solstice, the sun starts to sink towards the horizon.
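The day lengths described above can be estimated with the standard sunrise equation, cos(h0) = -tan(latitude) x tan(declination). A minimal sketch, assuming the June solstice declination of about +23.44 degrees:

```python
# Day length from latitude via the sunrise equation. Beyond the Arctic
# Circle (~66.56 deg N) at the June solstice the equation has no
# solution: the Sun never sets (midnight sun), while at matching
# southern latitudes it never rises (polar night).
import math

def day_length_hours(lat_deg, decl_deg=23.44):
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if x <= -1.0:
        return 24.0    # polar day: Sun never sets
    if x >= 1.0:
        return 0.0     # polar night: Sun never rises
    h0 = math.degrees(math.acos(x))    # hour angle of sunrise, degrees
    return 2 * h0 / 15.0               # the Sun covers 15 deg of hour angle per hour

for lat in [0, 45, 66.6, 80]:
    print(f"lat {lat:5.1f}: {day_length_hours(lat):5.2f} h of daylight")
```

At the equator the result is always close to 12 hours, while anywhere poleward of about 66.56 degrees the function saturates at 24 or 0 hours, matching the NOAA description of continuous polar daylight and darkness.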

Another common feature in the cycle of day and night is the visibility of the Moon, the stars, and other celestial bodies. Technically, we don’t always see the Moon at night. On certain days, when the Moon is above the horizon while the Sun is up and far enough from it in the sky, it is visible during the daytime. However, the stars and other planets of our Solar System are generally only visible at night, after the Sun has fully set.

“Night Sky”. On a clear night, the stars and the glowing band of the Milky Way Galaxy are generally visible. Credit: Sam Crimmin

The reason for this is that the light of these objects is too faint to be seen during daylight hours. The Sun, being the closest star to us and the most radiant object visible from Earth, naturally drowns them out when it is overhead. However, when our side of the Earth is turned away from the Sun, we are able to see the Moon reflecting the Sun’s light more clearly, and the light of the stars becomes detectable.

On an especially clear night, and assuming light pollution is not a major factor, the glowing band of the Milky Way and other clouds of dust and gas may also be visible in the night sky. These objects are more distant than the stars in our vicinity of the Galaxy, and therefore appear fainter and are more difficult to see.

Another interesting thing about the cycle of day and night is that it is getting slower with time. This is due to the tidal effects the Moon has on Earth’s rotation, which is making days longer (but only marginally). According to atomic clocks around the world, the modern day is about 1.7 milliseconds longer than it was a century ago – a change which may require the addition of more leap seconds in the future.
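That 1.7 milliseconds per century sounds negligible, but because each century's excess persists in every subsequent day, the lag accumulates roughly quadratically. A rough sketch of the arithmetic:

```python
# The ~1.7 ms/century lengthening of the day seems tiny, but summing
# the daily excess over many centuries shows why clock time and
# Earth-rotation time drift apart by hours over historical timescales.

RATE_S = 1.7e-3            # extra day length accrued per century, seconds
DAYS_PER_CENTURY = 36525

def accumulated_lag_s(centuries):
    lag = 0.0
    for c in range(centuries):
        excess_per_day = RATE_S * (c + 0.5)   # mean excess day length during century c
        lag += excess_per_day * DAYS_PER_CENTURY
    return lag

print(f"lag after 20 centuries: {accumulated_lag_s(20) / 3600:.1f} hours")
```

An accumulated offset of a few hours over two millennia is, incidentally, the same order of drift astronomers infer when comparing ancient eclipse records against a constant-length day.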

We have many interesting articles on Earth’s Rotation here at Universe Today. To learn more about solstices here in Universe Today, be sure to check out our articles on the Shortest Day of the Year and the Summer Solstice.

More information can be found at NASA, Seasons of the Year, The Sun at Solstice

Check out this podcast at Astronomy Cast: The Life of the Sun

NASA’s RoboSimian And Surrogate Robots

Since they were first announced in 2012, NASA has been a major contender in the DARPA Robotics Challenge (DRC). This competition – which involves robots navigating obstacle courses using tools and vehicles – was first conceived by DARPA to see just how capable robots could be at handling disaster response.

The Finals for this challenge will be taking place on June 5th and 6th, 2015, at Fairplex in Pomona, California. And after making it this far with their RoboSimian design, NASA was faced with a difficult question. Should their robotic primate continue to represent them, or should that honor go to their recently unveiled Surrogate robot?

As the saying goes “you dance with the one who brung ya.” In short, NASA has decided to stick with RoboSimian as they advance into the final round of obstacles and tests in their bid to win the DRC and the $2 million prize.

Surrogate’s unveiling took place this past October 24th at NASA’s Jet Propulsion Laboratory in Pasadena, California. The appearance of this robot on stage, to the theme song of 2001: A Space Odyssey, was held on the same day that Thomas Rosenbaum was inaugurated as the new president of the California Institute of Technology.

Robotics researchers at NASA’s Jet Propulsion Laboratory in Pasadena, California, stand with robots RoboSimian and Surrogate, both built at JPL. Credit: JPL-Caltech

In honor of the occasion, Surrogate (aka “Surge”) strutted its way across the stage to present a digital tablet to Rosenbaum, which he used to push a button that initiated commands for NASA’s Mars rover Curiosity. Despite the festive nature of the occasion, this scene was quite calm compared to what the robot was designed for.

“Surge and its predecessor, RoboSimian, were designed to extend humanity’s reach, going into dangerous places such as a nuclear power plant during a disaster scenario such as we saw at Fukushima. They could take simple actions such as turning valves or flipping switches to stabilize the situation or mitigate further damage,” said Brett Kennedy, principal investigator for the robots at JPL.

RoboSimian was originally created for the DARPA Robotics Challenge, and during the trial round last December, the JPL team’s robot won a spot to compete in the finals, which will be held in Pomona, California, in June 2015.

With the support of the Defense Threat Reduction Agency and the Robotics Collaborative Technology Alliance, construction of the Surrogate robot began in 2014. Its designers began by incorporating some of RoboSimian’s extra limbs, then added a wheeled base, a twisty spine, an upper torso, and a head for holding sensors.

Surrogate, nicknamed “Surge,” is a robot designed and built at NASA’s Jet Propulsion Laboratory in Pasadena, California. Credit: JPL-Caltech

Additional components include the hat-like appendage on top, which is in fact a LiDAR (Light Detection and Ranging) device. This device spins and shoots out laser beams in a 360-degree field to map the surrounding environment in 3-D.
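The core step in that kind of 360-degree mapping is converting each (angle, range) sample returned by the spinning sensor into Cartesian coordinates. A minimal sketch with made-up data – not JPL's actual software:

```python
# Turning a spinning LiDAR's polar samples (angle, range) into 2-D
# Cartesian points, the basic step behind building a map of the
# surrounding environment. The scan data here is purely illustrative.
import math

def polar_to_xy(scan):
    """scan: list of (angle_deg, range_m) -> list of (x, y) points in meters."""
    points = []
    for angle_deg, range_m in scan:
        a = math.radians(angle_deg)
        points.append((range_m * math.cos(a), range_m * math.sin(a)))
    return points

# A toy 4-beam scan: obstacles 2 m away in each cardinal direction.
scan = [(0, 2.0), (90, 2.0), (180, 2.0), (270, 2.0)]
for x, y in polar_to_xy(scan):
    print(f"({x:+.2f}, {y:+.2f})")
```

A real unit repeats this at many angles per revolution and adds elevation for the third dimension, but the trigonometry is the same.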

Choosing between them was a tough call, and took the better part of the last six months. On the one hand, Surrogate was designed to be more like a human. It has an upright spine, two arms and a head, standing about 1.4 meters (4.5 feet) tall and weighing about 91 kilograms (200 pounds). Its major strength is in how it handles objects, and its flexible spine allows for extra manipulation capabilities. But the robot moves on tracks, which prevents it from climbing over tall obstacles, such as flights of stairs, ladders, rocks, and rubble.

RoboSimian, by contrast, is more ape-like, moving around on four limbs. It is better suited to travel over complicated terrain and is an adept climber. In addition, Surrogate has only one set of “eyes” – two cameras that allow for stereo vision – mounted to its head, whereas RoboSimian has up to seven sets of eyes mounted all over its body.

The robots also run on almost identical computer code, and the software that plans their motion is very similar. As in a video game, each robot has an “inventory” of objects with which it can interact. Engineers have to program the robots to recognize these objects and perform pre-set actions on them, such as turning a valve or climbing over blocks.

RoboSimian is an ape-like robot that moves around on four limbs. Designed and built at NASA’s Jet Propulsion Laboratory in Pasadena, California, it will represent JPL at the DARPA Robotics Challenge Finals in June 2015. Credit: JPL-Caltech

In the end, they came to a decision. RoboSimian will represent the team in Pomona.

“It comes down to the fact that Surrogate is a better manipulation platform and faster on benign surfaces, but RoboSimian is an all-around solution, and we expect that the all-around solution is going to be more competitive in this case,” Kennedy said.

The RoboSimian team at JPL is collaborating with partners at the University of California, Santa Barbara, and Caltech to get the robot to walk more quickly. JPL researchers also plan to put a LiDAR on top of RoboSimian in the future. These efforts seek to improve the robot in the long run, but are also aimed at getting it ready to face the challenges of the DARPA Robotics Challenge Finals.

Specifically, it will be faced with such tasks as driving a vehicle and getting out of it, negotiating debris blocking a doorway, cutting a hole in a wall, opening a valve, and crossing a field with cinderblocks or other debris. There will also be a surprise task.

Although RoboSimian is now the focus of Kennedy’s team, Surrogate won’t be forgotten.

“We’ll continue to use it as an example of how we can take RoboSimian limbs and reconfigure them into other platforms,” Kennedy said.

For details about the DARPA Robotics Challenge, visit: http://www.theroboticschallenge.org/

Further Reading: NASA

A Universe of 10 Dimensions

When someone mentions “different dimensions,” we tend to think of things like parallel universes – alternate realities that exist parallel to our own, but where things work or happened differently. However, the reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization.

To break it down, dimensions are simply the different facets of what we perceive to be reality. We are immediately aware of the three dimensions that surround us on a daily basis – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).

Beyond these three visible dimensions, scientists believe that there may be many more. In fact, the theoretical framework of Superstring Theory posits that the universe exists in ten different dimensions. These different aspects are what govern the universe, the fundamental forces of nature, and all the elementary particles contained within.

The first dimension, as already noted, is that which gives an object its length (aka. the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities. Add to it a second dimension, the y-axis (or width), and you get a 2-dimensional shape (like a square).

The third dimension involves depth (the z-axis), and gives all objects a sense of area and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume. Beyond these three lie the seven dimensions which are not immediately apparent to us, but which can still be perceived as having a direct effect on the universe and reality as we know it.
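The line-to-square-to-cube progression generalizes cleanly: an n-dimensional hypercube has 2^n vertices and n x 2^(n-1) edges, and the pattern continues into dimensions we cannot directly picture. A small sketch of that counting:

```python
# How the line -> square -> cube progression extends to n dimensions.
# A 1-cube (line) has 2 vertices and 1 edge; a square has 4 and 4;
# a cube has 8 and 12; a 4-D tesseract has 16 and 32, and so on.

def hypercube_counts(n):
    vertices = 2 ** n
    edges = n * 2 ** (n - 1)
    return vertices, edges

for n in range(1, 5):
    v, e = hypercube_counts(n)
    print(f"{n}-cube: {v:2d} vertices, {e:2d} edges")
```

The geometry stays perfectly well defined in higher dimensions; it is only our intuition, trained on three, that runs out.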

The timeline of the universe, beginning with the Big Bang. According to String Theory, this is just one of many possible worlds. Credit: NASA

Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object’s position in time is essential to plotting its position in the universe. The other dimensions are where the deeper possibilities come into play, and explaining their interaction with the others is where things get particularly tricky for physicists.

According to Superstring Theory, the fifth and sixth dimensions are where the notion of possible worlds arises. If we could see on through to the fifth dimension, we would see a world slightly different from our own that would give us a means of measuring the similarity and differences between our world and other possible ones.

In the sixth, we would see a plane of possible worlds, where we could compare and position all the possible universes that start with the same initial conditions as this one (i.e. the Big Bang). In theory, if you could master the fifth and sixth dimension, you could travel back in time or go to different futures.

In the seventh dimension, you have access to the possible worlds that start with different initial conditions. Whereas in the fifth and sixth, the initial conditions were the same and subsequent actions were different, here, everything is different from the very beginning of time. The eighth dimension again gives us a plane of such possible universe histories, each of which begins with different initial conditions and branches out infinitely (hence why they are called infinities).

In the ninth dimension, we can compare all the possible universe histories, starting with all the different possible laws of physics and initial conditions. In the tenth and final dimension, we arrive at the point in which everything possible and imaginable is covered. Beyond this, nothing can be imagined by us lowly mortals, which makes it the natural limitation of what we can conceive in terms of dimensions.

String space – superstring theory lives in 10 dimensions, which means that six of the dimensions have to be “compactified” in order to explain why we can only perceive four. The best way to do this is to use a complicated 6D geometry called a Calabi-Yau manifold, in which all the intrinsic properties of elementary particles are hidden. Credit: A Hanson.

The existence of these additional six dimensions which we cannot perceive is necessary for String Theory in order for there to be consistency in nature. The fact that we can perceive only four dimensions can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, to which all known particles apart from gravity would be restricted (aka. brane theory).

If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold (shown above). While imperceptible as far as our senses are concerned, they would have governed the formation of the universe from the very beginning. This is why scientists believe that by peering back through time, using telescopes to spot light from the early universe (i.e. from billions of years ago), they might be able to see how the existence of these additional dimensions could have influenced the evolution of the cosmos.

Much like other candidates for a grand unifying theory – aka the Theory of Everything (TOE) – the belief that the universe is made up of ten dimensions (or more, depending on which model of string theory you use) is an attempt to reconcile the standard model of particle physics with the existence of gravity. In short, it is an attempt to explain how all known forces within our universe interact, and how other possible universes themselves might work.

For additional information, here’s an article on Universe Today about parallel universes, and another on a parallel universe scientists thought they found that doesn’t actually exist.

There are also some other great resources online. There is a great video that explains the ten dimensions in detail. You can also look at the PBS web site for the TV show The Elegant Universe. It has a great page on the ten dimensions.

You can also listen to Astronomy Cast. You might find episode 137 The Large Scale Structure of the Universe pretty interesting.

Source: PBS