What is an Enhanced Greenhouse Effect?

Enhanced Greenhouse Effect
Greenhouse Effect vs. Enhanced Greenhouse Effect. Image Credit: environment.act.gov.au

Every day, solar radiation from the sun reaches the surface of our planet. The surface absorbs it and re-emits it as thermal radiation, which is absorbed by atmospheric greenhouse gases (such as carbon dioxide) and re-radiated in all directions. Known as the Greenhouse Effect, this process is essential to life as we know it. Without it, Earth’s surface temperature would be significantly lower and many life forms would cease to exist. However, where human agency is involved, this effect has been shown to have a downside. When excess amounts of greenhouse gases are put into the atmosphere, this natural warming is boosted to the point where it can have damaging, even disastrous consequences for life here on Earth. This process, in which the natural warming caused by solar radiation and greenhouse gases is heightened by anthropogenic (i.e. human) factors, is known as the Enhanced Greenhouse Effect.
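To see just how much the natural greenhouse effect matters, here is a back-of-the-envelope calculation in Python of Earth's temperature without it, balancing absorbed sunlight against thermal emission via the Stefan-Boltzmann law. The solar constant and albedo below are round textbook figures, not measurements from any particular study:

```python
# Back-of-the-envelope estimate of Earth's temperature without a
# greenhouse effect, using the Stefan-Boltzmann law. The solar
# constant and albedo are round textbook values.
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, averaged over the whole sphere (hence the factor
# of 4), must balance the thermal radiation emitted: sigma * T^4.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
T_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(f"Equilibrium temperature: {T_no_greenhouse:.0f} K")
```

The answer comes out around 255 K (about −18 °C), roughly 33 degrees colder than the observed global mean of about 288 K; that gap is the natural greenhouse effect at work.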

The effect of CO2 and other greenhouse gases on the global climate was first publicized in 1896 by Swedish scientist Svante Arrhenius. It was he that first developed a theory to explain the ice ages, as well as the first scientist to speculate that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature of the Earth. This was expanded upon in the mid-20th century by Guy Stewart Callendar, an English steam engineer and inventor who was also interested in the link between increased CO2 levels in the atmosphere and rising global temperatures. Thanks to his research in the field, the link between the two came to be known for a time as the “Callendar effect”.
As the 20th century rolled on, a scientific consensus emerged that recognized this phenomenon as a reality and an increasingly urgent problem. Relying on ice core data and atmospheric surveys performed by NASA, the Mauna Loa Observatory, and countless other research institutes all over the planet, scientists now believe there is a direct link between human agency and the rise in global mean temperatures over the past fifty and even two hundred years. This is due largely to increased production of CO2 through fossil fuel burning and other activities such as cement production and tropical deforestation. Methane production has also been linked to rising global temperatures, driven by the growing consumption of meat and the clearing of large areas of tropical rainforest to make room for pasture land.

According to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, released in 2007, “most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”. If left unchecked, it is unclear what the exact consequences would be, but most scenarios predict a steep drop in worldwide food production, widespread drought, glacial depletion, the near-total loss of the polar ice caps, and the possibility that the process could become irreversible.
Getting toasty in here!

We have written many articles about enhanced greenhouse effect for Universe Today. Here’s an article about greenhouse effect, and here’s an article about atmospheric gases.

If you’d like more info on Enhanced Greenhouse Effect, check out these articles from USA Today and Earth Observatory.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.

Sources:
http://en.wikipedia.org/wiki/Greenhouse_effect
http://www.science.org.au/nova/016/016key.htm
http://en.wikipedia.org/wiki/Radiative_forcing
http://en.wikipedia.org/wiki/Svante_Arrhenius
http://en.wikipedia.org/wiki/Callendar_effect
http://en.wikipedia.org/wiki/History_of_climate_change_science

What is Electromagnetic Induction?

Electromagnetic Induction
Electromagnetic Induction. Image Credit: ionaphysics.org

It is hard to imagine a world without electricity. At one time, electricity was a humble offering, providing humanity with unnatural light that did not depend on gas lamps or kerosene lanterns. Today, it has grown to become the basis of our comfort, providing our heat, lighting and climate control, and powering all of our appliances, be they for cooking, cleaning, or entertainment. And beneath most of the machines that make it possible is a simple law known as Electromagnetic Induction, a law which describes the operation of generators, electric motors, transformers, induction motors, synchronous motors, solenoids, and most other electrical machines. Scientifically speaking it refers to the production of voltage across a conductor (a wire or similar piece of conducting material) that is moving through a magnetic field.

Though many people are thought to have contributed to the discovery of this phenomenon, it is Michael Faraday who is credited with first making the discovery in 1831. Known as Faraday’s law, it states that “The induced electromotive force (EMF) in any closed circuit is equal to the time rate of change of the magnetic flux through the circuit”. In practice, this means that an electric current will be induced in any closed circuit when the magnetic flux (i.e. the amount of magnetic field) passing through a surface bounded by the conductor changes. This applies whether the field itself changes in strength or the conductor is moved through it.
Whereas it was already known at this time that an electric current produced a magnetic field, Faraday demonstrated that the reverse was also true. In short, he proved that one could generate an electric current by passing a wire through a magnetic field. To test this hypothesis, Faraday wrapped a piece of metal wire around a paper cylinder and then connected the coil to a galvanometer (a device used to measure electric current). He then moved a magnet back and forth inside the cylinder and recorded through the galvanometer that an electrical current was being induced in the wire. He confirmed from this that a changing magnetic field was necessary to induce an electrical current, because when the magnet stopped moving, the current also ceased.
Today, electromagnetic induction is used to power many electrical devices. One of the most widely known uses is in electrical generators (such as hydroelectric dams) where mechanical power is used to move a magnetic field past coils of wire to generate voltage.
In mathematical form, Faraday’s law states that: ℰ = −dΦB/dt, where ℰ is the electromotive force, ΦB is the magnetic flux, and dΦB/dt is the rate at which that flux changes with time.
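To make the law concrete, here is a short Python sketch that estimates the EMF induced in a hypothetical 100-turn coil sitting in a sinusoidally varying magnetic field (every number is illustrative) and checks the result against the analytic answer:

```python
import math

# Numerical sketch of Faraday's law for a hypothetical 100-turn coil
# in a sinusoidally varying magnetic field. All values are illustrative.
N = 100                    # number of turns
area = 0.01                # coil area in m^2
B0 = 0.5                   # peak magnetic field in tesla
omega = 2 * math.pi * 50   # 50 Hz angular frequency

def flux(t):
    """Magnetic flux through one turn: Phi_B = B(t) * A."""
    return B0 * math.cos(omega * t) * area

def emf(t, dt=1e-7):
    """Induced EMF = -N * dPhi_B/dt, estimated by a finite difference."""
    return -N * (flux(t + dt) - flux(t - dt)) / (2 * dt)

# Analytic answer for comparison: EMF = N * B0 * A * omega * sin(omega * t)
t = 0.002
analytic = N * B0 * area * omega * math.sin(omega * t)
print(emf(t), analytic)  # the two agree closely
```

The finite-difference estimate tracks the analytic derivative of the flux, which is exactly what a galvanometer wired across the coil would register.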

We have written many articles about electromagnetic induction for Universe Today. Here’s an article about electromagnets, and here’s an article about generators.

If you’d like more info on electromagnetic induction, check out these articles from All About Circuits and Physics 24/7.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.

Sources:
http://en.wikipedia.org/wiki/Electromagnetic_induction
http://en.wikipedia.org/wiki/Faraday%27s_law_of_induction
http://en.wikipedia.org/wiki/Magnetic_flux
http://micro.magnet.fsu.edu/electromag/java/faraday2/
http://www.scienceclarified.com/El-Ex/Electromagnetic-Induction.html
http://en.wikipedia.org/wiki/Galvanometer

Convex Lens

Convex Lens

As every child is sure to find out at some point in their life, lenses can be an endless source of fun. They can be used for everything from examining small objects and type to focusing the sun’s rays. In the latter case, hopefully they choose to be humanitarian and burn things like paper and grass rather than ants! But the fact remains, a Convex Lens is the source of this scientific marvel. Typically made of glass or transparent plastic, a convex lens has at least one surface that curves outward like the exterior of a sphere. Of all lenses, it is the most common given its many uses.

A convex lens is also known as a converging lens. A converging lens is a lens that converges rays of light that are traveling parallel to its principal axis. They can be identified by their shape, which is relatively thick across the middle and thin at the edges. The edges are curved outward rather than inward. Consider rays of light approaching the lens parallel to its principal axis. As each ray reaches the glass surface, it refracts according to the effective angle of incidence at that point of the lens. Since the surface is curved, different rays of light will refract to different degrees; the outermost rays will refract the most. This runs contrary to what occurs when a divergent lens (otherwise known as concave, biconcave or plano-concave) is employed. In this case, light is refracted away from the axis and outward.

Lenses are classified by the curvature of their two optical surfaces. If the lens is biconvex or plano-convex, it is called positive or converging; most convex lenses fall into this category. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. These types of lenses are used in the manufacture of magnifying glasses. If both surfaces have the same radius of curvature, the lens is known as equiconvex. If one of the surfaces is flat, the lens is plano-convex (or plano-concave, depending on the curvature of the other surface). A lens with one convex and one concave side is convex-concave or meniscus. These lenses are used in the manufacture of corrective lenses.
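To see why a biconvex lens works as a magnifying glass, here is a quick thin-lens calculation in Python. The focal length and object distance are illustrative; the point is what happens when the object sits inside the focal length:

```python
# Thin-lens sketch of a biconvex lens used as a magnifying glass:
# place the object inside the focal length and a magnified, upright
# virtual image results. The numbers are illustrative.
f = 10.0    # focal length in cm (positive for a converging lens)
d_o = 5.0   # object distance in cm, inside the focal length

# Thin-lens equation: 1/d_o + 1/d_i = 1/f, solved for the image distance
d_i = 1 / (1 / f - 1 / d_o)
m = -d_i / d_o   # magnification; positive means upright

print(f"image distance: {d_i:.1f} cm")   # negative: virtual image
print(f"magnification: {m:.1f}x")
```

The image distance comes out negative, meaning the image is virtual (on the same side as the object), upright, and twice the size of the object, which is exactly how a magnifying glass behaves.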

For an illustrated example of how images are formed with a convex lens, click here.

We have written many articles about lenses for Universe Today. Here’s an article about the concave lens, and here’s an article about telescope lens.

If you’d like more info on convex lens, check out these articles from The Physics Classroom and Wikipedia.

We’ve also recorded an episode of Astronomy Cast all about the Telescope. Listen here, Episode 33: Choosing and Using a Telescope.

Sources:
http://en.wikipedia.org/wiki/Lens_(optics)
http://homepage.mac.com/cbakken/obookshelf/cvreal.html
http://www.play-hookey.com/optics/lens_convex.html
http://www.answers.com/topic/convex-lens-1
http://www.physicsclassroom.com/class/refrn/u14l5a.cfm
http://www.tutorvista.com/content/science/science-ii/refraction-light/formation-convex.php

Conservation of Mass

Conservation of Mass
Conservation of Mass. Image Credit: www.efm.leeds.ac.uk

While it may offend anyone currently trying to lose that holiday weight, it is a classic physical law that in a closed system, mass can neither be created nor destroyed. Feeling discouraged yet? Well, don’t! Strictly speaking, this law does NOT mean you can’t drop pounds, just that within an isolated system (which your body is not) mass cannot be created or destroyed, although it may be rearranged in space and changed into different types of particles. This law is known as the Conservation of Mass, otherwise known as the principle of mass/matter conservation. More specifically, the law states that the mass of an isolated system cannot be changed as a result of processes acting inside the system. This implies that for any chemical process in a closed system, the mass of the reactants must equal the mass of the products. The law is considered “classical” in that it does not take into consideration more recent physical laws, such as special relativity or quantum mechanics, but still applies in many contexts.

This law is rooted in classical Greek philosophy, which holds that “nothing can come from nothing”, often stated in its Latin form: ex nihilo nihil fit. The basic premise here, first espoused by Empedocles (ca. 490–430 BCE), is that no new matter can come into existence where none was present before. It was further elaborated on by Epicurus, Parmenides, and a number of Indian and Arab philosophers. However, it was not until the 18th century, with Antoine Lavoisier, that it graduated from the field of cosmology and became a scientific law. Lavoisier was the first to clearly outline it in his seminal work Traité Élémentaire de Chimie (Elementary Treatise on Chemistry) in 1789.

Historically, the conservation of mass and weight was obscure for millennia because of the buoyant effect of the Earth’s atmosphere on the weight of gases. In addition, when a substance burns, mass appears to be lost, since ashes weigh less than the original substance. These effects were not understood until careful experiments in which chemical reactions such as rusting were performed in sealed glass ampules, whereby it was found that the chemical reaction did not change the weight of the sealed container. Once understood, the conservation of mass was of great importance in changing alchemy to modern chemistry. When chemists realized that substances never disappeared from measurement with the scales (once buoyancy effects were held constant, or had otherwise been accounted for), they could for the first time embark on quantitative studies of the transformations of substances.
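A quick Python sketch illustrates the kind of bookkeeping those sealed-container experiments made possible, using standard atomic masses to check the burning of methane (CH4 + 2 O2 → CO2 + 2 H2O):

```python
# Mass-balance check for a simple combustion reaction in a closed
# system: CH4 + 2 O2 -> CO2 + 2 H2O. Standard atomic masses in g/mol.
H, C, O = 1.008, 12.011, 15.999

CH4 = C + 4 * H
O2 = 2 * O
CO2 = C + 2 * O
H2O = 2 * H + O

reactants = CH4 + 2 * O2   # mass of one mole of methane plus oxygen
products = CO2 + 2 * H2O   # mass of the carbon dioxide and water formed
print(reactants, products)
```

Because the very same atoms appear on both sides of the equation, merely rearranged, the total mass of the reactants exactly matches that of the products.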

The historical concept of both matter and mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. In relativity, the mass-energy equivalence theorem states that mass conservation is equivalent to energy conservation, which is the first law of thermodynamics.

We have written many articles about the conservation of mass for Universe Today. Here’s an article about nuclear fusion, and here’s an article about the atom.

If you’d like more info on the law of conservation of mass, check out these articles from NASA Glenn Research Center and Engineering Toolbox.

We’ve also recorded an entire episode of Astronomy Cast all about the Atom. Listen here, Episode 164: Inside the Atom.

Sources:
http://en.wikipedia.org/wiki/Conservation_of_mass
http://www.grc.nasa.gov/WWW/K-12/airplane/mass.html
http://en.wikipedia.org/wiki/Nothing_comes_from_nothing
http://en.wikipedia.org/wiki/Antoine_Lavoisier
http://en.wikipedia.org/wiki/Jain_philosophy

What is Conductance?

Conductance
Electricity. Image Source: juniorcitizen.org.uk

Electricity is an amazing, and potentially very dangerous, thing. In addition to powering our appliances, heating our homes, starting our cars and providing us with unnatural lighting during the evenings, it is also one of the fundamental forces upon which the Universe is based. Knowing what governs it is crucial to using it for our benefit, as well as understanding how the Universe works.

For those of us looking to understand it – perhaps for the sake of becoming an electrical engineer, a skilled do-it-yourselfer, or just satisfying scientific curiosity – some basic concepts need to be kept in mind. For example, we need to understand a little thing known as conductance, a quantity that is related to resistance; taken together, the two govern the flow of electrical current.

Definition:

Conductance is the measure of how easily electricity flows along a certain path through an electrical element, and since electricity is so often explained in terms of opposites, conductance is considered the opposite of resistance. The reciprocal relationship between the two can be expressed through the following equations: R = 1/G and G = 1/R, where R equals resistance and G equals conductance.

Another way to represent this is in terms of units: S = 1/Ω, where Ω (the Greek letter omega) is the ohm, the unit of resistance, and S is the siemens, the unit of conductance. One siemens is equivalent to one ampere (A) per volt (V).

In other words, when a current of one ampere (1A) passes through a component across which a voltage of one volt (1V) exists, then the conductance of that component is one siemens (1S). This can be expressed through the equation: G = I/E, where G represents conductance, I is the current through the component (in amperes), and E is the voltage across it (in volts).
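A short Python sketch of these relationships, with illustrative values for the voltage and current:

```python
# Sketch of the reciprocal relationship between conductance and
# resistance, using Ohm's law. The values are illustrative.
V = 12.0   # volts across a component
I = 3.0    # amperes flowing through it

R = V / I      # resistance in ohms
G = I / V      # conductance in siemens (G = I/E)

print(R, G)          # 4.0 ohms, 0.25 S
print(G == 1 / R)    # True: conductance is the reciprocal of resistance
```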

The temperature of the material is definitely a factor, but assuming a constant temperature, the conductance of a material can be calculated.

Measurement:

The SI (International System) derived unit of conductance is known as the siemens, named after the German inventor and industrialist Ernst Werner von Siemens. Since conductance is the opposite of resistance, it is usually expressed as the reciprocal of one ohm – a unit of electrical resistance named after Georg Simon Ohm – or one mho (ohm spelt backwards).

This unit was later re-designated the siemens, expressed by the notational symbol S. The factors that affect the magnitude of resistance are exactly the same for conductance, but they affect conductance in the opposite manner. Therefore, conductance is directly proportional to the cross-sectional area of a material, and inversely proportional to its length.

We have written many articles about conductance for Universe Today. Here’s What are Electrons?, Who Discovered Electricity?, What is Static Electricity?, What is Electromagnetic Induction?, and What are the Uses of Electromagnets?

If you’d like more info on Conductance, check out All About Circuits for another article about conductance.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.


Concave Lens

Concave Lens

For centuries, human beings have been able to do some pretty remarkable things with lenses. Although we can’t be sure when or how the first person stumbled onto the concept, it is clear that at some point in the past, ancient people (probably from the Near East) realized that they could manipulate light using a shaped piece of glass. Over the centuries, the ways in which lenses were used, and the purposes they served, began to multiply as people discovered that they could accomplish different things using differently shaped lenses. In addition to making distant objects appear nearer (i.e. the telescope), they could also be used to make small objects appear larger and blurry objects appear clear (i.e. magnifying glasses and corrective lenses). The lenses used to accomplish these tasks fall into two categories of simple lenses: Convex and Concave Lenses.

A concave lens is a lens that possesses at least one surface that curves inwards. It is a diverging lens, meaning that it spreads out light rays that have been refracted through it. A concave lens is thinner at its centre than at its edges, and is used to correct short-sightedness (myopia). The writings of Pliny the Elder (23–79) make mention of what is arguably the earliest use of a corrective lens. According to Pliny, Emperor Nero was said to watch gladiatorial games using an emerald, presumably concave in shape to correct for myopia.

After light rays have passed through the lens, they appear to diverge from a point called the principal focus. This is the point from which collimated light travelling parallel to the axis of the lens appears to spread after refraction. The image formed by a concave lens is virtual, meaning that no light actually converges at its location, and it is smaller than the object itself, which makes the object appear farther away than it actually is. Convex mirrors have a similar effect, which is why many (especially on cars) come with a warning: Objects in mirror are closer than they appear. The image is also upright, meaning not inverted, as the images formed by some curved reflective surfaces and lenses are.

The lens formula that is used to work out the position and nature of an image formed by a lens can be expressed as follows: 1/u + 1/v = 1/f, where u and v are the distances of the object and image from the lens, respectively, and f is the focal length of the lens (taken as negative for a diverging lens).
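Here is a quick Python sketch of this formula for a diverging lens, using the convention that a diverging lens has a negative focal length; the numbers themselves are illustrative:

```python
# Thin-lens sketch of a diverging (concave) lens forming a reduced,
# upright virtual image. A diverging lens takes a negative focal
# length in this sign convention; the numbers are illustrative.
f = -10.0    # focal length in cm (negative: diverging lens)
u = 20.0     # object distance in cm

# Lens formula 1/u + 1/v = 1/f, solved for the image distance v
v = 1 / (1 / f - 1 / u)
m = -v / u   # magnification; positive means upright

print(f"image distance: {v:.2f} cm")  # negative: virtual image
print(f"magnification: {m:.2f}x")     # between 0 and 1: reduced
```

The negative image distance confirms the image is virtual, and a magnification between 0 and 1 confirms it is upright and reduced, just as described above.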

We have written many articles about concave lens for Universe Today. Here’s an article about the telescope mirror, and here’s an article about the astronomical telescope.

If you’d like more info on the Concave Lens, check out NASA’s The Most Dreadful Weapon, and here’s a link to Build a Telescope Page.

We’ve also recorded an entire episode of Astronomy Cast all about the Telescope. Listen here, Episode 150: Telescopes, The Next Level.

Sources:
http://en.wiktionary.org/wiki/concave
http://www.physics.mun.ca/~jjerrett/lenses/concave.html
http://encyclopedia.farlex.com/concave+lens
http://en.wikipedia.org/wiki/Collimated_light
http://en.wikipedia.org/wiki/Virtual_image

What is the Coefficient of Friction?

Friction
Friction. Image Source: Wikipedia

Ever watch a car spin its wheels and notice all the smoke and tire marks it leaves behind? How about going down a slide? You might have noticed that if it was wet, you travelled farther than if the surface was dry. Ever wonder just how far you’d get if you tried to slide on wet concrete (don’t try this, by the way!)? Why is it that some surfaces are easy to slide across while others are just destined to stop you short? It comes down to a little thing known as friction, which is essentially the force that resists surfaces sliding against each other. When it comes to measuring friction, the tool which scientists use is called the Coefficient of Friction, or COF.

The COF is the value which describes the ratio between the force of friction acting between two bodies and the force pressing them together. Coefficients range from near zero to greater than one, depending on the types of materials involved. For example, ice on steel has a low coefficient of friction, while rubber on pavement (i.e. car tires on the road) has a comparatively high one. In short, rougher surfaces tend to have higher values and smoother surfaces lower ones, owing to the friction they generate when pressed together.

There are essentially two kinds of coefficients: static and kinetic. The static coefficient of friction applies to objects that are motionless, while the kinetic (or sliding) coefficient of friction applies to objects that are in motion. The two are not always the same: motionless objects often experience more friction than moving ones, requiring more force to put them in motion than to sustain them in motion.

Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer; Teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, which is elusive at best, whereas a value above 1 means that the force required to slide an object along the surface is greater than the normal force of the surface on the object.

Mathematically, frictional force can be expressed as Ff = μN, where Ff is the frictional force (N, lb), μ is the static (μs) or kinetic (μk) frictional coefficient, and N is the normal force (N, lb).
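As a quick Python sketch of this formula, consider the force needed to start a box sliding on a level floor versus the force needed to keep it sliding; the mass and coefficients below are typical illustrative values for dry surfaces:

```python
# Sketch of the friction formula Ff = mu * N for a box on a level
# floor, contrasting static and kinetic coefficients. The mass and
# coefficients are illustrative values for dry surfaces.
mass = 10.0       # kg
g = 9.81          # m/s^2
mu_static = 0.6
mu_kinetic = 0.4

N = mass * g                    # normal force on a level surface
F_to_start = mu_static * N      # force needed to get the box moving
F_to_keep = mu_kinetic * N      # force needed to keep it sliding

print(F_to_start, F_to_keep)    # starting takes more force than sustaining
```

The gap between the two numbers is why a heavy crate suddenly lurches forward once it finally breaks free.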

We have written many articles about the coefficient of friction for Universe Today. Here’s an article about friction, and here’s an article about aerobraking.

If you’d like more info on the Friction, check out Hyperphysics, and here’s a link to Friction Games for Kids by Science Kids.

We’ve also recorded an entire episode of Astronomy Cast all about Gravity. Listen here, Episode 102: Gravity.

Sources:
http://en.wikipedia.org/wiki/Friction
http://www.engineeringtoolbox.com/friction-coefficients-d_778.html
http://www.thefreedictionary.com/coefficient+of+friction

Chromatic Aberration

Chromatic Aberration
Chromatic Aberration. Source: Wikipedia


Some colours just can’t keep up with the others! Well, that’s probably the simplest way to put it. But when scientists talk about the characteristics of light, it would be more accurate to say that different colours of light have different wavelengths, propagate at different speeds through a medium, and therefore refract differently. A well-known example of this is the prism effect, where a beam of white light is broken into a rainbow of colours. The result is that when objects are viewed through a simple lens, light refracts (bends) at different angles, meaning that not all of it is imaged in the same place. A distortion results in which “fringes” of colour appear along the boundaries that separate dark and bright parts of the image. This effect, known as Chromatic Aberration, can be a real pain for astronomers, surveyors, photographers, or just about anyone who wants to view an object (or objects) through a lens and needs to do so clearly!

Sir Isaac Newton was the first to demonstrate this effect over three hundred years ago, when he discovered that white light was composed of multiple wavelengths. These colours refract unevenly: blue light, with its shorter wavelengths, refracts more strongly than red light, with its longer wavelengths, and green falls in the middle. Since that time, scientists, astronomers and opticians have come to identify two basic kinds of aberration. The first is axial (or longitudinal) aberration, where different wavelengths are focused at different distances because the lens is unable to bring all colours to the same focal plane. The second is transverse (or lateral) aberration, where different wavelengths are focused at different positions in the focal plane and the effect is a sideways displacement of the image. In the former case, distortion occurs throughout the image, whereas in the latter, distortion is absent from the centre but increases towards the edge.
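A short Python sketch shows axial aberration using the thin-lens lensmaker's equation, with approximate refractive indices for a common crown glass (BK7) at blue and red wavelengths; the surface radii are illustrative:

```python
# Sketch of axial chromatic aberration via the lensmaker's equation,
# 1/f = (n - 1) * (1/R1 - 1/R2), for a thin biconvex lens. The
# refractive indices are approximate values for BK7 crown glass;
# the radii are illustrative.
R1, R2 = 100.0, -100.0   # surface radii in mm (biconvex)
n_blue = 1.5224          # index at ~486 nm (blue)
n_red = 1.5143           # index at ~656 nm (red)

def focal_length(n):
    return 1 / ((n - 1) * (1 / R1 - 1 / R2))

f_blue = focal_length(n_blue)
f_red = focal_length(n_red)
print(f_blue, f_red)  # blue comes to a focus closer to the lens than red
```

Because glass bends blue light more strongly, blue is brought to a focus a couple of millimetres closer to the lens than red, so no single focal plane is sharp for both colours at once.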

There are many ways to remedy Chromatic Aberration. During the 17th century, refracting telescopes had to be very long in order to reduce colour distortions. Sir Isaac Newton got around the problem entirely by creating the comparably compact reflecting telescope in 1668, which uses curved mirrors rather than lenses. The achromatic lens (or achromatic doublet) is another remedy: a double lens that uses two kinds of glass to bring incoming white light to the same focal point on the other side of the lens. Many types of glass, known as low-dispersion glasses, have also been developed to reduce chromatic aberration, the most notable being glasses that contain fluorite.

The discovery of Chromatic Aberration and the development of corrective lenses were major steps in the development of the optical microscope and the telescope, which in turn were a boon for astronomers and biologists, who were able to gain a greater understanding of the universe and the natural world as a result.

We have written many articles about chromatic aberration for Universe Today. Here’s an article about optical aberration, and here’s an article about achromatic lens.

If you’d like more info on Chromatic Aberration, check out Hyperphysics for a great article on chromatic aberration, and here’s a link to Wise Geek’s discussion about chromatic aberration.

We’ve also recorded an entire episode of Astronomy Cast all about Choosing and Using a Telescope. Listen here, Episode 33: Choosing and Using a Telescope.

Sources:
http://en.wikipedia.org/wiki/Chromatic_aberration
http://toothwalker.org/optics/chromatic.html
http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/aber2.html
http://www.yorku.ca/eye/chroaber.htm
http://www.yorku.ca/eye/achromat.htm

Charles's Law

Charles's Law
Charles's Law. Image Credit: NASA GRC


For most people, the words “ideal gas” might conjure up the image of some kind of super fuel, perhaps a near-inexhaustible kind that creates zero air pollution! Sadly, this is not what is meant by an ideal gas. In reality, an ideal gas is a theoretical gas composed of a set of randomly-moving, non-interacting point particles. At normal conditions such as standard temperature and pressure, most real gases such as air, nitrogen, oxygen, hydrogen, the noble gases, and some heavier gases like carbon dioxide behave like an ideal gas and can be treated as such within reasonable tolerances. It is only at lower temperatures and higher pressures that they deviate from this behaviour. Within the ideal regime, experimental gas laws, such as Charles’s Law, come into play.

Also known as the law of volumes, Charles’s Law is an experimental gas law which describes how gases tend to expand when heated. It was first published by French natural philosopher Joseph Louis Gay-Lussac in 1802, although he credited the discovery to unpublished work from the 1780s by Jacques Charles, hence the name. This law applies generally to all gases, and also to the vapours of volatile liquids if the temperature is more than a few degrees above the boiling point. Given the interest in hot air balloons at the time, it is certainly understandable why Gay-Lussac, Charles and other scientists around the globe were so interested in the relationship between volume, pressure and temperature when it came to gases.

In lay terms, the law states that: at constant pressure, the volume of a given mass of an ideal gas increases or decreases by the same factor as its temperature on the absolute temperature scale (i.e. the gas expands as the temperature increases). This can be written as: V ∝ T, where V is the volume of the gas and T is the absolute temperature. In mathematical terms, the law can also be expressed as: V100 − V0 = kV0, where V100 is the volume occupied by a given sample of gas at 100 °C; V0 is the volume occupied by the same sample of gas at 0 °C; and k is a constant which is the same for all gases at constant pressure. Gay-Lussac’s value for k was 1/2.6666, remarkably close to the present-day value of 1/2.7315.
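As a quick Python sketch, consider warming a litre of gas from 0 °C to 100 °C at constant pressure; Charles's Law in ratio form (V1/T1 = V2/T2) gives the new volume directly:

```python
# Sketch of Charles's law at constant pressure: V1/T1 = V2/T2, with
# temperatures on the absolute (kelvin) scale. Values are illustrative.
V1 = 1.000      # litres of gas at the starting temperature
T1 = 273.15     # kelvin (0 deg C)
T2 = 373.15     # kelvin (100 deg C)

V2 = V1 * T2 / T1
print(f"{V2:.3f} L")  # about 1.366 L: the gas expands as it warms
```

The fractional expansion, about 0.366 of the original volume, is exactly the constant k = 1/2.7315 in the historical form of the law.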

Combined with Boyle’s law, these laws make up what is known as the “Ideal Gas Law”, which was first stated by Émile Clapeyron in 1834.

We have written many articles about Charles’s Law for Universe Today. Here’s an article about the Combined Gas Law, and here’s an article about Boyle’s Law.

If you’d like more info on Charles’s Law, check out a discussion about Charles’s Law, and here’s a link to an article about Charles’s Law by the Glenn Research Center.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.

Sources:
http://en.wikipedia.org/wiki/Charles%27s_law
http://en.wikipedia.org/wiki/Ideal_gas
http://www.chm.davidson.edu/vce/gaslaws/charleslaw.html
http://www.grc.nasa.gov/WWW/K-12/airplane/glussac.html
http://en.wikipedia.org/wiki/Ideal_gas_law

What is Planck Time?

Planck Time
The Universe. So far, no duplicates found!

What is the smallest unit of time you can conceive? A second? A millisecond? Hard to say, seeing as how time is relative. Under the right circumstances, hours can fly by and seconds can feel like a lifetime. But unfortunately for physicists, time is not something that can be dealt with so philosophically. And since they deal with cosmological forces both infinitesimally large and small, they need units that can objectively measure them. When it comes to dealing with the small, Planck Time is the measurement of choice. Named after German physicist Max Planck, the founder of quantum theory, a unit of Planck time is the time it takes for light to travel, in a vacuum, a single unit of Planck length. Taken together, these are part of the larger system of natural units known as Planck units.

Originally proposed in 1899 by German physicist Max Planck, Planck units are physical units of measurement defined exclusively in terms of five universal physical constants. These are the gravitational constant (G), the reduced Planck constant (ħ), the speed of light in a vacuum (c), the Coulomb constant 1/4πε₀ (ke or k), and Boltzmann’s constant (kB, sometimes k). Each of these constants can be associated with at least one fundamental physical theory: c with special relativity, G with general relativity and Newtonian gravity, ħ with quantum mechanics, ε₀ with electrostatics, and kB with statistical mechanics and thermodynamics. They were invented as a means of simplifying the particular algebraic expressions appearing in theoretical physics, especially in quantum mechanics.

Ultimately, Planck time is derived from the field of mathematical physics known as dimensional analysis, which studies units of measurement and physical constants. The Planck time is the unique combination of the gravitational constant G, the relativity constant c, and the quantum constant ħ that produces a constant with units of time. Planck units are often semi-humorously referred to by physicists as “God’s units”, because they eliminate anthropocentric arbitrariness from the system of units, unlike the metre and the second, which exist for purely historical reasons and are not derived from nature. Some challenges to the notion of Planck-scale physics have been mounted. For example, in 2003, during analysis of the Hubble Space Telescope Deep Field images, some scientists speculated that if there are space-time fluctuations on the Planck scale, images of extremely distant objects should be blurry; the Hubble images, they claimed, were too sharp for this to be the case. Other scientists disagreed with this assumption, however, with some saying the fluctuations would be too small to be observable, and others saying that the expected blurring effect was off by a very large magnitude.

A unit of Planck Time can be expressed as follows:

tP = √(ħG/c⁵) ≈ 5.39 × 10⁻⁴⁴ seconds
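As a check, the value can be computed in Python directly from the measured constants:

```python
import math

# Computing the Planck time, t_P = sqrt(hbar * G / c^5), from CODATA
# values of the fundamental constants.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3/(kg s^2)
c = 2.99792458e8         # speed of light in a vacuum, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(t_planck)  # about 5.39e-44 seconds
```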

We have written many articles about Planck Time for Universe Today. Here’s an article about the Big Bang Theory, and here’s an article about astronomical units.

If you’d like more info on the Planck Time, check out Wikipedia, and here’s a link to Physics and Astronomy Online.

We’ve also recorded a Question Show all about Black Hole Time. Listen here, Question Show: Galileoscope, Black Hole and What Exactly is Energy?.

Sources:
http://en.wikipedia.org/wiki/Planck_time
http://en.wikipedia.org/wiki/Max_Planck
http://en.wikipedia.org/wiki/Planck_units
http://scienceworld.wolfram.com/physics/PlanckTime.html
http://en.wikipedia.org/wiki/Dimensional_analysis