Shortest Single-Photon Pulse Generated: Implications for Quantum Communications in Space

Equipment used by Oxford scientists to produce the pulses (Oxford Uni.)

Scientists at Oxford University have developed a method to generate the shortest ever single-photon pulse by removing the interfering effects of quantum entanglement. So how big are these tiny record-breakers? They are 20 microns long (or 0.00002 metres), with a duration of 65 femtoseconds (65 millionths of a billionth of a second). This experiment smashes the previous record for the shortest single-photon pulse; the Oxford photon is 50 times shorter. While this sounds pretty cool, what is all the fuss about? How can these tiny electromagnetic wave-particles be of any use? In two words: quantum computing. And in an additional three words: quantum satellite communications.
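As a quick back-of-the-envelope check on those two figures (our own arithmetic, not from the paper), a pulse's spatial length is simply its duration multiplied by the speed of light:

```python
# Sanity check: a light pulse's spatial length is its duration times the
# speed of light.
c = 3.0e8            # speed of light (m/s)
duration = 65e-15    # quoted pulse duration: 65 femtoseconds

length = c * duration
print(f"Pulse length: {length * 1e6:.1f} microns")  # ~19.5 microns, i.e. ~20 microns
```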

Quantum entanglement is a tough concept to put into words. In a nutshell: if a photon is absorbed by certain types of material, two photons may be re-emitted. These two photons each have a lower energy than the original photon, but they are emitted from the same source and are therefore entangled. This entangled pair is inextricably linked, regardless of the distance separating them. Should the quantum state of one be changed, the other will experience that change. In theory, no matter how far apart these photons are, the quantum change of one will be communicated to the other instantly. Einstein called this quantum phenomenon “spooky action at a distance” and didn’t believe it possible, but experiments have since proven otherwise.

The Oxford University experiment

So, in a recent publication, the Oxford group set out to remove the entangled state of photon pairs; this experiment isn’t about using this “spooky action”, it is about getting rid of it. The aim is to remove the interference caused when one of the photon pair is detected. Once one of the twins is detected, the quantum state of the other is altered, contaminating the signal. If this effect can be removed, very short “pure” single photons can be generated, heralding a new phase of quantum computing. If scientists have very well-defined, identical single photons at their disposal, highly accurate information can be carried with no interference from the quirky nature of quantum physics.

“Our technique minimises the effects of this entanglement, enabling us to prepare single photons that are extremely consistent and, to our knowledge, have the shortest duration of any photon ever generated. Not only is this a fascinating insight into fundamental physics but the precise timing and consistent attributes of these photons also makes them perfect for building photonic quantum logic gates and conducting experiments requiring large numbers of single photons.” – Peter Mosley, Co-Investigator, Oxford University.

The Oxford University blog reporting this news highlights how useful these regimented photons will be to quantum computing, but quantum communications in space could also be a major beneficiary. Imagine sending pulses of quantum-identical photons through space, to satellites at first, and later through interplanetary space. Space scientists would have an extremely powerful resource: data could be sent through the vacuum, encrypted in a small number of photons, indecipherable to everything other than its destination…

Source: University of Oxford

Podcast: Wave Particle Duality

Have you ever heard that photons behave like both a particle and a wave and wondered what that meant? It’s true. Sometimes light acts like a wave, and other times it behaves like a little particle. It’s both. This week we discuss the experiments that demonstrate this and explain how scientists figured it all out in the first place. What does wave/particle duality have to do with astronomy? Well, everything, since light is the only way astronomers can see out into the Universe.

Click here to download the episode

Wave Particle Duality – Show notes and transcript

Or subscribe to: astronomycast.com/podcast.xml with your podcatching software.

Hawaiian Man Files Lawsuit Against the Large Hadron Collider (LHC)

The Large Hadron Collider (LHC) is set to go online in May of this year. This magnificent machine will accelerate particles and collide them at such high energies that scientists expect to make some of the biggest discoveries ever about the very small (exotic sub-atomic particles) and the very large (the structure of the Universe itself).

But not everyone is happy. Particle accelerators have always been a source of controversy; at the end of the day, we can only predict the outcome of the LHC experiments. But what if scientists have overlooked something? What if the theories are wrong? A man living on the other side of the planet from the LHC believes the world may come to an end, and he has filed a lawsuit seeking to block the completion of the accelerator. The concern? A massive black hole might be created, or vast amounts of antimatter will destroy the Earth. And where’s the scientific basis for all this panic? Hmmm… didn’t think so…

Fearing that the LHC is going to unleash death and destruction on the world, Walter Wagner from Hawaii has filed a lawsuit against an impressive array of defendants. The U.S. Department of Energy, the Fermilab particle accelerator near Chicago, CERN and the National Science Foundation (NSF) are all named.

Wagner and his associate Luis Sancho have a pretty dubious (and, quite frankly, weak) argument against the LHC, as they describe in the lawsuit:

“The compression of the two atoms colliding together at nearly light speed will cause an irreversible implosion, forming a miniature version of a giant black hole. […] Any matter coming into contact with it would fall into it and never be able to escape. Eventually, all of earth would fall into such growing micro-black-hole, converting earth into a medium-sized black hole, around which would continue to orbit the moon, satellites, the ISS, etc.” – Walter F. Wagner and Luis Sancho lawsuit, filed in U.S. District Court in Honolulu.

There is no evidence to suggest that colliding particles will create a black hole that will swallow the planet. I do, however, like their description of how the International Space Station would continue to orbit the Earth-mass black hole – at least we’ll have somewhere to hide as the rampaging black hole eats the ground from under us!

The credentials of the plaintiffs are also pretty sketchy. Wagner has worked in nuclear medicine and has a minor degree in physics from Berkeley, but nothing more advanced than that. His colleague Sancho has an even sketchier physics background.

Wagner wants the opening of the LHC to be delayed until further safety studies are carried out. It’s cases like these that scientists have had to combat for many years. Unfounded predictions of the “end of the world” and fear of the unknown have been published, only to be debunked through sound scientific reasoning. If the world listened to alarmists such as Wagner and co., we would advance no further.

I for one hope that the LHC does produce micro-black holes. I hope that this time next year we’ll be looking in awe at images of particle tracks from the sensors at the LHC showing the point of creation and the point of evaporation of micro-black holes. Peering very closely, we’d see particle emission as if from nowhere: the evaporating particles from the tiny event horizon. The image will be entitled Hawking Radiation Experiment.

Even if the accelerator energies are not high enough to create mini-black holes (thereby denying Stephen Hawking some experimental evidence for his radiation), we are pretty sure we’ll find some other exotic and exciting particles to help us understand our Universe a little bit better. We might gain a better grasp of other dimensions, detect some exotic particles, and let’s not forget the possibility of discovering the Higgs boson.

If we give in to fear of the unknown, scientific advancement will be stopped in its tracks, and we may be restricted to scratching at the surface of space-time and string theory, rather than physically testing them with tools like the LHC.

Source: FOXnews.com

Why There’s More Matter Than Antimatter in the Universe

In the first few moments of the Universe, enormous amounts of both matter and antimatter were created; moments later they combined and annihilated, generating the energy that drove the expansion of the Universe. But for some reason there was an infinitesimally small excess of matter over antimatter. Everything that we see today is made of the tiny fraction of matter that remained.

But why? Why was there more matter than antimatter right after the Big Bang? Researchers from the University of Melbourne think they might have an insight.

Just to give you an idea of the scale of the mystery facing researchers, here’s Associate Professor Martin Sevior of the University of Melbourne’s School of Physics:

“Our universe is made up almost completely of matter. While we’re entirely used to this idea, this does not agree with our ideas of how mass and energy interact. According to these theories there should not be enough mass to enable the formation of stars and hence life.”

“In our standard model of particle physics, matter and antimatter are almost identical. Accordingly, as they mix in the early universe they annihilate one another, leaving very little to form stars and galaxies. The model does not come close to explaining the difference between matter and antimatter we see in nature. The imbalance is a trillion times bigger than the model predicts.”

If the model predicts that matter and antimatter should have completely annihilated one another, why is there something, and not nothing?

The researchers have been using the KEK particle accelerator in Japan to create special particles called B-mesons. And it’s these particles which might provide the answer.

Mesons are particles made up of one quark and one antiquark, bound together by the strong nuclear force and orbiting one another, a little like the Earth and the Moon. Because of quantum mechanics, the quark and antiquark can only orbit each other in very specific ways, depending on the mass of the particles.

A B-meson is a particularly heavy particle, with more than 5 times the mass of a proton, due almost entirely to the mass of its bottom quark. And it’s these B-mesons that require the most powerful particle accelerators to generate.

In the KEK accelerator, the researchers were able to create both regular matter B-mesons and anti-B-mesons, and watch how they decayed.

“We looked at how the B-mesons decay as opposed to how the anti-B-mesons decay. What we find is that there are small differences in these processes. While most of our measurements confirm predictions of the Standard Model of Particle Physics, this new result appears to be in disagreement.”
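The kind of comparison described in the quote above is conventionally quantified as a decay asymmetry: the normalised difference between the anti-B and B decay rates into the same final state. Here is a minimal sketch with made-up event counts (the real analysis is vastly more involved):

```python
# Decay asymmetry: the normalised difference between the number of anti-B
# and B meson decays observed in the same final state. A nonzero value
# signals a matter/antimatter difference. The counts below are hypothetical.
def decay_asymmetry(n_anti_b, n_b):
    return (n_anti_b - n_b) / (n_anti_b + n_b)

n_b, n_anti_b = 10_150, 9_850   # made-up decay counts
print(f"Asymmetry = {decay_asymmetry(n_anti_b, n_b):+.3f}")  # small but nonzero
```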

In the first few moments of the Universe, the anti-B-mesons might have decayed differently than their regular matter counterparts. By the time all the annihilations were complete, there was still enough matter left over to give us all the stars, planets and galaxies we see today.

Original Source: University of Melbourne News Release

Could Cosmic Rays Influence Global Warming?

The idea goes like this: cosmic rays, originating from outside the Solar System, hit the Earth’s atmosphere. In doing so, these highly energetic particles create microscopic aerosols. The aerosols collect in the atmosphere and act as nuclei for water droplet formation, so large-scale cloud cover can result from this microscopic interaction. Cloud cover reflects light from the Sun, thereby cooling the Earth. This “global dimming” effect could hold some answers in the global warming debate, as it influences the amount of radiation entering the atmosphere. The flux of cosmic rays, in turn, is highly dependent on the Sun’s magnetic field, which varies over the 11-year solar cycle.

If this theory holds, some questions come to mind: Is the Sun’s changing magnetic field responsible for the amount of global cloud cover? To what degree does this influence global temperatures? And where does that leave man-made global warming? Two research groups have published their work and, perhaps unsurprisingly, have reached two different conclusions…


I always brace myself when I mention “global warming”. I have never come across such an emotive and controversial subject. I get comments from people who support the idea that the human race and our insatiable desire for energy is the root cause of the global increase in temperature. I get anger (big, scary anger!) from people who wholeheartedly believe that we are being conned into thinking the “global warming swindle” is a money-making scheme. You only have to look at the discussions that have ensued in previous climate-related stories.

But whatever our opinion, huge amounts of research funding are going into understanding all the factors involved in this worrying upward trend in average temperature.

Cue cosmic rays.

Researchers from the National Polytechnic University in Ukraine take the view that mankind has little or no effect on global warming and that it is purely down to the flux of cosmic radiation (creating clouds). Basically, Vitaliy Rusov and colleagues ran an analysis of the situation and deduced that the carbon dioxide content of the atmosphere has very little effect on global warming. Their observations suggest that global temperature increases are periodic when viewed against the history of global and solar magnetic field fluctuations, and that the main culprit could be cosmic ray interactions with the atmosphere. Looking back over 750,000 years of palaeotemperature data (historic records of climatic temperature stored in ice cores sampled from the North Atlantic ice sheets), Rusov’s theory and data analysis draw the same conclusion: global warming is periodic and intrinsically linked with the solar cycle and Earth’s magnetic field.

But how does the Sun affect the cosmic ray flux? As the Sun approaches “solar maximum”, its magnetic field is at its most stressed and active state. Flares and coronal mass ejections become commonplace, as do sunspots. Sunspots are a magnetic manifestation, showing areas on the solar surface where the powerful magnetic field is upwelling and interacting. It is during this period of the 11-year solar cycle that the reach of the solar magnetic field is at its most powerful – so powerful that galactic cosmic rays (high-energy particles from supernovae, etc.) are swept from their Earth-bound paths by the magnetic field carried outward in the solar wind.

It is on this premise that the Ukrainian research is based. Cosmic ray flux incident on the Earth’s atmosphere is anti-correlated with sunspot number – fewer sunspots means an increase in cosmic ray flux. And what happens when there is an increase in cosmic ray flux? There is an increase in global cloud cover, the Earth’s natural heat shield. At solar minimum (when sunspots are rare) we can expect the albedo (reflectivity) of the Earth to increase, thus reducing the effect of global warming.
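To make that premise concrete, here is a minimal illustration of the claimed anti-correlation, using entirely synthetic monthly data (not Rusov’s) for sunspot number and neutron-monitor counts, a standard proxy for cosmic ray flux:

```python
import numpy as np

# Synthetic illustration of the anti-correlation described above: over one
# 11-year cycle, cosmic ray flux (neutron-monitor counts) falls as sunspot
# number rises. All numbers here are invented for illustration.
months = np.arange(132)                        # 11 years of monthly samples
phase = 2 * np.pi * months / 132
sunspots = 50 * (1 - np.cos(phase))            # 0 at minimum, ~100 at maximum
cosmic_rays = 6000 - 8 * sunspots              # flux drops as sunspots rise
cosmic_rays += np.random.default_rng(0).normal(0, 30, months.size)  # noise

r = np.corrcoef(sunspots, cosmic_rays)[0, 1]
print(f"Correlation coefficient: {r:.2f}")     # strongly negative, near -1
```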

This is a nice bit of research, with a very elegant mechanism that could physically control the amount of solar radiation heating the atmosphere. However, there is a lot of evidence out there suggesting that carbon dioxide emissions are to blame for the current upward trend in average temperature.

Prof. Terry Sloan of the University of Lancaster and Prof. Sir Arnold Wolfendale of the University of Durham, UK, step into the debate with the publication “Testing the proposed causal link between cosmic rays and cloud cover”. Using data from the International Satellite Cloud Climatology Project (ISCCP), the UK-based researchers set out to investigate whether the solar cycle has any effect on the amount of global cloud cover. They find that the relationship varies with latitude: in some locations cloud cover correlates with cosmic ray flux, in others it does not. The big conclusion of this comprehensive study is that if cosmic rays influence cloud cover at all, the mechanism can account for at most 23 percent of the observed change in cloud cover, and there is no evidence that changes in the cosmic ray flux affect global temperatures.

The cosmic-ray cloud-forming mechanism itself is also in doubt; so far there has been little direct observational evidence of the phenomenon. And even looking at historical data, there has never been as rapid an increase in global temperature as the one we are currently observing.

So could we be clutching at straws here? Are we searching for alternative answers to the global warming problem when the answer is already right in front of us? Even if global warming can be amplified by natural processes, mankind sure ain’t helping. There is a known link between carbon dioxide emissions and global temperature rise, whether we like it or not.

Perhaps taking action on carbon emissions is a step in the right direction while further research is carried out on the natural processes that can influence climate change. For now, cosmic rays do not seem to have a significant part to play.

Original source: arXiv blog

A Step Toward Quantum Communications with Space

Sending quantum information in the form of qubits (quantum bits) has been successfully carried out for years. Firing indecipherable packets of quantum data (quantum states) via photons can, however, degrade the message as the photons travel through the dense atmosphere. The distance over which data can be transmitted is also severely limited by other factors, such as the curvature of the Earth. Now, for the first time, Italian scientists have carried out a successful mock single-photon exchange between Earth and a satellite orbiting at an altitude of 1485 km. Although transmission may be restricted here on Earth, the use of satellites will greatly increase the range of such a system, possibly beginning an era of long-distance quantum communication with space.

The key advantage of quantum communications is that, in principle, it is perfectly secure: any attempt to intercept the photons disturbs their quantum states and so reveals the eavesdropper. In a world of security-conscious information transmission, the ability to send information hidden in the quantum states of photons is highly desirable. A major drawback of sending encoded photons here on Earth is the degradation of the data as the photons are scattered by atmospheric particles. The current record for an encoded photon travelling along a line of sight without losing its quantum state is 144 km. That distance can be increased by firing encoded photons along optical fibres.
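The article doesn’t spell out a protocol, but the canonical scheme for exchanging a secret key over such a channel is BB84, in which a key bit survives only when sender and receiver happen to pick the same measurement basis. A stripped-down classical simulation of that “sifting” step (illustrative only, not the Italian team’s method):

```python
import random

# Minimal classical simulation of BB84 sifting: Alice encodes random bits in
# randomly chosen polarisation bases; Bob measures in his own random bases.
# Only the positions where the bases happen to match contribute to the key.
def bb84_sift(n_photons=20, seed=1):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]  # rectilinear/diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n_photons)]
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

print(bb84_sift())  # roughly half the photons survive sifting
```

The security comes from the fact that an eavesdropper cannot know the bases in advance, so interception inevitably disturbs some of the states and shows up as errors when the two parties compare a sample of their key.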

But what if you used satellites as nodes to relay the encoded photons through space? By shooting the photons straight up, they need only travel through about 8 km of dense atmosphere. This is exactly what Paolo Villoresi and his team at the Department of Information Engineering, University of Padova, with collaborators at other institutes in Italy and Austria, hoped to achieve. In fact, they have already tested a “single-photon exchange” between a ground station and the Japanese Experimental Geodetic Satellite Ajisai, with some good results.

“Weak laser pulses, emitted by the ground-based station, are directed towards a satellite equipped with cube-corner retroreflectors. These reflect a small portion of the pulse, with an average of less-than-one photon per pulse directed to our receiver, as required for the faint-pulse quantum communication.” – From “Experimental verification of the feasibility of a quantum channel between Space and Earth”, Villoresi et al.

The communication between satellite and observatory
They achieved this feat by using existing Earth-based laser ranging technology (at the Matera Laser Ranging Observatory, Italy) to direct a weak source of photons at Ajisai, a spherical mirrored satellite (pictured top). As the powerful laser ranging beam pinpointed the satellite, it was switched off to allow the weaker encoded laser to fire pulses of data. The two lasers could easily be switched back and forth to be sure Ajisai was receiving the photons. Only a tiny fraction of the pulses were received back at the observatory and, statistically speaking, the requirement of less than one photon return per laser pulse for quantum communications was achieved.
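To see why “less than one photon per pulse on average” matters, note that the photon number in a faint laser pulse follows Poisson statistics: keeping the mean low makes multi-photon pulses, which an eavesdropper could in principle split and read undetected, very rare. A sketch with a hypothetical mean photon number (the paper’s actual figures may differ):

```python
import math

# Poisson photon statistics for a faint pulse with mean photon number mu.
# Quantum communication wants mu < 1 so that multi-photon pulses are rare.
def poisson(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

mu = 0.1  # hypothetical mean photons per received pulse
p0, p1 = poisson(0, mu), poisson(1, mu)
print(f"P(0 photons)  = {p0:.3f}")           # ~0.905: most pulses arrive empty
print(f"P(1 photon)   = {p1:.3f}")           # ~0.090: the useful events
print(f"P(2+ photons) = {1 - p0 - p1:.4f}")  # ~0.0047: rare multi-photon leaks
```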

This is the first step of many toward quantum communications with space, and it by no means demonstrates quantum entanglement between two photons (a situation described in great detail by one of the collaborators in a separate publication) – now that would be the ultimate form of quantum data transmission!

Source: arXiv, arXiv blog

Do Advanced Civilizations Communicate with Neutrinos?

It’s one of humanity’s biggest questions: are we alone in the Universe? Either way, the answer is significant. And so scientists are searching for intelligence out there. Huge arrays of radio telescopes, like the Allen Telescope Array, scan the skies for radio broadcasts, and researchers have also proposed that aliens might be using lasers to communicate with us. Now a Russian researcher is proposing yet another channel aliens might be using to communicate – neutrinos.

To borrow a quote from The Hitchhiker’s Guide to the Galaxy: “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.” When you’re attempting to communicate across the vast distances of space, you need huge amounts of energy. Just look at a star: even though it’s generating an incomprehensible amount of energy every second, its brightness drops dramatically with distance.
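A worked example of that inverse-square dilution (standard textbook values, not from the article):

```python
import math

# Inverse-square law: a source of luminosity L spreads its power over a
# sphere of area 4*pi*d**2, so the received flux plummets with distance d.
L_sun = 3.8e26                 # solar luminosity (watts)
au = 1.496e11                  # astronomical unit (m)
ly = 9.46e15                   # light-year (m)

def flux(d):
    return L_sun / (4 * math.pi * d**2)

print(f"Flux at Earth (1 AU):     {flux(au):.0f} W/m^2")       # ~1350 W/m^2
print(f"Flux at Tau Ceti (12 ly): {flux(12 * ly):.2e} W/m^2")  # ~2.3e-9 W/m^2
```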

Instead of broadcasting in all directions, the other strategy is to focus your communications towards a specific location. A targeted beam of radio waves or laser light aimed at another star still requires an enormous amount of energy, but less.

To save energy, alien civilizations might not be using radio or optical light at all; they might be communicating in a completely different way – with neutrinos.

Researcher Z. K. Silagadze of the Budker Institute of Nuclear Physics and Novosibirsk State University recently posted this idea to the arXiv pre-print server. His article is called “SETI and Muon Collider”.

It might sound like science fiction, but scientists are starting to understand how to generate beams of neutrinos – by creating beams of muons. Beams of these unstable particles can be generated in large particle accelerators. The muon beam quickly decays into a focused beam of neutrinos that can travel for light-years and still remain remarkably well collimated. A beam fired at the relatively nearby star Tau Ceti, 12 light-years away, would open up to only about 600 astronomical units across – enough to bathe the whole system in neutrinos that could be traced back to a specific source star.
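The geometry behind that 600 AU figure is straightforward, and (as our own back-of-the-envelope estimate, not Silagadze’s) it even hints at the muon beam energy involved, since neutrinos from relativistic muon decay emerge in a cone of half-angle roughly the muon rest energy divided by the beam energy:

```python
# Beam geometry: a spot ~600 AU wide after 12 light-years implies a
# half-angle of (300 AU)/(12 ly). For muon-decay neutrinos the half-angle
# is roughly m_mu*c^2 / E_mu, which lets us back out the muon energy.
# This inference is our own rough estimate, not from Silagadze's paper.
au = 1.496e11          # astronomical unit (m)
ly = 9.46e15           # light-year (m)

half_angle = (300 * au) / (12 * ly)              # radians
m_mu = 105.7e6                                   # muon rest energy (eV)
e_mu = m_mu / half_angle                         # implied muon beam energy (eV)

print(f"Beam half-angle:     {half_angle:.1e} rad")   # ~4e-4 rad
print(f"Implied muon energy: {e_mu / 1e9:.0f} GeV")   # a few hundred GeV
```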

Finding neutrinos here on Earth is difficult. We’ve got an incredible number of neutrinos streaming towards us from the Sun; in fact, billions of neutrinos pass through your body every second, and you never feel them because they almost never interact. Catching them takes a huge vat of water, shielded underground from other radiation, and a suite of sensitive detectors. Even then, a detector only turns up a few thousand neutrinos a year.

In fact, a neutrino can pass through light-years of pure lead and not even notice.

But there are some advantages. Neutrino detectors are omnidirectional – they don’t have to be pointed in a specific direction to “tune in” to a signal coming from a star. If a stream of neutrinos is passing through the Earth, we should be able to detect it and then trace the source back after the fact.

Neutrino detectors are also sensitive across many different energy levels. They don’t have to scan specific frequencies; they can detect high-energy neutrinos as easily as low-energy ones.

According to Silagadze, the IceCube neutrino observatory currently being built in Antarctica should have the sensitivity to spot neutrinos generated on purpose by alien civilizations – whether they’re targeting us specifically, or we’re just overhearing their conversations.

It has been suggested that advanced civilizations might deliberately choose neutrinos for communications because doing so shuts the very young, less mature civilizations out of the galactic conversation.

But give us a few years, and we’ll be listening.

Original Source: Arxiv

Final Detector in Place at the Large Hadron Collider

One of the most complicated construction projects ever attempted reached a major milestone today: the final large detector element for the ATLAS instrument was lowered into the Large Hadron Collider. And this baby’s big, weighing in at 100 tonnes. When the collider finally comes online, this instrument will measure the cascades of particles generated in proton-proton collisions.

The ATLAS detector itself is enormous, weighing 7,000 tonnes and measuring 46 metres long, 25 metres high and 25 metres wide. It has 100 million sensors that will track the particles that spray out when protons are smashed together at tremendous energies.

And so today, the final element of ATLAS was plugged into its permanent home. It’s known as a “small wheel”, and there are two of them in the detector. Compared to the full ATLAS instrument it’s a lightweight: it weighs only 100 tonnes and measures a mere 9.3 metres across.

Since the whole detector is located deep underground, engineers have had to lower each piece down a 100-metre shaft, and they’ve been installing pieces this way since 2003. In the case of the small wheel, getting it down was even harder.

“One of the major challenges is lowering the small wheel in a slow motion zigzag down the shaft,” explained Ariella Cattai, leader of the small wheel team, “and performing precision alignment of the detector within a millimetre of the other detectors already in the cavern.”

With all of ATLAS’ parts in place, it’s time to enter the commissioning phase. Researchers will test all of the parts together in preparation for the first tests this summer.

By this time next year, physicists might have many more answers about the nature of gravity, dark matter, and nature’s preference for matter over antimatter. And I’m sure they’ll have even more new questions. But that’s how science works.

Original Source: CERN News Release

Pluto’s Moons, Nix and Hydra, may have been Adopted

The discovery images of Nix (and Hydra) obtained by the Hubble Space Telescope. Credit: NASA, ESA, H. Weaver (JHU/APL), A. Stern (SwRI)


How many moons does Pluto have? The mini-moons of Pluto, Nix and Hydra, were discovered in 2005 (but named in 2006) during an observation campaign by the Hubble Space Telescope. The discovery of these mini-moons increased the number of known natural satellites orbiting Pluto to three (including the larger moon Charon). But where did these satellites come from? The currently accepted theory for the formation of the large moon, Charon, is much like the theory for the creation of Earth’s Moon: it is thought that a giant impact between two large Kuiper Belt Objects chipped Charon away from a proto-Pluto, putting the chunk of Pluto mass into orbit. Over the years, tidal forces slowed the pair and Charon settled into its present-day orbit. Recent theory suggests that Nix and Hydra are a by-product of this collision, merely shattered fragments of the huge impact. But there are problems with this idea. Could Nix and Hydra have come from somewhere other than the Pluto-Charon impact?

The orbits of Pluto’s moons, Charon, Nix and Hydra (credit: NASA)
The small moons that orbit the large Kuiper Belt Object (formerly classified as a planet) can be found about 48,700 kilometers and 64,800 kilometers from Pluto. The closest moon is called Nix and the farthest, Hydra. Nix is in a 4:1 orbital resonance with Charon’s orbit and the larger moon Hydra is in a 6:1 resonance (i.e. Nix orbits Pluto once for every four of Charon’s orbits; Hydra orbits Pluto once for every six of Charon’s orbits).
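Those resonances are consistent with the quoted distances via Kepler’s third law, under which orbital radius scales as the 2/3 power of orbital period. A quick check, taking Charon’s semi-major axis of roughly 19,600 km from standard references (it isn’t given above):

```python
# Kepler's third law check: orbital radius scales as period**(2/3).
# Charon's semi-major axis (~19,600 km) comes from standard references,
# not from the article above.
a_charon = 19_600  # km

for name, period_ratio, quoted_km in [("Nix", 4, 48_700), ("Hydra", 6, 64_800)]:
    predicted = a_charon * period_ratio ** (2 / 3)
    print(f"{name}: predicted {predicted:,.0f} km vs quoted {quoted_km:,} km")
# Nix:   predicted ~49,400 km vs quoted 48,700 km
# Hydra: predicted ~64,700 km vs quoted 64,800 km
```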

The reasons behind these mini-moon orbits are only just beginning to be understood, but it is known that their resonances with Charon’s orbit are rooted way back in the Pluto system’s evolution. If we assume Hydra and Nix were formed in a massive Kuiper Belt Object collision, the easiest explanation is that they are whole fragments from the impact, caught in the gravity of the Pluto-Charon system. However, such a collision would have left the two little moons on highly eccentric orbits, and from there they could not have evolved into the near-circular, near-corotational resonance orbits with Charon that we see today.

So, could the moons have formed from the dust and debris resulting from the initial collision? If enough material was produced, and if the material collided frequently, then perhaps Nix and Hydra were born from a cold disk of debris (rather than being whole chunks of rock), eventually coalescing into sizeable rocky moons. And since there would have been a disk of debris, collisions with the orbiting Nix and Hydra would also have damped any eccentricity in their orbits.

But there is a big problem with this theory too. Impact simulations show that the post-impact disk of debris surrounding Pluto would have been very compact; it could not have reached as far as the present-day orbits of the moons.

One more theory suggests that the moons were created in a post-impact disk very close to Pluto, and then, through gravitational interactions with Charon, the orbits of Nix and Hydra were pulled outward, far beyond the original disk. According to recent computer simulations, this doesn’t seem to be possible either.

To find an answer, work by Yoram Lithwick and Yanqin Wu (University of Toronto) suggests we must look beyond the Pluto-Charon system for the source of Nix and Hydra’s material. Their simulations show that the above theories – in which the small moons form from material ejected by a giant collision between two large Kuiper Belt Objects (the collision that created Pluto and Charon) – are extremely problematic: they cannot explain how the highly eccentric orbits Nix and Hydra would have inherited from a collision could evolve into the near-circular ones they have today.

Lithwick and Wu go on to say that the circular, corotational resonant orbits of the two moons could instead have arisen from a Plutocentric disk of small bits of rock scooped up during Pluto’s orbit around the Sun. Nix and Hydra may therefore have formed from rocky debris left over from the development of the Solar System, and not from the collision that created Charon. The same may hold true for countless other Kuiper Belt Objects orbiting in the far reaches of the Solar System: no impact is necessary to create the tiny moons now thought to be their satellites.

It is hoped that the New Horizons mission (launched January 19th, 2006) to the far reaches of the Solar System will answer some of the questions that remain about the depths of our mysterious Kuiper Belt. Hopefully we will also find out whether Nix and Hydra are children of Pluto and Charon… or whether they were adopted.

Source: arXiv

Synthetic Black Hole Event Horizon Created in UK Laboratory

Researchers at St. Andrews University, Scotland, claim to have found a way to simulate the event horizon of a black hole – not through a new cosmic observation technique, and not with a high-powered supercomputer, but in the laboratory. Using lasers, a length of optical fibre and some bizarre quantum mechanics, a “singularity” may be created that alters a laser’s wavelength, synthesizing the effects of an event horizon. If the experiment can produce an event horizon, the theoretical phenomenon of Hawking radiation may be tested, perhaps giving Stephen Hawking his best chance yet of winning the Nobel Prize.

So how do you create a black hole? In the cosmos, black holes are created by the collapse of massive stars. After the star runs out of fuel and undergoes a supernova, the immense gravitational forces acting on the body cause the remaining mass to collapse. Should the collapsing remnant exceed a certain mass limit (the neutron-star analogue of the Chandrasekhar limit – a maximum mass beyond which a star cannot support its structure against gravity), it will collapse into a discrete point: a singularity. Space-time becomes so warped around it that all local energy (matter and radiation) falls into the singularity. The distance from the singularity at which even light cannot escape the gravitational pull is known as the event horizon. High-energy particle collisions by cosmic rays impacting the upper atmosphere might produce micro-black holes (MBHs). The Large Hadron Collider (at CERN, near Geneva, Switzerland) may also be capable of producing collisions energetic enough to create MBHs. Interestingly, if the LHC can produce MBHs, Stephen Hawking’s theory of “Hawking radiation” may be proven should the MBHs evaporate almost instantly.

Hawking predicts that black holes emit radiation. At first glance this seems paradoxical, as no radiation should be able to escape the event horizon of a black hole. However, Hawking theorizes that due to a quirk of quantum dynamics, black holes can produce radiation after all.
The principle of Hawking radiation (source: http://library.thinkquest.org)
Put very simply, the Universe allows pairs of particles to be created within a vacuum, “borrowing” energy from their surroundings. To conserve the energy balance, the particle and its antiparticle can only live for a short time, returning the borrowed energy very quickly by annihilating with each other. So long as they pop in and out of existence within this quantum time limit, they are considered to be “virtual particles”; from creation to annihilation, they have net zero energy.

However, the situation changes if this particle pair is generated at or near the event horizon of a black hole. If one of the virtual pair falls into the black hole while its partner is ejected away from the event horizon, the two can no longer annihilate. Both virtual particles become “real”, and the escaping particle carries energy and mass away from the black hole (the trapped partner can be considered to have negative energy, reducing the mass of the black hole). This is how Hawking radiation predicts “evaporating” black holes: mass is steadily lost to this quantum quirk at the event horizon. Hawking predicts that black holes gradually evaporate and disappear, and that the effect is most prominent for small black holes and MBHs.
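The standard textbook formulas (not quoted in the article) make the size dependence explicit: the Hawking temperature is inversely proportional to the black hole’s mass, and the evaporation time grows as the cube of the mass.

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) and evaporation time
# t ~ 5120*pi*G^2*M^3 / (hbar*c^4): standard textbook expressions showing
# why evaporation only matters for very small black holes.
hbar, c, G, k_B = 1.055e-34, 3.0e8, 6.674e-11, 1.381e-23
M_sun = 1.989e30  # kg

def hawking_temperature(m):
    return hbar * c**3 / (8 * math.pi * G * m * k_B)

def evaporation_time(m):
    return 5120 * math.pi * G**2 * m**3 / (hbar * c**4)

for label, m in [("solar-mass black hole", M_sun), ("1-microgram MBH", 1e-9)]:
    print(f"{label}: T = {hawking_temperature(m):.1e} K, "
          f"lifetime = {evaporation_time(m):.1e} s")
# A solar-mass hole is colder than the cosmic microwave background and far
# outlives the current age of the Universe; a microgram hole is gone in
# ~1e-43 seconds, effectively instantly, as for any MBH made in the LHC.
```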

So… back to our St. Andrews laboratory…

Prof. Ulf Leonhardt is hoping to recreate the conditions of a black hole event horizon using laser pulses, possibly creating the first direct experimental test of Hawking radiation. Leonhardt is an expert in “quantum catastrophes” – the points at which wave physics breaks down, creating a singularity. At the recent “Cosmology Meets Condensed Matter” meeting in London, Leonhardt’s team announced their method for simulating one of the key components of the event horizon environment.

Light travels through a material at a velocity that depends on the material’s optical properties. The St. Andrews group use two laser pulses, one slow, one fast. First, a slowly propagating pulse is fired down the optical fibre, followed by a faster pulse. The faster pulse should “catch up” with the slower pulse. However, as the slow pulse passes through the medium, it alters the optical properties of the fibre, causing the fast pulse to slow in its wake. This mimics what happens to light as it tries to escape from an event horizon – it is slowed down so much that it becomes “trapped”.
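As a toy illustration of that trapping condition (our own sketch, with invented fibre parameters rather than the St. Andrews group’s), a trailing probe pulse closes in on the slow pump but stalls at the point where its local speed c/n equals the pump’s speed – an analogue of light freezing at an event horizon:

```python
import math

# Toy model of a fibre-optic "horizon": the slow pump pulse raises the local
# refractive index by dn over a short region. A trailing probe moves at
# c/n(x) and cannot cross the point where c/n(x) drops to the pump's speed.
# All parameters are invented for illustration.
c = 3.0e8
n0, dn = 1.45, 0.02              # base index and pump-induced index bump
v_pump = c / (n0 + 0.5 * dn)     # pump speed, between c/n0 and c/(n0 + dn)

def index(x_rel):
    """Refractive index at distance x_rel (m) behind the pump's centre."""
    return n0 + dn * math.exp(-(x_rel / 1e-4) ** 2)  # Gaussian bump, 0.1 mm wide

x_rel, dt = -5e-4, 1e-14         # probe starts 0.5 mm behind the pump
for _ in range(100_000):         # march forward in the pump's co-moving frame
    x_rel += (c / index(x_rel) - v_pump) * dt

print(f"Probe stalls {abs(x_rel) * 1e3:.3f} mm behind the pump centre")
# The probe approaches but never crosses the point where c/n(x) = v_pump.
```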

“We show by theoretical calculations that such a system is capable of probing the quantum effects of horizons, in particular Hawking radiation.” – From a forthcoming paper by the St. Andrews group.

The idea that two laser pulses acting on each other can mimic the physics within an event horizon sounds strange, but this new study may help us understand whether MBHs are being generated in the LHC, and may push Stephen Hawking a little closer toward a deserved Nobel Prize.
Source: Telegraph.co.uk