*Caption: Self-made alum crystal, weight 5.01 g. Source: JanDerChemiker via Wikimedia Commons*

The second law of thermodynamics states that the entropy of an isolated system can only increase. Our universe will one day reach a state in which all energy is evenly distributed and can no longer sustain motion or life. A group of physicists has speculated that a device called a ‘space-time crystal’ could theoretically continue to work as a computer even after this heat death of the universe. The trouble was that they had no idea how to build one, until now.

Crystals are made up of repeating patterns of atoms or molecules; they are symmetrical in space and sit in their lowest energy state. They form when energy is removed from a system (like ice crystals forming as heat is taken away). Nobel-prize-winning physicist Frank Wilczek at the Massachusetts Institute of Technology speculated that the symmetry of such crystalline structures could exist in the fourth dimension, time, as well as in space. The atoms in a time crystal would constantly rotate and return to their original positions, and, being in their lowest possible energy state, they would continue to rotate even after the universe has succumbed to entropy. Such a repeating pattern of motion usually requires energy, but a group of scientists at the University of Michigan in Ann Arbor and Tsinghua University in Beijing, led by Tongcang Li at the University of California, Berkeley, now think they have worked out how to create such a crystal in its lowest energy state, one that shows this repeating pattern, or periodic structure, in both space and time: a space-time crystal.

They propose constructing an ion trap, which holds charged particles in place using an electric field. The ions naturally repel one another through Coulomb repulsion, forming a ring-shaped crystal that can be set rotating by applying a weak static magnetic field. If the electric field is then removed, the ions will continue rotating by themselves. This does not violate any laws of physics: it is not a perpetual motion machine, because no energy can be taken out of the system; even though it is moving, it cannot do any work. The main challenge in building the crystal will be cooling it to temperatures close to absolute zero.

Space-time crystals’ periodicity makes them natural clocks. Wilczek suggests building a computer from a working time crystal, with different rotational states standing in for the 0s and 1s of a conventional computer. Such a computer could survive the eventual heat death of the Universe. There is just one small snag, as Tongcang Li admits: “we focus on a space-time crystal that can be created in a laboratory, so you need to figure out a method to make a laboratory that can survive in the heat-death of the universe.”


Comments on this entry are closed.

I’m awestruck

well they better get right on it.

If you can create a laboratory that can survive the death of the universe, then who needs a space-time crystal?

This sounds like something from Doctor Who!

Samantha Carter has been working with these since 1997.

Now what’s the purpose of this?

These sorts of experiments probe deep properties of physics. This is how Einstein originally realized that classical mechanics is wrong, and that you need relativity.

Personally I’m not so impressed by this, but it added a new type of “eternal” system to the usual (fields, relatively moving bodies and other such systems in vacuum).

first paragraph…device, not devise. You want a noun.

Can’t they make it self sustaining? Does it need to float in an ion trap forever? Scientists think of the weirdest things!

I am not sure how heat death is defined here. As the universe evolves into the future the time frame for energy transfer will increase. In about 10^{12} years the last of the stars will wink out. Around 10^{33} years from now, if we have so-called GUT field theory figured out right, about half the protons in the universe will have decayed. In 10^{46} years one or fewer protons out of a mole of protons will still exist. In 10^{50} years a region of the universe defined by the cosmological horizon r = sqrt{3/Λ} will have at most one supermassive black hole. In 10^{110} years the last of the supermassive black holes will have quantum evaporated away. This is the thermal heat death. After about 10^{120} years every region of the universe will be a vacuum. Over a huge period of time beyond then the cosmological horizon will quantum decay and the horizon will recede off to infinity, or better put Λ → 0, in the limit t → ∞. This is the point of quantum heat death.

As this process goes on the time scales for events expands. In the final stage the decay of the cosmological horizon will happen with the emission of photons with a wavelength comparable to the horizon scale, which is now about 10^{10} light years. This is the final stage which will wind down very slowly. The decay of the cosmological horizon has to happen in order that Boltzmann brains don’t arise.

Any device made of atoms may simply decay away by proton decay and cease to function after 10^{35} years. This type of system might cheat heat death far into the future, but I don’t think it would make it to the point of black hole evaporation.

LC

Ah, so eternal inflation/landscape is a solution preferred by simplicity, as is a quasistable Higgs field/vacuum.

Good to know.

The vacuum state is determined by the winding of strings and D-branes on Calabi-Yau manifolds. The number of possible Calabi-Yau manifolds is huge, about 10^{500}. There is something called T-duality, which says that the number of modes an unwound string has on a Calabi-Yau manifold is equivalent to the number of times that string can be wound around this space. Think of a rubber band around a torus, where a torus is an elementary form of Calabi-Yau space. As time evolves the vacuum can transition into lower energy forms. So the vacuum will, over an arbitrary amount of time, transition from ρ = Λ ~ 10^{-72} GeV^4 to absolute zero asymptotically.

The business of Calabi-Yau spaces and the rest is where string theory and QFT gets very difficult.

LC

Huh. I was under the impression that string theory was not even currently testable, much less that it had been proved. But I note a distinct lack of conditionals in this little exposition of how things “are.”

String theory is not directly testable. In fact the Higgs particle recently found was not directly found either, but rather the particles which are its decay products were found. So physics is already in a curious situation along these lines. There are some indirect tests of string theory that can be made. One of the predictions of string/M-theory is that a spacetime cosmology can’t recollapse, for this would violate something called holography. An observable universe must then be spatially flat and exhibit a certain amount of anisotropy. These predictions have been found to hold to remarkable accuracy. String/M-theory also makes predictions about black holes which will in the future be testable.

Interestingly string theory within the so called anti-de Sitter spacetime correspondence with conformal field theory has been found to occur in a different guise in solid state physics. This is a bit like how isospin theory appears in a number of different physical situations.

I am not sure if string and M-theory are a theory of everything, but I do think they are a theory of something. There is I think some deeper foundation beyond string theory.

LC

My point, sir, is that, however elegant the math is, you cannot on that basis say something as categorical as “the vacuum state is determined by the winding of strings and D-branes on Calabi-Yau manifolds.” We have no evidence to suggest that strings in fact represent any physical reality, and until we do they cannot be used to describe how the world is, but at most how it might be. To imply otherwise is unscientific.

We can’t do experiments near the Planck scale. The scale of apparatus required is far beyond our abilities. We can though compute things about the universe based on theories or hypotheses like superstrings and look out to measure the consequences. If the predictions are observed this lends weight to such theories. So far inflationary cosmology with strings appears on the observational radar screen.

Calabi-Yau spaces also give statistics for the vacuum states of black holes. String theory in this setting does compute stable vacua for black holes. Not that this is proof of anything; science in general does not operate on the basis of proof.

This is a tough business, and finding observational evidence is going to be a long term process. Further, most of the data obtained will be rather indirect, and so the support for something like superstring theory will always be more tentative than with previous physics, or other scientific studies. It took over four decades to find the Higgs particle, and it will likely take at least a half century to acquire sufficient data required to minimally say that superstring theory is tentatively supported. However, during the prior decades most theorists in quantum field work considered the Higgs field and particle to be very plausible.

By the same measure many of us think that string theory and T-duality on Calabi-Yau manifolds reflects some type of structure in the universe. It might not be a theory of everything, but it is probably a theory of something. So these are constructions we work with.

LC

Eloquently stated. Thanks for your continued contributions. They add much to this site and I enjoy reading them.

welcome to “modern” cosmology. the burden of proof has softened somewhat. :/

This might be a stupid question: if it can’t do any work how can it compute? Having a specific rotational state for the ions in this ring shaped crystal is all well and good, but that’s not computation, that’s memory. Furthermore what effect does observation have on this otherwise closed system?

That is not a stupid question. What is stupid here, is the article.

The mathematical foundation of computation is the Turing machine. Think of a Turing machine with a register that records each step of the process. If the machine halts, the register data is used to “rewind” the computation to its starting point so the machine starts all over. This system can be described by a Hamiltonian in classical physics which is perfectly reversible and exhibits Poincare recurrence in a bounded or known time. A computation which erases data produces entropy by the Khinchin-Shannon theorem, and so a device that erases data is not reversible.

Since this type of device only moves information in a way such that its phase space volume is constant, it could be argued this machine computes nothing. It only takes an input stack of data and shuffles it around in some reversible way so that it ultimately returns to that starting point. An external observer would have to look at each step of the machine’s progress to see how that data evolves so as to define some algorithmic output.
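The “shuffles that around in some reversible way so that it ultimately returns to that starting point” behavior can be illustrated with a toy model (my own sketch, not something from the thread): any bijective update rule on a finite state space must eventually revisit its starting state, a discrete analogue of the Poincare recurrence mentioned above.

```python
# Toy illustration (a hypothetical example, not the proposed device): a
# bijection on a finite state space, iterated, always returns to its
# starting state -- a discrete analogue of Poincare recurrence.

def reversible_step(state, n=16):
    """One step of a bijection on {0, ..., n-1}: an affine map with
    gcd(5, n) = 1, so the map is invertible and loses no information."""
    return (5 * state + 3) % n

def recurrence_time(start, n=16):
    """Count the steps until the trajectory returns to its starting state."""
    state = reversible_step(start, n)
    steps = 1
    while state != start:
        state = reversible_step(state, n)
        steps += 1
    return steps

# Every orbit closes, so the loop always terminates; no state is ever lost.
print(recurrence_time(0))
```

Because the step is a bijection, the state space decomposes into closed cycles: no trajectory can merge with another, so every starting state is revisited. An irreversible (many-to-one) update rule would break this guarantee.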

In the future of the universe I suspect that after the end of the stellar period, around 10^{12} years, there will be no longer any intelligent life anywhere. I don’t tend to entertain wild speculations about intelligent life reaching extreme levels, such as the Type I through V “civilizations,” and so forth. Any computing device, one that is not just some recurrence system, which locally accesses energy and materials, will probably run out of the necessary heat flow gradients required after the end of the stellar period. It is of course possible in theory that some nearly eternal recurring system could persist longer, but there will likely be nobody out there to read the states of the system to understand the algorithm and its output. If there is nobody to read the machine the point of the machine is lost.

LC

Interesting, a few followup questions (I am not very fluent in math so these may be fairly elementary).

Without additional input this computer would be using the results of one run as input for the next series of calculations, correct? In that case wouldn’t there be a set number of calculations it could run before it ended up back at its starting point, unless it was working on transcendental numbers? And in what ways would the data be able to evolve otherwise? Clearly, anyone creating a computer that could outlast the universe would want it to have something to work on.

A few background points first. A system which loses no information maintains a constant volume in a phase space of momenta and positions (p_i, q_i), where the index i runs over the number of degrees of freedom of the system. So for n particles or elements this space has 2n dimensions. Energy conservation, E = const, restricts the volume our system occupies in this phase space to 2n – 1 dimensions. As the system evolves this volume may change shape, but its volume remains constant. If a system loses N bits of information there is an entropy measure dS = k n log(n); integrating, S = k Σ n log(n) = k log(N) is the increase in entropy. As a result the phase space volume increases. There is then nothing internal to the system which can restore it to its original entropy. Some external energy source is required to do that. This is one reason your computer needs to be connected to a power source. Though really only a few percent of the entropy generated by computers is due to information loss; most entropy produced by a computer is due to the thermal distribution of electrons in circuits.
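As a numeric companion to the entropy discussion above, here is a short sketch using the textbook Landauer form ΔS = N·k·ln 2 for the minimum entropy produced by erasing N bits (a standard bound, substituted here for the commenter’s expression), along with the minimum heat T·ΔS it implies:

```python
import math

# Hedged sketch: the standard Landauer bound for information erasure.
# Erasing N bits produces at least dS = N * k * ln(2) of entropy, and at
# temperature T dissipates at least Q = T * dS of heat.

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI)

def erasure_entropy(n_bits):
    """Minimum entropy increase from irreversibly erasing n_bits."""
    return n_bits * K_B * math.log(2)

def erasure_heat(n_bits, temperature=300.0):
    """Minimum heat dissipated erasing n_bits at the given temperature (K)."""
    return temperature * erasure_entropy(n_bits)

# Erasing one gigabyte (8e9 bits) at room temperature:
bits = 8e9
print(f"entropy: {erasure_entropy(bits):.3e} J/K")
print(f"heat:    {erasure_heat(bits):.3e} J")
```

The numbers come out tiny, which matches the comment’s point that information loss accounts for only a small fraction of the entropy a real computer generates; resistive heating dominates by many orders of magnitude.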

So let us think of this “eternal” computer as a Turing machine. This machine has a reader over a section of tape and reads the symbol on that section. Based on some algorithm the machine replaces that symbol with another symbol (or does not replace it) and moves left or right to the next section. If a symbol is replaced that is the erasure of information. In order to make the machine maintain constant entropy that information needs to be stored in some auxiliary register. Hence as the Turing machine does its computation a record is built of its processing. Once the machine computes its outcome and reaches a halting state the register is used to “rewind” the machine to its initial condition. The process can then start over.
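The compute-record-rewind cycle described above can be sketched as follows. This is a deliberately minimal, hypothetical machine (the rule table and circular tape are my own simplifications, not the commenter’s construction); it shows only how a register of overwritten symbols keeps the computation reversible:

```python
# Minimal sketch of a reversible Turing-style machine: every overwrite is
# logged in a register, so the computation can be unwound and no
# information is ever erased.

def run_forward(tape, rules, steps):
    """Apply rules for `steps` steps, recording (position, old_symbol)
    for every overwrite so the run can later be undone."""
    register = []
    pos = 0
    for _ in range(steps):
        old = tape[pos]
        new, move = rules[old]
        register.append((pos, old))      # save the symbol being replaced
        tape[pos] = new
        pos = (pos + move) % len(tape)   # circular tape, for simplicity
    return register

def rewind(tape, register):
    """Undo the computation using the register, restoring the input."""
    for pos, old in reversed(register):
        tape[pos] = old

rules = {0: (1, +1), 1: (0, +1)}  # flip the symbol, then step right
tape = [0, 1, 1, 0]
original = list(tape)

register = run_forward(tape, rules, steps=6)
rewind(tape, register)
assert tape == original  # nothing was lost; the machine can start over
```

After `rewind`, the tape is bit-for-bit identical to the input, so the machine can repeat its cycle indefinitely, exactly the “rewind to its initial condition” step described above.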

There is a bit of a sticky issue over how the Turing machine knows it has reached a halting state. It turns out that no Turing machine is capable of computing the halting state of all possible Turing machines. Such a machine has to emulate all possible Turing machines, including itself — which means emulating itself emulating all Turing machines, including itself. If one wanted to build this machine it probably would reflect the desire to have some complicated problem solved, even if nobody is around to read the output, in order that the problem simply has its solution physically exist. Maybe the hope is that in the far future there are intelligent life forms which find this machine and read its results. The purpose for this machine is a bit “odd,” if you ask me.
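The self-referential obstruction described above is the classic diagonal argument. Here is a standard Python rendition (an illustration, not anything from the thread; `halts` is a hypothetical oracle that cannot actually be implemented, stubbed with an arbitrary fixed answer):

```python
# Classic halting-problem diagonalization: assume an oracle halts(f, x)
# that decides whether f(x) halts, then build a program that contradicts
# the oracle on itself.

def halts(f, x):
    """Hypothetical oracle -- cannot actually exist. Any fixed answer it
    gives leads to the contradiction below."""
    return True

def troublemaker(f):
    """Halts exactly when halts() claims f(f) runs forever."""
    if halts(f, f):
        while True:   # loop forever
            pass
    return "halted"

# If halts(troublemaker, troublemaker) returns True, then
# troublemaker(troublemaker) loops forever, so the oracle was wrong.
# If it returns False, the call halts immediately: wrong again.
# No total halts() can answer correctly on this input.
```

This is why the commenter notes that no Turing machine can compute the halting state of all possible Turing machines, including itself.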

We will for now ignore the halting problem. There are a number of ways the recurrence could be set up; my idea with the register is one possible way. Certainly there will be the need to prevent information erasure. The machine might “rewind”, or the halting state might be used in some way to reset the initial state via the register system.

It is not likely this sort of machine will ever be built. Most machines only last somewhere between a few years to a few decades. They generally fall apart for a number of reasons. Even this putative machine will be subject to quantum uncertainties, where atomic elements of the machine may quantum tunnel out of it.

LC

This is a time crystal in the same way that a standing wave of photons inside a perfectly reflecting box would be if you rotate it in vacuum. The difference may be that it is physically realizable.

But I don’t see what it has to do with computing. It is well known that you can do lossless computing, but that it is the erasure of memory when you have a finite memory (in a finitely large observable universe) that increases entropy.

… towards maximum entropy.

ha! yes, that’s the problem with the term ‘entropy’: you don’t know if you’re heading toward or away from it.