
According to Wikipedia, a journal club is a group of individuals who meet regularly to critically evaluate recent articles in the scientific literature. This being Universe Today, if we occasionally stray into critically evaluating each other’s critical evaluations, that’s OK too. And of course, the first rule of Journal Club is… don’t talk about Journal Club.

So, without further ado – today’s journal article under the spotlight is about nothing.

The premise of the article is that to define nothing we need to look beyond a simple vacuum and think of nothing in terms of what there was before the Big Bang – i.e. really nothing.

For example, you can have a bubble of nothing (no topology, no geometry), a bubble of next to nothing (topology, but no geometry) or a bubble of something (which has topology, geometry and most importantly volume). The universe is a good example of a bubble of something.

The paper walks the reader through a train of logic which ends by defining nothing as ‘*anti-de Sitter space as the curvature length approaches zero*’. De Sitter space is essentially a ‘vacuum solution’ of Einstein’s field equations – that is, a mathematically modelled universe with a positive cosmological constant. So it expands at an accelerating rate even though it is an empty vacuum. Anti-de Sitter space is a vacuum solution with a negative cosmological constant – so it’s shrinking inward even though it is an empty vacuum. And as its curvature length approaches zero, you get **nothing**.
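For readers who want the equations behind those words, here is a minimal sketch (an editorial addition, not drawn from the paper itself): both spaces solve the vacuum Einstein field equations with a cosmological constant Λ, and the sign of Λ sets the curvature length ℓ that the definition of nothing sends to zero.

```latex
% Vacuum Einstein field equations with cosmological constant \Lambda:
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = 0
% In 4 dimensions the curvature length \ell is set by \Lambda:
%   de Sitter:      \Lambda > 0, \quad \ell = \sqrt{3/\Lambda}
%   anti-de Sitter: \Lambda < 0, \quad \ell = \sqrt{-3/\Lambda}
% The paper's ``nothing'' is the anti-de Sitter case in the limit \ell \to 0.
```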

Having so defined nothing, the authors then explore how you might get a universe to spontaneously arise from that nothing – and nope, apparently it can’t be done. Although there are various ways to enable ‘tunnelling’ that can produce quantum fluctuations within an apparent vacuum – you can’t ‘up-tunnel’ from nothing (or at least you can’t up-tunnel from ‘*anti-de Sitter space as the curvature length approaches zero*’ ).

The paper acknowledges this is obviously a problem, since here we are. By way of explanation, the authors suggest:

- that we might get past the problem by appealing to immeasurable extra dimensions (a common strategy in theoretical physics to explain impossible things without anyone being able to easily prove or disprove it);
- that their definition of nothing is just plain wrong; or
- that they (and we) are just not asking the right questions.

Clearly the third explanation is the authors’ favoured one as they end with the statement: ‘*One thing seems clear… to truly understand everything, we must first understand nothing*‘. Nice.

So – comments? Is appealing to extra dimensions just a way of dodging a need for evidence? Nothing to declare? Want to suggest an article for the next edition of Journal Club?

**Today’s article:**

Brown and Dahlen On Nothing.

I really had to laugh out loud! 😀

Anyway, I know a group of guys that invoke other dimensions to explain the OPERA results. So, you’re probably right.

P.S.: Do you know what Astronomers invoke, if they encounter something difficult?

Nope, not other dimensions. They scream magnetic field. And then most begin to laugh, too. 😉

It used to be that accretion disks were the answer to everything.

But I suppose the real goto for Astronomers is Dark ….

Alternatively, Brown and Dahlen on Nothing to Declare. 😉

The only thing I can think of is that Existence does exist, which is the only fact that we all innately know.

Thinking we therefore are

When science turns to Descartes for answers, we’re in trouble! :p

Great article 🙂

As for extra dimensions, I would think that if one could prove that dimensions exist and are countable in our reality, then there shouldn’t be a reason not to add or subtract a couple of them.

Still, it might be just our view on geometry and I’m pretty sure that one could describe a reality without the use of 3D (+time), or couldn’t one? Nah… even then someone could say that there exists an additional part of reality.

Well, in that case I guess we need to quantify our experience here to maximum and only then invoke unseen 😉

I’m not sure if you were drunk when you wrote that comment, but the fact that GPS works is enough proof that dimensions are a valid concept.

Well, yes actually I was drunk 🙂

But the main reason for this comment was this SETI seminar on ‘ET Math’ or how different math language can be.

And I understand it is a valid concept but I’m not sure whether some other concepts could not be used to describe the world around us (obviously I don’t know any other, just staying on the safe side as an ignorant on this matter). Also, it might seem funny but things which appear obvious are sometimes hard to prove.

nothing is something that is just around the corner, with a rotation …

—–

explanation:

-> from dot to infinite line -> plane -> 3d volume… as long as any of its parts is infinite in its next dimension, you only need a tiny angular rotation to bring it into “existence”.

Reverse that, and our volume of space only needs to be the smallest of angular rotation away from being nothing.

—-

Note: With smallest being less than the size of an electron as seen from billions of lightyears away … if you need something to imagine, and infinite being, well, infinite.

I don’t get your point; a plane can be defined by only 3 points, and to have all of the plane you have to rotate the line by 180 degrees.

Nothing is true, all is permitted.

Also sprach Zarathustra

And I would add “if” at the beginning of this quote to relate to the article.

There are a number of reasons for extra dimensions. One of the reasons is the vacuum itself. I will illustrate here a rather simplified version of how this works, though I will admit I may hedge and fudge a little bit. The rigorous demonstration of this involves highest weight analysis of anomaly cancellations in Virasoro algebras, which is beyond what I should present here.

Quantum mechanics predicts the existence of a zero point energy. It works by the quantization of the harmonic oscillator. The energy for this harmonic oscillator is E = p^2/2m + kx^2/2, where the spring constant k and the mass m I will absorb into x and p. The momentum is p = i(a^† – a) and the position x = a^† + a, where a is an operator which lowers a quantum state and the Hermitian conjugate a^† raises the quantum state. Slam this into the definition of the energy, to construct the operator H, and we get

H = (1/2)(a^†a + aa^†)

The addition and subtraction of a^†a leads to

H = a^†a + (1/2)(aa^† – a^†a)

The last part is a commutator aa^† – a^†a = [a, a^†] = 1. I now restore the frequency ω = sqrt{k/m} so the energy spectrum is

E = ωa^†a + ω/2

That part ω/2 is the zero point energy of the oscillator, which is due to the Heisenberg uncertainty principle. This is summed over all possible frequencies. In a discrete setting, such as a box, or a string, these frequencies are ω = nω_0. The vacuum part is then a sum of the sort

E_{vac} = ω_0 sum_n n/2.

So ignoring constants this is really just the sum 1 + 2 + 3 + 4 + … . This is a few weeks of a quantum mechanics course in a paragraph.

I will now prove to you that this summation equals -1/12, well really -1/12 plus an infinite bit I can throw away. Dirac presented to Pauli how this vacuum was infinite in energy, but “irrelevant” and could then be thrown away. Pauli replied with a rather pithy, “Just because something is infinite does not mean it is zero.”

I take this summation and replace each n with n exp(-nε), for ε a small number for which I will take the limit where it goes to zero. So the summation is replaced with

sum_n n = exp(-ε) + 2exp(-2ε) + 3exp(-3ε) + … = sum_n n exp(-nε).

Now I can write this as the derivative of this expression with respect to ε, so

sum_n n = -d/dε sum_n exp(-nε).

This is a geometric series. I can write

sum_n exp(nε) = exp(-ε)(1 + exp(-ε) + exp(-2ε) + … )

= exp(-ε)/(1 – exp(-ε))

Now I Taylor series this for small ε, so exp(-ε) ~= 1 – ε + ε^2/2, and this summation (before I do the differentiation with respect to ε) is

sum_n exp(nε) = exp(-ε)/(1 – exp(-ε)) = (1 – ε + ε^2/2)/(ε – ε^2/2 + ε^3/3)

= (1/ε)(1 – ε + ε^2/2)/(1 – ε/2 + ε^2/3).

Now use the binomial theorem, which tells us that for small x, 1/(1 – x) ~= 1 + x + x^2 + x^3 …, and we get

sum_n exp(nε) = (1/ε)(1 – ε + ε^2/2)(1 + ε/2 – ε^2/3 + e^2/4).

I don’t need higher powers of ε because we are setting this to zero. I now multiply all of this out to get

sum_n exp(nε) = (1/ε)(1 – ε/2 + ε^2/12).

Now take the derivative with elementary calculus rules, change the sign as above, and we have

sum_n n = 1/ε^2 – 1/12.

Now as ε — > 0 the first term blows up. This can be absorbed into the definition of the momentum and regularized away. That is a kettle of fish I will punt on, for renormalization theory is a tad complicated.
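The regularization above can be checked by machine. Here is a small sketch (an editorial addition; it assumes the sympy library is available) that recovers both the divergent 1/ε² piece and the finite -1/12:

```python
import math

import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Closed form of sum_{n>=1} n*exp(-n*eps): differentiate the geometric
# series sum_{n>=1} exp(-n*eps) = exp(-eps)/(1 - exp(-eps)) and flip the sign.
closed = sp.exp(-eps) / (1 - sp.exp(-eps))**2

# Laurent expansion about eps = 0: the divergent piece plus the finite part.
series = sp.series(closed, eps, 0, 2).removeO()
print(series)  # contains epsilon**(-2) and the finite term -1/12

# Numerical cross-check: a raw partial sum at small eps.
e = 0.01
partial = sum(n * math.exp(-n * e) for n in range(1, 20001))
print(partial, 1 / e**2 - 1 / 12)
```

The finite piece left over after throwing away the 1/ε² divergence is exactly the -1/12 quoted below.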

The summation for the zero point energy is a half of this, so we have this -1/24 as the finite piece of interest. Now there is a need for this to actually be one, not -1/24. This has to do with some string theory stuff that I am not going to get into here, but it is needed to get rid of tachyons and an unstable vacuum. However, what we have above is one dimensional, so we have to multiply by a factor dim = 24 to get the correct result. Now, this does not include the time dimension, so the spacetime dimension is then 25. Further, this all sits in an infinite momentum frame construction (string theory stuff again), which eliminates a spatial dimension. Hence this string theory is in 26 dimensions.

This 26 dimensional bosonic string theory can be mapped into a theory with 16 + 10 dimensions, where 16 of these dimensions are in a gauge group SO(16) and the spacetime dimension is reduced to 10. This leads into the infamous 10 dimensional superstring theory and so forth. The anti-de Sitter spacetime in this paper sits in an embedding space of 5 dimensions, 3 of space and 2 of time. The AdS spacetime itself is found as a hyperboloid in this embedding space where the additional time direction is constant. The AdS has certain isometry groups which fit into the types of extra dimensions in string theory. The numbers of dimensions are then determined in the end by quantization conditions.

I punted a bit through this, in particular the stringy stuff and the renormalization stuff. However, if I were to write about those matters this would be the longest blog post on UT, by far.

LC

I thought there was an easy way to derive zero point energy by ‘thought experimenting’ with zero Kelvin.

Since at zero K any particle motion should cease, it would become possible to determine both a particle’s position and momentum at the same time – hence violating the uncertainty principle.

You get around this by proposing that there must always be a residual energy in the system sufficient to retain some uncertainty about whether a particle is really frozen in position – or whether it might still spontaneously fluctuate.

Anyhow, I understand you have provided a mathematical proof of superstring theory. Impressive, but is this sufficient as a proof of its reality?

Any attempt to isolate a particle reduces the uncertainty in the position, Δx — > 0, which results in an expansion of the momentum uncertainty by the Heisenberg principle

ΔxΔp = ħ

We may think of anything which isolates a particle as a potential well, which models a sort of confining “box.” The momentum uncertainty might be written as Δp = sqrt{2mε} for some energy minimum ε. The uncertainty in the position for a harmonic oscillator V = kx^2/2 is then some function of a potential minimum V_{min} = ε so that Δx = sqrt{2ε/k}, and you multiply them together to get

ΔxΔp = 2sqrt{m/k}ε = ħ

The term sqrt{m/k} = 1/ω, the inverse of the angular frequency of the oscillator, and so you have

ΔxΔp = 2ε/ω = ħ

Where, as a dimensional analysis check, ε/ω has cgs units of erg-seconds, which is the same as the unit of action. So for some oscillator with angular frequency ω the zero point energy is ε = ħω/2, which is the ZPE result! Then for a host of oscillators with different modes ω_n = nω_0 we sum them up and get the sum_n n/2.
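That little chain of substitutions is easy to verify symbolically. A sketch (an editorial addition, assuming sympy): impose ΔxΔp = ħ on Δp = sqrt{2mε} and Δx = sqrt{2ε/k}, then solve for the minimum energy ε:

```python
import sympy as sp

m, k, hbar = sp.symbols('m k hbar', positive=True)
eps = sp.symbols('epsilon', positive=True)

dp = sp.sqrt(2 * m * eps)   # momentum uncertainty from the energy minimum
dx = sp.sqrt(2 * eps / k)   # position uncertainty from V_min = eps
product = sp.simplify(dp * dx)   # 2*eps*sqrt(m/k) = 2*eps/omega

# Impose the uncertainty relation dx*dp = hbar and solve for eps.
zpe = sp.solve(sp.Eq(product, hbar), eps)[0]

omega = sp.sqrt(k / m)
print(sp.simplify(zpe - hbar * omega / 2))  # 0: the ZPE is hbar*omega/2
```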

Using temperature as a way of isolating a particle works, but there are some physical consequences of this, which are subtle. The Boltzmann factor e^{-E/kT} for a low temperature system will become small as T — > 0, so it reaches a scale comparable to the quantum phase e^{iEt/ħ}. Now to make this work I absorb the i = sqrt{-1} into the time. This is a Euclideanization procedure, which we later undo in something called analytic continuation. We then identify it = τ, where we do the “turn a blind eye” routine to the fact we have this imaginary factor in the Euclidean time τ, and later break this out. We then have the relationship between temperature and the Euclidean time τ = ħ/kT. This time is a sort of fluctuation time from the Heisenberg uncertainty principle, and at this criterion it has a length determined by the thermal fluctuation length of a temperature T. This means there is a disordering of the system on certain quantum scales of length or time which acts as a temperature. As a result there are quantum phase transitions to exotic states, such as superconductivity, which have a quantum critical point to them.

Using temperature as the isolating “box” means you will not only get the Heisenberg uncertainty principle and minimal energy ZPE results, but you also get phase criticality for the onset of exotic behavior in a system, such as condensates, superconductivity and so forth.
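To put a number on that Euclidean time, here is a toy calculation (an editorial sketch; the constants are the standard CODATA/SI values): τ = ħ/kT is the time scale below which quantum fluctuations beat thermal disorder.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s (CODATA)
KB = 1.380649e-23        # Boltzmann constant, J/K (exact in SI)

def thermal_time(T):
    """Thermal fluctuation time tau = hbar / (k_B * T), in seconds."""
    return HBAR / (KB * T)

# Quantum effects dominate on time scales shorter than tau.
print(thermal_time(1.0))    # ~7.6e-12 s at 1 K
print(thermal_time(300.0))  # ~2.5e-14 s at room temperature
```

Near absolute zero the window grows, which is why the exotic quantum-critical behavior mentioned above shows up at low temperature.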

For a future paper to look at we might consider http://arxiv.org/abs/1109.2563. I would not suggest this as the next paper, for this is a tough paper and involves quantum physics — teleportation in fact. It is related (in a deep way) to the paper under consideration, which I actually downloaded last month but had not gotten around to reading. I started reading it last night. The next paper might be better if it involves more straightforward astronomy or astrophysics issues.

LC

The take home message is, which I think lcrowell’s comment is meant to illustrate, that the energy isn’t there to predict uncertainty but that uncertainty predicts the energy. You can calculate the latter based on the former.

Btw, I would not agree that this is out of bounds for UT (not exactly lcrowell’s proposal, I know), not even in the more restricted sense of astronomy. This work has cosmological consequences!

The “1/12” is often enough mentioned, so I followed this excellent exposition. Some minor typos in case anyone else does this:

* sum_n exp(nε) is sum_n exp(-nε).

* – ε^2/3 + e^2/4 in the binomial approximation is – ε^2/3 + ε^2/4 = – ε^2/12.

* (1/ε)(1 – ε/2 + ε^2/12) is (1/ε)(1 – ε/2 – ε^2/12).

Agreed, I dropped a negative sign in the sum_n exp(-nε) and the sign on 1/12. I did this at the keyboard, which has a substantially reduced chance of coming out right. — Thx

LC

You are welcome. I mess up all the time, as you may have noted, so it is helpful to have other eyes looking at the product.

I can’t see the problem. WE don’t exist. It’s all in the mind — which doesn’t exist. I’m reminded of the apocryphal tale of The Theoretical Astronomer in discussion with An Astrophysicist:

“Isn’t it amazing how it all fits together!”

“Of course it all fits together, we imagine it all.

And we reject anything that doesn’t fit!”

Unfortunately you can’t explain away the observable existence of ourselves by declaring it ain’t so.


Maybe our universe is some sort of computer software running on a kind of supra hardware…

Nothing was when there was no memory allocated to our universe :).

Anyway, what issued the command to create a new universe???…

Who would have thought… It was DOS

FORMAT drive: [/Q] [/B | /S]

/Q Performs a quick format (cosmic inflation)

/B Allocates space on the formatted disk for system files (space-time metric)

/S Copies system files to the formatted disk (apply gravitational constant, fine structure constant, speed of light, etc)


A plausible kickoff is a quantum fluctuation. The fluctuation may have occurred on a scale such that it induced a phase change in the degrees of freedom in the vacuum. In a sense the universe might be a sort of quantum fluctuation which is “frozen in place.”

LC

“As far as the laws of mathematics refer to reality, they are not certain, as far as they are certain, they do not refer to reality.” AE

My Teddy Bear does not understand thus — therefore it is irrelevant

Einstein

…didn’t know that one… SMILE

how about this one:

“God does not care about our mathematical difficulties. He integrates empirically.” AE

The 13.7 Billion Year Experiment

http://abstrusegoose.com/

Which after Monday will probably be

http://abstrusegoose.com/424

LC

It has been said here, that the universe just is. It can also be said, that ‘nothing’ just isn’t. Nothing is merely, something else. Personally, I find the notion that ‘nothing’ somehow gave rise to our something, to be one of materialism’s most wishful leaps of faith.

We might think of a universe with positive mass energy in particles, but the gravitational potential is negative. A simple “101” idea is that the positive mass-energy of particles equals the negative mass-energy of gravity fields. So the universe is just nothingness rearranged in some way.

Energy conservation in general relativity is a funny thing. Energy conservation is only defined where there is an isometry, a distance preserving symmetry transformation, that is timelike or future directed. Energy is given by the Hamiltonian operator which generates time translations. If there is a timelike isometry, called a Killing vector field, then there is a constant Hamiltonian which defines energy conservation. There are classes of spacetime solutions determined by eigenvectors of the Weyl curvature. Some of these have timelike isometries, others do not. The class of spacetimes which cosmologies sit in does not have this timelike isometry.

This is one of those odd things about cosmology and relativity. Old chestnut ideas about energy conservation and things not going faster than light are seen to be only local laws. The global structure of the universe is such that violations can happen under certain conditions, such as distant galaxies with z > 1, and hence v > c, which do not pass through our local frame.

LC
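In symbols (an editorial sketch of the standard construction, not LC’s own notation): a Killing vector K satisfies Killing’s equation, and contracting it with the 4-velocity u of a geodesic gives a conserved energy; cosmological (type O) spacetimes simply have no timelike K to contract with.

```latex
% Killing's equation (the isometry condition):
\nabla_{\mu} K_{\nu} + \nabla_{\nu} K_{\mu} = 0
% Conserved energy along a geodesic with 4-velocity u^\mu:
E = -K_{\mu} u^{\mu}, \qquad
\frac{dE}{d\tau} = -u^{\nu} \nabla_{\nu}\left( K_{\mu} u^{\mu} \right)
                 = -u^{\mu} u^{\nu} \nabla_{\nu} K_{\mu} = 0
% (the last step uses the geodesic equation and the antisymmetry
%  of \nabla_{\nu} K_{\mu} that follows from Killing's equation)
```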

You know what, this is boring. You don’t make an effort to understand the science but repeat inanely what creationists troll on all science blogs.

For example, the claim that science, which observably works, owes its success to “leaps of faith”. It is your idea of creationism that is a leap of faith, as Faye Flam notes:

“Some people argue that scientists have faith in the process of science, but this type of faith is not a religious leap but a logical extension of our experience. The scientific method has worked in the past many times. Therefore it’s quite rational to think it will continue to work in the future.”

Maybe the universe just is, maybe there was an initial boundary of sorts. Both works, and we are trying to find out. The alternative is to claim that personal incredulity is the better option than to actually know.

As for nothing, read the post and the paper. One useful definition of nothing is that it is “a bubble of nothing (no topology, no geometry)” as a limit of next to nothing. This is no different from other limits of derivatives or distributions.

The latter case is most illustrative here, since the distribution limit doesn’t need to belong to the supported set of test functions that regularize the problem.

In non-math speak, we can not only do tests on bubbles of nothing, we can “prove” some things on nothing. Creationists like that, even if there is nothing [sic!] of empirical science in it.

Whenever I read or hear somebody talk about materialism this way, I pretty clearly get the sense this is some appeal to a theological basis for the origin of the world. Of course from a scientific perspective such considerations are not within proper bounds. The origin of the universe is something which we can actually talk about from the basis of physics. The subject is of course wide open as yet, but compared to 50 years ago, when the subject was a complete unknown, we are far more capable of working on these problems. The problem has not been “solved” theoretically, and there is certainly no empirical evidence to support any particular answer. Currently we have a preponderance of evidence for the big bang, and anisotropy data supports basic aspects of inflation. Inflation occurred before the reheating of the universe, which is the thermal “bang.” Prior to that things are more speculative, but we can work on questions of whether the universe emerged from quantum tunneling, Dp-brane interactions and so forth. We don’t know the answer yet, but unlike decades past we can now actually talk about these issues in a serious way.

The relationship between science and religion has been fractious. The problem is that science has falsified certain ideas about God, though not necessarily God itself. Hutton and Lyell demonstrated an ancient age to the Earth, and then Charles Darwin demonstrated how living species are related to each other, all of which is contrary to a literal reading of Be’reysheet, aka Genesis. Cosmology is developing to a stage where we will push out the “God said let there be light” bit. This does not necessarily demolish God, for the “light” can be thought of as conscious awareness — God thought the universe. I don’t particularly have big problems with that sort of idea. Further, the creation story is largely about the idea that the universe is ordered. In Judaism an important concept is separation (Kodesh), and the Genesis creation story is a narrative about how the world is ordered into dualities, light separated from dark, dry land from sea and so forth. So there is a certain meaning to the idea, but it is not literally correct. Of course with Christianity there is a big concern over evolution, for this eliminates a literal Adam and Eve and thus no fall or original sin. This in turn damages the scriptural purpose of Jesus Christ, who is believed to forgive sins and so forth. Unfortunately, I am not a theologian, so I can’t give ideas about this.

I am actually rather knowledgeable of religion, I have read the Bible through a couple of times, and I can read, or better stumble through, the Torah in Hebrew, and my mixed up background involved Catholicism as well. I actually think it is important for people to know this and the Bible, for otherwise we leave it all to the most fundamentalist types who can claim “ownership” of it. If you read the Bible as mytho-poetic literature it is rather interesting, and you can talk to proselytizers on their “turf,” which can be interesting.

LC

There are two methods to address a problem (any kind of problem).

1. Investigate and learn more, and possibly eventually understand it.

2. Ignore it, or claim with utmost certainty the answer is something untestable.

I prefer method 1.

I still haven’t read this, but I find it interesting. My initial impression was that it is based on thermodynamics, so it is hard to get around. I am surprised they suggest that extra dimensions can make a loophole.

If quantum tunneling out of nothing is forbidden, this should strengthen the likelihood of eternal alternatives such as eternal inflation.

Aguirre has a loophole around the need to blueshift into a singularity as you follow the expansion back in time, analogous to the redshift when you follow the expansion forward in time. Suggested by Tegmark in a recent paper; I haven’t read Aguirre’s work either, but it was strong enough to make Tegmark pause, if not to take it on board as likely.

Also, Linde claims, correctly I think, that you can’t equate a local maximum on a set of pathways with a global maximum on all sets. Here the pathways you follow are semiclassical worldlines of particles, and hence you can measure the energy of photons that cross them, i.e. red- and blueshift. If eternal inflation expands worldlines into exponentially expanding sets of worldlines, you can always find an older worldline somewhere that initiated your local set.

Now you don’t usually want your processes to have no initial boundaries. But surely we can allow it if it predicts the correct physics.

Another problem is that it is an unlikely choice of volume in your phase space to find your system near the stationary point.

But we have the anthropic principle voiding such likelihood concerns.

And in this case the exponential stretching and folding property of eternal inflation makes it look like deterministic chaos. Famously, you locally quickly lose track of your initial conditions, and globally the system has no memory due to the chaotic properties. So we could generally never hope to observe any presumed initial boundary.

The problem then is more that we can’t test eternal inflation on this. We need to test it on its predictions on cosmological constant and other stuff.

Now we know what LC has been talking about all the time. About nothing. 😀

The next question is, what is that thing that allows nothing, nothing/something and something. Jeeesus Christ. 😀

It may be that a complete understanding of how the universe arose will forever be beyond us. There is no particular reason why our small lump of grey matter should be capable of such a deep understanding; it offers no evolutionary advantage. However, it is remarkable that our present models of the universe have enabled us to develop such very capable technologies.

It would be interesting to have an article or podcast on this topic. Can a human mind ever be capable of understanding the origin of the universe? Do our scientists truly understand the physics they have so far revealed, or do they only understand via the logic of mathematics? What is ‘understanding’? What do scientists think about these questions?

I am fairly sure that you wouldn’t claim that theoretical physics is based on the logic of mathematics, since you can’t axiomatize much. The logic of algorithms perhaps, but then we are halfway to computer science and measurable computing resources.

In the end the basis is observation, theory and testing on both. Because if you can’t tell what is wrong you can’t tell what is eventually right by elimination, and testing gives us that.

That doesn’t tell you what we would be able to do, but the observable record is quite fantastic so far. If we can summarize cosmology in a fairly simple model, why wouldn’t we be able to summarize its initial conditions in a similar manner? Clearly there is no hindrance in principle, or we get reasoning analogous to creationists’ unseen barriers against speciation.

So high hopes. That and $1 will buy you coffee at Starbucks. =D

Just don’t claim that where biologists can see it. This is from a review of a discussion between a biologically knowledgeable philosopher and a religious creationist reasoning the same as above,* but the reviewing evolutionary biologist is a specialist on speciation and has written Why Evolution Is True, as you can see from the website:

“Dennett responds correctly: yes, humans are subject to deception by illusions, but on the whole our species, and others, have evolved to have senses that detect what is true about the world, for we couldn’t survive if we just stood our ground as a big predator ran towards us and thought, “Well, that might just be an illusion.”

And that goes for every other species that needs to find food, secure mates, or escape predators: in other words, all species. Animals, by and large, are truth-apprehending organisms (though they can get fooled by things like mimicry), and our own species is also a truth-seeking organism. Further, our ability to actually find truth is shown by the fact that science can make predictions and calculations that are supported: we find microbes that cause disease and antibiotics that kill them, we can predict the structure of a protein from the genetic code, and we can accurately predict when the next solar eclipse will occur.”

——————-

* That is all I could remember at the moment. I guess the context here blanked me on earlier similar descriptions from biologists.

I’m chiming in somewhat late – but better late than never. To quote lcrowell:

“We might think of a universe with positive mass energy in particles, but the gravitational potential is negative. A simple “101” idea is that the positive mass-energy of particles equals the negative mass-energy of gravity fields. So the universe is just nothingness rearranged in some way.”

In other words, “nothing” has always existed (in various forms, which we interpret as “something”) and the sum of existence cancels to zero. This is conceptually more appealing than the alternative scenario, in which “something” has always existed and “nothing” simply has not. Either way, it seems clear that you can’t cross the asymptote from nothing to something… so take your pick.

We can invoke extra dimensions (in which case it becomes “turtles all the way down”), change the question altogether (cheat as in the Kobayashi Maru), or – as I prefer – refine our definition of nothing.

Energy and energy conservation are funny with general relativity. Mass-energy is a local invariant, but globally for spacetimes energy is only conserved if there is an isometry of the particular spacetime that preserves distances. Energy is the generator of time displacements, just as momentum is the generator of distance displacements. Certain spacetimes have this isometry, which is defined according to a Killing vector (Killing being the name of a mathematician, nothing to do with homicide), and such spacetimes are type D and type N, which correspond to black holes and gravity waves. Type O solutions correspond to cosmologies, and these do not have Killing vectors which define energy conservation.

This is something which runs counter to the physics that most people learn. We have this idea that energy, or mass-energy, conservation is an absolute bedrock. Many PhDs in areas of physics not connected to relativity physics are surprised to learn this. Of course this filters further down to people who may only have had their Sears-Zemansky freshman level physics course in their engineering program at college. Yet the foundations of physics and cosmology turn out to be more surprising than ordinarily expected.

LC

We are on the same “wavelength” here: The superluminal expansion of global space-time, and creation of vacuum energy, is probably the best example of non-local, non-conservation of mass-energy.

I also like your description of “energy as the generator of time displacements.” This, to my thinking, results in “the arrow of time.” Though the flow of events is symmetrical with respect to local spacetime, in reality entropy is not: it increases. This is because energy naturally flows toward the ground state, as described by the 2nd law of TD, and not “uphill.” In other words, the flow of energy tends toward zero, which points to “nothing” as the ground state for cosmic inception.

However, to be clear, I have begun thinking of spacetime itself as static with respect to the flow of energy. Spacetime doesn’t flow: it is the multi-dimensional manifold that enables the flow of energy or “change of state.” In this scenario, future and past are human interpretations of the flow of energy toward the ground state in the “ever-present present.” In other words, just as every point in space is the point of origin (of the big bang), the present moment is the very same present moment in which the big bang occurred: We, and everything we observe, are the increased state of entropy of the big bang. Does this make any sense?


The expansion of the universe does in a sense generate energy. The vacuum energy density ρ in some unit of volume V has energy E = ρV. Then as this unit of volume increases, V → V + ΔV, the energy in that expanded volume changes as well, E → E + ΔE, for ΔE = ρΔV. The de Sitter spacetime configuration of the universe, which is admittedly an approximation, is such that the pressure term is p = -ρ, with the so-called equation of state w = -1 in p = wρ. There is an overall factor of c in this, which I am setting to c = 1. The pressure is negative, and the energy associated with it is E = pV, which in some sense counters the growth of energy due to the expansion of volume with a constant energy density. We might think of the pressure as doing negative work that removes the energy created.
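As a sanity check on the bookkeeping above, here is a short numerical sketch; the density and volume values are arbitrary, chosen purely for illustration:

```python
# Toy first-law bookkeeping for vacuum energy in an expanding volume.
# rho is a constant vacuum energy density (arbitrary units, c = 1).
rho = 1.0

V = 2.0          # initial volume
dV = 0.5         # increase in that volume from expansion

# Energy "created" by expansion at constant density: dE = rho * dV.
dE_created = rho * dV

# Work done by the negative pressure p = -rho (equation of state
# w = -1) as the volume grows: dE = p * dV = -rho * dV, which
# exactly cancels the created energy in this naive accounting.
p = -rho
dE_pressure = p * dV

print(dE_created, dE_pressure, dE_created + dE_pressure)  # 0.5 -0.5 0.0
```

The exact cancellation is what makes the argument look tidy; the following comment explains why it is nonetheless not entirely solid.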

Now that I state this, it is clear there is something rather odd about the argument. If the de Sitter spacetime has an R^3 spatial section, the volume is infinite. So even if every local distance in the space expands by an exponential time-dependent factor, the total energy generated is undefined. The pressure term also suffers from the fact that pressure only does work if it changes across some distance or region, as in a gradient, and here the pressure is an absolute constant. So the space is infinite, and there is a strange sense in which we are using the pressure to “remove energy” with “negative work.” The argument is not entirely solid.

The simple fact is that energy conservation is not entirely defined in general relativity. It is only defined as a local law; the global principles of spacetime dynamics do not extend it to a global law. Only spacetimes with a global timelike isometry (a distance-preserving symmetry) define a generator of time translations whose constant value we call energy. This is difficult for many people to accept, particularly after years of science courses up through the college level which pound in the idea of matter and energy conservation.

Entropy is defined on event horizons, and the cosmological event horizon of our universe gives the upper bound to entropy. Currently the entropy of the universe is lower, but as the space expands it hurls mass-energy beyond the horizon, eventually leaving behind a void, and the entropy approaches this limit. It will take about 10^{100} years for that to happen, due to black hole quantum decay. Beyond this time the horizon itself will decay over a stupendously long period; this decay is a form of the Hawking decay of black holes, called Gibbons–Hawking radiation. As the horizon decays it recedes to infinity, and the entropy, which scales with the horizon area, increases to infinity. This again leads to a funny problem of defining energy in general relativity and cosmology.
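The entropy bound mentioned above can be estimated from the Bekenstein–Hawking formula S = A / (4 l_p²) applied to a de Sitter horizon of radius c/H. The sketch below is an order-of-magnitude illustration; the Hubble rate used is a rough assumed present-day value, not a figure from the comment:

```python
import math

# Order-of-magnitude entropy (in units of k_B) of the cosmological
# event horizon, S = A / (4 * l_p^2), for a de Sitter horizon of
# radius R = c / H.
c = 2.998e8      # speed of light, m/s
H0 = 2.2e-18     # rough present-day Hubble rate, 1/s (assumed value)
l_p = 1.616e-35  # Planck length, m

R = c / H0                    # horizon radius, ~1.4e26 m
A = 4.0 * math.pi * R**2      # horizon area, m^2
S = A / (4.0 * l_p**2)        # Bekenstein-Hawking entropy / k_B

print(f"S ~ {S:.1e} k_B")     # on the order of 1e122
```

That ~10^122 figure is the familiar upper bound on the entropy of the observable universe; as the comment notes, the current entropy sits well below it.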

LC