According to Wikipedia, a journal club is a group of individuals who meet regularly to critically evaluate recent articles in the scientific literature. And of course, the first rule of Journal Club is… don’t talk about Journal Club.
So, without further ado – today’s journal article is about how new data are limiting the theoretical options available to explain the observed accelerating expansion of the universe.
Zhang et al., Testing modified gravity models with recent cosmological observations.
Theorists can develop some pretty ‘out there’ ideas when working from limited data sets. But with the advent of new technologies, new ways to measure things, or even new things to measure, new data become available that constrain the capacity of various theories to explain what we have measured.
Mind you, when new data conflict with theory, the first question should always be whether the theory is wrong or the data are wrong – and it may take some time to decide which. A case in point is the Gran Sasso faster-than-light neutrino data. This finding conflicts with a range of well-established theories that explain a mountain of other data very well. But to confirm that the neutrino data are wrong, we will need to reproduce the test – perhaps with different equipment under different conditions. This should establish an appropriate level of confidence that the data really are wrong – or otherwise that we need to revise the entire theoretical framework of modern physics.
Zhang et al. apply this sort of evidence-based thinking, using Bayesian and Akaike statistics to test whether the latest available data on the expansion of the universe alter the likelihood of existing theories being able to explain that expansion.
These latest available data include:
- the SNLS3 SN1a data set (of 472 Type Ia supernovae);
- the Wilkinson Microwave Anisotropy Probe (WMAP) 7 year observations;
- baryon acoustic oscillation results from the Sloan Digital Sky Survey Data Release 7; and
- the latest Hubble constant measures from the Wide Field Camera 3 on the Hubble Space Telescope.
The authors run a type of chi-squared analysis to see how the standard Lambda Cold Dark Matter (Lambda CDM) model and a total of five different modified gravity (MG) models fit against both the earlier data and now this latest data. Or in their words, ‘we constrain the parameter space of these MG models and compare them with the Lambda CDM model’.
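To get a feel for what this kind of model comparison involves, here is a minimal sketch in Python. It is not the authors’ analysis – the paper fits real supernova, CMB and BAO data – but it shows the basic Akaike Information Criterion (AIC = minimum chi-squared + 2k, where k counts free parameters) applied to two toy models fitted to made-up data.

```python
# Toy illustration of chi-squared + AIC model comparison (not the
# paper's actual pipeline; the data below are fabricated for the demo).
import numpy as np

rng = np.random.default_rng(42)

# Mock observations: a linear trend with Gaussian errors
x = np.linspace(0.1, 1.0, 20)
sigma = 0.1
y = 2.0 * x + 0.5 + rng.normal(0, sigma, x.size)

def chi2(model_y):
    """Chi-squared of a model prediction against the mock data."""
    return np.sum(((y - model_y) / sigma) ** 2)

# Model A: straight line, k = 2 free parameters (least-squares fit)
coeffs = np.polyfit(x, y, 1)
chi2_a = chi2(np.polyval(coeffs, x))
aic_a = chi2_a + 2 * 2

# Model B: constant, k = 1 free parameter (best fit is the mean)
chi2_b = chi2(np.full_like(x, y.mean()))
aic_b = chi2_b + 2 * 1

# Lower AIC is preferred; a large AIC difference means the worse
# model is 'strongly disfavored', as Zhang et al. put it.
delta_aic = aic_b - aic_a
print(f"AIC(line) = {aic_a:.1f}, AIC(constant) = {aic_b:.1f}, dAIC = {delta_aic:.1f}")
```

The extra `2k` term is the point: a model with more free parameters must improve the fit by enough to pay for its added complexity, which is exactly the test the MG models face against Lambda CDM.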
It turns out that the latest data best fit the Lambda CDM model, fit most MG models less well, and leave at least one of the MG models ‘strongly disfavored’.
They caveat their findings by noting that this analysis only indicates how things stand currently, and that yet more new data may change the picture again.
And not surprisingly, the paper concludes by determining that what we really need is more new data. Amen to that.
So… comments? Are Bayesian statistics just a fad or a genuinely smarter way to test a hypothesis? Are the first two paragraphs of the paper’s introduction confusing – since Lambda is traditionally placed on ‘the left side of the Einstein equation’? Does anyone feel constrained to suggest an article for the next edition of Journal Club?