CosmoChord
==========

:CosmoChord: PolyChord + CosmoMC for cosmological parameter estimation and evidence calculation
:Author: Will Handley
:Forked from: https://github.com/cmbant/CosmoMC
:Homepage: http://polychord.co.uk

.. image:: https://travis-ci.org/williamjameshandley/CosmoChord.svg?branch=master

Description and installation
----------------------------

CosmoChord is a fork of CosmoMC that adds nested sampling, provided by PolyChord, for parameter estimation and Bayesian evidence calculation.

Installation procedure:

.. code:: bash

   git clone https://github.com/williamjameshandley/CosmoChord
   cd CosmoChord
   make
   export OMP_NUM_THREADS=1
   ./cosmomc test.ini

To run with PolyChord, add ``action = 5`` to your ini file and include ``batch3/polychord.ini``. ``test.ini`` is a good starting point to modify.
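As a rough sketch (assuming the standard CosmoMC ``DEFAULT(...)`` include mechanism; the file name ``my_polychord_run.ini`` is hypothetical, not part of the repository):

.. code:: bash

   # Illustrative only: write a small ini file that enables PolyChord,
   # reusing test.ini for the rest of the settings.
   cat > my_polychord_run.ini <<EOF
   action = 5
   DEFAULT(batch3/polychord.ini)
   DEFAULT(test.ini)
   EOF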

Changes
-------

You can see the key changes by running:

.. code:: bash

   git remote add upstream https://github.com/cmbant/CosmoMC
   git fetch upstream
   git diff --stat upstream/master
   git diff  upstream/master source
   git diff  upstream/master camb


The changes to CosmoMC are minor:

- Nested sampling samples the tails of the posterior heavily, so additional corrections are needed for regions that the default Metropolis-Hastings sampler typically leaves unexplored.
- Do not use OpenMP parallelisation, as it is inefficient with PolyChord. Use pure MPI parallelisation instead; you may use as many cores as you have live points (a minimal run command is sketched below).
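As a minimal sketch of such a run (assuming your ini file has been set up to enable PolyChord as described above; the launcher name and process count depend on your system):

.. code:: bash

   # One OpenMP thread per process; all parallelism comes from MPI.
   export OMP_NUM_THREADS=1
   # Use at most as many MPI processes as PolyChord live points.
   mpirun -np 16 ./cosmomc test.ini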