

pmesh: Particle Mesh in Python
==============================

The pmesh package lays out the foundation of a parallel Fourier transform particle mesh solver in Python.

To cite pmesh, use


This README file is minimal; we shall expand it.

Reference Manual
----------------

Refer to for a full API reference and installation guide.

We recommend working with Anaconda's Python distribution. pmesh is available via the BCCP conda channel for Anaconda. Installing from source requires building pfft from source, which may take a while to compile.


pmesh includes a few software components for building particle mesh simulations with Python. It consists of:

  • pmesh.domain : a cuboid domain decomposition scheme in n dimensions.
  • : a Particle Mesh solver engine, with real-to-complex and complex-to-real transforms, transfer functions on real and complex fields, and particle-mesh conversion operations (paint and readout). To interface with higher-level differentiable modelling packages (e.g. abopt [3]), the back-propagation gradient operators are also implemented.
  • pmesh.window : a variety of resampling windows for converting data between particle and mesh representations: polynomial windows up to cubic order (Cloud-In-Cell is the same as the linear window); Lanczos windows of order 2 and 3; and a few wavelet-motivated windows (ref needed) that preserve the power spectrum to high frequency.
  • pmesh.whitenoise : a resolution-invariant white noise generator for 2-d and 3-d fields.
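
The paint and readout operations above can be sketched in plain numpy. The following is a minimal serial 1-d Cloud-In-Cell (linear window) illustration of the idea, not pmesh's actual implementation, which operates on distributed multi-dimensional meshes:

```python
import numpy as np

def cic_paint(pos, nmesh):
    """Deposit unit-mass particles onto a periodic 1-d mesh with the
    linear (Cloud-In-Cell) window: each particle's mass is split
    between its two nearest grid points."""
    mesh = np.zeros(nmesh)
    left = np.floor(pos).astype(int)
    frac = pos - left
    # np.add.at accumulates correctly even when indices repeat
    np.add.at(mesh, left % nmesh, 1 - frac)
    np.add.at(mesh, (left + 1) % nmesh, frac)
    return mesh

def cic_readout(mesh, pos):
    """Interpolate mesh values back to particle positions using the
    same linear window (the adjoint of cic_paint)."""
    nmesh = len(mesh)
    left = np.floor(pos).astype(int)
    frac = pos - left
    return mesh[left % nmesh] * (1 - frac) + mesh[(left + 1) % nmesh] * frac

pos = np.array([0.5, 3.25, 7.9])   # positions in grid units
mesh = cic_paint(pos, 8)
values = cic_readout(mesh, pos)
```

Painting conserves total mass (`mesh.sum()` equals the particle count), which is the property the higher-order windows in pmesh.window also maintain.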

The FFT backend is PFFT [5], provided by the pfft-python binding [4]. We use MPI to provide parallelism (inherited from PFFT).
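
To illustrate the transfer-function step of the pipeline, the sketch below applies a Gaussian low-pass filter to a periodic 2-d field via a real-to-complex round trip. It uses serial numpy.fft as a stand-in for PFFT; the function name and smoothing scale are hypothetical:

```python
import numpy as np

def smooth(field, r):
    """Apply a Gaussian low-pass transfer function exp(-k^2 r^2 / 2)
    to a periodic field by multiplying in Fourier space."""
    k = [np.fft.fftfreq(n) * 2 * np.pi for n in field.shape]
    # rfftn stores only half of the last axis
    k[-1] = k[-1][:field.shape[-1] // 2 + 1]
    k2 = sum(ki.reshape([-1 if i == j else 1 for j in range(field.ndim)]) ** 2
             for i, ki in enumerate(k))
    c = np.fft.rfftn(field) * np.exp(-0.5 * k2 * r ** 2)
    return np.fft.irfftn(c, s=field.shape)

rng = np.random.default_rng(42)
field = rng.standard_normal((16, 16))
smoothed = smooth(field, r=2.0)
```

Smoothing leaves the k = 0 mode (the mean) untouched and suppresses the high-frequency modes, reducing the variance of the field.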

Downstream products that use pmesh include nbodykit [1] and fastpm-python [2].

If there are issues starting up a large MPI job, consult