diff --git a/docs/abacus.rst b/docs/abacus.rst index c9da14cb..64aa5637 100644 --- a/docs/abacus.rst +++ b/docs/abacus.rst @@ -1,8 +1,6 @@ Abacus ======= -Here we briefly describe the Abacus N-body code. - What is Abacus? --------------- @@ -12,7 +10,7 @@ clustered simulations. It is extremely fast: we clock over 30 million particle updates per second on commodity dual-Xeon, dual-GPU computers and nearly 70 million particle updates per second on each node of the Summit supercomputer. But it is also extremely accurate: -typical force accuracy is below 1e-5 and we are using global +typical force accuracy is below :math:`10^{-5}` and we are using global timesteps, so the leapfrog timesteps away from the cluster cores are much smaller than the dynamical time. diff --git a/docs/abacussummit.rst b/docs/abacussummit.rst index bfa3bdca..645b5feb 100644 --- a/docs/abacussummit.rst +++ b/docs/abacussummit.rst @@ -9,24 +9,26 @@ was run on the `Summit `_ supercomputer at th Computing Facility under a time allocation from the Department of Energy's ALCC program. Most of the simulations in AbacusSummit are 6912\ :sup:`3` = 330 billion -particles in 2 Gpc/h volume, yielding a particle mass of about 2e9 Msun/h. +particles in 2 Gpc/*h* volume, yielding a particle mass of about :math:`2\times 10^9\ \mathrm{M}_\odot/h`. -AbacusSummit consists of over 140 of these simulations, plus other smaller simulations, -totaling about 50 trillion +AbacusSummit consists of over 140 of these simulations, plus other larger and smaller simulations, +totaling about 60 trillion particles. Detailed specifications of the :doc:`simulations` and :doc:`cosmologies` are available on other pages. Key portions of the suite are: -* A primary Planck2018 LCDM cosmology with 25 base simulations (330 billion particles in 2 Gpc/h). +* A primary Planck2018 LCDM cosmology with 25 base simulations (330 billion particles in 2 Gpc/*h*). * Four secondary cosmologies with 6 base simulations, phase matched to the first 6 of the primary boxes. -* A grid of 79 other cosmologies, each with 1 phase-matched base simulation, to support interpolation in an 8-dimensional parameter space, including w0, wa, Neff, and running of the spectral index. +* A grid of 79 other cosmologies, each with 1 phase-matched base simulation, to support interpolation in an 8-dimensional parameter space, including :math:`w_0`, :math:`w_a`, :math:`N_\mathrm{eff}`, and running of the spectral index. + +* A suite of 1800 small boxes at the base mass resolution to support covariance estimation * Other base simulations to match the cosmology of external flagship simulations and to explore the effects of our neutrino approximation. -* A 6x higher mass resolution simulation of the primary cosmology to allow study of group finding, and a large-volume 27x lower mass resolution simulation of the primary cosmology to provide full-sky light cone to z>2. +* A 6x higher mass resolution simulation of the primary cosmology to allow study of group finding, and a large-volume 27x lower mass resolution simulation of the primary cosmology to provide full-sky light cone to *z*>2. * Specialty simulations including those with fixed-amplitude white noise and scale-free simulations. diff --git a/docs/compaso.rst b/docs/compaso.rst index b887d14b..7791c86f 100644 --- a/docs/compaso.rst +++ b/docs/compaso.rst @@ -1,14 +1,14 @@ -The CompaSO Halo Finder -======================= +CompaSO Halo Finder +=================== All group finding in AbacusSummit is done on the fly. 
We are using -a hybrid algorithm, summarized as follows. +a hybrid FoF-SO algorithm, dubbed CompaSO, summarized as follows. First, we compute a kernel density estimate around all particles. -This uses a weighting (1-r2/b2), where b is 0.4 of the interparticle +This uses a weighting :math:`(1-r^2/b^2)`, where :math:`b` is 0.4 of the interparticle spacing. We note that the effective volume of this kernel is -equivalent to a top-hat of 0.737b, so 85 kpc/h comoving, and that -the mean weighted counts at an overdensity delta is about delta/10 +equivalent to a top-hat of :math:`0.737b`, so 85 kpc/*h* comoving, and that +the mean weighted counts at an overdensity :math:`\delta` is about :math:`\delta/10` with a variance of 4/7 of the mean. Second, we segment the particle set into what we call L0 halos. @@ -20,14 +20,13 @@ the bounds of the L0 halo set be set by the kernel density estimate, which has lower variance than the nearest neighbor method of FOF and imposes a physical smoothing scale. -.. note:: The terms *groups* and *halos* have specific meanings in Abacus. - Groups are clusters of particles at any level of group finding - (L0/L1/L2). Halos are L1 groups (although sometimes we do use - "halos" to refer to another level, in which case we say *L0 halos* - or *L2 halos*). +.. note:: In Abacus, L0 groups are large, "fluffy" sets of particles + that typically encompass several L1 groups. L1 groups correspond + to classical "halos". L2 groups correspond to "halo cores" + or perhaps "subhalos". We stress that all L1/L2 finding and all halo statistics are based -solely on the particles in the L0 halo. +solely on the particles in the L0 halo. Third, within each L0 halo, we construct L1 halos by a competitive spherical overdensity algorithm. We begin by selecting the particle @@ -47,7 +46,7 @@ we start another nucleus. With each successive nucleus, we again search for the SO(200) radius, using all L0 particles. Now a particle is assigned to the new group -if is previously unassigned OR if it is estimated to have an enclosed +if it is previously unassigned *or* if it is estimated to have an enclosed density with respect to the new group that is twice that of the enclosed density with respect to its assigned group. In detail, these enclosed densities are not computed exactly, but rather scaled diff --git a/docs/conf.py b/docs/conf.py index abc93b25..9ec82d77 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -56,4 +56,6 @@ html_favicon = 'images/icon_red.png' def setup(app): - app.add_css_file('custom.css') \ No newline at end of file + app.add_css_file('custom.css') + +intersphinx_mapping = {'abacusutils': ('https://abacusutils.readthedocs.io/en/latest', None)} diff --git a/docs/cosmologies.rst b/docs/cosmologies.rst index 8c86a565..706c8c50 100644 --- a/docs/cosmologies.rst +++ b/docs/cosmologies.rst @@ -1,6 +1,9 @@ Cosmologies =========== +Cosmology Specifications +------------------------ + This page describes the specification of the Cosmologies and the CLASS parameters that they define. The CLASS parameter files and resulting power spectra and transfer functions are available in the `AbacusSummit/Cosmologies `_ @@ -28,7 +31,7 @@ Further details are below the table. ------- -All cosmologies use tau=0.0544. Most use 60 meV neutrinos, omega_nu = 0.00064420, scaling from z=1. +All cosmologies use tau=0.0544. Most use 60 meV neutrinos, omega_nu = 0.00064420, scaling from *z* = 1. We use HyRec, rather than RecFast. 
CLASS is run with the pk_ref.pre precision choices, unless the name ends with \_fast, in which case we use the defaults. @@ -37,18 +40,19 @@ for this. Remember that Omega_m = (omega_b+omega_cdm+oemga_ncdm)/h^2. -We output five redshifts from CLASS, z=0.0, 1.0, 3.0, 7.0, and 49, which are called z1,z2,z3,z4,z5. +We output five redshifts from CLASS, *z* = 0.0, 1.0, 3.0, 7.0, and 49, which are called z1,z2,z3,z4,z5. -We use the CDM+Baryon power spectrum at z=1 (z2_pk_cb) and scale back by D(z_init)/D(1) -to define our matter-dominated CDM-only simulation IC. The growth function includes the +We use the CDM+Baryon power spectrum at *z* = 1 (z2_pk_cb) and scale back by the ratio of growth +factors :math:`D(z_\mathrm{init})/D(1)` to define our matter-dominated CDM-only simulation IC. The growth function includes the neutrinos as a smooth component. -.. TODO: better way to link this CSV file? +Cosmologies Table +----------------- Download the cosmologies table `here `_. However, in analysis applications, users are encouraged to use the cosmological parameters stored as in the ``header`` field -of the ASDF data product files (which is loaded into the ``meta`` field of Astropy tables) rather than referencing the -cosmologies table. +of the ASDF data product files (which is loaded into the ``meta`` field of Astropy tables, or the ``header`` field of +``CompaSOHaloCatalog`` objects) rather than referencing the cosmologies table. .. note:: The following table is wide, you may have to scroll to the right to see all the columns. @@ -58,7 +62,9 @@ cosmologies table. :header-rows: 1 :escape: ' -Further details about the cosmology choices: + +Additional Details +------------------ Beyond the Planck2018 LCDM primary cosmology, we chose 4 other secondary cosmologies. One was WMAP7, to have a large change in omega_m, H0, and sigma8. diff --git a/docs/data-access.rst b/docs/data-access.rst index 3ee16fe7..400c5e3c 100644 --- a/docs/data-access.rst +++ b/docs/data-access.rst @@ -24,7 +24,7 @@ What data are available? ------------------------ The :doc:`data-products` page documents the data products. In some cases, extra data products may be archived on tape and can be made available upon request. -Please email lgarrison@flatironinstitute.org for details. +Please email deisenstein@cfa.harvard.edu, lgarrison@flatironinstitute.org, and nina.maksimova@cfa.harvard.edu for details. Note that you will almost certainly need to use the utilities at https://abacusutils.readthedocs.io/en/latest/index.html diff --git a/docs/data-products.rst b/docs/data-products.rst index e6d26ce3..9aab6889 100644 --- a/docs/data-products.rst +++ b/docs/data-products.rst @@ -22,23 +22,33 @@ The key data products are: 3. A **light cone** stretching from the corner of the box and including a single second periodic copy of the box. This provides an octant of sky - to z=0.8 and about 800 sq deg to z=2.4. The outputs are the subsample + to z=0.8 and about 800 sq deg to *z*\=2.4. The outputs are the subsample of particles, as well as the Nside=16384 healpix pixel number for all particles. 4. **Full particle catalogs** for a few timeslices of a few boxes. -We perform group finding at 12 primary redshifts and 24 secondary -redshifts. The primary set is z=0.1, 0.2, 0.3, 0.4, 0.5, 0.8, 1.1, 1.4, -1.7, 2.0, 2.5, and 3.0, and this will be where most users should focus. +We perform group finding at 12 primary redshifts and 21 secondary +redshifts. Most users should focus on the primary redshifts. 
+ +- **Primary redshifts**: *z*\=0.1, 0.2, 0.3, 0.4, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0, 2.5, 3.0 + +- **Primary + Secondary redshifts**: *z*\=0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.575, 0.65, 0.725, 0.8, 0.875, 0.95, 1.025, 1.1, 1.175, 1.25, 1.325, 1.4, 1.475, 1.55, 1.625, 1.7, 1.85, 2.0, 2.25, 2.5, 2.75, 3.0, 5.0, 8.0 + +.. note :: + The 21 "secondary" redshifts are only approximate and may not match + from simulation to simulation. We do not shorten the Abacus + timestep to land exactly on these secondary redshifts as we do + for the primary redshifts. Always use the redshift in the header, + not the directory name. At each primary redshift, we output the properties computed for the L1 -halos in ``halo_info`` files. The list of properties is below. +halos in ``halo_info`` files (see :doc:`compaso`). The list of properties is below. We also output a subsample of the particle, split into 3% and 7% sets -(so 10% total), called A and B, so that users can minimize their data +(so 10% total), called "A" and "B", so that users can minimize their data access depending on application. These subsamples are consistent across -redshift and are selected based on a hash of the particle id number, so +redshift and are selected based on a hash of the particle ID number, so effectively randomly. The particles are further split into files based on whether the particle @@ -49,7 +59,7 @@ fetch a random subsample of particles from the L1 halo; we use these for satellite galaxy positions. Particle positions and velocities are output in one file. A separate -file contains the unique particle id number, which is easily parsed as +file contains the unique particle ID number, which is easily parsed as the initial grid position, as well as the kernel density estimate and a sticky bit that is set if the particle has ever been in the most massive L2 halo of a L1 halo with more than 35 particles. @@ -70,6 +80,12 @@ words, if a halo has particles from slabs C..D, it will be in the file for slab C. So one cannot simply take the halo and field files of matching slab range and get the union of all particles. +.. note :: + Most applications will need to load at least one padding file + on either side of the file under consideration in order to ensure + all halos and particles within a compact *x* range are present in + memory. + For the 21 secondary redshifts, we output the halo catalogs and the halo subsample particle IDs (w/densities and sticky L2 tag) only, so not the positions/velocities nor the field particles. @@ -85,44 +101,48 @@ galaxy, then go find that PID in the light cone files. Data Model ---------- -Here we describe the data model of the AbacusSummit data products. +All files are in `ASDF format `_. Most of the files +have only one binary block. The only exceptions are the ``halo_info`` files, +where every column of the table is a separate block. This allows the user +to load only the columns needed for an analysis. -All files are in ASDF format. Most of the files have only one binary -block. The only exception are the ``halo_info`` files, where every -column of the table is a separate block. This allows the user to load -only the columns needed for an analysis. +.. note :: + Loading a narrow subset of halo catalog columns can save substantial + time and memory. Within abacusutils, use the ``fields`` argument to + the :doc:`CompaSOHaloCatalog constructor ` to achieve this. Within ASDF, we apply compression to each binary block. 
We do this via -the ``blosc`` package, using bit/byte transposition followed by ``zstd`` -compression. We have found that transposition gives substantially better +the `Blosc package `_, using +bit/byte transposition followed by `zstd compression `_. +We have found that transposition gives substantially better compression ratios (we chose bit vs byte empirically for each file -type), and that ``zstd`` provides fast decompression, fast enough that +type) and that ``zstd`` provides fast decompression, fast enough that one CPU core can keep up with most network disk array read speeds. In -https://github.com/lgarrison/asdf.git, we have provided a fork of the ASDF library that +https://github.com/lgarrison/asdf, we have provided a fork of the ASDF library that includes ``blosc`` as a compression option, so that decompression should be invisible to the user. One needs to use this fork of ASDF with ``abacusutils``. The ASDF header is human-readable, meaning one can use a Linux command line tool like ``less`` to examine the simulation metadata stored in -every ASDF file. We include various descriptive and quantitative aspects -of the simulation in this header. +every ASDF file. This was one motivation for choosing ASDF over HDF5. We +include various descriptive and quantitative aspects of the simulation in this header. The halo statistics are in ``halo_info`` files. These columns of these outputs are described below. Most are substantially compressed, including using ratios (e.g., of radii) to scale variables. As such, the binary format of the columns will differ from that revealed by the -python utility. +Python package. -CRC32 checksums are provided for all files. These should match the GNU +CRC32 checksums are provided for all files in the ``checksums.crc32`` +file that resides in each directory. These should match the GNU ``cksum`` utility, pre-installed in most Linux environments. We also offer a fast implementation of ``cksum`` with about 10x better -performance: https://github.com/abacusorg/fast-cksum. +performance here: https://github.com/abacusorg/fast-cksum. Halo Statistics --------------- -Here we list the statistics computed for each halo. - +Here is the list of statistics computed for each CompaSO halo. In most cases, these quantities are condensed to reduce the bit precision and thereby save space; this is in addition to the transposition/compression performed in the ASDF file storage. Sometimes @@ -137,8 +157,8 @@ Astropy tables (and therefore NumPy arrays) to the user. See https://abacusutils.readthedocs.io/en/latest/index.html for details and installation instructions. -The table below lists the data format in the binary table, but also -gives the format that is revealed to the user when that differs. +The listing below gives the data format in the binary files, but also +gives the format that is revealed to the user by the Python package when that differs. In the ``halo_info`` file, positions and radii (where not normalized in a ratio) are in units of the unit box, while velocities are in km/s. @@ -188,36 +208,6 @@ Densities are in units of the cosmic mean (so the mean density is 1). 
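Because each ``halo_info`` column is stored as its own compressed ASDF block, restricting a read to the columns an analysis actually needs avoids decompressing the rest. Below is a minimal sketch of such a column-restricted load using the ``fields`` argument mentioned above; the import path, catalog directory, and example column names are illustrative and should be checked against the installed abacusutils version.

.. code-block:: python

   # Minimal sketch of a column-restricted CompaSO catalog load (illustrative paths/names).
   from abacusnbody.data.compaso_halo_catalog import CompaSOHaloCatalog

   cat_dir = "AbacusSummit_base_c000_ph000/halos/z0.100"  # placeholder path

   # Only the requested columns are read and decompressed; each column is its own ASDF block.
   cat = CompaSOHaloCatalog(cat_dir, fields=["N", "x_L2com", "v_L2com"])

   halos = cat.halos   # Astropy Table containing just the requested columns
   meta = cat.header   # simulation and cosmology parameters, as described above
   print(len(halos), halos.colnames)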
- ``float SO_L2max_radius``: Radius of SO halo (distance to particle furthest from central particle) for the largest L2 subhalo -Once the decompression is performed using the python package -``compaso_halo_catalog.py``, the user can access the corresponding -``numpy`` arrays with data types: - -- ``id``: ``np.uint64`` - -- ``npstartA``, ``npstartB``: ``np.uint64`` - -- ``npoutA``, ``npoutB``: ``np.uint32`` - -- ``ntaggedA``, ``ntaggedB``: ``np.uint32`` - -- ``N``: ``np.uint32`` - -- ``L2_N``: ``np.uint32, 5`` - -- ``L0_N``: ``np.uint32`` - -- ``SO_central_particle``: ``np.float32, 3`` - -- ``SO_central_density``: ``np.float32`` - -- ``SO_radius``: ``np.float32`` - -- ``SO_L2max_central_particle``: ``np.float32, 3`` - -- ``SO_L2max_central_density``: ``np.float32`` - -- ``SO_L2max_radius``: ``np.float32`` - The following quantities are computed using a center defined by the center of mass position and velocity of the largest L2 subhalo. In addition, the same quantities with ``_com`` use a center defined by the @@ -287,52 +277,11 @@ the inner 90% of the mass relative to this center. relative to the L2 center, stored as the ratio to r100 condensed to [0,30000]. -After decompression using the python code ``compaso_halo_catalog.py``, -the following data format is revealed for the halo statistics described -above (with analogous quantities available for outputs with respect to -the L1 center ``_com``): - -- ``x_L2com``: ``np.float32, 3`` - -- ``v_L2com``: ``np.float32, 3`` - -- ``meanSpeed_L2com``, ``meanSpeed_r50_L2com``: ``np.float32`` - -- ``vcirc_max_L2com``: ``np.float32`` - -- ``rvcirc_max_L2com``: ``np.float32`` - -- ``r10_L2com``, ``r25_L2com``, ``r33_L2com``, ``r50_L2com``, - ``r67_L2com``, ``r75_L2com``, ``r90_L2com``, ``r95_L2com``, - ``r98_L2com``, ``r100_L2com``: ``np.float32`` - -- ``sigmav3d_L2com``, ``sigmav3d_r50_L2com``: ``np.float32`` - -- ``sigmavrad_L2com``: ``np.float32`` - -- ``sigmavtan_L2com``: ``np.float32`` - -- ``sigmavMin_L2com``, ``sigmavMid_L2com``, ``sigmavMaj_L2com``: - ``np.float32`` - -- ``sigmar_L2com``: ``np.float32, 3`` - -- ``sigman_L2com``: ``np.float32, 3`` - -- ``sigmav_eigenvecsMin_L2com``, ``sigmav_eigenvecsMid_L2com``, - ``sigmav_eigenvecsMaj_L2com``: ``np.float32, 3`` - -- ``sigmar_eigenvecsMin_L2com``, ``sigmar_eigenvecsMid_L2com``, - ``sigmar_eigenvecsMaj_L2com``: ``np.float32, 3`` - -- ``sigman_eigenvecsMin_L2com``, ``sigman_eigenvecsMid_L2com``, - ``sigman_eigenvecsMaj_L2com``: ``np.float32, 3`` - Particle data ------------- The particle positions and velocities from subsamples are stored in -``RV`` files. The positions and velocities have been condensed into +"RV" files. The positions and velocities have been condensed into three 32-bit integers, for x, y, and z. The positions map [-0.5,0.5] to +-500,000 and are stored in the upper 20 bits. The velocites are mapped from [-6000,6000) km/s to [0,4096) and stored in the lower 12 bits. The @@ -347,8 +296,8 @@ with 12 bits for each position and velocity component. As the base simulations have 1701 cells per dimension, this is about 23 bits of positional precision. -The particle id numbers and kernel densities are stored in ``PID`` files -packed into a 64-bit integer. The id numbers are simply the (i,j,k) +The particle ID numbers and kernel densities are stored in ``PID`` files +packed into a 64-bit integer. The ID numbers are simply the ``(i,j,k)`` index from the initial grid, and these 3 numbers are placed as the lower three 16-bit integers. 
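The RV packing described above can be undone with a few integer operations per component. The sketch below follows the stated bit layout (upper 20 bits for position, lower 12 bits for velocity) and is purely illustrative; the reader routines in abacusutils are the supported way to decode these files.

.. code-block:: python

   import numpy as np

   def unpack_rvint(rvint, boxsize):
       """Illustrative decoding of one RV-packed int32 per component.

       Upper 20 bits: position, with +-500,000 mapping to +-0.5 in box units.
       Lower 12 bits: velocity, with [0, 4096) mapping to [-6000, 6000) km/s.
       """
       rvint = np.asarray(rvint, dtype=np.int32)
       # Arithmetic right shift by 12 recovers the signed position integer;
       # +-500,000 corresponds to +-0.5 box units, so scale by boxsize/1e6.
       pos = (rvint >> 12) * (boxsize / 1.0e6)
       # The low 12 bits hold the velocity code, 0..4095.
       vel = (rvint & 0xFFF) * (12000.0 / 4096.0) - 6000.0
       return pos, vel

   # Example: pos, vel = unpack_rvint(np.array([123456789], dtype=np.int32), boxsize=2000.0)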
The kernel density is stored as the square root of the density in cosmic density units in bits 1..12 of the upper 16-bit @@ -361,14 +310,14 @@ Light Cones For the base boxes, the light cone is structured as three periodic copies of the box, centered at (0,0,0), (0,0,2000), and (0,2000,0) in -Mpc/h units. This is observed from the location (-950, -950, -950), -i.e., 50 Mpc inside a corner. This provides an octant to a distance of -1950 Mpc/h (z=0.8), shrinking to two patches each about 800 square -degrees at a distance of 3950 Mpc/h (z=2.4). +Mpc/*h* units. This is observed from the location (-950, -950, -950), +i.e., 50 Mpc/*h* inside a corner. This provides an octant to a distance of +1950 Mpc/*h* (*z*\=0.8), shrinking to two patches each about 800 square +degrees at a distance of 3950 Mpc/*h* (*z*\=2.4). The three boxes are output separately and the positions are referred to the center of each periodic copy, so the particles from the higher -redshift box need to have 2000 Mpc/h added to their z coordinate. +redshift box need to have 2000 Mpc/*h* added to their *z* coordinate. Particles are output from every time step (recall that these simulations use global time steps for each particle). In each step, we linearly @@ -380,11 +329,11 @@ Each time step generates a separate file, which includes the entire box, for each periodic copy. We store only a subsample of particles, the union of the A and B -subsets. Positions are in the ``RV`` format; id numbers and kernel -density estimates are in the ``PID`` format. +subsets (so 10%). Positions are in the "RV" format; ID numbers and kernel +density estimates are in the "PID" format. -The HealPix pixels are computed using +z as the North Pole, i.e., the -usual (x,y,z) coordinate system. We choose Nside=16384 and store the +The HealPix pixels are computed using +\ *z* as the North Pole, i.e., the +usual (*x*\,\ *y*\,\ *z*\) coordinate system. We choose Nside=16384 and store the resulting pixel numbers as int32. We output HealPix from all particles. Particle pixel numbers from each slab in the box are sorted prior to output; this permits better compression (down to 1/3 byte per @@ -392,5 +341,5 @@ particle!). For the huge boxes, the light cone is simply one copy of the box, centered at (0,0,0). This provides a full-sky light cone to the the -half-distance of the box (about 4 Gpc/h), and further toward the eight +half-distance of the box (about 4 Gpc/*h*), and further toward the eight corners. diff --git a/docs/nbody-details.rst b/docs/nbody-details.rst index 3480d904..1ad5048d 100644 --- a/docs/nbody-details.rst +++ b/docs/nbody-details.rst @@ -2,17 +2,18 @@ N-body Details ============== Here we describe technical choices with respect to the N-body method -(e.g. softening, time stepping, ICs). +that affect the accuracy of the outputs (e.g. softening, time stepping, ICs). -All of the simulations start at z=99 utilizing second-order Lagrangian +All of the simulations start at *z* = 99 utilizing second-order Lagrangian Perturbation Theory initial conditions following corrections of first-order particle linear theory; these are described in Garrison -et al. (2016, see :ref:`papers`) and have a target correction redshift of 5. The +et al. (2016, see :ref:`papers`) and have a target correction redshift of 12. The particles are displaced from a cubic grid. The simulations use spline force softening, described in Garrison -et al. (2018). Force softening for the standard boxes is 7.2 kpc/h -(Plummer equivalent), fixed in proper (not comoving) distance +et al. 
(2018). Force softening for the standard boxes is 7.2 kpc/h -(Plummer equivalent), fixed in proper (not comoving) distance +et al. (2018). Force softening for the standard boxes is 7.2 kpc/*h* +(Plummer equivalent), or 1/40th of the initial particle grid spacing. +This softening is fixed in proper (not comoving) distance and capped at 0.3 of the interparticle spacing at early times. We use global time steps that begin capped at :math:`\Delta(\ln a)=0.03` but @@ -20,7 +21,7 @@ quickly drop, tied to a criteria on the ratio of the Mpc-scale velocity dispersion to the Mpc-scale maximum acceleration, with the simulation obeying the most stringent case. This is scaled by a parameter eta, which is 0.25 in these simulations. Simulations -require about 1100 time steps to reach z=0.1. +require about 1100 time steps to reach *z* = 0.1. Users of the outputs probably don't need to know much of the numerical details of the code, but there is one numerical concept that enter diff --git a/docs/simulations.rst b/docs/simulations.rst index d2acd664..851115ff 100644 --- a/docs/simulations.rst +++ b/docs/simulations.rst @@ -1,32 +1,54 @@ Simulations =========== -This page contains the specification of the simulations in AbacusSummit. Simulations specifications are given a descriptive label: +Specifications +-------------- -* **Base**: this is our standard size, 6912^3 particles in 2 Gpc/h. +This page contains the specification of the simulations in AbacusSummit. We tabulate the simulations below. -* **High**: A box with 6x better mass resolution, 6300^3 in 1 Gpc/h. +Simulation specifications are given a descriptive label, which is included in the simulation name: -* **Highbase**: A 1 Gpc/h box with the base mass resolution. +* **Base**: this is our standard size, 6912\ :sup:`3` particles in 2 Gpc/*h*. + +* **High**: A box with 6x better mass resolution, 6300\ :sup:`3` in 1 Gpc/*h*. + +* **Highbase**: A 1 Gpc/*h* box with the base mass resolution. * **Huge**: these are larger boxes run with 27x worse mass resolution. -* **Hugebase**: Re-runs of some 2 Gpc/h boxes with the same 27x worse mass resolution. +* **Hugebase**: Re-runs of some 2 Gpc/*h* boxes with the same 27x worse mass resolution. -* **Fixedbase**: Simulations with the base mass resolution but fixed-amplitude initial conditions, 4096^3 in 1.18 Gpc/h. +* **Fixedbase**: Simulations with the base mass resolution but fixed-amplitude initial conditions, 4096\ :sup:`3` in 1.18 Gpc/*h*. -* **Small**: Simulations with base mass resolution but 1728^3 particles in 0.5 Gpc/h. +* **Small**: Simulations with base mass resolution but 1728\ :sup:`3` particles in 0.5 Gpc/*h*. -Run-time products: +Run-Time Products +----------------- Only a few of our simulations include the full timeslice output; -we typically output only subsamples. The full list is z=3.0, 2.5, -2.0, 1.7, 1.4, 1.1, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1. The partial -list is z = 2.5, 1.4, 0.8, 0.2. Partial+HiZ adds z=3.0 and 2.0 to that. +we typically output only subsamples. The "Full Outputs" column +in the simulations table below specifies the redshifts (if any) +for which a simulation has full timeslices. This column refers +to sets of redshifts using the following abbreviations: + +* **Full**: *z* = 3.0, 2.5, 2.0, 1.7, 1.4, 1.1, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1 + +* **Partial**: *z* = 2.5, 1.4, 0.8, 0.2 + +* **Partial+HiZ**: Adds *z* = 3.0, 2.0 to the Partial list Subsamples of particles, with positions, velocities, ID numbers, and kernel density -estimates, are typically provided at the same 12 redshifts as the Full list in the -previous paragraph. CompaSO group finding is run at these redshifts as well as 21 others. 
+estimates, are typically provided at the same 12 redshifts as the Full list +(see :doc:`data-products`). CompaSO group finding is run at these redshifts +as well as 21 others (see :doc:`compaso`). + +.. note :: + The 21 "secondary" redshifts are only approximate and may not match + from simulation to simulation. We do not shorten the Abacus + timestep to land exactly on these secondary redshifts as we do + for the primary redshifts. Always use the redshift in the header, + not the directory name. + The huge and hugebase sims have fewer group finding and subsample epochs. Base sims and Huge sims have light-cone outputs; others do not. @@ -34,23 +56,29 @@ Base sims and Huge sims have light-cone outputs; others do not. A base simulation typically produces about 10 TB of subsampled output, and each output slice is another 4 TB above that. -We ran 2000 small simulations, intended for studies of covariance -matrices in periodic boundary conditions. These have particle -subsample outputs at z=1.4, 1.1, 0.8, 0.5, and 0.2, as well as halo -finding at all redshifts >0.2. However, about 15% of these simulations -crashed due to some unresolved issue, almost certainly uncorrelated +Covariance (Small) Boxes +------------------------ + +We ran 2000 small simulations in 500 Mpc/*h* at the base mass resolution, +intended for studies of covariance matrices in periodic boundary conditions. +These have particle subsample outputs at *z* = 1.4, 1.1, 0.8, 0.5, and 0.2, +as well as halo finding at all redshifts >0.2. However, about 15% of these +simulations crashed due to some unresolved issue, almost certainly uncorrelated with any property of the large-scale structure in the simulation. Some of the crashed ones still produced usable outputs at higher redshifts. We have chosen to present the 1883 that yielded outputs -at z=1.1; 1671 of these reached z=0.2. The numbering between ph3000 +at *z* = 1.1; 1671 of these reached *z* = 0.2. The numbering between ph3000 and ph4999 will be irregular. -The cosmologies in the "Cosm" column are tabulated in :doc:`cosmologies`. - ------ +Simulations Table +----------------- Download the simulations table `here `_. +The cosmologies in the "Cosm" column are tabulated in :doc:`cosmologies`. + +The "PPD" column is the number of particles-per-dimension. + .. note:: The following table is wide, you may have to scroll to the right to see all the columns. .. csv-table::
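As recommended above, analysis code should take simulation and cosmology parameters from the ``header`` carried by every ASDF data product rather than from the cosmologies or simulations tables. The sketch below reads that header with the asdf Python package; the file name is a placeholder, and decompressing the data blocks (as opposed to just the header) requires the Blosc-capable ASDF fork noted in the Data Model section.

.. code-block:: python

   import asdf

   fn = "halo_info_000.asdf"  # placeholder; any AbacusSummit ASDF data product works

   with asdf.open(fn) as af:
       header = dict(af.tree["header"])  # simulation and cosmology parameters

   # The same metadata is exposed as the ``meta`` attribute of Astropy tables
   # returned by abacusutils, so many analyses never need to open files directly.
   for key in sorted(header):
       print(key, header[key])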