# ECO Mock Catalogues
Repository for creating ECO synthetic catalogues
Author: Victor Calderon (email@example.com)
## Installing Environment & Dependencies
To use the scripts in this repository, you must have Anaconda installed on the system that will run them; this simplifies installing all of the dependencies.
For reference, see: https://conda.io/docs/user-guide/tasks/manage-environments.html
The repository includes a Makefile with useful targets. Use this Makefile to ensure that you have all the necessary dependencies, as well as the correct conda environment.
- Show all available targets in the Makefile:

```bash
$ make show-help

Available rules:

clean                 Delete all compiled Python files
environment           Set up python interpreter environment - Using environment.yml
lint                  Lint using flake8
remove_environment    Delete python interpreter environment
test_environment      Test python environment is setup correctly
update_environment    Update python interpreter environment
```
- Create the environment from the `environment.yml` file:

```bash
make environment
```

- Activate the new environment `eco_mocks_catls`:

```bash
source activate eco_mocks_catls
```
- To update the conda environment (when the required packages in `environment.yml` have changed):

```bash
make update_environment
```

- Deactivate the environment:

```bash
source deactivate
```
To make this easier, one can check out conda-auto-env, which activates the required environment automatically.
The following table summarizes the values used to create these mock catalogues. These values take into account the extra buffer along the cz direction in redshift space.
| Survey | RA (deg)         | RA range | DEC (deg)      | DEC range | zmin    | zmax     | Vmin (km/s) | Vmax (km/s) | Dist (Mpc)     |
|--------|------------------|----------|----------------|-----------|---------|----------|-------------|-------------|----------------|
| A      | (131.25, 236.25) | 105.0    | (0, +5)        | 5         | 0.00844 | 0.0249   | 2532        | 7470        | (25.32, 70.02) |
| B      | (330.0, 45.0)    | 75.0     | (-1.25, +1.25) | 2.5       | 0.01416 | 0.024166 | 4250        | 7250        | (42.5, 72.5)   |
| ECO    | (130.05, 237.45) | 107.4    | (-1, +49.85)   | 50.85     | 0.00844 | 0.0249   | 2532        | 7470        | (25.32, 70.02) |
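As a quick sanity check on the table, the velocity limits are simply cz = c * z evaluated at the redshift limits. A minimal sketch (pure Python; no project code assumed):

```python
# Sanity check: the survey velocity limits are cz = c * z for the
# redshift limits quoted in the table above.
C_KMS = 299_792.458  # speed of light in km/s

limits = {
    # survey: (zmin, zmax, Vmin [km/s], Vmax [km/s])
    "A":   (0.00844, 0.0249,   2532.0, 7470.0),
    "B":   (0.01416, 0.024166, 4250.0, 7250.0),
    "ECO": (0.00844, 0.0249,   2532.0, 7470.0),
}

for survey, (zmin, zmax, vmin, vmax) in limits.items():
    # cz computed from the redshift limits agrees with the table to ~0.5%
    cz_min, cz_max = C_KMS * zmin, C_KMS * zmax
    print(f"{survey}: cz_min={cz_min:.1f} (table {vmin}), "
          f"cz_max={cz_max:.1f} (table {vmax})")
```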
The next table provides the number of mock catalogues per cubic box of L=180 Mpc/h.
The mock catalogues have the same geometries as the real surveys. Each mock catalogue consists of a total of 26 columns, each providing information about the individual galaxy, its host DM halo, and properties matched from real galaxy catalogues. Aside from the values obtained from the simulation (Columns 1-12), we matched properties from real catalogues (i.e. ECO and RESOLVE A/B) to the mock catalogues by finding the galaxy in the real data whose r-band absolute magnitude most closely resembles that of the mock galaxy:

- For r-band absolute magnitudes in the range -17.33 < Mr <= -17.00, we matched the mock galaxy's Mr to a galaxy in RESOLVE B (`resolvecatalog_str.fits`, updated on 2015-07-16).
- For r-band absolute magnitudes brighter than Mr = -17.33, we matched the mock galaxy's Mr to a galaxy in ECO (`eco_wresa_050815.dat`, updated on 2015-05-08).

For each matched galaxy, we attached the real galaxy's properties to the matched mock galaxy.
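The matching step above can be sketched as a nearest-neighbour search in Mr, with the real catalogue chosen by the Mr threshold. This is an illustrative sketch, not the repository's actual API; the function and array names are hypothetical:

```python
import numpy as np

def match_by_mr(mock_mr, eco_mr, resolve_b_mr, split=-17.33):
    """Sketch of the luminosity matching described above.

    Mock galaxies brighter than (or at) `split` are matched to the ECO
    galaxy with the closest r-band absolute magnitude; fainter ones
    are matched to RESOLVE B. Returns (survey flag, index) pairs,
    using the Match_Flag labels from the column description.
    """
    matches = []
    for mr in mock_mr:
        # brighter magnitudes are more negative, hence "<=" for ECO
        if mr <= split:
            flag, pool = "ECOgal", eco_mr
        else:
            flag, pool = "RESgal", resolve_b_mr
        # index of the real galaxy with the closest Mr
        matches.append((flag, int(np.argmin(np.abs(pool - mr)))))
    return matches
```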
We also ran the Berlind2006 Friends-of-Friends algorithm on each mock catalogue, and assigned an estimated mass to each galaxy group through halo abundance matching.
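The abundance-matching step can be sketched as a rank-ordered assignment: assuming a monotonic relation between group total luminosity and halo mass, the brightest group receives the most massive halo. A minimal sketch (names and inputs are illustrative, not the repository's code):

```python
import numpy as np

def abundance_match(group_lum, log_halo_masses):
    """Illustrative halo abundance matching: assume a monotonic
    relation between group total luminosity and halo mass, so the
    brightest group gets the most massive halo, and so on.

    group_lum       : total luminosity of each galaxy group
    log_halo_masses : log halo masses drawn from the assumed halo
                      mass function (one per group; hypothetical input)
    """
    lum = np.asarray(group_lum)
    # rank of each group by luminosity: 0 = brightest
    rank = np.argsort(np.argsort(-lum))
    # halo masses sorted from most to least massive
    sorted_masses = np.sort(np.asarray(log_halo_masses))[::-1]
    return sorted_masses[rank]
```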
By construction, the joint probability distributions of the matched observables in the mocks are the same as those in the real data.
For all the values, we use the following cosmology:

```
Omega_M_0      : 0.302
Omega_lambda_0 : 0.698
Omega_k_0      : 0.0
h              : 0.698
```
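As a minimal sketch of how distances follow from these parameters, the line-of-sight comoving distance for this flat LCDM cosmology can be computed with a simple numerical integral of c/H(z) (pure Python, no astropy assumed; the function name is illustrative):

```python
import math

# Cosmology used for the catalogues (values from above)
OMEGA_M, OMEGA_L, LITTLE_H = 0.302, 0.698, 0.698
C_KMS = 299_792.458        # speed of light in km/s
H0 = 100.0 * LITTLE_H      # Hubble constant in km/s/Mpc

def comoving_distance(z, steps=10_000):
    """Line-of-sight comoving distance in Mpc for flat LCDM,
    via a trapezoidal integral of c / H(z') from 0 to z."""
    def inv_E(zp):
        return 1.0 / math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    dz = z / steps
    integral = 0.5 * (inv_E(0.0) + inv_E(z))
    for i in range(1, steps):
        integral += inv_E(i * dz)
    return (C_KMS / H0) * integral * dz

# e.g. the outer redshift limit of ECO / RESOLVE A, z = 0.0249
print(f"D_C(0.0249) = {comoving_distance(0.0249):.1f} Mpc")
```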
For the group finding, we used the following parameters and linking lengths:

```
Linking Parallel : 1.1
Linking Perpend. : 0.07
Nmin             : 1
```
- Theory columns      : Columns 1 - 12
- Observables columns : Columns 13 - 20
- Group columns       : Columns 21 - 26
- Right Ascension : RA of the individual galaxy, given in degrees
- Declination : Declination of the ind. galaxy, given in degrees
- CZ_Obs : Velocity of the galaxy (with redshift-space distortions), given in km/s
- Mr : Galaxy's absolute magnitude in the r-band. Calculated using a CLF approach, but using real photometry from the survey.
- Halo ID : DM Halo identification number, as taken from the simulation
- log(MHalo) : Logarithmic value of the DM Halo's Mass, as log( MHalo / (Msun/h) ) with h=0.698.
- NGals_h : Total number of galaxies in DM halo. Number of galaxies in the mock may differ from this value.
- Type : Type of Galaxy, i.e. Central or Satellite. Halo Central = 1, Halo Satellite = 0.
- CZ_Real : Velocity of the galaxy (without redshift distortions), given in km/s.
- Dist_central : Real Distance between Halo's center and the galaxy, in Mpc. Here, Central galaxy = Halo's center.
- Vp_total : Total value for peculiar velocity, given in km/s.
- Vp_tang : Tangential component of the peculiar velocity, given in km/s.
- Morphology : Galaxy morphology. 'LT': Late Type ; 'ET': Early Type. Used either 'goodmorph' (ECO) or 'MORPH' (RESOLVE) keys. '-9999' if no matched galaxy.
- log Mstar : Log value of Galaxy stellar mass in log Msun. Used either 'rpgoodmstarsnew' (ECO) or 'MSTARS' (RESOLVE) keys in the files.
- r-band mag : Galaxy's r-band apparent magnitude. Used either 'rpsmoothrestrmagnew' (ECO) or 'SMOOTHRESTRMAG' (RESOLVE) keys in the files.
- u-band mag : Galaxy's u-band apparent magnitude. Used either 'rpsmoothrestumagnew' (ECO) or 'SMOOTHRESTUMAG' (RESOLVE) keys in the files.
- FSMGR : Stellar mass produced over the last Gyr divided by the pre-existing stellar mass, from the new model set. In (1/Gyr). Used 'rpmeanssfr' (ECO) or 'MODELFSMGR' (RESOLVE) keys.
- Match_Flag : Survey, from which the properties of the real matched galaxy were extracted. 'ECOgal' = Galaxy from ECO file. 'RESgal' = Galaxy from RESOLVE file.
- u-r color : Color of the matched galaxy, i.e. umag - rmag (Col 15 - Col 16).
- MHI mass : HI mass in Msun. Used the predicted HI masses matched from the ECO file (i.e. `eco_wresa_050815.dat`) and the 'MHI' key for RESOLVE galaxies. For ECO values, we computed MHI masses using the formula: 10^(MHI + logMstar).
- Group ID : Group ID, to which the galaxy belongs after running the Berlind2006 FoF group finder.
- Group NGals : Number of galaxies in a group of galaxies.
- RG projected : Projected radius of the group of galaxies. Units in Mpc.
- CZ Disp. Group : Dispersion in velocities of galaxies in the group. Units in km/s.
- Abund. log MHalo : Abundance-matched mass of the galaxy group. This was calculated by assuming a monotonic relation between dark matter halo mass (MHalo) and the group total luminosity. For RESOLVE B, we used a modified version of the ECO group luminosity function. Units in Msun.
- Group Gal. Type : Type of group galaxy. Group central = 1, Group Satellite = 0.
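Two of the observable columns above are derived from the others: the u-r color is the difference of the two magnitudes, and the ECO-style HI mass follows the 10^(MHI + logMstar) formula. A minimal sketch using toy rows (the field names here are illustrative; check the actual catalogue header before use):

```python
import math

# Toy mock-catalogue rows (field names are illustrative, not the
# catalogue's actual column labels).
galaxies = [
    {"u_mag": 15.8, "r_mag": 14.2, "logMstar": 9.8,  "MHI": -0.5},
    {"u_mag": 16.9, "r_mag": 15.1, "logMstar": 10.3, "MHI": -1.0},
]

for gal in galaxies:
    # u-r color: u-band magnitude minus r-band magnitude
    gal["u_r"] = gal["u_mag"] - gal["r_mag"]
    # ECO-style HI mass: 10^(MHI + logMstar), in Msun (formula above)
    gal["MHI_mass"] = 10.0 ** (gal["MHI"] + gal["logMstar"])
```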
The relationship between velocities (CZ's) is the following:
( CZ_Obs - CZ_Real)^2 + (Vp_tang)^2 = (Vp_total)^2
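Since CZ_Obs - CZ_Real is the line-of-sight peculiar-velocity component, this identity lets one recover the tangential component from the catalogue columns. A small sketch (the function name is illustrative):

```python
import math

def vp_tangential(cz_obs, cz_real, vp_total):
    """Recover the tangential peculiar-velocity component from the
    identity (CZ_Obs - CZ_Real)^2 + (Vp_tang)^2 = (Vp_total)^2.
    All velocities in km/s; cz_obs - cz_real is the line-of-sight
    component of the peculiar velocity."""
    vp_los = cz_obs - cz_real
    return math.sqrt(vp_total ** 2 - vp_los ** 2)
```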
Affiliation : The University of Chicago, Universidad Católica de Chile
- Halo ID : This corresponds to the Halo ID number for the given DM Halo in the simulation box.
- log(MHalo) : Logarithmic value of the DM Halo's Mass, as log( MHalo / (Msun/h) ) with h = 1.0
- ID / Type : Flag describing the environment of the DM halo. There are four options:
  - ID = 0 : Not in a filament
  - ID = 1 : A filament node
  - ID = 2 : Part of a filament skeleton
  - ID = 3 : Within a close radius of a filament
- Fil. : The ID of the filament the halo belongs to ( -1 if it is not in a filament).
- Fil. Quality: The quality of the filament (i.e. probability that the filament is real).
```
├── LICENSE
├── Makefile             <- Makefile with commands like `make data` or `make train`
├── README.md            <- The top-level README for developers using this project.
├── data
│   ├── external         <- Data from third party sources.
│   ├── interim          <- Intermediate data that has been transformed.
│   ├── processed        <- The final, canonical data sets for modeling.
│   └── raw              <- The original, immutable data dump.
│
├── docs                 <- A default Sphinx project; see sphinx-doc.org for details
│
├── models               <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks            <- Jupyter notebooks. Naming convention is a number (for ordering),
│                           the creator's initials, and a short `-` delimited description, e.g.
│                           `1.0-jqp-initial-data-exploration`.
│
├── references           <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports              <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures          <- Generated graphics and figures to be used in reporting
│
├── requirements.txt     <- The requirements file for reproducing the analysis environment, e.g.
│                           generated with `pip freeze > requirements.txt`
│
├── src                  <- Source code for use in this project.
│   ├── __init__.py      <- Makes src a Python module
│   │
│   ├── data             <- Scripts to download or generate data
│   │   ├── utilities_python <- General Python scripts to make the flow of the project a little easier.
│   │   └── make_dataset.py
│   │
│   ├── features         <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models           <- Scripts to train models and then use trained models to make
│   │   │                   predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization    <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini              <- tox file with settings for running tox; see tox.testrun.org
```
Project based on the cookiecutter data science project template. #cookiecutterdatascience