Commit c3543ff: update docs
anaismoller committed Oct 5, 2022
Showing 1 changed file with 17 additions and 6 deletions: docs/installation/five_minute_guide.rst
Clone the GitHub repository
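The clone command itself is collapsed in this diff view; assuming the project's standard GitHub location, it would be along these lines:

.. code::

    git clone https://github.com/supernnova/SuperNNova.git
    cd SuperNNova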
Set up your environment: 3 options
-----------------------------------


Please be aware that SuperNNova only runs properly on Unix systems (Linux, macOS).
a) Create a Docker image: :ref:`DockerConfigurations`.
b) Create a conda virtual environment: :ref:`CondaConfigurations`.
c) Install packages manually. Inspect ``conda_env.txt`` for the list of packages we use.
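For option (b), a minimal sketch, assuming ``conda_env.txt`` is an explicit package list and ``snn`` is a name you choose for the environment:

.. code::

    conda create --name snn --file conda_env.txt
    conda activate snn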
Using command line
------------------
Build the database

.. code::

    python run.py --data --dump_dir tests/dump --raw_dir tests/raw
Train an RNN

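The training command itself is collapsed in this diff view; based on the command-line interface shown above, it is presumably of this form (the ``--train_rnn`` flag is an assumption from context):

.. code::

    python run.py --train_rnn --dump_dir tests/dump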
With this command you are training and validating our Baseline RNN with the test database. The trained model will be saved in a newly created model folder inside ``tests/dump/models``.

The model folder is named as follows: ``vanilla_S_0_CLF_2_R_None_photometry_DF_1.0_N_global_lstm_32x2_0.05_128_True_mean`` (see below for the naming conventions). This folder's contents are:

- **saved model** (``*.pt``): PyTorch RNN model.

- **statistics** (``METRICS*.pickle``): pickled Pandas DataFrame with accuracy and other performance statistics for this model.

- **predictions** (``PRED*.pickle``): pickled Pandas DataFrame with the predictions of our model on the test set.

- **figures** (``train_and_val_*.png``): figures showing the evolution of the chosen metric at each training step.

Remember that our data is split into training, validation, and test sets.
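To take a quick look at these outputs, the pickled DataFrames can be opened with pandas; a minimal sketch, assuming pandas is installed in your environment and using a hypothetical glob pattern to locate the file:

.. code::

    python -c "import glob; import pandas as pd; print(pd.read_pickle(glob.glob('tests/dump/models/*/PRED*.pickle')[0]).head())"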


Plot light-curves and their predictions

.. code::

    python run.py --dump_dir tests/dump --plot_lcs

You can now inspect the test light-curves and their predictions in ``tests/dump/lightcurves``.

**You have trained, validated and tested your model.**


Using Yaml
-----------------------
Naming conventions
------------------

- **R_None**: host-galaxy redshift provided to the model; ``None`` here means no redshift is used. Other options: ``zpho`` (photometric) or ``zspe`` (spectroscopic).

- **photometry**: data used. In our database we distinguish light-curves that have a successful SALT2 fit (``saltfit``) from the complete dataset (``photometry``).

- **DF_1.0**: data fraction used in training. With large datasets it is useful to test training with a fraction of the available training set. In this case we use the whole dataset (``1.0``).

