2018 - Spring Vanderbilt ML Group
Repository for learning basic ML concepts and more advanced methods, such as Deep Learning.
Author: Victor Calderon (firstname.lastname@example.org)
Installing Environment & Dependencies
To use the scripts in this repository, you must have Anaconda installed on the systems that will be running the scripts. This will simplify the process of installing all the dependencies.
For reference, see: https://conda.io/docs/user-guide/tasks/manage-environments.html
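If you are not sure whether conda is already available on your system, the following standard conda commands (not specific to this repository) confirm the installation and list any existing environments:

```bash
# Verify that conda is installed and on the PATH
conda --version

# List the environments that already exist on this machine
conda env list
```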
The repository includes a Makefile with useful rules. You must use this Makefile to ensure that you have all the necessary dependencies, as well as the correct conda environment.
- Show all available rules in the Makefile:

```
$: make show-help

Available rules:

clean                 Delete all compiled Python files
environment           Set up python interpreter environment - Using environment.yml
lint                  Lint using flake8
remove_environment    Delete python interpreter environment
test_environment      Test python environment is setup correctly
update_environment    Update python interpreter environment
```
- Create the environment from the `environment.yml` file (see the command below):
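According to the `make show-help` output above, the `environment` rule builds the conda environment from `environment.yml`. The commented-out line shows the standard direct `conda` equivalent; it is an assumption based on the usual conda workflow, not something read from this repository's Makefile.

```bash
make environment
# Standard conda equivalent (assumption, not taken from the Makefile):
# conda env create -f environment.yml
```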
- Activate the new environment `2018_spring_vanderbilt_ml_bootcamp`:

```bash
source activate 2018_spring_vanderbilt_ml_bootcamp
```
- To update the environment when the required packages in `environment.yml` have changed (see the command below):
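The `update_environment` rule listed in the `make show-help` output above refreshes the existing environment. The commented `conda` line is the usual direct equivalent and is an assumption, not something read from the Makefile.

```bash
make update_environment
# Standard conda equivalent (assumption):
# conda env update -f environment.yml
```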
- Deactivate the new environment when you are done (see the command below):
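Since the steps above use the `source activate` style of activation, the matching deactivation command is the standard one below.

```bash
source deactivate
```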
To make it easier to activate the right environment, check out conda-auto-env, which activates the necessary environment automatically.
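As a rough illustration of the idea behind conda-auto-env (a simplified sketch, not the project's actual code), a shell function can read the environment name from `environment.yml` and activate that environment whenever a new prompt is drawn in a directory containing the file:

```bash
# Simplified sketch of automatic activation (not the actual conda-auto-env code).
# Assumes the environment name is declared on a "name:" line of environment.yml.
function conda_auto_env() {
  if [ -e "environment.yml" ]; then
    ENV_NAME=$(sed -n 's/^name: //p' environment.yml | head -n 1)
    # Only switch environments if we are not already inside the right one
    if [ "$CONDA_DEFAULT_ENV" != "$ENV_NAME" ]; then
      source activate "$ENV_NAME"
    fi
  fi
}

# Re-run the check every time the prompt is redrawn (e.g. after every `cd`)
export PROMPT_COMMAND="conda_auto_env;$PROMPT_COMMAND"
```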
```
├── LICENSE
├── Makefile             <- Makefile with commands like `make data` or `make train`
├── README.md            <- The top-level README for developers using this project.
├── data
│   ├── external         <- Data from third party sources.
│   ├── interim          <- Intermediate data that has been transformed.
│   ├── processed        <- The final, canonical data sets for modeling.
│   └── raw              <- The original, immutable data dump.
│
├── docs                 <- A default Sphinx project; see sphinx-doc.org for details
│
├── models               <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks            <- Jupyter notebooks. Naming convention is a number (for ordering),
│                           the creator's initials, and a short `-` delimited description, e.g.
│                           `1.0-jqp-initial-data-exploration`.
│
├── references           <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports              <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures          <- Generated graphics and figures to be used in reporting
│
├── requirements.txt     <- The requirements file for reproducing the analysis environment, e.g.
│                           generated with `pip freeze > requirements.txt`
│
├── src                  <- Source code for use in this project.
│   ├── __init__.py      <- Makes src a Python module
│   │
│   ├── data             <- Scripts to download or generate data
│   │   │
│   │   ├── utilities_python <- General Python scripts to make the flow of the project a little easier.
│   │   │
│   │   └── make_dataset.py
│   │
│   ├── features         <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models           <- Scripts to train models and then use trained models to make
│   │   │                   predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization    <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini              <- tox file with settings for running tox; see tox.testrun.org
```
Project based on the cookiecutter data science project template. #cookiecutterdatascience