DeepCodebase

Official repository for "DeepCodebase for Deep Learning".

A fancy image here

Figure: DeepCodebase for Deep Learning. (Read DEVELOPMENT.md to learn more about this template.)

DeepCodebase for Deep Learning
Xin Hong
Published on Github

News

  • [2022-07-21] Initial release of the DeepCodebase.

Description

DeepCodebase is a codebase/template for deep learning researchers that makes running experiments and releasing projects easier. Do the right things with suitable tools!

This README.md serves as the template README for a released project. Read the Development Guide to understand DeepCodebase and start using it.

If you find this code useful, please consider starring this repo and citing us:

```bibtex
@inproceedings{deepcodebase,
  title={DeepCodebase for Deep Learning},
  author={Xin Hong},
  booktitle={Github},
  year={2022}
}
```

Environment Setup

This project recommends running experiments with docker. However, we also provide a way to install the experiment environment with conda directly on the host machine. See our introduction to the environment for details.

Quick Start

The following steps build a docker image for this project and start a container.

Step 1. Install docker-compose on your host machine.

```bash
# (setting a PyPI mirror is optional)
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
pip install docker-compose
```

Step 2. Build a docker image according to docker/Dockerfile and start a docker container.

```bash
python docker.py prepare --build
```

*When you first run `docker.py`, it will prompt you to set variables such as the project name, data root, etc. These variables configure the docker container and are stored in a `.env` file under the project root. Read the DEVELOPMENT.md for more information.*
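For illustration, the generated `.env` might look roughly like the following. The variable names here are examples only; the actual keys are defined by `docker.py` when it prompts you.

```
# example .env (illustrative variable names)
PROJECT_NAME=deepcodebase
DATA_ROOT=/path/to/datasets
```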

Step 3. Enter the docker container at any time; the environment is ready for experiments.

```bash
python docker.py [enter]
```

Data Preparation

MNIST will be automatically downloaded to `DATA_ROOT` and prepared by `torchvision.datasets.MNIST`.
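As a sketch of what happens under the hood (the helper name and `DATA_ROOT` wiring below are illustrative, not the template's actual code), the dataset class lives in torchvision and downloads the raw files on first use:

```python
import os


def mnist_datasets(data_root=None):
    """Build the MNIST train/test splits, downloading the raw files on first use.

    data_root mirrors the DATA_ROOT variable from the project's .env file
    (a sketch; the template's actual wiring may differ).
    """
    # torchvision is imported lazily so the helper can be defined without it
    from torchvision import datasets, transforms

    data_root = data_root or os.environ.get("DATA_ROOT", "./data")
    tfm = transforms.ToTensor()
    # download=True fetches the files once and is a no-op afterwards
    train = datasets.MNIST(data_root, train=True, download=True, transform=tfm)
    test = datasets.MNIST(data_root, train=False, download=True, transform=tfm)
    return train, test
```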

Training

Commonly used training commands:

```bash
# train mnist_lenet on GPU 0
python train.py experiment=mnist_lenet devices="[0]"

# train mnist_dnn on GPU 1
python train.py experiment=mnist_dnn devices="[1]"

# train mnist_lenet on two GPUs and change the experiment name
python train.py experiment=mnist_lenet devices="[2,3]" name="mnist lenet 2gpus"
```

Read the sections about configuration to learn how to structure your experiment configs and override them from the command line.
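Given the `experiment=...` override syntax above, the configs appear to follow a Hydra-style layout. An experiment config might then look roughly like this; the file path and every field below are illustrative assumptions, not the template's actual config:

```yaml
# configs/experiment/mnist_lenet.yaml (illustrative)
# selected on the command line with: python train.py experiment=mnist_lenet
name: mnist_lenet
devices: [0]          # overridable: devices="[2,3]"

model:
  _target_: src.models.LeNet   # hypothetical class path
  lr: 1e-3

data:
  batch_size: 128
```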

Testing

Commonly used testing commands:

```bash
# test the model; <logdir> is printed twice (at start and end) in the training log
python test.py <logdir>

# test the model with multiple config overrides, e.g. to test multiple datasets
python test.py <logdir> --update_func test_original test_example

# update wandb and prefix the metrics
python test.py --update_func test_original test_example --prefix original example --update_wandb

# generate LaTeX tables
python scripts/generate_latex_table.py
```

Read the sections about wandb to learn how LaTeX tables are exported from experiment records.
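The idea behind `scripts/generate_latex_table.py` can be sketched as follows. The real script pulls records from wandb; here the rows are hard-coded and the helper name, row format, and metric values are all hypothetical:

```python
def to_latex_table(rows, metrics):
    """Render (name, record) pairs as a booktabs-style LaTeX tabular."""
    header = " & ".join(["Model"] + metrics) + r" \\"
    lines = [
        r"\begin{tabular}{l" + "c" * len(metrics) + "}",
        r"\toprule",
        header,
        r"\midrule",
    ]
    for name, record in rows:
        # format each metric to two decimals
        cells = [name] + [f"{record[m]:.2f}" for m in metrics]
        lines.append(" & ".join(cells) + r" \\")
    lines += [r"\bottomrule", r"\end{tabular}"]
    return "\n".join(lines)


# hypothetical experiment records standing in for wandb results
rows = [("mnist_lenet", {"acc": 0.991}), ("mnist_dnn", {"acc": 0.978})]
print(to_latex_table(rows, ["acc"]))
```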

Acknowledgement

Many best practices are borrowed from Lightning-Hydra-Template; thanks to the maintainers of that project.

License

MIT License


This is a DeepCodebase template based project.