Why use PyTorch-Ignite ?

Installation

To install the dependencies, use the following pip command:

pip install -r requirements.txt 

Documentation and tutorials

Please read the following links to the official documentation and tutorials carefully:

Configuration

Use the check_config.py script to get information about the configuration and environment:

python check_config.py
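As a minimal sketch, a configuration check script along these lines could produce the report shown below (the function name and exact formatting are illustrative, not necessarily those of the repository's check_config.py):

```python
# Sketch of a configuration check: report torch build info and CUDA devices.
import torch


def check_config():
    print(f"torch version : {torch.__version__}")
    print(f"torch git version : {torch.version.git_version}")
    print(f"torch version cuda : {torch.version.cuda}")
    if torch.cuda.is_available():
        n = torch.cuda.device_count()
        print(f"number of cuda device(s) : {n}")
        for i in range(n):
            # get_device_properties exposes name, compute capability and memory
            print(f"- device {i} : {torch.cuda.get_device_properties(i)}")
    else:
        print("number of cuda device(s) : 0")


if __name__ == "__main__":
    check_config()
```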

On a cluster managed by Slurm, use the srun command:

srun --nodes=1 \
     --ntasks=1 \
     --job-name=check_config_Divers \
     --time=00:01:00 \
     --partition=gpgpu \
     --gres=gpu:2 \
     python check_config.py

The SLURM_WCKEY environment variable should be set to a relevant project id.

srun can also be used inside a batch script submitted to the scheduler with the sbatch command. Scripting helps configure the environment more precisely.
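As a sketch, such a submission script could look like the following (the resource options mirror the srun example above; the project id value is a placeholder to replace with your own):

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --job-name=check_config_Divers
#SBATCH --time=00:01:00
#SBATCH --partition=gpgpu
#SBATCH --gres=gpu:2

# Configure the environment before running the job step,
# e.g. load modules or activate a virtualenv here.
export SLURM_WCKEY=my_project_id   # replace with a relevant project id

srun python check_config.py
```

Submit it with `sbatch <script>.sh`.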

Please see here for relocated environments if needed.

Example of configuration (reported by check_config.py) on a GPU node:

torch version : 1.7.1
torch git version : 57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57
torch version cuda : 10.1
number of cuda device(s) : 2
- device 0 : _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', major=6, minor=0, total_memory=16280MB, multi_processor_count=56)
- device 1 : _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', major=6, minor=0, total_memory=16280MB, multi_processor_count=56)

PyTorch backends

Several distributed backends are available in PyTorch:

  • gloo for GPUs and CPUs
  • nccl for GPUs
  • mpi for CPUs
  • xla for TPUs

The nccl backend should be preferred to handle GPUs, but only one process per GPU is allowed.
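A minimal sketch of how a backend is selected with torch.distributed is shown below. It uses the gloo backend with a single process so it can run on CPU-only machines; with nccl, you would launch exactly one process per GPU (the helper function name and the rendezvous address/port are illustrative):

```python
# Initialize a one-process group to show backend selection.
import os
import torch.distributed as dist


def init_single_process(backend="gloo"):
    # Rendezvous settings for a local, single-process group.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend=backend, rank=0, world_size=1)
    name = dist.get_backend()
    print(f"initialized backend : {name}")
    dist.destroy_process_group()
    return name


if __name__ == "__main__":
    init_single_process()
```

In a real multi-GPU job, rank and world_size come from the launcher (e.g. torchrun or the Slurm environment) rather than being hard-coded.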
