
Parallel trainer

Concept

[Figure: Parallel trainer]

Basic usage

Note

Each process adds a GPU memory overhead of roughly 1 GB (it can be much higher) due to PyTorch's CUDA kernels. See PyTorch issue #12873 for more details.

Note

At the moment, only simultaneous training and evaluation of agents with local memory (no memory sharing) are implemented.

Snippet

../snippets/trainer.py
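
Below is a minimal usage sketch, assuming an already wrapped skrl environment `env` and an agent (or list of agents) `agents` have been created elsewhere; the configuration keys (`timesteps`, `headless`) and the `__main__` guard (needed because the trainer launches one subprocess per agent, and spawned children re-import the main module) reflect recent skrl releases and should be checked against the referenced snippet.

```python
from skrl.trainers.torch import ParallelTrainer

# assumptions: `env` is an already wrapped skrl environment and
# `agents` is an agent instance or a list of agent instances

if __name__ == "__main__":
    # configure and instantiate the parallel trainer
    cfg = {"timesteps": 50000, "headless": False}
    trainer = ParallelTrainer(env=env, agents=agents, cfg=cfg)

    # train the agent(s), each one in a separate process
    trainer.train()

    # evaluate the agent(s)
    trainer.eval()
```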

Configuration

../../../skrl/trainers/torch/parallel.py
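
As a sketch, the trainer's default configuration dictionary looks like the following; the key names and default values are assumed from a recent skrl release and should be verified against the referenced file.

```python
# default configuration sketch (keys/values assumed; verify against
# skrl/trainers/torch/parallel.py for the installed version)
PARALLEL_TRAINER_DEFAULT_CONFIG = {
    "timesteps": 100000,           # number of timesteps to train for
    "headless": False,             # whether to use headless mode (no rendering)
    "disable_progressbar": False,  # whether to disable the progressbar
    "close_environment_at_exit": True,  # close the environment on exit
}
```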

API

skrl.trainers.torch.parallel.ParallelTrainer

__init__
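
For multi-agent setups, the constructor also accepts an `agents_scope` argument that partitions the environment's parallel sub-environments among the agents. A hedged sketch, with parameter names and semantics assumed from recent skrl releases:

```python
from skrl.trainers.torch import ParallelTrainer

# assumptions: `env` exposes 4 parallel sub-environments and
# `agent_a`, `agent_b` already exist; `agents_scope` assigns the
# first 3 sub-environments to agent_a and the last one to agent_b
trainer = ParallelTrainer(env=env,
                          agents=[agent_a, agent_b],
                          agents_scope=[3, 1],
                          cfg={"timesteps": 100000, "headless": False})
trainer.train()
```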