
Welcome to ⚡ PyTorch Lightning



Install Lightning

Pip users

pip install pytorch-lightning

Conda users

conda install pytorch-lightning -c conda-forge


Or read the advanced install guide

Lightning is fully compatible with every stable PyTorch release from v1.10 onward.


Get Started



Current Lightning Users


starter/introduction
starter/installation

levels/core_skills
levels/intermediate
levels/advanced
levels/expert

common/lightning_module
common/trainer

api_references

Avoid overfitting <common/evaluation>
model/build_model.rst
common/hyperparameters
common/progress_bar
deploy/production
advanced/training_tricks
cli/lightning_cli
tuning/profiler
Manage experiments <visualize/logging_intermediate>
Organize existing PyTorch into Lightning <starter/converting>
clouds/cluster
Save and load model progress <common/checkpointing>
Save memory with half-precision <common/precision>
Training over the internet <strategies/hivemind>
advanced/model_parallel
clouds/cloud_training
Train on single or multiple GPUs <accelerators/gpu>
Train on single or multiple HPUs <accelerators/hpu>
Train on single or multiple IPUs <accelerators/ipu>
Train on single or multiple TPUs <accelerators/tpu>
Train on MPS <accelerators/mps>
Use a pretrained model <advanced/pretrained>
Inject Custom Data Iterables <data/custom_data_iterables>
model/own_your_loop

Accelerators <extensions/accelerator>
Callback <extensions/callbacks>
Checkpointing <common/checkpointing>
Cluster <clouds/cluster>
Cloud checkpoint <common/checkpointing_advanced>
Console Logging <common/console_logs>
Debugging <debug/debugging>
Early stopping <common/early_stopping>
Experiment manager (Logger) <visualize/experiment_managers>
Fault tolerant training <clouds/fault_tolerant_training>
Finetuning <advanced/finetuning>
Flash <https://lightning-flash.readthedocs.io/en/stable/>
Grid AI <clouds/cloud_training>
GPU <accelerators/gpu>
Half precision <common/precision>
HPU <accelerators/hpu>
Inference <deploy/production_intermediate>
IPU <accelerators/ipu>
Lightning CLI <cli/lightning_cli>
Lightning Lite <model/build_model_expert>
LightningDataModule <data/datamodule>
LightningModule <common/lightning_module>
Lightning Transformers <https://pytorch-lightning.readthedocs.io/en/stable/ecosystem/transformers.html>
Log <visualize/loggers>
Loops <extensions/loops>
TPU <accelerators/tpu>
Metrics <https://torchmetrics.readthedocs.io/en/stable/>
Model <model/build_model.rst>
Model Parallel <advanced/model_parallel>
Collaborative Training <strategies/hivemind>
Plugins <extensions/plugins>
Progress bar <common/progress_bar>
Production <deploy/production_advanced>
Predict <deploy/production_basic>
Pretrained models <advanced/pretrained>
Profiler <tuning/profiler>
Pruning and Quantization <advanced/pruning_quantization>
Remote filesystem and FSSPEC <common/remote_fs>
Strategy <extensions/strategy>
Strategy registry <advanced/strategy_registry>
Style guide <starter/style_guide>
Sweep <clouds/run_intermediate>
SWA <advanced/training_tricks>
SLURM <clouds/cluster_advanced>
Transfer learning <advanced/transfer_learning>
Trainer <common/trainer>
Torch distributed <clouds/cluster_intermediate_2>

generated/CODE_OF_CONDUCT.md
generated/CONTRIBUTING.md
generated/BECOMING_A_CORE_CONTRIBUTOR.md
governance
versioning
generated/CHANGELOG.md