Awesome repo!

However, I'm not sure why you'd go through the effort of implementing your own trainer again. In Lightning we already support:

automatic checkpoint saving/loading
multi-CPU
multi-GPU
multi-TPU core
16-bit precision (AMP and native)
gradient accumulation

and 40+ more features.

Not to mention it's maintained by a team of 20+ full-time engineers and 200+ open-source contributors, and has been adopted by over 400 companies and research labs.

https://pytorch-lightning.readthedocs.io/en/latest/new-project.html
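To make one of the listed features concrete: gradient accumulation just means summing gradients over k micro-batches and stepping the optimizer once, which Lightning's `accumulate_grad_batches` Trainer flag automates. Below is a minimal, framework-free sketch of only the step-scheduling logic; the function name and structure are illustrative, not Lightning's actual implementation.

```python
# Framework-free sketch of gradient-accumulation scheduling: the optimizer
# steps only every `accumulate_every` micro-batches (and on the final batch,
# so no gradients are dropped). Names here are illustrative.

def accumulate_steps(num_batches, accumulate_every):
    """Yield (batch_index, do_step) pairs; do_step is True when the
    optimizer should step after processing this micro-batch."""
    for i in range(num_batches):
        is_last = (i == num_batches - 1)
        do_step = ((i + 1) % accumulate_every == 0) or is_last
        yield i, do_step

# Example: 10 micro-batches, accumulate over 4 -> step after batches 3, 7, 9.
steps = [i for i, do_step in accumulate_steps(10, 4) if do_step]
print(steps)  # [3, 7, 9]
```

In a real training loop you would call `loss.backward()` on every micro-batch (scaling the loss by `1 / accumulate_every`) and call `optimizer.step()` plus `optimizer.zero_grad()` only when `do_step` is True.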
williamFalcon changed the title from "Use PyTorch Lightning to handle the training" to "Use PyTorch Lightning to handle the training (free checkpointing + logging + 16-bit precision)" on Aug 19, 2020.