Description
The model seems to be usable after epoch 7, but I would assume the results would improve with more iterations, right?
I'm running this on dual RTX A6000s, and this is the message I get at termination:
```
Saving latest checkpoint...
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/trainer/deprecated_api.py:32: LightningDeprecationWarning: `Trainer.train_loop` has been renamed to `Trainer.fit_loop` and will be removed in v1.6.
  rank_zero_deprecation(
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/trainer/deprecated_api.py:32: LightningDeprecationWarning: `Trainer.train_loop` has been renamed to `Trainer.fit_loop` and will be removed in v1.6.
  rank_zero_deprecation(
Global seed set to 23
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py:423: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
  rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
```
Is there a way to force it to train longer?
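For context, PyTorch Lightning ends a run once the Trainer's `max_epochs` (or `max_steps`) limit is reached, so if the script builds its `Trainer` from the config's `lightning.trainer` section (as the CompVis `latent-diffusion` `main.py` does), raising that value should extend training. A minimal sketch, assuming your config file already has a `lightning:` section (the `200` here is an arbitrary illustrative value, and the file path is hypothetical):

```yaml
# e.g. in configs/my_model.yaml (hypothetical path)
lightning:
  trainer:
    max_epochs: 200   # assumption: raise this above the epoch where training currently stops
```

Some training scripts also expose this directly on the command line (e.g. `--max_epochs 200`) when they register the Trainer's arguments via `Trainer.add_argparse_args`; whether that applies here depends on how this repo's entry point is written.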