
Textual Inversion training stops consistently at Epoch 7.  #493

@thelamedia

The model seems to be usable after epoch 7, but I would assume the results would improve with more iterations, right?
I'm running this on dual RTX A6000s, and this is the message I get at termination:

Saving latest checkpoint...
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/trainer/deprecated_api.py:32: LightningDeprecationWarning: `Trainer.train_loop` has been renamed to `Trainer.fit_loop` and will be removed in v1.6.
  rank_zero_deprecation(
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/trainer/deprecated_api.py:32: LightningDeprecationWarning: `Trainer.train_loop` has been renamed to `Trainer.fit_loop` and will be removed in v1.6.
  rank_zero_deprecation(
Global seed set to 23
/opt/conda/envs/ldm/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py:423: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
  rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]

Is there a way to force it to train longer?
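My guess is that the hard stop comes from a step or epoch cap that the training config passes to the Lightning Trainer, not from the data itself. As a purely illustrative sketch (the toy module, the numbers, and the flag values below are my own assumptions, not this repo's actual config), this is how such a cap behaves in PyTorch Lightning: `fit()` halts once `max_steps` is reached, so raising or removing that value should let training run longer.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class ToyModule(pl.LightningModule):
    """Stand-in for the real embedding model; only here to demonstrate the step cap."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Dummy data so the example runs end to end.
loader = DataLoader(TensorDataset(torch.randn(64, 4), torch.randn(64, 1)), batch_size=8)

# With max_steps set, fit() stops as soon as that many optimizer steps have run,
# regardless of how many epochs you expected. Increasing it (or relying on
# max_epochs alone) lets training continue past the early cutoff.
trainer = pl.Trainer(max_steps=100, max_epochs=1000)
trainer.fit(ToyModule(), loader)
```

If the repo's YAML config sets an equivalent cap under its trainer section, bumping or removing it there would presumably have the same effect, but I haven't confirmed where that value lives.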
