The quality is worse than in the DDPM paper, and according to a fellow researcher it is also worse than the lucidrains repo. Perhaps there is still a bug or a missing setting somewhere?
I thought you might have some training logs or loss curves from your run with the lucidrains repo, or from your training run here, but no worries if not :-)
Here are my CIFAR-10 32x32 results, trained for ~10 hours on 7x NVIDIA GeForce RTX 2080 Ti (11 GB VRAM each) with:

```shell
python3 -m torch.distributed.run --nproc_per_node 7 train_unconditional.py \
  --dataset="cifar10" --resolution=32 --output_dir="cifar10-ddpm-" \
  --batch_size=16 --num_epochs=100 --gradient_accumulation_steps=1 \
  --lr=1e-4 --warmup_steps=500
```
cifar10-ddpm.zip
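One thing that might partly explain the quality gap: the effective batch size of this run differs from the DDPM paper's setup (which, if I recall correctly, used batch size 128 for CIFAR-10). A quick sanity check using the flags from the command above:

```python
# Effective batch size of the distributed run above.
# Values taken from the launch command; the comparison target of 128
# is my recollection of the DDPM paper's CIFAR-10 setting.
nproc_per_node = 7      # --nproc_per_node
per_gpu_batch = 16      # --batch_size
grad_accum_steps = 1    # --gradient_accumulation_steps

effective_batch = nproc_per_node * per_gpu_batch * grad_accum_steps
print(effective_batch)  # 112
```

So the run trains with an effective batch of 112 rather than 128; bumping `--gradient_accumulation_steps` or `--batch_size` would be a cheap thing to try before hunting for a bug.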