
Training issues #3

Open · fupiao1998 opened this issue Apr 7, 2023 · 16 comments

Comments
@fupiao1998

Hi,
I found a file "landscape_linear1000_16x64x64_shiftT_window148_lr1e-4_ema_100000.pt" in the training script. Is this file the landscape.pt in the open-source model? I am looking forward to your answer, thank you very much.

@ludanruan
Collaborator

Sorry, it should be "guided-diffusion_64_256_upsampler.pt". README and scripts have been updated.

@fupiao1998
Author

Got it! Thanks so much for the quick reply!

@fupiao1998
Author

Also, I would like to know which file is the "landscape_linear1000_16x64x64_shiftT_window148_lr1e-4_ema_100000.pt" in multimodal_train.sh.

@ludanruan
Collaborator

Training the multimodal-generation model requires no initialization; the script has been updated.

@fupiao1998
Author

Thanks for your reply!

@fupiao1998
Author

Hello, sorry to bother you again. The supplementary material of the paper says training used 32 V100s with a batch size of 128, but the current open-source training script uses a batch size of 4 on a single GPU. Could you please provide the training script you used in your experiments? I look forward to your reply, thank you very much.

fupiao1998 reopened this Apr 9, 2023
@ludanruan
Collaborator

ludanruan commented Apr 10, 2023

The batch size is per GPU. For example, with "--devices 0,1,2,3,4,5,6,7" and "mpiexec -n 8 python ...", the total batch size is 4*8 = 32. Our training uses 4 nodes, i.e. 32 GPUs; you need to adapt the scripts to run across multiple nodes according to the requirements of your own cluster.
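As a sketch of the arithmetic above (variable names are illustrative; the flag names are taken from the script posted later in this thread, and the multi-node launch mechanism itself is cluster-specific and not shown):

```shell
#!/bin/bash
# Effective batch size = per-GPU batch size * number of GPU processes.
PER_GPU_BATCH=4      # the --batch_size value, applied per GPU
GPUS_PER_NODE=8
NUM_NODES=4          # the paper's setup: 4 nodes = 32 V100s

PER_NODE_BATCH=$((PER_GPU_BATCH * GPUS_PER_NODE))
TOTAL_BATCH=$((PER_NODE_BATCH * NUM_NODES))
echo "per-node batch: ${PER_NODE_BATCH}, total batch: ${TOTAL_BATCH}"
# prints: per-node batch: 32, total batch: 128

# Single-node launch (multi-node mpiexec/hostfile setup depends on your cluster):
# mpiexec -n ${GPUS_PER_NODE} python py_scripts/multimodal_train.py \
#   --batch_size ${PER_GPU_BATCH} --devices 0,1,2,3,4,5,6,7 ...
```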

@fupiao1998
Author

Got it, thank you!

@fupiao1998
Author

Hello, I am training on the AIST++ dataset with 8 A100 cards, each with a batch size of 12 (96 overall). After 10,000 training steps, the sampled video and audio are still pure noise, and I'm not sure why.
How long do you think it takes to converge to a reasonable result during training?
Below is my training script; is there any difference between this and your original script?

```bash
#!/bin/bash

#################256 x 256 uncondition###########################################################
MODEL_FLAGS="--cross_attention_resolutions 2,4,8 --cross_attention_windows 1,4,8
--cross_attention_shift True --dropout 0.1
--video_attention_resolutions 2,4,8
--audio_attention_resolutions -1
--video_size 16,3,64,64 --audio_size 1,25600 --learn_sigma False --num_channels 128
--num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True
--use_scale_shift_norm True --num_workers 12"

# Modify --devices to your own GPU IDs
TRAIN_FLAGS="--lr 0.0001 --batch_size 12
--devices 0,1,2,3,4,5,6,7 --log_interval 1 --save_interval 500 --use_db False" #--schedule_sampler loss-second-moment
DIFFUSION_FLAGS="--noise_schedule linear --diffusion_steps 1000 --save_type mp4 --sample_fn dpm_solver++"

# Modify the following paths to your own paths
DATA_DIR="/nvme/datasets/video_diffusion/AIST++_crop/train"
OUTPUT_DIR="debug"
NUM_GPUS=8

mpiexec -n $NUM_GPUS python py_scripts/multimodal_train.py --data_dir ${DATA_DIR} --output_dir ${OUTPUT_DIR} $MODEL_FLAGS $TRAIN_FLAGS $VIDEO_FLAGS $DIFFUSION_FLAGS
```

Looking forward to your reply, thank you!

@ludanruan
Collaborator

ludanruan commented Apr 11, 2023

In my experiments, meaningful results appear after around 50,000 steps.
I recommend setting --save_interval 10000 to save storage.
Set --sample_fn ddpm to test the intermediate checkpoints: when the model has not converged enough, accelerated sampling methods can produce worse results.

You can follow this advice and continue training from your current checkpoint.
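Applying that advice to the script posted earlier in this thread, the two affected flag groups would look roughly like this (a sketch, not the maintainers' exact script; all other flags stay unchanged):

```shell
#!/bin/bash
# Save checkpoints every 10000 steps instead of every 500, to save storage.
TRAIN_FLAGS="--lr 0.0001 --batch_size 12
--devices 0,1,2,3,4,5,6,7 --log_interval 1 --save_interval 10000 --use_db False"

# Sample intermediate checkpoints with plain ddpm; switch back to dpm_solver++
# only once the model has converged enough.
DIFFUSION_FLAGS="--noise_schedule linear --diffusion_steps 1000 --save_type mp4 --sample_fn ddpm"

echo "$TRAIN_FLAGS" | grep -c -- "--save_interval 10000"
echo "$DIFFUSION_FLAGS" | grep -c -- "--sample_fn ddpm"
```

How the training script resumes from an existing checkpoint is not shown in this thread, so it is left out here.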

@fupiao1998
Author

Thank you very much for your reply, I am sure it will help me in my experiment!

@aselimc

aselimc commented Jun 19, 2023

Hello @ludanruan, thanks for sharing this information. I was wondering what the average time in hours (or the average step time) is to get meaningful results on the Landscape or AIST++ datasets?

@ludanruan
Collaborator

ludanruan commented Jun 19, 2023 via email

@aselimc

aselimc commented Jun 19, 2023

@ludanruan Perhaps my question was not clear. I meant the average training time in hours/days to reach those 50,000 iterations with V100 GPUs (as I read in the paper).

Thanks, best

@ludanruan
Collaborator

ludanruan commented Jun 19, 2023 via email

@aselimc

aselimc commented Jun 19, 2023

Thank you for the information! I needed it since I am planning to do research on this :)
