Hello,
Thank you for this implementation. I have several questions:
I am trying to train StylEx on the BDD dataset (driving scenes), and I was wondering what the difference is between stylex_train_new and stylex_train. I saw that in the new code you set a lower learning rate for the encoder, which helped me stabilize the training and avoid large loss values.
I have another question regarding the encoder. You seem to have tested encoder architectures other than the discriminator architecture. Did you get better results with any of them? In my training, the encoder reconstructs almost the same image at each step. At first I thought it was mode collapse, but when I checked images generated without the encoder, they were more diverse.
In early training (iteration < 15k) I always get mode collapse. I am using a total batch size of 32 (4 GPUs) with gradient accumulation = 4 and img_size = 64. Do you have any advice on preventing mode collapse?
Generated images at iteration 3250 (not generated using encoder)
Is using a batch size of 8 and gradient accumulation of 4 equivalent to batch size of 32 and gradient accumulation = 1 in your implementation?
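For context on what I mean by "equivalent": with a per-sample-averaged loss, accumulating 4 micro-batch gradients (each scaled by 1/4) before the optimizer step produces the same update as one step on the full batch of 32. A minimal numpy sketch of that arithmetic (a toy linear model with MSE loss, not the actual training loop in this repo):

```python
import numpy as np

# Toy model y_hat = w * x with MSE loss averaged over the batch.
# dL/dw for a batch = mean(2 * (w*x - y) * x)
rng = np.random.default_rng(0)
x = rng.normal(size=32)
y = rng.normal(size=32)
w = 0.5

def grad(xb, yb, w):
    # gradient of the mean-squared error w.r.t. w, averaged over the micro-batch
    return np.mean(2.0 * (w * xb - yb) * xb)

# Batch size 32, gradient accumulation = 1: one gradient over the full batch.
g_full = grad(x, y, w)

# Batch size 8, gradient accumulation = 4: each micro-batch gradient is
# scaled by 1/accum before summing (the usual loss/accum trick).
accum = 4
g_accum = sum(grad(x[i * 8:(i + 1) * 8], y[i * 8:(i + 1) * 8], w) / accum
              for i in range(accum))

print(np.allclose(g_full, g_accum))  # the two schedules give the same gradient
```

My understanding is this equivalence only covers the accumulated gradient itself; batch-dependent operations (e.g. the discriminator seeing smaller real batches per forward pass, or any statistics computed per micro-batch) could still behave differently, which is why I'm asking how it works in your implementation specifically.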