Training Warp stage stops at epoch 3 #40

Open
phenomenal-manish opened this issue Jun 17, 2020 · 3 comments

@phenomenal-manish
Hi,

I ran train.py for the warp stage twice (python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion).
However, the training does not proceed beyond epoch 3. Could you help me with this issue?
I have attached screenshots for reference.

[Screenshots attached: IMG-20200617-WA0001, Capture]

@andrewjong
Owner

Hi! Sorry I'm not sure what the issue is. I haven't encountered this before.

@phenomenal-manish
Author

One thing I noticed is that when the loss values are exactly the same, the execution freezes. Have you used callbacks or anything that stops the training?
I went through the code but could not find anything like that.

@tuan-seoultech

> One thing I noticed is that when the loss values are exactly the same, the execution freezes. Have you used callbacks or anything that stops the training?
> I went through the code but could not find anything like that.

The identical loss values may just come from the print format %.3f.
Please try in visualizer.py, line 242: `message += '%s: %.3f '` --> if you increase the precision to %.6f, it may show the difference between iterations.
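
A minimal sketch of what I mean, with `epoch`, `iters`, and `losses` as illustrative stand-ins for the values the repo's visualizer already has (only the format string is the point):

```python
# Sketch of the suggested precision change in visualizer.py (around line 242).
# epoch, iters, and losses are placeholder values for this example only.
epoch, iters = 3, 100
losses = {'loss_G': 0.123456, 'loss_D': 0.123789}

message = '(epoch: %d, iters: %d) ' % (epoch, iters)
for name, value in losses.items():
    # original line: message += '%s: %.3f ' % (name, value)
    message += '%s: %.6f ' % (name, value)  # extra decimals reveal tiny changes
print(message)
```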

For your issue, maybe you can debug and check the values of opt.start_epoch + 1 and opt.n_epochs + 1 (the bounds of the training loop).
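
Here is a minimal sketch of that check; `opt` is a stand-in Namespace (the real one comes from the repo's option parser), and the loop bound is only an assumption based on the comment above:

```python
from argparse import Namespace

# Stand-in for the parsed options object; substitute the real `opt` from train.py.
opt = Namespace(start_epoch=0, n_epochs=3)

# If the loop really is range(opt.start_epoch + 1, opt.n_epochs + 1), then with
# n_epochs=3 it ends after epoch 3, which would match the reported behavior.
print('epochs the loop will run:', list(range(opt.start_epoch + 1, opt.n_epochs + 1)))
for epoch in range(opt.start_epoch + 1, opt.n_epochs + 1):
    print('starting epoch', epoch)
    # ... one epoch of training here ...
```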
