
Training a new TecoGAN model gives an increasing loss #38

Open
Liang-Rui opened this issue Dec 12, 2019 · 4 comments

@Liang-Rui

Hi,

Has anyone tried to train a new TecoGAN model? My training process gives an increasing loss; can anyone give me a hint by inspecting the log file?

[Eight screenshots of the training loss curves are attached.]

logfile.txt

@Feihong-cc

Excuse me, were you able to train this model? I ran into a problem where the VGG model can't be restored; have you encountered this problem?

@alessiapacca

Hey @Liang-Rui, how did you print those loss plots?

@FrankLinxzx

Hey @Liang-Rui, how did you print those loss plots?

1. Let a be the path to your checkpoint /log folder.
2. Run tensorboard --logdir=a --host=127.0.0.1, replacing a with the path from step 1.
3. Open http://127.0.0.1:6006/ (or whatever address TensorBoard prints in the terminal).
4. You will see the plots. Enjoy!

If you would rather pull the curves out in a script, see the sketch below.
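For reference, here is a minimal Python sketch of reading the loss scalars directly from a TensorBoard event file and plotting them. The log directory and tag name below are assumptions, not the actual names from this run; list ea.Tags() to see what your training actually logged.

```python
# Minimal sketch: read scalar losses from a TensorBoard event file and plot them.
# Assumptions: the event files live under "TecoGAN_log/" and the scalar tag is
# "discriminator_loss" -- check ea.Tags()["scalars"] for the tags your run wrote.
from tensorboard.backend.event_processing import event_accumulator
import matplotlib.pyplot as plt

logdir = "TecoGAN_log/"          # hypothetical path to the checkpoint/log folder
tag = "discriminator_loss"       # hypothetical scalar tag name

ea = event_accumulator.EventAccumulator(logdir)
ea.Reload()                      # load the events from disk
print(ea.Tags()["scalars"])      # inspect which scalar tags exist

events = ea.Scalars(tag)         # list of events with .wall_time, .step, .value
steps = [e.step for e in events]
values = [e.value for e in events]

plt.plot(steps, values)
plt.xlabel("step")
plt.ylabel(tag)
plt.savefig(tag + ".png")
```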

@santosh-shriyan

@Liang-Rui Did you figure out why there is an increase in loss? I'm facing the same problem.
