
How to put the data to evaluate the result? #4

Closed
cookie-ke opened this issue Jun 13, 2021 · 10 comments
@cookie-ke
Could you please tell me how to put the image data into a folder when I want to use FID, LPIPS, or IS? The train set and test set are split by ".pickle" files, and I don't know how to compute the evaluation metrics.

@wtliao
Owner

wtliao commented Jul 23, 2021

> Could you please tell me how to put the image data into a folder when I want to use FID, LPIPS, or IS? The train set and test set are split by ".pickle" files, and I don't know how to compute the evaluation metrics.

Sorry for the late reply. I hope you have solved it already, but I'll give an example here in case you haven't.

  • In IS.py, you need to set the argument --input_image_dir to point to the folder of your generated images.
  • In test_lpips.py, you need to set --orig_image_path to point to the original images. Note that you need to resize the original images to the same size as your generated images. Then set --generated_image_path to point to your generated images, as in IS.py.
  • For FID evaluation, please refer to https://github.com/bioinf-jku/TTUR; I used their implementation.
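The resizing step for LPIPS can be sketched like this; Pillow, the folder paths, and the 256x256 target size are my assumptions, not the repo's actual code:

```python
import os
from PIL import Image  # Pillow is assumed here; any image library works

def resize_originals(orig_dir, out_dir, size=(256, 256)):
    """Resize every original image to the generated-image resolution, so
    LPIPS compares images of equal size. Paths and size are hypothetical."""
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(orig_dir):
        if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue
        img = Image.open(os.path.join(orig_dir, name)).convert('RGB')
        img.resize(size).save(os.path.join(out_dir, name))
```

Then --orig_image_path would point at out_dir and --generated_image_path at the folder of generated images.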

@priyankaupadhyay090

priyankaupadhyay090 commented Jan 28, 2022

> Could you please tell me how to put the image data into a folder when I want to use FID, LPIPS, or IS? The train set and test set are split by ".pickle" files, and I don't know how to compute the evaluation metrics.

Hey, were you able to solve this? I would like some help with generating the images.

@cookie-ke
Author

> Hey, were you able to solve this? I would like some help with generating the images.

I just generated the original images during the test stage so I could compare them with the fake images to obtain the metric results.

@priyankaupadhyay090

> I just generated the original images during the test stage so I could compare them with the fake images to obtain the metric results.

Before generating the images, I have a question about the sampling() method of main.py. Can I contact you by email?

@cookie-ke
Author

> Before generating the images, I have a question about the sampling() method of main.py. Can I contact you by email?

You can contact me through GitHub if I can help you. I am on vacation at home right now and my experiment code is in the school laboratory.

@priyankaupadhyay090

priyankaupadhyay090 commented Jan 28, 2022

When I run main.py, I get this:

```
Total filenames: 11788 001.Black_footed_Albatross/Black_Footed_Albatross_0046_18.jpg
Load filenames from: data/birds/train/filenames.pickle (8855)
Load filenames from: data/birds/test/filenames.pickle (2933)
Load from: data/birds/captions.pickle
5450 10
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/rnn.py:61: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
Downloading: "https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth" to /root/.cache/torch/hub/checkpoints/inception_v3_google-1a9a5a14.pth
100%|██████████| 104M/104M [00:01<00:00, 64.4MB/s]
Load pretrained inception v3 model
Load image encoder from: DAMSMencoders/bird/image_encoder200.pth
Traceback (most recent call last):
  File "main.py", line 547, in <module>
    sampling(text_encoder, netG, dataloader, ixtoword, device)  # generate images for the whole valid dataset
  File "main.py", line 76, in sampling
    start_epoch = int(cfg.TRAIN.NET_G[istart:iend])
ValueError: invalid literal for int() with base 10: ''
```

This is the sampling() method in main.py:

```python
def sampling(text_encoder, netG, dataloader, ixtoword, device):
    model_dir = cfg.TRAIN.NET_G
    istart = cfg.TRAIN.NET_G.rfind('_') + 1
    iend = cfg.TRAIN.NET_G.rfind('.')
    start_epoch = int(cfg.TRAIN.NET_G[istart:iend])
```

The line `start_epoch = int(cfg.TRAIN.NET_G[istart:iend])` is the one raising the error, and it reads NET_G from the bird.yml file.
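A minimal sketch of what that slicing does, which also shows why the default empty NET_G fails (the example path is just one of the checkpoints mentioned in this thread, not a confirmed answer):

```python
def parse_epoch(net_g_path):
    # Mirrors the slicing in sampling(): take the digits between the
    # last '_' and the last '.' of the checkpoint path.
    istart = net_g_path.rfind('_') + 1   # rfind returns -1 when missing, so istart == 0
    iend = net_g_path.rfind('.')         # also -1 when missing
    return int(net_g_path[istart:iend])  # int('') raises ValueError for an empty path

# A checkpoint path ending in _<epoch>.pth parses cleanly:
parse_epoch('trained_model/fixed/cub/netG_590.pth')  # 590
# The default NET_G: '' slices down to '' and raises ValueError, matching the traceback.
```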

This is the cfg/bird.yml file where "NET_G" is defined:

```yaml
TRAIN:
  NF: 64 # default 64
  BATCH_SIZE: 24 #24
  MAX_EPOCH: 600
  NET_G: '' # when validation, put the path of the trained model here
  WARMUP_EPOCHS: 100
  GSAVE_INTERVAL: 10
  DSAVE_INTERVAL: 10
```
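Given the slicing in sampling() above, NET_G presumably needs a generator checkpoint path whose filename ends in `_<epoch>.pth`; a guess based on the fixed-model path mentioned in this thread, not a confirmed answer:

```yaml
TRAIN:
  NET_G: 'trained_model/fixed/cub/netG_590.pth'  # would parse start_epoch = 590
```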

Which trained model path should go in NET_G? We have two trained-model folders: fixed.zip and finetune.zip.

Should it be the fixed trained model:

NET_G: 'trained_model/fixed/cub/netG_590.pth'

or the finetune trained model:

NET_G: 'trained_model/finetune/cub/which_file'

where which_file is one of 4 models: netG_550.pth, netD_550.pth, image_encoder_550.pth, text_encoder_550.pth?

If we are choosing the finetune model, which of the above 4 models should I choose?

Sorry for the long message.

@cookie-ke
Author

> When I run main.py, I get this: […]

Sorry, I've never had that problem.

@priyankaupadhyay090

priyankaupadhyay090 commented Jan 28, 2022

> NET_G: '' # when validation, put the path of the trained model here

Okay, can you let me know which path you passed to NET_G in the bird.yml file?

@cookie-ke
Author

> Okay, can you let me know which path you passed to NET_G in the bird.yml file?

I'm not using this dataset, so it may be structurally inconsistent.

@priyankaupadhyay090

> I'm not using this dataset, so it may be structurally inconsistent.

thank you :)

@wtliao wtliao closed this as completed Mar 28, 2022