
Problem in running the evaluation script #10

Closed · VIROBO-15 opened this issue Jul 17, 2023 · 13 comments

VIROBO-15 commented Jul 17, 2023

I still have a problem running the evaluation script.

After a certain number of iterations, the code gets stuck:

Saving to /home/mbzuaiser/gill/gill_vist_outputs/514809043.png████████████████████████████████████████████▉ | 49/50 [00:02<00:00, 17.78it/s]100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:02<00:00, 17.43it/s]
Saving to /home/mbzuaiser/gill/gill_vist_outputs/514808431.png ████████████████████████████████████████████▉ | 49/50 [00:02<00:00, 17.76it/s]
6%|█████▍ | 279/4990 [51:08<10:47:19, 8.24s/it]

This happens in both the VIST and VisDial cases.
Following the solution you gave earlier, I have added an except for OSError, but I am still facing the same issue.

It would be great if you could help me with this.

@kohjingyu (Owner)

What's the error message here? (If there is none, what does it say when you exit with Ctrl+C?)

VIROBO-15 (Author) commented Jul 17, 2023

There isn't any error message. And if I do Ctrl+C, this is displayed:

1%|▉ | 16/2064 [5:48:32<743:32:32, 1307.01s/it]
Traceback (most recent call last):
File "evals/generate_visdial_images.py", line 70, in
return_outputs = model.generate_for_images_and_texts(
File "/home/mbzuaiser/gill/gill/models.py", line 688, in generate_for_images_and_texts
img = utils.get_image_from_url(self.path_array[img_idx])
File "/home/mbzuaiser/gill/gill/utils.py", line 27, in get_image_from_url
response = requests.get(url)
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "/home/mbzuaiser/gill/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/home/mbzuaiser/anaconda3/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/home/mbzuaiser/anaconda3/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/home/mbzuaiser/anaconda3/lib/python3.8/http/client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/home/mbzuaiser/anaconda3/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/home/mbzuaiser/anaconda3/lib/python3.8/ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "/home/mbzuaiser/anaconda3/lib/python3.8/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
KeyboardInterrupt

@kohjingyu (Owner)

I see, it looks like it's having trouble reading one of the URLs for image retrieval. Could you try pulling from HEAD (c7de07a) and seeing if it works? I've disabled loading of retrieval embeddings by default since we don't need them for evals.
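For context, the traceback above shows the process blocked inside `requests.get`, which has no timeout by default, so a stalled connection can hang forever and an `except OSError` never fires. Below is a minimal sketch of a timeout-guarded fetch, shaped like `get_image_from_url` in `gill/utils.py` (illustrative only, not the repo's actual code):

```python
# Illustrative sketch; gill/utils.py's real get_image_from_url may differ.
import io

import requests
from PIL import Image

def get_image_from_url(url: str, timeout: float = 10.0) -> Image.Image:
    # Without `timeout`, requests.get can block indefinitely on a dead socket,
    # which matches the hang shown in the traceback above.
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()  # surface HTTP errors instead of hanging silently
    return Image.open(io.BytesIO(response.content)).convert('RGB')
```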

@VIROBO-15 (Author)

For VisDial I am getting this error:

Traceback (most recent call last):
File "evals/generate_visdial_images.py", line 27, in
model = models.load_gill('checkpoints/gill_opt/', load_ret_embs=False)
File "/home/mbzuaiser/gill/gill/models.py", line 895, in load_gill
emb_matrix = torch.tensor(emb_matrix, dtype=logit_scale.dtype).to(logit_scale.device)
TypeError: must be real number, not NoneType

@kohjingyu (Owner)

Sorry, this should be fixed with d85ad06. Not sure why I didn't catch it when I ran the eval earlier.
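For context, the TypeError above comes from `emb_matrix` being `None` when `load_ret_embs=False`. The fix presumably guards that conversion, along these lines (an assumption; see commit d85ad06 for the actual change):

```python
# Sketch of the likely guard in load_gill (gill/models.py); d85ad06 has the real fix.
if emb_matrix is not None:
    emb_matrix = torch.tensor(emb_matrix, dtype=logit_scale.dtype).to(logit_scale.device)
```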

VIROBO-15 (Author) commented Jul 18, 2023

Thank you for the help, @kohjingyu.

How many epochs did you train for to get the final results reported in the paper? I am not able to reproduce the numbers given in the table. Or is there some other issue that could cause this? Also, what image size did you use for calculating the LPIPS score, and which resize operation did you use (cv2, F.interpolate, or PIL resize)?

LPIPS score (VIST) - reproduced: 0.7314
LPIPS score (VisDial) - reproduced: 0.7811

CLIP score (VIST) - reproduced: 0.64018
CLIP score (VisDial) - reproduced: 0.64401

@kohjingyu (Owner)

Was this a model you trained yourself? The models we released were trained as follows:

[Screenshot: training hyperparameters table from the paper, highlighting 20k iterations with a batch size of 200]

> what is the image size have you used for calculating the LPIPS score

Since the CLIP scores you have are similar to those of the paper, it seems like the issue might be with resizing for LPIPS. We have to resize them to 256x256 since the model being used is AlexNet. We used the torchvision resize for this:

import torchvision
# Resize to 256x256 (AlexNet's input size) before computing LPIPS:
img0 = torchvision.transforms.functional.resize(img0, (256, 256), antialias=True)
img1 = torchvision.transforms.functional.resize(img1, (256, 256), antialias=True)
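Putting that together, here is a minimal, self-contained sketch of an LPIPS computation at 256x256 with the AlexNet backbone, assuming the `lpips` package and PIL image files (an illustration, not the repo's exact eval script):

```python
# Minimal LPIPS sketch; assumes `pip install lpips`. Not the repo's exact eval code.
import lpips
import torch
import torchvision.transforms.functional as TF
from PIL import Image

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, matching the setup above

def lpips_score(path0: str, path1: str) -> float:
    imgs = []
    for path in (path0, path1):
        img = TF.to_tensor(Image.open(path).convert('RGB'))  # float in [0, 1], (3, H, W)
        img = TF.resize(img, (256, 256), antialias=True)     # AlexNet's expected size
        imgs.append(img.unsqueeze(0) * 2 - 1)                # lpips expects [-1, 1]
    with torch.no_grad():
        return loss_fn(imgs[0], imgs[1]).item()
```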

@VIROBO-15 (Author)

Thank you for helping me out. I have obtained numbers equivalent to those reported in the paper.
Could you please also let me know how to reproduce Table 3, Table 4, and Table 5 from the paper?

@kohjingyu (Owner)

Those tables are mostly ablation results and we probably won't be releasing the scripts for those. For the contextual image retrieval eval, you can refer to the FROMAGe repo for instructions.

@avipartho

@kohjingyu How many iterations did you have per epoch? (you highlighted 20k iterations with a batch size of 200) Was it 200 iterations/epoch for a total of 100 epochs?

@kohjingyu (Owner)

> @kohjingyu How many iterations did you have per epoch? (you highlighted 20k iterations with a batch size of 200) Was it 200 iterations/epoch for a total of 100 epochs?

The epoch count doesn't really matter since the data is randomly shuffled; I think it only affects how often the evals are run. I think I used 2000 iterations/epoch for 10 epochs, but in principle iterations × batch_size is the only quantity that affects the final results (i.e., the model should see ~4M image-text pairs). Hope that makes sense!
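(Concretely: 2,000 iterations/epoch × 10 epochs × 200 examples/batch = 4,000,000 image-text pairs seen in total, or 400k per epoch.)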

avipartho commented Jul 23, 2023

Thanks for your answer. I was also trying to figure out the number of image-text pairs you used for each epoch of training. In this setup, your model saw 400k randomly selected image-text pairs from the training set in each epoch, right?

@kohjingyu (Owner)

That’s correct.
