
Hi, can I ask some questions about your paper? (●'◡'●) #2

Open
jun0wanan opened this issue Oct 11, 2020 · 4 comments

jun0wanan commented Oct 11, 2020

Thank you very much for this extraordinary work!

I'm very interested in your work and I hope to follow in your footsteps (●'◡'●)

Can your code run on multiple GPUs?

jun0wanan (Author)

Hoping to receive your reply.

BigRedT (Owner) commented Oct 14, 2020

Hi @jun0wanan,

In its current form, our code base only supports single-GPU training. Part of the challenge in supporting multi-GPU training is the contrastive loss, which requires each caption to be compared to all other images in the mini-batch. Note that this is different from typical classification tasks, where an image and its label are sufficient to compute the loss for that sample, so it is easy to partition the batch and place each partition on a separate GPU.

One solution that might work reasonably well is to only use images placed on the same GPU as negatives for the contrastive loss. For example, if the batch size is 100 and you have 4 GPUs, each GPU handles a subset of size 25. So instead of using 99 images as negatives for each caption, you would be using 24.

Hope this helps!
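For concreteness, here is a rough sketch of what that per-GPU loss could look like. This is illustrative only, not our actual code: the names `local_contrastive_loss`, `caption_feats`, `image_feats`, and the temperature value are placeholders.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(caption_feats, image_feats, temperature=0.1):
    """InfoNCE-style loss computed within one GPU's shard of the batch.

    Each caption is matched against only the images on the same GPU,
    so with a global batch of 100 split across 4 GPUs, every caption
    sees 24 negatives instead of 99.
    """
    # Normalize so the dot products below are cosine similarities
    caption_feats = F.normalize(caption_feats, dim=-1)
    image_feats = F.normalize(image_feats, dim=-1)

    # [shard_size, shard_size] similarities; the diagonal entries are
    # the positive (caption, image) pairs for this shard
    logits = caption_feats @ image_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```

With DistributedDataParallel and a DistributedSampler, each GPU already receives its own shard of the batch, so a loss like this drops in without any cross-GPU communication.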


jun0wanan commented Oct 27, 2020


Hi author~
I think I found a small error in one of the .py files? Or maybe I just didn't understand your setup...

In https://github.com/BigRedT/info-ground/blob/master/exp/ground/run/eval_flickr_phrase_loc_model_selection.py

```python
model_nums = find_all_model_numbers(exp_const.model_dir)
for num in model_nums:
    continue
    if num <= 3000:
        continue
    model_const.model_num = n
```

Why is there a `continue` at the top of the loop body? It skips every iteration, so the model-selection code below it never runs.
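For comparison, here is what I would expect the loop to look like if that `continue` were removed. This is just my reading of the intent, with `num` in place of `n`:

```python
model_nums = find_all_model_numbers(exp_const.model_dir)
for num in model_nums:
    # Skip early checkpoints; only evaluate models saved after step 3000
    if num <= 3000:
        continue
    model_const.model_num = num
    # ... evaluate this checkpoint ...
```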

jun0wanan (Author)


Hi, I also find that my model's performance decreases a lot during training:

[two attached screenshots]
