How to use Pre-trained Models? #35

Closed
webYFDT opened this issue Jun 6, 2020 · 5 comments

webYFDT commented Jun 6, 2020

I have a dataset where each record looks like this: <text>,<image>,<label>. It is a simple classification task. I have the following questions and hope you can answer them:
1. Can I use this pre-trained model for fine-tuning? If so, how do I do it?
2. How do I load my dataset into the model?
3. How do I get the output of the model?
Thank you for answering my questions.

@webYFDT webYFDT closed this as completed Jun 6, 2020
@webYFDT webYFDT reopened this Jun 6, 2020
jackroos (Owner) commented Jun 6, 2020

Yes, you can fine-tune VL-BERT for your task by simply adding a classification head on top of the output feature of the first token ([CLS]) and fine-tuning it together with VL-BERT.
For how to load data and conduct fine-tuning, you can follow our code for the downstream tasks (e.g., VQA, RefCOCO+, etc.).
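
For reference, here is a minimal sketch of such a classification head in PyTorch. The backbone interface, its argument names, and the hidden size are assumptions for illustration, not the actual API of this repository:

```python
import torch
import torch.nn as nn

class VLBERTClassifier(nn.Module):
    """Hypothetical wrapper: a linear classification head on top of the
    output feature of the first token ([CLS]) of a VL-BERT-like backbone."""

    def __init__(self, backbone, hidden_size=768, num_labels=2):
        super().__init__()
        self.backbone = backbone              # pre-trained VL-BERT (assumed interface)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, text_ids, boxes, box_features, labels=None):
        # Assumed: the backbone returns hidden states of shape (batch, seq_len, hidden).
        hidden_states = self.backbone(text_ids, boxes, box_features)
        cls_feature = hidden_states[:, 0]     # first token = [CLS]
        logits = self.classifier(self.dropout(cls_feature))
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
            return loss, logits
        return logits
```

During fine-tuning, both the new head and the backbone parameters are updated together, as described above.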

@faizanahemad

@jackroos Would this work even if I have no precomputed features/boxes for the images? I peeked inside vqa/data/datasets/vqa.py and it seems we need to have the precomputed features.

@webYFDT Did you make it work?

jackroos (Owner) commented Jun 23, 2020

@faizanahemad We need precomputed boxes in VL-BERT. You can use a pre-trained Faster R-CNN to compute the boxes, following our instructions for preparing the Conceptual Captions dataset.
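
To illustrate the kind of per-image record the downstream dataset loaders expect, here is a small sketch of saving precomputed detector outputs. The file layout and field names below are assumptions for illustration only; the actual format is defined by the Conceptual Captions preparation scripts:

```python
import numpy as np

def save_precomputed_features(image_id, boxes, features, image_w, image_h, out_path):
    """Hypothetical helper: store per-image detector outputs so they can be
    loaded at training time instead of running the detector on the fly.

    boxes:    float32 array of shape (num_boxes, 4), (x1, y1, x2, y2) in pixels
    features: float32 array of shape (num_boxes, feat_dim), e.g. 2048-d RoI features
    """
    np.savez(
        out_path,
        image_id=image_id,
        image_w=image_w,
        image_h=image_h,
        num_boxes=boxes.shape[0],
        boxes=boxes.astype(np.float32),
        features=features.astype(np.float32),
    )
```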

@faizanahemad

@jackroos Thanks Jack, I do have a pre-trained Faster R-CNN setup. It is based on the same caffe model you reference in the instructions. I generate 100 boxes with nms=0.5 and confidence_threshold=0.2.
Q1. Would it be possible for you to post one image and its generated boxes and features here, so I can verify that my setup works?

Q2. Also, I believe I need to use the same Faster R-CNN as you did if I want to use the pretrained models; changing the detector to SSD or any other detector will not work without retraining.

jackroos (Owner) commented Jul 5, 2020

@faizanahemad Sorry for the late reply. For examples of an image and its generated boxes, you can refer to the caffe bottom-up-attention repo; it provides pre-computed boxes and features for COCO images. As for your second question, the answer is yes: you need to use the same detector (the Faster R-CNN used in VL-BERT) to extract visual features.
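
The COCO features from the bottom-up-attention repo are distributed as TSV files with base64-encoded numpy buffers. Below is a small reader sketch; the column names follow that repo's documented convention, but verify them against the files you actually download:

```python
import base64
import csv
import sys

import numpy as np

# Column layout used by the bottom-up-attention TSV files (verify against your download).
FIELDNAMES = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]
csv.field_size_limit(sys.maxsize)  # rows contain large base64 blobs

def read_bottom_up_tsv(path):
    """Yield (image_id, boxes, features) per image from a bottom-up-attention-style TSV.
    boxes: (num_boxes, 4) float32; features: (num_boxes, feat_dim) float32."""
    with open(path, "r") as f:
        for row in csv.DictReader(f, delimiter="\t", fieldnames=FIELDNAMES):
            num_boxes = int(row["num_boxes"])
            boxes = np.frombuffer(
                base64.b64decode(row["boxes"]), dtype=np.float32
            ).reshape(num_boxes, 4)
            features = np.frombuffer(
                base64.b64decode(row["features"]), dtype=np.float32
            ).reshape(num_boxes, -1)
            yield row["image_id"], boxes, features
```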
