How to use Pre-trained Models? #35
Yes, you can fine-tune VL-BERT for your task by simply adding a classification head on top of the output feature of the first token [CLS] and fine-tuning it together with VL-BERT.
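For what such a classification head might look like, here is a minimal PyTorch sketch. The `backbone` wrapper, the forward signature, and `hidden_size=768` are assumptions for illustration, not the repo's actual interface:

```python
import torch.nn as nn

class VLBERTClassifier(nn.Module):
    """Minimal sketch: a linear classification head on the [CLS] feature.

    `backbone` stands in for the pretrained VL-BERT model; its forward is
    assumed to return a sequence of hidden states with [CLS] at index 0.
    """
    def __init__(self, backbone, hidden_size=768, num_classes=2):
        super().__init__()
        self.backbone = backbone
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, text_ids, image_features, boxes):
        # (batch, seq_len, hidden) -- exact signature depends on the repo
        hidden_states = self.backbone(text_ids, image_features, boxes)
        cls_feature = hidden_states[:, 0]  # output feature of the first token [CLS]
        return self.classifier(cls_feature)
```

Fine-tuning "together with VL-BERT" then just means passing all parameters (backbone and head) to the optimizer rather than freezing the backbone.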
@faizanahemad We need precomputed boxes in our VL-BERT. You can use a pre-trained Faster R-CNN to compute the boxes, following our instructions on preparing the Conceptual Captions dataset.
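As an illustration of the precomputation step only, here is a sketch using torchvision's COCO-pretrained Faster R-CNN. Note this is not the detector the authors used (as the reply below states, the released checkpoints require the same detector that produced the pre-training features), so the boxes it yields are purely for demonstration:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Illustration only: torchvision's COCO-pretrained Faster R-CNN, not the
# bottom-up-attention detector the VL-BERT checkpoints actually expect.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

image = to_tensor(Image.open("example.jpg").convert("RGB"))
with torch.no_grad():
    prediction = detector([image])[0]

# Keep confident detections as the precomputed boxes.
keep = prediction["scores"] > 0.5
boxes = prediction["boxes"][keep]  # (N, 4) in (x1, y1, x2, y2)
```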
@jackroos Thanks Jack, I do have a pre-trained Faster R-CNN set up. This relates to my second question as well: I believe I need to use the same Faster R-CNN you used if I want to use the pretrained models, and swapping it for SSD or any other detector will not work without retraining.
@faizanahemad Sorry for the late reply. For examples of images and generated boxes, you can refer to the caffe bottom-up-attention repo, which provides pre-computed boxes and features for COCO images. And for your second question, the answer is yes: you need to use the same detector as the one used to extract the visual features fed into VL-BERT.
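For reference, the bottom-up-attention releases distribute those COCO boxes and features as TSV files with base64-encoded arrays. The field layout and 2048-d feature size below follow that repo's documented format; treat this reader as a sketch, assuming that layout:

```python
import base64
import csv
import sys

import numpy as np

csv.field_size_limit(sys.maxsize)

# Column layout used by the bottom-up-attention TSV releases (assumed here).
FIELDS = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]

def read_tsv(path):
    """Yield (image_id, boxes, features) from a bottom-up-attention TSV file."""
    with open(path) as f:
        for row in csv.DictReader(f, delimiter="\t", fieldnames=FIELDS):
            n = int(row["num_boxes"])
            boxes = np.frombuffer(
                base64.b64decode(row["boxes"]), dtype=np.float32
            ).reshape(n, 4)
            features = np.frombuffer(
                base64.b64decode(row["features"]), dtype=np.float32
            ).reshape(n, 2048)
            yield row["image_id"], boxes, features
```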
I have a dataset in which each record looks like this: <text>, <image>, <label>. It is a simple classification task, and I have the following questions (a rough sketch covering all three is given after the list):
1. Can I fine-tune this pre-trained model? If so, how?
2. How do I load my dataset into the model?
3. How do I get the output of the model?
Thank you for answering my questions.
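Putting the replies together, here is a sketch addressing all three questions under stated assumptions: `tokenize` and `load_precomputed` are hypothetical helpers (the latter could wrap the TSV reader above), and the record format follows the <text>,<image>,<label> layout described in the question.

```python
import csv

from torch.utils.data import Dataset

class TextImageLabelDataset(Dataset):
    """Sketch of a Dataset for <text>,<image>,<label> records.

    `tokenize` and `load_precomputed` are hypothetical stand-ins: the first
    maps text to token ids, the second returns the precomputed boxes and
    visual features for an image.
    """
    def __init__(self, csv_path, tokenize, load_precomputed):
        with open(csv_path) as f:
            self.records = list(csv.reader(f))
        self.tokenize = tokenize
        self.load_precomputed = load_precomputed

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        text, image_path, label = self.records[idx]
        token_ids = self.tokenize(text)
        boxes, features = self.load_precomputed(image_path)
        return token_ids, features, boxes, int(label)
```

For the third question: wrap the dataset in a `DataLoader`, forward batches through a classification head like the one sketched earlier, and take `logits.argmax(-1)` to get predicted labels.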