Fine-tuning SPECTER? #20

Open
jacklxc opened this issue Dec 19, 2020 · 3 comments

jacklxc commented Dec 19, 2020

  1. Is there any way to fine-tune directly on SPECTER instead of training from SciBERT? It seems that the format of SPECTER's model weights is different from SciBERT's.

  2. How do I fine-tune SPECTER on classification tasks, the way SciBERT is fine-tuned?

@ZzyChris97

I have the same problem. How did you solve it?

@armancohan
Collaborator

The model on the Hugging Face Hub should be easy to fine-tune, just like SciBERT.
You can follow the instructions here: https://huggingface.co/docs/transformers/training, but use allenai/specter instead of bert-base-uncased as the pre-trained model name.
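
For reference, a minimal sketch of what that looks like with the Trainer API, following the linked tutorial with allenai/specter swapped in (the dataset, num_labels, and training hyperparameters below are illustrative placeholders, not anything prescribed by the SPECTER repo):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load SPECTER from the Hub; the classification head is freshly
# initialized, so it needs to be trained on your task.
tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/specter", num_labels=5
)

# Placeholder dataset from the tutorial; substitute your own task data.
dataset = load_dataset("yelp_review_full")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="specter-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```

Loading allenai/specter from the Hub this way also sidesteps the weight-format difference from question 1, since that checkpoint is already in the standard transformers format.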


gabriead commented Jan 13, 2023

What does a custom training dataset have to look like? I understand from the repo that metadata.json contains title + abstract + id for each paper, but I don't understand what data.json does. Does it hold the positive and negative example papers for each paper?
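
A sketch of the assumed layout of the two files, as far as one can tell from the repo (the key names and the semantics of count are assumptions to verify against the repo's data-preprocessing docs):

```python
# Assumed layout of the SPECTER training files; verify field names and
# count semantics against the repo's own documentation.

# metadata.json: paper_id -> fields used to build each paper's input text.
metadata = {
    "p1": {"paper_id": "p1", "title": "Paper one title", "abstract": "..."},
    "p2": {"paper_id": "p2", "title": "Paper two title", "abstract": "..."},
    "p3": {"paper_id": "p3", "title": "Paper three title", "abstract": "..."},
}

# data.json: query paper_id -> candidate paper_ids with a relatedness count.
# The triplet generation appears to derive positives and hard negatives from
# these counts (roughly: a high count marks a direct citation / positive, a
# low count a citation-of-citation / hard negative); papers absent from a
# query's dict can be sampled as easy negatives.
data = {
    "p1": {
        "p2": {"count": 5},  # assumed: directly cited -> positive candidate
        "p3": {"count": 1},  # assumed: citation of citation -> hard negative
    },
}
```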
