
Request for pre-fine-tuning self-att weights #108

Closed

carlos-gemmell opened this issue Feb 3, 2020 · 1 comment

Comments

@carlos-gemmell

Hi

I would like to obtain the self-attention model weights before any fine-tuning was done to them.
Does this link from the leaderboard contain such a weights file? If it is the fine-tuned one, could you make the base self-att model for Python available?

My intention is to pass these weights to a PyTorch model that benefits from contextual word embeddings. Any information about the structure of the weights file would be appreciated.
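For context, here is a minimal sketch of what I have in mind on the PyTorch side, assuming the weights could be exported as a pickled mapping from parameter names to NumPy arrays (the file name and export format below are placeholders on my part, not the repository's actual format):

```python
import pickle

import numpy as np
import torch

# Hypothetical: assumes the self-attention weights were exported as a
# pickled dict mapping parameter names to NumPy arrays.
with open("self_att_weights.pkl", "rb") as f:
    weights = pickle.load(f)

# Inspect the structure of the weights file: parameter names, shapes, dtypes.
for name, array in weights.items():
    array = np.asarray(array)
    print(f"{name}: shape={array.shape}, dtype={array.dtype}")

# Convert to PyTorch tensors so they can eventually be loaded into a model
# via load_state_dict (after mapping names onto the PyTorch module's keys).
state_dict = {
    name: torch.from_numpy(np.asarray(array)) for name, array in weights.items()
}
```

Even just knowing the parameter names and tensor shapes would let me map them onto my PyTorch module's state_dict keys.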

Thank you

Carlos

@hamelsmu
Contributor

hamelsmu commented Feb 9, 2020

Hello Carlos, we did not use any pre-training / fine-tuning approaches. This is one of the areas of opportunity we expressly identified for future researchers in our paper.

We would love to see what kind of results you get by applying this approach! Please let us know how it goes!

@hamelsmu hamelsmu closed this as completed Feb 9, 2020