
Siamese Multi-depth Transformer-based Hierarchical Encoder #9526

Open
3 tasks done
lalitpagaria opened this issue Jan 11, 2021 · 3 comments
Labels: Feature request, New model

Comments

@lalitpagaria
Contributor

🌟 New model addition

Model description

Google recently published a paper titled "Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching". According to the paper, the SMITH model outperforms the previous state-of-the-art models for long-form document matching, including hierarchical attention, the multi-depth attention-based hierarchical recurrent neural network, and BERT.

I feel it will add value to the already awesome collection of transformers models 🙂

Open source status

@lalitpagaria
Contributor Author

Linking Haystack issue deepset-ai/haystack#719

@Engineering-Geek

Frequent user of Hugging Face here; I'm a fan of this new publication and would love to see it implemented. Commenting here for the GitHub algorithm to ++

@ChanCheeKean

Hi all, rather than waiting for the implementation in huggingface, is there a simple way to utilize the pretrained model from the smith repo on our own dataset (to generate document embeddings)?
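As a stopgap, here is a minimal sketch of one way to approximate block-wise document embeddings. It does not use the SMITH model or the smith repo's code at all; it splits a long document into fixed-size blocks, encodes each block with a standard BERT checkpoint from the transformers library, and mean-pools the block vectors. The checkpoint name, `block_size`, and the pooling choice are all illustrative assumptions, not part of the SMITH approach.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical stand-in: NOT the SMITH model, just a rough block-wise
# approximation using a vanilla BERT checkpoint (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def document_embedding(text, block_size=256):
    # Tokenize the whole document without truncation, then split into blocks.
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    blocks = [token_ids[i:i + block_size]
              for i in range(0, len(token_ids), block_size)]
    block_vectors = []
    with torch.no_grad():
        for block in blocks:
            # Re-add [CLS]/[SEP] so each block looks like a normal BERT input.
            ids = torch.tensor([tokenizer.build_inputs_with_special_tokens(block)])
            out = model(input_ids=ids)
            block_vectors.append(out.last_hidden_state[:, 0, :])  # [CLS] vector
    # Mean-pool the block vectors into a single document embedding.
    return torch.cat(block_vectors, dim=0).mean(dim=0)

vec = document_embedding("some very long document text ... " * 200)
print(vec.shape)  # torch.Size([768])
```

This only mimics the general "encode blocks, then combine" idea; the actual SMITH model learns the block-level combination with a second transformer, so results from this sketch should not be taken as representative of the paper.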
