Frequent user of Hugging Face here. I'm a fan of this new publication and would love to see it implemented. Commenting here for the GitHub algorithm to ++
Hi all, rather than waiting for the implementation in Hugging Face, is there a simple way to use the pretrained model from the SMITH repo on our own dataset (to generate document embeddings)?
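Not an official answer, but while waiting for a port, one stopgap is to approximate SMITH's two-level idea with a stock Hugging Face BERT: encode the document block by block, then pool the block embeddings into one document vector. A minimal sketch (the `embed_document` name and the fixed word-window splitting are my own choices, and mean pooling stands in for SMITH's learned document-level Transformer):

```python
# Rough approximation of SMITH's hierarchy with vanilla BERT: encode each
# block of the document separately, then pool the block embeddings into one
# document embedding. Not the pretrained SMITH model, just a stopgap.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_document(text: str, block_words: int = 150) -> torch.Tensor:
    # Fixed word windows stand in for SMITH's sentence blocks.
    words = text.split()
    blocks = [" ".join(words[i:i + block_words])
              for i in range(0, len(words), block_words)]
    block_embs = []
    with torch.no_grad():
        for block in blocks:
            inputs = tokenizer(block, return_tensors="pt",
                               truncation=True, max_length=512)
            out = model(**inputs)
            block_embs.append(out.last_hidden_state[:, 0])  # [CLS] per block
    # SMITH runs a second Transformer over the block embeddings; mean
    # pooling is a crude stand-in for that document-level encoder.
    return torch.cat(block_embs, dim=0).mean(dim=0)

doc_emb = embed_document("some long document text " * 500)
print(doc_emb.shape)  # torch.Size([768])
```

This won't match the quality of the actual pretrained SMITH checkpoints, but it gives you fixed-size embeddings for arbitrarily long documents today.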
🌟 New model addition
Model description
Google recently published a paper titled "Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching". According to the paper, the SMITH model outperforms previous state-of-the-art models for long-form document matching, including hierarchical attention, the multi-depth attention-based hierarchical recurrent neural network, and BERT.
I feel it will add value to the already awesome collection of Transformers models 🙂
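For anyone unfamiliar with the matching side of the paper: the siamese setup amounts to encoding both documents with the same encoder and comparing the resulting embeddings. A hedged sketch building on the `embed_document` helper from the comment above (cosine similarity is my stand-in for the trained matching objective; `doc_a_text` and `doc_b_text` are placeholders):

```python
# Siamese matching sketch: run the same encoder on both documents, then
# compute a similarity score. The real SMITH model is trained end to end
# for matching; cosine similarity here is only an approximation.
import torch.nn.functional as F

doc_a_text = "first long document ... " * 200
doc_b_text = "second long document ... " * 200

emb_a = embed_document(doc_a_text)  # from the sketch earlier in the thread
emb_b = embed_document(doc_b_text)
score = F.cosine_similarity(emb_a, emb_b, dim=0).item()
print(f"match score: {score:.3f}")  # higher means more similar
```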
Open source status