# pretrained-models

Here are 369 public repositories matching this topic...

BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method for learning language representations. It is a bidirectional transformer pre-trained on a large corpus using a combination of two objectives: masked language modeling and next-sentence prediction (see the usage sketch after this entry).

  • Updated Aug 10, 2020
  • Python
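
Below is a minimal sketch of what the masked language modeling objective mentioned above looks like at inference time. It assumes the Hugging Face `transformers` package and the public `bert-base-uncased` checkpoint; it is an illustrative example of the technique, not the API of the repository listed above.

```python
# Minimal sketch: querying a pre-trained BERT with its masked-LM head.
# Assumes `transformers` (and a backend such as PyTorch) is installed;
# the model ID `bert-base-uncased` is the standard public checkpoint.
from transformers import pipeline

# The fill-mask pipeline loads a pre-trained BERT and predicts the [MASK] token.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Print the top candidate tokens and their scores for the masked position.
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```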

Library for handling atomistic graph datasets with a focus on transformer-based implementations, including utilities for training various models, experimenting with different pre-training tasks, and a suite of pre-trained models with Hugging Face integrations.

  • Updated Jun 18, 2024
  • Python
