🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX (see the pipeline sketch after this list).
The official Python client for the Hugging Face Hub (see the download sketch after this list).
Mimix: A Text Generation Tool and Pretrained Chinese Models
A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
A curated list of pretrained sentence and word embedding models
Superduper: Integrate AI models and machine learning workflows with your database to implement custom AI applications, without moving your data. Including streaming inference, scalable model hosting, training and vector search.
👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
The official code for "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting" (ICLR 2024). TEMPO is one of the first open-source time series foundation models for forecasting (v1.0).
This repo contains code for detecting faces, zooming in on them, and taking photos, using the YOLOv3-tiny classifier and multithreading to optimise performance.
An open source implementation of CLIP.
A treasure chest for visual classification and recognition powered by PaddlePaddle
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
[CVPR 2024 Extension] Datasets of 160K volumes (42M slices), new segmentation datasets, pre-trained models from 31M to 1.2B parameters, various pre-training recipes, and implementations of 50+ downstream tasks.
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Deezer source separation library including pretrained models.
Experience the power of Clarifai's AI platform with the Python SDK. 🌟 Star to support our work!
Library for handling atomistic graph datasets, focusing on transformer-based implementations, with utilities for training various models, experimenting with different pre-training tasks, and a suite of pre-trained models with Hugging Face integrations.
The largest collection of PyTorch image encoders / backbones, including train, eval, inference, and export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more (see the timm sketch after this list).
Toolkit to segment text into sentences or other semantic units in a robust, efficient and adaptable way.
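As a quick taste of the 🤗 Transformers entry above, here is a minimal sketch of loading a pretrained model through the pipeline API; the checkpoint name and example text are illustrative placeholders.

```python
# Minimal sketch: run a pretrained sentiment-analysis model with 🤗 Transformers.
# The checkpoint below is an illustrative example; any compatible Hub model works.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed example checkpoint
)

result = classifier("Pretrained models make prototyping much faster.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': ...}]
```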
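For the huggingface_hub client listed above, a minimal sketch of downloading a single file from a Hub repository; the repo_id and filename are illustrative.

```python
# Minimal sketch: fetch one file from the Hugging Face Hub with the official client.
# repo_id and filename are illustrative; substitute any public repository.
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)  # local path inside the Hub cache
```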
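And for the timm collection of PyTorch image encoders above, a minimal sketch of instantiating a pretrained backbone; the architecture name is just one example from the list.

```python
# Minimal sketch: create a pretrained image backbone with timm and run a dummy forward pass.
import torch
import timm

model = timm.create_model("resnet50", pretrained=True)  # "resnet50" is an illustrative choice
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # standard ImageNet-sized input
print(logits.shape)  # torch.Size([1, 1000])
```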