An open source implementation of Microsoft's VALL-E X zero-shot TTS model. Demo is available at https://plachtaa.github.io/vallex/
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
This repo contains the updated version of all the assignments/labs (done by me) of Deep Learning Specialization on Coursera by Andrew Ng. It includes building various deep learning models from scratch and implementing them for object detection, facial recognition, autonomous driving, neural machine translation, trigger word detection, etc.
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
Minimalist NMT for educational purposes
Inference Llama 2 in one file of pure 🔥
Implementation of the Swin Transformer in PyTorch.
Code for CRATE (Coding RAte reduction TransformEr).
Self-contained Machine Learning and Natural Language Processing library in Go
The repository of ET-BERT, a network traffic classification model for encrypted traffic. The work has been accepted as a paper at The Web Conference (WWW) 2022.
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
PyContinual (An Easy and Extendible Framework for Continual Learning)
[IGARSS'22]: A Transformer-Based Siamese Network for Change Detection
Official implementation of "Particle Transformer for Jet Tagging".
Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)
🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Lightweight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement at 0.004 seconds; try this for pre-processing.
Attention Is All You Need | a PyTorch Tutorial to Transformers
Fine-tuned pre-trained GPT2 for custom topic-specific text generation. Such a system can be used for text augmentation.
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Seq2SeqSharp is a tensor-based, fast and flexible deep neural network framework written in .NET (C#). It has many highlighted features, such as automatic differentiation, different network types (Transformer, LSTM, BiLSTM, and so on), multi-GPU support, cross-platform operation (Windows, Linux; x86, x64, ARM), a multimodal model for text and images, and more.
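The common thread across the repositories above (e.g. the "Attention Is All You Need" tutorial and the Swin Transformer implementation) is scaled dot-product attention. A minimal NumPy sketch of that mechanism — illustrative only, not taken from any of the listed repos:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the scaled dot-product
    attention from "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # convex combination of values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)        # shape (2, 4)
```

Because the softmax weights sum to 1, each output row is a convex combination of the value rows; multi-head attention in the repos above runs several such projections in parallel.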