Reading Wikipedia to Answer Open-Domain Questions
Dogs & Cats Kaggle challenge
All the source code for participating in the KorQuAD finetuning contest is in this repository.
Code for our paper "Transfer Learning for Sequence Generation: from Single-source to Multi-source" in ACL 2021.
With the help of this repo you can build an image search algorithm for your own image dataset.
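A minimal sketch of an embedding-based image search pipeline of this kind, assuming the sentence-transformers library with its "clip-ViT-B-32" checkpoint and hypothetical image files (this is not the linked repo's code):

```python
# Illustrative sketch: index images with CLIP embeddings, then rank
# matches for a query by cosine similarity. File names are hypothetical.
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer("clip-ViT-B-32")

# Index: embed every image in the dataset once.
paths = ["cat.jpg", "dog.jpg", "car.jpg"]
corpus = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

# Query by text (CLIP also allows querying with another image).
query = model.encode("a photo of a dog", convert_to_tensor=True)
for hit in util.semantic_search(query, corpus, top_k=2)[0]:
    print(paths[hit["corpus_id"]], hit["score"])
```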
A module that combines the power of Reformer/FastFormer, ELECTRA, and memory-efficient compositional embeddings
BioMedical Language Processing with ELECTRA
Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
Finetuning and clustering library for image perceptual similarity models.
Training and serving XLM-RoBERTa for named entity recognition on a custom dataset with PyTorch.
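For context, a minimal sketch of token classification with XLM-RoBERTa (the tag set and inference snippet are illustrative assumptions, not this repo's training script):

```python
# Sketch: load XLM-RoBERTa with a token-classification head and run it
# on one sentence. The label set here is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels))

enc = tokenizer("Ada Lovelace lived in London", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape (1, seq_len, num_labels)
preds = logits.argmax(-1)[0].tolist()
print([labels[i] for i in preds])         # head is untrained: tags are random
```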
Adaptively fine-tuning transformer-based models for multiple domains and multiple tasks
Finetuning large language models for GDScript generation.
Comparing Selective Masking Methods for Depression Detection in Social Media
Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
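The general pattern such guides rely on is Hugging Face's built-in DeepSpeed integration: point TrainingArguments at a ZeRO config file and let the Trainer shard optimizer state (and optionally offload it to CPU). A hedged sketch, where the dataset, hyperparameters, and ds_config.json are illustrative assumptions rather than the guide's exact setup:

```python
# Sketch: causal-LM finetuning with Trainer + DeepSpeed ZeRO.
# ds_config.json (assumed to exist) holds the ZeRO stage/offload settings.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

name = "gpt2-xl"  # or "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(name)

data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-xl-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",  # enables the DeepSpeed integration
)
Trainer(model=model, args=args, train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```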
A PyTorch Library for Meta-learning Research
A repository to train transformers to access longer context for causal language models; most of these methods are still in testing. Try them out if you'd like, but please let me know your results so we don't duplicate work :)
Small (7B and below) finetuned LLMs for a diverse set of useful tasks
Deep learning utils for multimodal research
LLaMA 2 finetuning with DeepSpeed and LoRA
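A minimal sketch of this combination using the peft library (the model name and LoRA hyperparameters are assumptions, not this repo's config); DeepSpeed would be layered on via the same TrainingArguments(deepspeed=...) hook shown above:

```python
# Sketch: wrap Llama 2 with LoRA adapters so only a small set of
# low-rank matrices is trained. Checkpoint is gated: requires HF access.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # e.g. well under 1% of weights train
```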