🎓Automatically update CV papers daily using GitHub Actions (updated every 24 hours)
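A minimal sketch of the mechanism a repo like this typically uses: fetch the newest cs.CV submissions from the public arXiv Atom API and rewrite a markdown list, with the script run on a schedule (e.g. a cron-triggered GitHub Actions workflow). The query parameters and output filename here are illustrative assumptions, not this repo's actual code.

```python
# Fetch the latest cs.CV papers from the public arXiv API and write a
# markdown list. Query size and output path are illustrative assumptions.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("http://export.arxiv.org/api/query?"
       "search_query=cat:cs.CV&sortBy=submittedDate"
       "&sortOrder=descending&max_results=10")

with urllib.request.urlopen(URL) as resp:
    feed = resp.read()

# arXiv responds with an Atom XML feed; each <entry> is one paper.
ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(feed)

lines = ["# Latest cs.CV papers", ""]
for entry in root.findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text.strip().replace("\n", " ")
    link = entry.find("atom:id", ns).text.strip()
    lines.append(f"- [{title}]({link})")

with open("cv-papers.md", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```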
Train transformer-based models.
Example code for fine-tuning multimodal large language models with LLaMA-Factory
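For context on what such fine-tuning involves under the hood, here is a generic LoRA fine-tuning sketch built on Hugging Face transformers + peft. This is a stand-in for the underlying technique, not LLaMA-Factory's own API; the model name, toy dataset, and hyperparameters are all illustrative assumptions.

```python
# Generic LoRA fine-tuning sketch (transformers + peft), NOT LLaMA-Factory's API.
# Model choice, toy data, and hyperparameters are illustrative assumptions.
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small Llama-style model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small
# fraction of the parameters is trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

class ToyTextDataset(Dataset):
    """Tiny in-memory dataset standing in for a real SFT corpus."""
    def __init__(self, texts, tok):
        self.examples = [tok(t, truncation=True, max_length=128) for t in texts]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return self.examples[i]

train_ds = ToyTextDataset(["Instruction: say hi.\nResponse: hi!"] * 8, tokenizer)
# mlm=False makes the collator build causal-LM labels from the inputs.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()
```

LLaMA-Factory wraps this kind of loop (plus multimodal data handling) behind YAML configs and a CLI; see the repo itself for its actual entry points.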
Language Modeling Research Hub: a comprehensive compendium for enthusiasts and scholars studying language models (LMs), with a particular focus on large language models (LLMs)
Official Repository for the Uni-Mol Series Methods
PITI: Pretraining is All You Need for Image-to-Image Translation
Saprot: Protein Language Model with Structural Alphabet
Customized Pretraining for NLG Tasks
Code repository for the conference paper "Organoid Segmentation Using Self-Supervised Learning: How Complex Should the Pretext Task Be?" published and presented at the International Conference on Biomedical and Bioinformatics Engineering (ICBBE) 2023.
PaddlePaddle large model development kit, providing end-to-end development toolchains for large language models, cross-modal large models, biocomputing large models, and other domains.
Llama Chinese community: an online Llama3 demo and fine-tuned models are now available, the latest Llama3 learning resources are aggregated in real time, and all code has been updated for Llama3. Building the best Chinese Llama large model, fully open source and commercially usable.
[NeurIPS2022] Egocentric Video-Language Pretraining
[ICCV2023] UniVTG: Towards Unified Video-Language Temporal Grounding
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)
Official implementation of ICML 2024 paper "Matrix Information Theory for Self-supervised Learning" (https://arxiv.org/abs/2305.17326)
PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
Work in progress: a pretrained ARGVAET system for generating and classifying molecules and predicting their properties. The dataset and checkpoints could not be uploaded due to size constraints.
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Benchmarking framework for protein representation learning. Includes a large number of pre-training and downstream task datasets, models and training/task utilities. (ICLR 2024)