Making large AI models cheaper, faster and more accessible
Updated Jun 26, 2024 - Python
Evaluation framework for oncology foundation models (FMs)
This repository contains the Python package for Helical
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
SaprotHub: Making Protein Modeling Accessible to All Biologists
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
A toolkit for developing foundation models using Electronic Health Record (EHR) data
Code and demos for constructing Data-Driven Digital Twins of Photovoltaic & Advanced Manufacturing systems
EfficientSAM for Osam.
Images to inference with no labeling (use foundation models to train supervised models).
Core functionality for Osam.
[ICML 2024] A novel, efficient approach combining convolutional operations with adaptive spectral analysis as a foundation model for different time series tasks
First temporal graph foundation model dataset and benchmark
Get up and running with SAM, Efficient-SAM, and other segment-anything models locally.
World Model based Autonomous Driving Platform in CARLA 🚗
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting
ONNX models of YOLO-World (an open-vocabulary object detector).