Stars: ⭐ ICL (7 repositories)
- Code for "In-context Vectors: Making In-Context Learning More Effective and Controllable Through Latent Space Steering"
- [NeurIPS 2024] Official code for the paper "Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning"
- Function Vectors in Large Language Models (ICLR 2024)
- Official PyTorch implementation for "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025)
- A comprehensive comparison of multimodal models (llama3.2-vision, minicpm-v, llava-llama3, llava, llava:13b, and closed-source models) for animal classification tasks. This project evaluates various…

