Official implementation of I2I-Mamba, an image-to-image translation model based on selective state spaces
Updated Sep 20, 2024 - Python
[ACM MM'24 Oral] RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining
[WACV2025] SUM: Saliency Unification through Mamba for Visual Attention Modeling
[NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models".
Official repository for Mamba-based Segmentation Model for Speaker Diarization
MambaMIM: Pre-training Mamba with State Space Token-interpolation
Library for Federated Emergence & Foundation Models
This evaluation explores the in-context learning (ICL) capabilities of pre-trained language models on arithmetic and sentiment-analysis tasks using synthetic datasets. The goal is to assess model performance under different prompting strategies: zero-shot, few-shot, and chain-of-thought.
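A minimal sketch of how the three prompting strategies differ, assuming simple arithmetic prompts; the function names and prompt templates below are illustrative and not taken from the repository:

```python
# Illustrative prompt builders for the three strategies the evaluation
# compares. Templates are assumptions, not the repository's actual code.

def zero_shot(question: str) -> str:
    # Ask the model directly, with no worked examples.
    return f"Q: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Prepend solved (question, answer) pairs before the target question.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    # Nudge the model to emit intermediate reasoning steps.
    return f"Q: {question}\nA: Let's think step by step."

examples = [("2 + 3", "5"), ("7 - 4", "3")]
print(zero_shot("6 * 7"))
print(few_shot("6 * 7", examples))
print(chain_of_thought("6 * 7"))
```

In practice each prompt string would be sent to the model under test and the completion scored against the known answer from the synthetic dataset.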