# MoPE: Parameter-Efficient and Scalable Multimodal Fusion via Mixture of Prompt Experts

Official implementation of the arXiv paper "MoPE: Parameter-Efficient and Scalable Multimodal Fusion via Mixture of Prompt Experts".