# Multimodal Video Captioning

Code for my Master's thesis on multimodal video captioning, carried out at Huawei's Research Center in Amsterdam. I used the SwinBERT model as the baseline and integrated audio features extracted with VGGish into the architecture, resulting in gains of up to +1.6 on the captioning metrics.
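
The sketch below illustrates one way such an integration can look: VGGish produces one 128-dimensional embedding per ~0.96 s of audio, and those embeddings can be projected and concatenated with the video tokens before the captioning transformer. This is a minimal illustration under assumed dimensions and module names (`AudioVisualFusion`, `audio_proj`, a 512-dimensional video token space), not the actual thesis implementation.

```python
import torch
import torch.nn as nn

class AudioVisualFusion(nn.Module):
    """Illustrative fusion block: project VGGish audio embeddings into the
    video token space and concatenate them with the video features before
    they enter the multimodal captioning transformer."""

    def __init__(self, audio_dim: int = 128, video_dim: int = 512):
        super().__init__()
        # 128 is the VGGish embedding size; video_dim is an assumption here.
        self.audio_proj = nn.Linear(audio_dim, video_dim)

    def forward(self, video_tokens: torch.Tensor, audio_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, N_video, video_dim) from the video backbone
        # audio_tokens: (B, N_audio, audio_dim) from VGGish (one per ~0.96 s)
        audio_tokens = self.audio_proj(audio_tokens)
        return torch.cat([video_tokens, audio_tokens], dim=1)


# Random tensors standing in for real features, just to show the shapes.
video = torch.randn(2, 784, 512)   # video patch tokens
audio = torch.randn(2, 10, 128)    # VGGish embeddings for ~10 s of audio
fused = AudioVisualFusion()(video, audio)
print(fused.shape)                 # torch.Size([2, 794, 512])
```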