MLLM-hallucination-evaluation

This project evaluates Multimodal Large Language Models (MLLMs) such as InstructBLIP and Open-Flamingo, identifying hallucination scenarios induced by various augmentations applied to input image-prompt pairs.
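
The core idea is to perturb the image half of an image-prompt pair and check whether the model's answer stays grounded across perturbations. The sketch below is a minimal illustration of that setup, not code from this repository: the `augment` helper, the input file name, and the specific augmentation set are assumptions for demonstration.

```python
from PIL import Image, ImageFilter, ImageOps

def augment(image: Image.Image, kind: str) -> Image.Image:
    """Apply one of a few simple image augmentations (illustrative set)."""
    if kind == "blur":
        return image.filter(ImageFilter.GaussianBlur(radius=4))
    if kind == "flip":
        return ImageOps.mirror(image)
    if kind == "grayscale":
        # Round-trip through "L" mode so the output stays 3-channel RGB.
        return image.convert("L").convert("RGB")
    raise ValueError(f"unknown augmentation: {kind}")

# Hypothetical usage: build augmented image-prompt pairs for evaluation.
image = Image.open("example.jpg")  # hypothetical input image
prompt = "How many people are in this image?"
pairs = [(augment(image, k), prompt) for k in ("blur", "flip", "grayscale")]

# Each (image, prompt) pair would then be passed to an MLLM such as
# InstructBLIP or Open-Flamingo, and the answers compared across
# augmentations to flag responses that are not grounded in the image.
```

How the repository actually queries the models and scores hallucinations is not shown here; this sketch only demonstrates the augmentation side of the pipeline.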
