
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions

Welcome to the GitHub repository of FuseCap, a framework designed to enhance image captioning by incorporating detailed visual information into traditional captions.

🎉 Exciting News: Paper accepted at WACV 2024!

Resources

  • 💻 Project Page: For more details, visit the official project page.

  • 📝 Read the Paper: You can find the paper here.

  • 🚀 Demo: Try out our BLIP-based model demo trained using FuseCap, hosted on Huggingface Spaces.

Release Status

Done

  • ✅ Paper publication.
  • ✅ Release of the FuseCap dataset.
  • ✅ HuggingFace Captioner demo, including captioner weights.

Hugging Face Demo

Quickly try our BLIP-based captioning model trained with FuseCap using the Python snippet below, which generates a caption for an example image:

import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the FuseCap-trained BLIP captioner and its processor.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)

# Download an example image.
img_url = 'https://huggingface.co/spaces/noamrot/FuseCap/resolve/main/bike.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# "a picture of " is used as the decoding prompt.
text = "a picture of "
inputs = processor(raw_image, text, return_tensors="pt").to(device)

# Generate and print the enriched caption.
out = model.generate(**inputs, num_beams=3)
print(processor.decode(out[0], skip_special_tokens=True))
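The same checkpoint can also caption several local images at once. The sketch below is not part of the repository; the file names are placeholders, and it simply batches multiple images through the same processor and model:

from PIL import Image
import torch
from transformers import BlipProcessor, BlipForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)

# Placeholder local files; replace with your own images.
image_paths = ["image1.jpg", "image2.jpg"]
images = [Image.open(p).convert("RGB") for p in image_paths]

# Reuse the "a picture of " prefix as the decoding prompt for every image.
inputs = processor(images=images, text=["a picture of "] * len(images),
                   return_tensors="pt").to(device)

out = model.generate(**inputs, num_beams=3)
for caption in processor.batch_decode(out, skip_special_tokens=True):
    print(caption)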

Datasets

We provide the fused captions created with the FuseCap framework; they were used for both the pretraining and training phases of our image captioning model. The images can be downloaded from the respective dataset websites or from the provided URLs (SBU, CC3, CC12); a minimal loading sketch follows the table below.

Dataset | FuseCap Captions
--------|-----------------
COCO    | Train, Val, Test
SBU     | Train
CC3     | Train
CC12    | Train
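The exact schema of the caption files is not documented here, so the snippet below is only a sketch: it assumes each file is a JSON list of records pairing an image URL with its fused caption, and the file name and field names ("image_url", "caption") are placeholders to be checked against the actual files:

import json
from io import BytesIO

import requests
from PIL import Image

# Placeholder path; point this at a downloaded FuseCap caption file.
with open("fusecap_captions_train.json") as f:
    records = json.load(f)

# Inspect the real field names before relying on them.
print(records[0])

# Assuming a record exposes an image URL and a fused caption:
record = records[0]
response = requests.get(record["image_url"], timeout=10)
image = Image.open(BytesIO(response.content)).convert("RGB")
print(record["caption"])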

BibTeX

@inproceedings{rotstein2024fusecap,
  title={Fusecap: Leveraging large language models for enriched fused image captions},
  author={Rotstein, Noam and Bensa{\"\i}d, David and Brody, Shaked and Ganz, Roy and Kimmel, Ron},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={5689--5700},
  year={2024}
}
