
image-search

CLIP

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

CLIP

With CLIP, we can train any pair of image and text encoders together to relate images and text: the model produces a relatedness score for any given text and image. We fine-tuned a Vision Transformer (ViT) as the vision encoder and roberta-zwnj-wnli-mean-tokens as the Farsi text encoder.

You can find how to train the model in the CLIP training notebook.
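For intuition, here is a minimal sketch of the CLIP-style contrastive objective that pulls matching image-text pairs together in a shared embedding space; the function name, batch layout, and temperature value are illustrative assumptions, and the actual training loop lives in the notebook.

import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # image_embeds, text_embeds: (batch, dim) outputs of the vision and text
    # encoders, where matching pairs share the same row index
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    # logits[i, j] is the scaled cosine similarity of image i and text j
    logits = image_embeds @ text_embeds.t() / temperature
    # the correct match for each row/column is on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_image_to_text = F.cross_entropy(logits, targets)
    loss_text_to_image = F.cross_entropy(logits.t(), targets)
    return (loss_image_to_text + loss_text_to_image) / 2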

Data

To train (fine-tune) this model, we need pairs of images and the Persian text associated with them. Since Persian data in this field is not readily available and manual labeling is costly, we decided to translate the available English data and obtain the rest of the data by crawling the web.

Translation

There were no datasets with Persian-captioned images, so we translated datasets with English captions to Persian with Google Translate, using the googletrans Python package.
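A minimal sketch of the translation step with googletrans (no batching, retries, or rate limiting; the function name is ours):

from googletrans import Translator  # pip install googletrans

translator = Translator()

def translate_to_persian(english_captions):
    # translate each English caption to Persian ('fa') with Google Translate
    persian_captions = []
    for caption in english_captions:
        result = translator.translate(caption, src='en', dest='fa')
        persian_captions.append(result.text)
    return persian_captions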

Then we evaluated these translations with a bilingual sentence-BERT model named distiluse-base-multilingual-cased-v2, trained for sentence similarity. We calculated the cosine similarity between the embeddings of each English caption and its Persian translation. The histogram of this score is shown below:

translation-score.png

Finally, we kept only the top-scoring translations. Some samples of the final dataframe:

translation-sample-df.png
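A rough sketch of the scoring and filtering step with the sentence-transformers library; the threshold value is an illustrative assumption:

from sentence_transformers import SentenceTransformer, util

scorer = SentenceTransformer('distiluse-base-multilingual-cased-v2')

def filter_translations(english_captions, persian_captions, threshold=0.7):
    # cosine similarity between each English caption and its Persian translation
    en_emb = scorer.encode(english_captions, convert_to_tensor=True)
    fa_emb = scorer.encode(persian_captions, convert_to_tensor=True)
    scores = util.cos_sim(en_emb, fa_emb).diagonal()
    # keep only the pairs whose translation scores above the threshold
    return [i for i, score in enumerate(scores) if score >= threshold]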

More details of the translation part can be found in this notebook.

Crawler

To improve our model's performance, we crawled Divar posts with its API and saved the image-title pairs in Google Drive. You can see more details in this notebook. Some samples of the final data are shown below:

divar.png
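A very rough sketch of the crawling step with the requests library; the endpoint URL and response field names below are hypothetical placeholders, not the real Divar API schema (the actual crawler is in the notebook):

import requests

DIVAR_POST_ENDPOINT = 'https://api.divar.ir/...'  # placeholder, not the real endpoint

def crawl_image_title_pairs(post_ids):
    pairs = []
    for post_id in post_ids:
        response = requests.get(f'{DIVAR_POST_ENDPOINT}/{post_id}', timeout=10)
        response.raise_for_status()
        post = response.json()
        # 'title' and 'images' are assumed field names, for illustration only
        for image_url in post.get('images', []):
            pairs.append((image_url, post.get('title')))
    return pairs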

Evaluation

Accuracy @ k

This metric is used to evaluate how well a model performs at image search.

Acc@k definition: is the best image (the one most related to the text query) among the top-k outputs of the model?
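A small sketch of how Acc@k can be computed from a query-image similarity matrix; the matrix layout (the correct image for query i sitting in column i) is an assumption:

import numpy as np

def accuracy_at_k(similarity, k):
    # similarity: (num_queries, num_images) scores, with the correct image
    # for query i located at column i
    top_k = np.argsort(-similarity, axis=1)[:, :k]   # indices of the k best images per query
    correct = np.arange(similarity.shape[0])[:, None]
    # fraction of queries whose correct image appears among the top-k results
    return float(np.mean(np.any(top_k == correct, axis=1)))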

We calculated this metric for both models (CLIP & baseline) on two datasets:

  • flickr30k: has some overlap with the training data.
  • nocaps: completely zero-shot for the models!

We can see the results of our CLIP model on a sample of 1,000 items from the flickr30k dataset (the right diagram has a log scale on its x-axis):

clip-flickr clip-flickr-log

And here are the results of our CLIP model on a sample of 1,000 items from the nocaps dataset (the right diagram has a log scale on its x-axis):

clip-nocaps clip-nocaps-log

You can find more details in the notebooks for CLIP evaluation and baseline evaluation.

Zero-shot

The model is zero-shot, so it should work on new tasks without additional training.

We used both models (CLIP & baseline), to classify images in two datasets:

  • STL10: unseen data with 10 different categories.
  • OxfordIIIT Pet: unseen data with 37 different types of pets.

We also created a dataset from "OxfordIIIT Pet" that keeps only the "dog" and "cat" labels.
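A rough sketch of zero-shot classification with the two fine-tuned encoders: embed one Persian prompt per class, embed the image, and pick the most similar prompt. The image processor checkpoint, the mean-token pooling, and the prompt strings are assumptions; the exact procedure is in the zero-shot notebooks.

import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPVisionModel, CLIPFeatureExtractor

vision_encoder = CLIPVisionModel.from_pretrained('arman-aminian/farsi-image-search-vision')
image_processor = CLIPFeatureExtractor.from_pretrained('openai/clip-vit-base-patch32')  # assumed preprocessor
text_encoder = AutoModel.from_pretrained('arman-aminian/farsi-image-search-text')
tokenizer = AutoTokenizer.from_pretrained('arman-aminian/farsi-image-search-text')

@torch.no_grad()
def classify(image_path, class_prompts):
    # image embedding: pooled output of the fine-tuned CLIP vision encoder
    pixels = image_processor(Image.open(image_path), return_tensors='pt')['pixel_values']
    image_emb = F.normalize(vision_encoder(pixel_values=pixels).pooler_output, dim=-1)
    # text embeddings: mean-pooled token states of each class prompt (pooling choice is an assumption)
    tokens = tokenizer(class_prompts, padding=True, return_tensors='pt')
    hidden = text_encoder(**tokens).last_hidden_state
    mask = tokens['attention_mask'].unsqueeze(-1)
    text_emb = F.normalize((hidden * mask).sum(1) / mask.sum(1), dim=-1)
    # the predicted class is the prompt most similar to the image
    return class_prompts[(image_emb @ text_emb.t()).argmax().item()]

# e.g. classify('pet.jpg', ['عکس یک سگ', 'عکس یک گربه'])  # "a photo of a dog" / "a photo of a cat"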

We can see the results of the CLIP model's classification on the two datasets:

zero-shot-clip-pet zero-shot-clip-stl10

And here are the results of the baseline model in classification:

zero-shot-baseline-pet zero-shot-baseline-stl10

You can find more details in the notebooks for CLIP zero-shot and baseline zero-shot.

Inference (How to use our model)

from transformers import AutoModel, AutoTokenizer, CLIPVisionModel

# load finetuned vision encoder
vision_encoder = CLIPVisionModel.from_pretrained('arman-aminian/farsi-image-search-vision')
# load our finetuned text encoder and tokenizer
text_encoder = AutoModel.from_pretrained('arman-aminian/farsi-image-search-text')
text_tokenizer = AutoTokenizer.from_pretrained('arman-aminian/farsi-image-search-text')

# ImageSearchDemo is the helper class defined in this repository's demo code,
# and `test` is a dataframe of candidate images with an `image` column
search = ImageSearchDemo(vision_encoder, text_encoder, text_tokenizer, device='cuda')
# encode the candidate images once, then search them with a Persian text query
search.compute_image_embeddings(test.image.to_list())
search.image_search('ورزش کردن گروهی')  # "exercising in a group"

We have deployed our model as a Hugging Face Space, which you can query through https://huggingface.co/spaces/arman-aminian/farsi-image-search right now! Please keep in mind that we had time and hardware limitations when training the model. Also, the demo searches your query against a limited dataset and shows you the ten best results, so there may not be ten photos in the demo dataset that are completely related to your query :D

To make the results reliable, the dataset selected for the demo is completely new and taken from Unsplash; even the other parts of this dataset were not seen during the training of the model.

animal-walking-on-the-street

flock-of-birds-in-flight
