
CLIP-VQA

CLIP is OpenAI's multimodal neural network that computes the relevance of (image, text) pairs. This repository uses the released pre-trained version of CLIP to solve the task of Visual Question Answering (VQA).

Usage

Create a virtual environment and run pip install -r requirements.txt to install all the dependencies (tested with Python 3.7.3).

Here's a demo:

```python
from LanguageModels.appendQAModel import AppendQAModel
from CLIPInterface.clipInterface import CLIPInterface
from VQAInterface.vqaInterface import VQAInterface
from CLIPVQA.clipvqa import CLIPVQA

# Naive language model which appends the answer to the question to generate a sentence
appendModel = AppendQAModel(separator=" ", candidateAnswerGenerator='most_common')
clipInterface = CLIPInterface(device="cpu")
vqaInterface = VQAInterface(dataDir='./data', versionType="v2", taskType="OpenEnded", dataType="mscoco")

# Wrapper model which combines the pre-trained language model and CLIP to generate VQA results
clipVqaModel = CLIPVQA(clipInterface, appendModel, vqaInterface)

results = clipVqaModel.generateResults(evalDataSubType="val2014", answersDataSubType="train2014", numCandidates=1000, outFile="./Results/resultTest.json")
```
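
generateResults writes a JSON results file. A minimal sketch of inspecting it, assuming it follows the standard VQA results format of a list of {"question_id", "answer"} records (an assumption, not confirmed by this repository):

```python
import json

# Load the results file written by the demo above
with open("./Results/resultTest.json") as f:
    results = json.load(f)

# Assumed standard VQA results format: a list of {"question_id", "answer"} dicts
for entry in results[:5]:
    print(entry["question_id"], "->", entry["answer"])
```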

Folder Structure

```
├── data
│  ├── Annotations
│  │  ├── v2_mscoco_train2014_annotations.json
│  │  └── v2_mscoco_val2014_annotations.json
│  ├── Images
│  │  └── mscoco
│  │    └── val2014
│  └── Questions
│    ├── v2_OpenEnded_mscoco_train2014_questions.json
│    └── v2_OpenEnded_mscoco_val2014_questions.json
├── Results
```
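
A quick way to verify this layout before running, as a minimal sketch (paths mirror the tree above, relative to the repo root):

```python
import os

# Expected files/directories from the tree above
expected = [
    "data/Annotations/v2_mscoco_train2014_annotations.json",
    "data/Annotations/v2_mscoco_val2014_annotations.json",
    "data/Images/mscoco/val2014",
    "data/Questions/v2_OpenEnded_mscoco_train2014_questions.json",
    "data/Questions/v2_OpenEnded_mscoco_val2014_questions.json",
    "Results",
]
for path in expected:
    print(("OK      " if os.path.exists(path) else "MISSING ") + path)
```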

API

1. Language Models

All language models inherit from the LanguageModelBase class, which has the following functionality:

  1. getText(self, question, answer): takes a question: str and an answer: str and generates the text for that pair according to the corresponding language model class's logic.

  2. getCandidateAnswers(self, question, allAnswers, k): takes a question and all possible answers and returns k candidate answers for the question type, based on the corresponding language model class's logic.

  3. getTextFromAllPossibleAnswers: a wrapper that takes questions and all possible answers and generates the candidate texts to be fed to CLIP.

AppendQAModel is a naive language model which selects candidate answers by co-occurrence counts alone (prior probabilities) and appends each answer to its question to generate the text, as sketched below.
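
For illustration, a minimal standalone sketch of the append behaviour described above (not the repo's actual implementation):

```python
def append_text(question: str, answer: str, separator: str = " ") -> str:
    # Mirrors what AppendQAModel.getText is described to do: join the
    # question and the answer with the configured separator
    return question + separator + answer

print(append_text("What color is the sky?", "blue"))
# -> "What color is the sky? blue"
```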

2. CLIP Interface

A simple interface class for the pre-trained CLIP model.

  1. getProbs: takes imageFilePath (a single image file path or a list of file paths) and texts, and outputs the probability of each text being paired with each of the images. Return shape: #imageFilePaths x #texts.
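
CLIPInterface presumably wraps a computation like the following from OpenAI's clip package (a sketch of the underlying idea, not this repository's code; the image path and texts are placeholders):

```python
import clip
import torch
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image path and candidate texts, for illustration only
image = preprocess(Image.open("data/Images/mscoco/val2014/some_image.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["What color is the sky? blue", "What color is the sky? red"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, texts)  # shape: #images x #texts
    probs = logits_per_image.softmax(dim=-1)   # probability of each text per image
```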

3. VQAInterface

Interface class to understand VQA data.

  1. getAllAnswers(self, dataSubType): gets the frequency of every answer present in the corresponding dataSubType. The answer taken from each annotation is the 'multiple_choice_answer' field of the VQA annotations file, as in the sketch below.
  2. getQIPairs(self, dataSubType): generates a dictionary mapping each question_id to (1) the absolute 'image_path' and (2) the 'question' string.
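
For reference, a minimal sketch of the answer-counting step, assuming the standard VQA v2 annotations format (a list of annotation dicts under the 'annotations' key):

```python
import json
from collections import Counter

def get_answer_frequencies(annotation_file: str) -> Counter:
    # Count 'multiple_choice_answer' across all annotations (VQA v2 format)
    with open(annotation_file) as f:
        annotations = json.load(f)["annotations"]
    return Counter(ann["multiple_choice_answer"] for ann in annotations)

freqs = get_answer_frequencies("data/Annotations/v2_mscoco_train2014_annotations.json")
print(freqs.most_common(5))  # the most frequent answers, e.g. "yes"/"no"
```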

4. CLIPVQA

Wrapper class which takes the above three classes as inputs and uses them to generate final results

  1. generateImageTextPairs(self, evalDataSubType, answersDataSubType, numCandidates): generates (question_id, image_path, texts, answers) tuples, which CLIPInterface then uses to compute the probability of each answer. evalDataSubType is the dataSubType used to get images and questions; answersDataSubType is the dataSubType used to get possible answers.
  2. generateResults(self, evalDataSubType, answersDataSubType, numCandidates, outFile=None): generates the final results and saves them to outFile (if passed). evalDataSubType is the dataSubType used to get images and questions; answersDataSubType is the dataSubType used to get possible answers; numCandidates is the number of candidate answers used for all questions. A rough sketch of this pipeline follows the list.
  3. generateResultsDataLoader(self, evalDataSubType, answersDataSubType, numCandidates, outFile=None): the same as above, but uses a data loader instead of loading all (image, text) pairs into memory.
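
Putting the three pieces together, a rough sketch of what a generateResults-style pipeline could look like (method internals, exact signatures, and return shapes here are assumptions, not this repository's code):

```python
import json

def generate_results_sketch(clip_iface, lang_model, vqa_iface,
                            evalDataSubType, answersDataSubType,
                            numCandidates, outFile=None):
    # Candidate answers come from the answers split; questions and images from the eval split
    all_answers = vqa_iface.getAllAnswers(answersDataSubType)
    qi_pairs = vqa_iface.getQIPairs(evalDataSubType)

    results = []
    for question_id, (image_path, question) in qi_pairs.items():
        # The language model turns (question, candidate answer) pairs into texts for CLIP
        texts, answers = lang_model.getTextFromAllPossibleAnswers(question, all_answers, numCandidates)
        probs = clip_iface.getProbs(image_path, texts)  # shape: 1 x numCandidates
        results.append({"question_id": question_id,
                        "answer": answers[probs.argmax()]})

    if outFile is not None:
        with open(outFile, "w") as f:
            json.dump(results, f)
    return results
```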
