QA-CLIP

中文说明 | English

Introduction

This project aims to provide a better Chinese CLIP model. The training data consists of publicly accessible image URLs and their associated Chinese text descriptions, totaling 400 million pairs; after filtering, about 100 million pairs were used for training. This project is produced by QQ-ARC Joint Lab, Tencent PCG.

Models and Results

Model Card

QA-CLIP currently provides open-source models of several sizes; their model information and download links are shown in the table below:

| Model | Checkpoint | Params | Vision Encoder | Vision Params | Text Encoder | Text Params | Resolution |
|:------|:----------:|:------:|:--------------:|:-------------:|:------------:|:-----------:|:----------:|
| QA-CLIP RN50 | Download | 77M | ResNet50 | 38M | RBT3 | 39M | 224 |
| QA-CLIP ViT-B/16 | Download | 188M | ViT-B/16 | 86M | RoBERTa-wwm-Base | 102M | 224 |
| QA-CLIP ViT-L/14 | Download | 406M | ViT-L/14 | 304M | RoBERTa-wwm-Base | 102M | 224 |
| QA-CLIP ViT-L/14@336px | Download | 407M | ViT-L/14 | 304M | RoBERTa-wwm-Base | 102M | 336 |

Results

We conducted zero-shot evaluations of image-text retrieval on the MUGE Retrieval, Flickr30K-CN, and COCO-CN datasets, and of zero-shot image classification on the ImageNet dataset. The results are shown in the tables below:

Flickr30K-CN Zero-shot Retrieval (Official Test Set):

| Model | Text-to-Image R@1 | Text-to-Image R@5 | Text-to-Image R@10 | Image-to-Text R@1 | Image-to-Text R@5 | Image-to-Text R@10 |
|:------|:----:|:----:|:----:|:----:|:----:|:----:|
| CN-CLIP RN50 | 48.8 | 76.0 | 84.6 | 60.0 | 85.9 | 92.0 |
| ⭐QA-CLIP RN50 | 50.5 | 77.4 | 86.1 | 67.1 | 87.9 | 93.2 |
| CN-CLIP ViT-B/16 | 62.7 | 86.9 | 92.8 | 74.6 | 93.5 | 97.1 |
| ⭐QA-CLIP ViT-B/16 | 63.8 | 88.0 | 93.2 | 78.4 | 96.1 | 98.5 |
| CN-CLIP ViT-L/14 | 68.0 | 89.7 | 94.4 | 80.2 | 96.6 | 98.2 |
| AltCLIP ViT-L/14 | 69.7 | 90.1 | 94.8 | 84.8 | 97.7 | 99.1 |
| ⭐QA-CLIP ViT-L/14 | 69.3 | 90.3 | 94.7 | 85.3 | 97.9 | 99.2 |
| CN-CLIP ViT-L/14@336px | 68.9 | 90.7 | 95.4 | 83.2 | 97.2 | 98.6 |
| ⭐QA-CLIP ViT-L/14@336px | 71.1 | 91.5 | 95.8 | 87.2 | 98.1 | 99.1 |

MUGE Zero-shot Retrieval (Official Validation Set):

| Model | Text-to-Image R@1 | Text-to-Image R@5 | Text-to-Image R@10 | Image-to-Text R@1 | Image-to-Text R@5 | Image-to-Text R@10 |
|:------|:----:|:----:|:----:|:----:|:----:|:----:|
| CN-CLIP RN50 | 42.6 | 68.5 | 78.0 | 30.0 | 56.2 | 66.9 |
| ⭐QA-CLIP RN50 | 44.0 | 69.9 | 79.5 | 32.4 | 59.5 | 70.3 |
| CN-CLIP ViT-B/16 | 52.1 | 76.7 | 84.4 | 38.7 | 65.6 | 75.1 |
| ⭐QA-CLIP ViT-B/16 | 53.2 | 77.7 | 85.1 | 40.7 | 68.2 | 77.2 |
| CN-CLIP ViT-L/14 | 56.4 | 79.8 | 86.2 | 42.6 | 69.8 | 78.6 |
| AltCLIP ViT-L/14 | 29.6 | 49.9 | 58.8 | 21.4 | 42.0 | 51.9 |
| ⭐QA-CLIP ViT-L/14 | 57.4 | 81.0 | 87.7 | 45.5 | 73.0 | 81.4 |
| CN-CLIP ViT-L/14@336px | 59.0 | 81.5 | 87.7 | 46.2 | 73.7 | 82.1 |
| ⭐QA-CLIP ViT-L/14@336px | 59.6 | 81.9 | 88.1 | 47.5 | 74.7 | 83.1 |

COCO-CN Zero-shot Retrieval (Official Test Set):

| Model | Text-to-Image R@1 | Text-to-Image R@5 | Text-to-Image R@10 | Image-to-Text R@1 | Image-to-Text R@5 | Image-to-Text R@10 |
|:------|:----:|:----:|:----:|:----:|:----:|:----:|
| CN-CLIP RN50 | 48.1 | 81.3 | 90.5 | 50.9 | 81.1 | 90.5 |
| ⭐QA-CLIP RN50 | 50.1 | 82.5 | 91.7 | 56.7 | 85.2 | 92.9 |
| CN-CLIP ViT-B/16 | 62.2 | 87.1 | 94.9 | 56.3 | 84.0 | 93.3 |
| ⭐QA-CLIP ViT-B/16 | 62.9 | 87.7 | 94.7 | 61.5 | 87.6 | 94.8 |
| CN-CLIP ViT-L/14 | 64.9 | 88.8 | 94.2 | 60.6 | 84.4 | 93.1 |
| AltCLIP ViT-L/14 | 63.5 | 87.6 | 93.5 | 62.6 | 88.5 | 95.9 |
| ⭐QA-CLIP ViT-L/14 | 65.7 | 90.2 | 95.0 | 64.5 | 88.3 | 95.1 |
| CN-CLIP ViT-L/14@336px | 64.7 | 89.0 | 94.5 | 63.6 | 87.5 | 94.6 |
| ⭐QA-CLIP ViT-L/14@336px | 65.9 | 90.2 | 94.9 | 66.2 | 88.3 | 95.7 |

Zero-shot Image Classification on ImageNet:

| Model | ImageNet accuracy |
|:------|:----:|
| CN-CLIP RN50 | 33.5 |
| ⭐QA-CLIP RN50 | 35.5 |
| CN-CLIP ViT-B/16 | 48.4 |
| ⭐QA-CLIP ViT-B/16 | 49.7 |
| CN-CLIP ViT-L/14 | 54.7 |
| ⭐QA-CLIP ViT-L/14 | 55.8 |
| CN-CLIP ViT-L/14@336px | 56.7 |
| ⭐QA-CLIP ViT-L/14@336px | 58.1 |



Getting Started

Installation Requirements

Environment configuration requirements:

  • python >= 3.6.4
  • pytorch >= 1.8.0 (with torchvision >= 0.9.0)
  • CUDA Version >= 10.2

Install required packages:

cd /yourpath/QA-CLIP-main
pip install --upgrade pip
pip install -r requirements.txt
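
After installation, the environment can be quickly checked against the requirements above (a minimal sketch; the exact CUDA string depends on your PyTorch build):

import sys
import torch
import torchvision

# Compare the printed versions with the requirements listed above.
print("python:", sys.version.split()[0])          # expect >= 3.6.4
print("pytorch:", torch.__version__)              # expect >= 1.8.0
print("torchvision:", torchvision.__version__)    # expect >= 0.9.0
print("CUDA build:", torch.version.cuda)          # expect >= 10.2 (None for CPU-only builds)
print("CUDA available:", torch.cuda.is_available())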

Inference Code

export PYTHONPATH=/yourpath/QA-CLIP-main

Inference code example:

import torch 
from PIL import Image

import clip as clip
from clip import load_from_name, available_models
print("Available models:", available_models())  
# Available models: ['ViT-B-16', 'ViT-L-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)  # Squirtle, Bulbasaur, Charmander, Pikachu

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features. Please use the normalized features for downstream tasks.
    image_features /= image_features.norm(dim=-1, keepdim=True) 
    text_features /= text_features.norm(dim=-1, keepdim=True)    

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)

Prediction and Evaluation

Download Image-text Retrieval Test Dataset

The Chinese-CLIP project has already preprocessed these test sets; the download links below are the ones it provides:

MUGE dataset: download link

Flickr30K-CN dataset: download link

Additionally, obtaining the COCO-CN dataset requires applying to the original author.

Download ImageNet Dataset

Please download the raw ImageNet data yourself. The Chinese labels and English labels are provided by the Chinese-CLIP project.

Image-text Retrieval Evaluation

The image-text retrieval evaluation can be run with the following commands:

split=test # Designate the computation of features for the valid or test set
resume=your_ckp_path
DATAPATH=your_DATAPATH
dataset_name=Flickr30k-CN
# dataset_name=MUGE

# Extract image and text features
python -u eval/extract_features.py \
    --extract-image-feats \
    --extract-text-feats \
    --image-data="${DATAPATH}/datasets/${dataset_name}/lmdb/${split}/imgs" \
    --text-data="${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl" \
    --img-batch-size=32 \
    --text-batch-size=32 \
    --context-length=52 \
    --resume=${resume} \
    --vision-model=ViT-B-16 \
    --text-model=RoBERTa-wwm-ext-base-chinese

# Text-to-image retrieval: top-k image predictions for each text
python -u eval/make_topk_predictions.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl"

# Image-to-text retrieval: top-k text predictions for each image
python -u eval/make_topk_predictions_tr.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl"

# Evaluate text-to-image retrieval (Recall@1/5/10)
python eval/evaluation.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/output1.json
cat  ${DATAPATH}/datasets/${dataset_name}/output1.json

# Convert the annotations to the image-to-text (text retrieval) format
python eval/transform_ir_annotation_to_tr.py \
    --input ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl

# Evaluate image-to-text retrieval (Recall@1/5/10)
python eval/evaluation_tr.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.tr.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/output2.json
cat ${DATAPATH}/datasets/${dataset_name}/output2.json

ImageNet Zero-shot Classification

ImageNet zero-shot classification can be run with the following command:

bash scripts/zeroshot_eval.sh 0 \
    ${DATAPATH} imagenet \
    ViT-B-16 RoBERTa-wwm-ext-base-chinese \
    ./pretrained_weights/QA-CLIP-base.pt



Huggingface Model and Online Demo

We have open-sourced our models on Hugging Face for easier access and use. Additionally, we have prepared a simple online demo for zero-shot classification, allowing everyone to experience it firsthand. We encourage you to give it a try!

⭐QA-CLIP-ViT-B-16⭐

⭐QA-CLIP-ViT-L-14⭐
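
A minimal loading sketch is shown below; it assumes the Hugging Face checkpoints are compatible with the Chinese-CLIP architecture in the transformers library and that the model id is TencentARC/QA-CLIP-ViT-B-16 (substitute the id from the links above):

import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

# Assumed Hub id; replace with the model id from the links above.
model_name = "TencentARC/QA-CLIP-ViT-B-16"
model = ChineseCLIPModel.from_pretrained(model_name)
processor = ChineseCLIPProcessor.from_pretrained(model_name)

image = Image.open("examples/pokemon.jpeg")
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]  # Squirtle, Bulbasaur, Charmander, Pikachu

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
print("Label probs:", outputs.logits_per_image.softmax(dim=-1))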

Here are some examples for demonstration:



Acknowledgments

The project code is based on the implementation of Chinese-CLIP, and we are very grateful for their outstanding open-source contribution.
