update README
mehdidc committed Dec 2, 2023
1 parent 4a8b9c7 commit 9fa41df
Showing 1 changed file, README.md, with 4 additions and 4 deletions.
@@ -15,16 +15,16 @@ or directly in the [notebook](benchmark/results.ipynb).
 
 ## Features
 
-* Support for zero-shot classification and zero-shot retrieval, and captioning
-* Support for [OpenCLIP](https://github.com/mlfoundations/open_clip) pre-trained models
+* Support for zero-shot classification, zero-shot retrieval, linear probing, and captioning.
+* Support for [OpenCLIP](https://github.com/mlfoundations/open_clip) pre-trained models, [Japanese CLIP](https://github.com/rinnakk/japanese-clip), and [NLLB CLIP](https://arxiv.org/abs/2309.01859) for general multilingual abilities.
 * Support various datasets from [torchvision](https://pytorch.org/vision/stable/datasets.html), [tensorflow datasets](https://www.tensorflow.org/datasets), and [VTAB](https://github.com/google-research/task_adaptation).
-* Support [Japanese CLIP by rinna](https://github.com/rinnakk/japanese-clip)
 * Support for various multilingual datasets for classification and retrieval
 * Support for compositionality tasks
 
 ## How to install?
 
 `pip install clip-benchmark`
 
 
 ## How to use?
 
 To evaluate, we recommend creating a models.txt like
…
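The diff is truncated at this point, so the models.txt example referenced above is not shown. As a sketch only, assuming the `model,pretrained` naming used by OpenCLIP (these exact entries are illustrative, not part of this commit), such a file might look like:

```
ViT-B-32,openai
ViT-B-32,laion2b_s34b_b79k
```

Each line pairs a model architecture with a pretrained-weights tag. The file can then be passed to the CLI, e.g. `clip_benchmark eval --pretrained_model models.txt --dataset cifar10 --output result.json`; check the exact flags against the project's README, since the relevant lines are cut off above.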
