From 9fa41df4b61c210b2d11a36d72e92defbce84888 Mon Sep 17 00:00:00 2001
From: Mehdi Cherti
Date: Sat, 2 Dec 2023 02:02:10 +0100
Subject: [PATCH] update README

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index efbfda4..8ab90f7 100644
--- a/README.md
+++ b/README.md
@@ -15,16 +15,16 @@ or directly in the [notebook](benchmark/results.ipynb).
 
 ## Features
 
-* Support for zero-shot classification and zero-shot retrieval, and captioning
-* Support for [OpenCLIP](https://github.com/mlfoundations/open_clip) pre-trained models
+* Support for zero-shot classification and zero-shot retrieval, linear probing, and captioning.
+* Support for [OpenCLIP](https://github.com/mlfoundations/open_clip) pre-trained models, [Japanese CLIP](https://github.com/rinnakk/japanese-clip), and [NLLB CLIP](https://arxiv.org/abs/2309.01859) for general multilingual abilities.
 * Support various datasets from [torchvision](https://pytorch.org/vision/stable/datasets.html), [tensorflow datasets](https://www.tensorflow.org/datasets), and [VTAB](https://github.com/google-research/task_adaptation).
-* Support [Japanese CLIP by rinna](https://github.com/rinnakk/japanese-clip)
+* Support for various multilingual datasets for classification and retrieval
+* Support for compositionality tasks
 
 ## How to install?
 
 `pip install clip-benchmark`
 
-
 ## How to use?
 
 To evaluate we recommend to create a models.txt like