fix(readme): use new demo server (#819)
* fix: use new demo server

* fix: update

* fix: warning sunset demo
numb3r3 committed Sep 13, 2022
1 parent fa7e577 commit eca5774
Showing 1 changed file with 25 additions and 6 deletions.
31 changes: 25 additions & 6 deletions README.md
@@ -36,7 +36,19 @@ CLIP-as-service is a low-latency high-scalability service for embedding images a

## Try it!

An always-online demo server loaded with `ViT-L/14-336px` is there for you to play & test:
An always-online server `api.clip.jina.ai` loaded with `ViT-L/14-336px` is there for you to play & test.
Before you start, make sure you have created an access token via our [console website](https://console.clip.jina.ai/get_started),
or via the CLI as described in [this guide](https://github.com/jina-ai/jina-hubble-sdk#create-a-new-pat).

```bash
jina auth token create <name of PAT> -e <expiration days>
```

Then set the token in the HTTP request header `Authorization` as `<your access token>`,
or configure it via the `credential` parameter of the client in Python.
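For illustration, here is a minimal sketch of how the header and JSON body used by the curl examples in this README could be assembled in Python. The token value is a placeholder, and the `execEndpoint` field is an assumption following Jina's HTTP request schema:

```python
import json

# Placeholder: substitute the PAT created with `jina auth token create`.
token = "<your access token>"

# HTTP headers as described above: the token goes in `Authorization`.
headers = {
    "Content-Type": "application/json",
    "Authorization": token,
}

# Request body mirroring the curl examples in this README.
# `execEndpoint` selects the Executor endpoint (assumed, per Jina's HTTP API).
payload = {
    "data": [{"text": "First do it"}],
    "execEndpoint": "/",
}
body = json.dumps(payload)
```

Sending the request is then a plain `POST` to `https://api.clip.jina.ai:8443/post` with these headers and this body.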

⚠️ Our demo server `demo-cas.jina.ai` has been sunset and is no longer available after **15 September 2022**.


### Text & image embedding

@@ -50,8 +62,9 @@ An always-online demo server loaded with `ViT-L/14-336px` is there for you to pl

```bash
curl \
-X POST https://demo-cas.jina.ai:8443/post \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"text": "First do it"},
{"text": "then do it right"},
{"text": "then do it better"},
@@ -66,7 +79,9 @@ curl \
# pip install clip-client
from clip_client import Client

c = Client('grpcs://demo-cas.jina.ai:2096')
c = Client(
'grpcs://api.clip.jina.ai:2096', credential={'Authorization': '<your access token>'}
)

r = c.encode(
[
@@ -101,8 +116,9 @@ There are four basic visual reasoning skills: object recognition, object countin

```bash
curl \
-X POST https://demo-cas.jina.ai:8443/post \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/1/300/300",
"matches": [{"text": "there is a woman in the photo"},
{"text": "there is a man in the photo"}]}],
@@ -129,8 +145,9 @@ gives:

```bash
curl \
-X POST https://demo-cas.jina.ai:8443/post \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/133/300/300",
"matches": [
{"text": "the blue car is on the left, the red car is on the right"},
@@ -165,8 +182,9 @@ gives:

```bash
curl \
-X POST https://demo-cas.jina.ai:8443/post \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/102/300/300",
"matches": [{"text": "this is a photo of one berry"},
{"text": "this is a photo of two berries"},
@@ -655,6 +673,7 @@ Fun time! Note, unlike the previous example, here the input is an image and the
</table>



### Rank image-text matches via CLIP model

From `0.3.0` CLIP-as-service adds a new `/rank` endpoint that re-ranks cross-modal matches according to their joint likelihood in CLIP model. For example, given an image Document with some predefined sentence matches as below:
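The full example is truncated in this diff view; as a hedged sketch, a `/rank` request could reuse the payload shape of the curl examples earlier in this README, with the endpoint switched to `/rank` (field names assumed from those examples and Jina's HTTP schema):

```python
import json

# An image document with predefined candidate captions to be re-ranked,
# reusing the image and sentences from the visual-reasoning example above.
doc = {
    "uri": "https://picsum.photos/id/1/300/300",
    "matches": [
        {"text": "there is a woman in the photo"},
        {"text": "there is a man in the photo"},
    ],
}

# `execEndpoint: /rank` routes the request to the re-ranking endpoint
# (assumed, following the pattern of the curl examples above).
body = json.dumps({"data": [doc], "execEndpoint": "/rank"})
```

The server would return the same matches reordered by their joint image-text likelihood under the CLIP model.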
