adapt Jina Embeddings to new Jina AI Embedding API (#13658)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Embedding platform
- **Twitter handle:** https://twitter.com/JinaAI_

---------

Co-authored-by: Joan Fontanals Martinez <joan.fontanals.martinez@jina.ai>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
3 people committed Dec 5, 2023
1 parent e0c03d6 commit dcccf8f
Showing 3 changed files with 58 additions and 133 deletions.
69 changes: 7 additions & 62 deletions docs/docs/integrations/providers/jina.mdx
@@ -1,75 +1,20 @@
# Jina

This page covers how to use the Jina ecosystem within LangChain.
This page covers how to use the Jina Embeddings within LangChain.
It is broken into two parts: installation and setup, and then references to specific Jina wrappers.

## Installation and Setup
- Install the Python SDK with `pip install jina`
- Get a Jina AI Cloud auth token from [here](https://cloud.jina.ai/settings/tokens) and set it as an environment variable (`JINA_AUTH_TOKEN`)

## Wrappers

### Embeddings
- Get a Jina AI API token from [here](https://jina.ai/embeddings/) and set it as an environment variable (`JINA_API_TOKEN`)

There exists a Jina Embeddings wrapper, which you can access with
```python
from langchain.embeddings import JinaEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina)

## Deployment

[Langchain-serve](https://github.com/jina-ai/langchain-serve), powered by Jina, helps take LangChain apps to production with easy to use REST/WebSocket APIs and Slack bots.

### Usage

Install the package from PyPI.

```bash
pip install langchain-serve
```

Wrap your LangChain app with the `@serving` decorator.

```python
# app.py
from lcserve import serving

@serving
def ask(input: str) -> str:
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.agents import AgentExecutor, ZeroShotAgent

tools = [...] # list of tools
prompt = ZeroShotAgent.create_prompt(
tools, input_variables=["input", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(
llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools]
)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
)
return agent_executor.run(input)
```

Deploy on Jina AI Cloud with `lc-serve deploy jcloud app`. Once deployed, we can send a POST request to the API endpoint to get a response.
from langchain.embeddings import JinaEmbeddings

```bash
curl -X 'POST' 'https://<your-app>.wolf.jina.ai/ask' \
-d '{
"input": "Your Question here?",
"envs": {
"OPENAI_API_KEY": "sk-***"
}
}'
# you can pass jina_api_key, if none is passed it will be taken from `JINA_API_TOKEN` environment variable
embeddings = JinaEmbeddings(jina_api_key='jina_**', model_name='jina-embeddings-v2-base-en')
```

You can also self-host the app on your infrastructure with Docker-compose or Kubernetes. See [here](https://github.com/jina-ai/langchain-serve#-self-host-llm-apps-with-docker-compose-or-kubernetes) for more details.

You can check the list of available models from [here](https://jina.ai/embeddings/)

Langchain-serve also allows you to deploy apps with WebSocket APIs and Slack Bots, either on [Jina AI Cloud](https://cloud.jina.ai/) or on self-hosted infrastructure.
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina.ipynb)
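
For readers skimming the diff, here is a minimal end-to-end sketch of the updated wrapper as it behaves after this change. The sample strings and the placeholder key are illustrative; the key can also come from the environment-variable fallback handled by the root validator in `jina.py` further below.

```python
from langchain.embeddings import JinaEmbeddings

# Placeholder key; alternatively rely on the environment-variable
# fallback handled by JinaEmbeddings' root validator.
embeddings = JinaEmbeddings(
    jina_api_key="jina_**",
    model_name="jina-embeddings-v2-base-en",
)

# One vector for a single query string.
query_result = embeddings.embed_query("What is the new Jina AI Embedding API?")

# One vector per input document.
doc_result = embeddings.embed_documents(
    ["Jina AI exposes an embedding API.", "LangChain wraps it as JinaEmbeddings."]
)

print(len(query_result), len(doc_result))
```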
24 changes: 14 additions & 10 deletions docs/docs/integrations/text_embedding/jina.ipynb
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "d94c62b4",
"metadata": {},
"outputs": [],
@@ -28,7 +28,7 @@
"outputs": [],
"source": [
"embeddings = JinaEmbeddings(\n",
" jina_auth_token=jina_auth_token, model_name=\"ViT-B-32::openai\"\n",
" jina_api_key=\"jina_*\", model_name=\"jina-embeddings-v2-base-en\"\n",
")"
]
},
@@ -55,28 +55,32 @@
{
"cell_type": "code",
"execution_count": null,
"id": "b790fd09",
"id": "aea3ca33-1e6e-499c-8284-b7e26f38c514",
"metadata": {},
"outputs": [],
"source": [
"doc_result = embeddings.embed_documents([text])"
"print(query_result)"
]
},
{
"cell_type": "markdown",
"id": "6f3607a0",
"cell_type": "code",
"execution_count": null,
"id": "b790fd09",
"metadata": {},
"outputs": [],
"source": [
"In the above example, `ViT-B-32::openai`, OpenAI's pretrained `ViT-B-32` model is used. For a full list of models, see [here](https://cloud.jina.ai/user/inference/model/63dca9df5a0da83009d519cd)."
"doc_result = embeddings.embed_documents([text])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd5f148e",
"id": "c2e6b743-768c-4d7e-a331-27d5f0e8e30e",
"metadata": {},
"outputs": [],
"source": []
"source": [
"print(doc_result)"
]
}
],
"metadata": {
@@ -95,7 +99,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.11"
}
},
"nbformat": 4,
98 changes: 37 additions & 61 deletions libs/langchain/langchain/embeddings/jina.py
@@ -1,4 +1,3 @@
import os
from typing import Any, Dict, List, Optional

import requests
@@ -7,69 +6,54 @@

from langchain.utils import get_from_dict_or_env

JINA_API_URL: str = "https://api.jina.ai/v1/embeddings"


class JinaEmbeddings(BaseModel, Embeddings):
"""Jina embedding models."""

client: Any #: :meta private:

model_name: str = "ViT-B-32::openai"
"""Model name to use."""

jina_auth_token: Optional[str] = None
jina_api_url: str = "https://api.clip.jina.ai/api/v1/models/"
request_headers: Optional[dict] = None
session: Any #: :meta private:
model_name: str = "jina-embeddings-v2-base-en"
jina_api_key: Optional[str] = None

@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that auth token exists in environment."""
# Set Auth
jina_auth_token = get_from_dict_or_env(
values, "jina_auth_token", "JINA_AUTH_TOKEN"
)
values["jina_auth_token"] = jina_auth_token
values["request_headers"] = (("authorization", jina_auth_token),)

# Test that package is installed
try:
import jina
except ImportError:
raise ImportError(
"Could not import `jina` python package. "
"Please install it with `pip install jina`."
)
jina_api_key = get_from_dict_or_env(values, "jina_api_key", "JINA_API_KEY")
except ValueError as original_exc:
try:
jina_api_key = get_from_dict_or_env(
values, "jina_auth_token", "JINA_AUTH_TOKEN"
)
except ValueError:
raise original_exc
session = requests.Session()
session.headers.update(
{
"Authorization": f"Bearer {jina_api_key}",
"Accept-Encoding": "identity",
"Content-type": "application/json",
}
)
values["session"] = session
return values

# Setup client
jina_api_url = os.environ.get("JINA_API_URL", values["jina_api_url"])
model_name = values["model_name"]
try:
resp = requests.get(
jina_api_url + f"?model_name={model_name}",
headers={"Authorization": jina_auth_token},
)
def _embed(self, texts: List[str]) -> List[List[float]]:
# Call Jina AI Embedding API
resp = self.session.post( # type: ignore
JINA_API_URL, json={"input": texts, "model": self.model_name}
).json()
if "data" not in resp:
raise RuntimeError(resp["detail"])

if resp.status_code == 401:
raise ValueError(
"The given Jina auth token is invalid. "
"Please check your Jina auth token."
)
elif resp.status_code == 404:
raise ValueError(
f"The given model name `{model_name}` is not valid. "
f"Please go to https://cloud.jina.ai/user/inference "
f"and create a model with the given model name."
)
resp.raise_for_status()
embeddings = resp["data"]

endpoint = resp.json()["endpoints"]["grpc"]
values["client"] = jina.Client(host=endpoint)
except requests.exceptions.HTTPError as err:
raise ValueError(f"Error: {err!r}")
return values
# Sort resulting embeddings by index
sorted_embeddings = sorted(embeddings, key=lambda e: e["index"]) # type: ignore

def _post(self, docs: List[Any], **kwargs: Any) -> Any:
payload = dict(inputs=docs, metadata=self.request_headers, **kwargs)
return self.client.post(on="/encode", **payload)
# Return just the embeddings
return [result["embedding"] for result in sorted_embeddings]

def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Call out to Jina's embedding endpoint.
@@ -78,12 +62,7 @@ def embed_documents(self, texts: List[str]) -> List[List[float]]:
Returns:
List of embeddings, one for each text.
"""
from docarray import Document, DocumentArray

embeddings = self._post(
docs=DocumentArray([Document(text=t) for t in texts])
).embeddings
return [list(map(float, e)) for e in embeddings]
return self._embed(texts)

def embed_query(self, text: str) -> List[float]:
"""Call out to Jina's embedding endpoint.
@@ -92,7 +71,4 @@ def embed_query(self, text: str) -> List[float]:
Returns:
Embeddings for the text.
"""
from docarray import Document, DocumentArray

embedding = self._post(docs=DocumentArray([Document(text=text)])).embeddings[0]
return list(map(float, embedding))
return self._embed([text])[0]
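
To make the request/response handling in the new `_embed` method easier to follow, here is a standalone sketch of the same flow using `requests` directly. The function name and its parameters are illustrative; the URL, headers, payload shape, error key, and index-based sorting mirror the code above.

```python
from typing import List

import requests

JINA_API_URL = "https://api.jina.ai/v1/embeddings"


def embed_texts(
    texts: List[str], api_key: str, model: str = "jina-embeddings-v2-base-en"
) -> List[List[float]]:
    """Stand-alone mirror of JinaEmbeddings._embed, for illustration only."""
    session = requests.Session()
    session.headers.update(
        {
            "Authorization": f"Bearer {api_key}",
            "Accept-Encoding": "identity",
            "Content-type": "application/json",
        }
    )
    resp = session.post(JINA_API_URL, json={"input": texts, "model": model}).json()
    if "data" not in resp:
        # The API reports failures under "detail", as _embed does above.
        raise RuntimeError(resp.get("detail", resp))
    # Results may arrive out of order; sort by the returned index.
    sorted_embeddings = sorted(resp["data"], key=lambda e: e["index"])
    return [item["embedding"] for item in sorted_embeddings]
```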
