diff --git a/notebooks/document-chunking/configuring-chunking-settings-for-inference-endpoints.ipynb b/notebooks/document-chunking/configuring-chunking-settings-for-inference-endpoints.ipynb new file mode 100644 index 000000000..5bade95b0 --- /dev/null +++ b/notebooks/document-chunking/configuring-chunking-settings-for-inference-endpoints.ipynb @@ -0,0 +1,309 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "7a765629", + "metadata": {}, + "source": [ + "# Configuring Chunking Settings For Inference Endpoints\n", + "\n", + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/notebooks/document-chunking/configuring-chunking-settings-for-inference-endpoints.ipynb)\n", + "\n", + "\n", + "Learn how to configure [chunking settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#infer-chunking-config) for [Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html) endpoints." + ] + }, + { + "cell_type": "markdown", + "id": "f9101eb9", + "metadata": {}, + "source": [ + "# 🧰 Requirements\n", + "\n", + "For this example, you will need:\n", + "\n", + "- An Elastic deployment:\n", + "  - We'll be using [Elastic Cloud](https://www.elastic.co/guide/en/cloud/current/ec-getting-started.html) for this example (available with a [free trial](https://cloud.elastic.co/registration?onboarding_token=vectorsearch&utm_source=github&utm_content=elasticsearch-labs-notebook))\n", + "\n", + "- Elasticsearch 8.16 or above.\n", + "\n", + "- Python 3.7 or above." + ] + }, + { + "cell_type": "markdown", + "id": "4cd69cc0", + "metadata": {}, + "source": [ + "# Create Elastic Cloud deployment or serverless project\n", + "\n", + "If you don't have an Elastic Cloud deployment, sign up [here](https://cloud.elastic.co/registration?utm_source=github&utm_content=elasticsearch-labs-notebook) for a free trial." + ] + }, + { + "cell_type": "markdown", + "id": "f27dffbf", + "metadata": {}, + "source": [ + "# Install packages and connect with Elasticsearch Client\n", + "\n", + "To get started, we'll need to connect to our Elastic deployment using the Python client (version 8.12.0 or above).\n", + "Because we're using an Elastic Cloud deployment, we'll use the **Cloud ID** to identify our deployment.\n", + "\n", + "First we need to `pip` install the following packages:\n", + "\n", + "- `elasticsearch`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8c4b16bc", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install elasticsearch" + ] + }, + { + "cell_type": "markdown", + "id": "41ef96b3", + "metadata": {}, + "source": [ + "Next, we import the modules we need. 🔐 NOTE: `getpass` enables us to securely prompt the user for credentials without echoing them to the terminal." + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "690ff9af", + "metadata": {}, + "outputs": [], + "source": [ + "from elasticsearch import Elasticsearch\n", + "from getpass import getpass" + ] + }, + { + "cell_type": "markdown", + "id": "23fa2b6c", + "metadata": {}, + "source": [ + "Now we can instantiate the Python Elasticsearch client.\n", + "\n", + "First we prompt the user for their Cloud ID and API key.\n", + "Then we create a `client` object, an instance of the `Elasticsearch` class."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "195cc597", + "metadata": {}, + "outputs": [], + "source": [ + "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#finding-your-cloud-id\n", + "ELASTIC_CLOUD_ID = getpass(\"Elastic Cloud ID: \")\n", + "\n", + "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#creating-an-api-key\n", + "ELASTIC_API_KEY = getpass(\"Elastic API Key: \")\n", + "\n", + "# Create the client instance\n", + "client = Elasticsearch(\n", + "    # For local development\n", + "    # hosts=[\"http://localhost:9200\"],\n", + "    cloud_id=ELASTIC_CLOUD_ID,\n", + "    api_key=ELASTIC_API_KEY,\n", + "    request_timeout=120,\n", + "    max_retries=10,\n", + "    retry_on_timeout=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "b1115ffb", + "metadata": {}, + "source": [ + "### Test the Client\n", + "Before you continue, confirm that the client has connected with this test." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cc0de5ea", + "metadata": {}, + "outputs": [], + "source": [ + "print(client.info())" + ] + }, + { + "cell_type": "markdown", + "id": "659c5890", + "metadata": {}, + "source": [ + "Refer to [the documentation](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html#connect-self-managed-new) to learn how to connect to a self-managed deployment.\n", + "\n", + "Read [this page](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html#connect-self-managed-new) to learn how to connect using API keys." + ] + }, + { + "cell_type": "markdown", + "id": "840d92f0", + "metadata": {}, + "source": [ + "\n", + "## Create the inference endpoint object\n", + "\n", + "Let's create the inference endpoint by using the [Create Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html#put-inference-api-desc).\n", + "\n", + "In this example, you'll be creating an inference endpoint for the [ELSER integration](https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-elser.html), which will deploy Elastic's [ELSER model](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-elser.html) within your cluster. Chunking settings are configurable for any inference endpoint with an embedding task type. A full list of available integrations can be found in the [Create Inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html#put-inference-api-desc) documentation.\n", + "\n", + "To configure chunking settings, the request body must contain a `chunking_settings` map with a `strategy` value along with any required values for the selected chunking strategy. For this example, you'll be configuring chunking settings for a `sentence` strategy with a maximum chunk size of 25 words and 1 sentence of overlap between chunks. For more information on available chunking strategies and their configurable values, see the [chunking strategies documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#_chunking_strategies).",
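+ "\n", + "For comparison, the `word` strategy chunks on word counts rather than sentence boundaries and takes `max_chunk_size` and `overlap` values. A minimal sketch of such a configuration (illustrative values, not used in this notebook):\n", + "\n", + "```python\n", + "chunking_settings = {\n", + "    \"strategy\": \"word\",\n", + "    \"max_chunk_size\": 250,  # maximum number of words per chunk\n", + "    \"overlap\": 100,  # words of overlap between chunks; at most half of max_chunk_size\n", + "}\n", + "```"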
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0d007737", + "metadata": {}, + "outputs": [], + "source": [ + "client.inference.put(\n", + "    task_type=\"sparse_embedding\",\n", + "    inference_id=\"my_elser_endpoint\",\n", + "    body={\n", + "        \"service\": \"elasticsearch\",\n", + "        \"service_settings\": {\n", + "            \"num_allocations\": 1,\n", + "            \"num_threads\": 1,\n", + "            \"model_id\": \".elser_model_2\",\n", + "        },\n", + "        \"chunking_settings\": {\n", + "            \"strategy\": \"sentence\",\n", + "            \"max_chunk_size\": 25,\n", + "            \"sentence_overlap\": 1,\n", + "        },\n", + "    },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "f01de885", + "metadata": {}, + "source": [ + "\n", + "## Create the index\n", + "\n", + "To see the chunking settings you've configured in action, you'll need to ingest a document into a semantic text field of an index. Let's create an index with a semantic text field linked to the inference endpoint created in the previous step." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0eed3e3b", + "metadata": {}, + "outputs": [], + "source": [ + "client.indices.create(\n", + "    index=\"my_index\",\n", + "    mappings={\n", + "        \"properties\": {\n", + "            \"infer_field\": {\n", + "                \"type\": \"semantic_text\",\n", + "                \"inference_id\": \"my_elser_endpoint\",\n", + "            }\n", + "        }\n", + "    },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "51ae72e4", + "metadata": {}, + "source": [ + "\n", + "## Ingest a document\n", + "\n", + "Now let's ingest a document into the index created in the previous step.\n", + "\n", + "Note: It may take some time for Elasticsearch to allocate nodes to the ELSER model deployment that is started when creating the inference endpoint. You will need to wait until the deployment is allocated to a node before the request below can succeed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b8ecaec0", + "metadata": {}, + "outputs": [], + "source": [ + "client.index(\n", + "    index=\"my_index\",\n", + "    document={\n", + "        \"infer_field\": \"This is some sample document data. The data is being used to demonstrate the configurable chunking settings feature. The configured chunking settings will determine how this text is broken down into chunks to help increase inference accuracy.\"\n", + "    },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "ccc7ca3a", + "metadata": {}, + "source": [ + "\n", + "## View the chunks\n", + "\n", + "The generated chunks and their corresponding inference results are stored with the document in the index, under the `chunks` key within the `_inference_fields` metafield. The chunks are stored as a list of character offset values. Let's look at the chunks generated when ingesting the document in the previous step." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "58dc9019", + "metadata": {}, + "outputs": [], + "source": [ + "client.search(\n", + "    index=\"my_index\",\n", + "    body={\"size\": 100, \"query\": {\"match_all\": {}}, \"fields\": [\"_inference_fields\"]},\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "193f5b8d", + "metadata": {}, + "source": [ + "\n", + "## Conclusion\n", + "\n", + "You've now learned how to configure chunking settings for an inference endpoint! For more information about configurable chunking, see the [configuring chunking](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html#infer-chunking-config) documentation.",
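+ "\n", + "If you want to re-run this notebook, you can clean up by deleting the index and then the inference endpoint. A minimal sketch using the same client (delete the index first, so the endpoint is no longer referenced by the `semantic_text` mapping):\n", + "\n", + "```python\n", + "client.indices.delete(index=\"my_index\")\n", + "client.inference.delete(inference_id=\"my_elser_endpoint\")\n", + "```"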
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/supporting-blog-content/colpali/.python-version b/supporting-blog-content/colpali/.python-version new file mode 100644 index 000000000..e4fba2183 --- /dev/null +++ b/supporting-blog-content/colpali/.python-version @@ -0,0 +1 @@ +3.12 diff --git a/supporting-blog-content/colpali/01_colpali.ipynb b/supporting-blog-content/colpali/01_colpali.ipynb new file mode 100644 index 000000000..4e68910d8 --- /dev/null +++ b/supporting-blog-content/colpali/01_colpali.ipynb @@ -0,0 +1,419 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "3fef5a94-b06f-48ae-90e5-2f919d3352bd", + "metadata": {}, + "source": [ + "This notebook shows how to ingest and search images using ColPali with Elasticsearch. Read our accompanying blog post on [ColPali in Elasticsearch](elasticsearch-colpali-visual-document-search) for more context on this notebook. \n", + "\n", + "We will be using images from the [ViDoRe benchmark](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) as example data. \n", + "\n", + "The URL and API key for your Elasticsearch cluster are expected in a file `elastic.env` in this format: \n", + "```\n", + "ELASTIC_HOST=\n", + "ELASTIC_API_KEY=\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "a1610e61-fbfe-4d7f-9109-601a0ccd0129", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install -r requirements.txt\n", + "from IPython.display import clear_output\n", + "\n", + "clear_output()  # clear the pip install output to keep the notebook compact." + ] + }, + { + "cell_type": "markdown", + "id": "aec6865f-dc2d-4242-a568-2fbf94cf2201", + "metadata": {}, + "source": [ + "First we load the sample data from Hugging Face and save it to disk." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "baf63024-c058-4e1c-a170-6730bf2f2704", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-02T09:16:41.680203Z", + "start_time": "2025-03-02T09:14:00.648234Z" + } + }, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "6c0aa31eaa8546478c3e48fcc206dbd3", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Saving images to disk: 0%| | 0/500 [00:00 list:\n", + "    batch_images = col_pali_processor.process_images([Image.open(image_path)]).to(\n", + "        model.device\n", + "    )\n", + "\n", + "    with torch.no_grad():\n", + "        return model(**batch_images).tolist()[0]\n", + "\n", + "\n", + "def create_col_pali_query_vectors(query: str) -> list:\n", + "    queries = col_pali_processor.process_queries([query]).to(model.device)\n", + "    with torch.no_grad():\n", + "        return model(**queries).tolist()[0]" + ] + }, + { + "cell_type": "markdown", + "id": "d12ea156-4e2b-4b84-983e-d0e63c9a6178", + "metadata": {}, + "source": [ + "Here we go over all our images and create our multi-vectors with the ColPali model. 
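A minimal sketch of that loop (assuming the image-embedding helper above is named `create_col_pali_vectors`, and that `INDEX_NAME` points to an existing index whose `col_pali_vectors` field uses the `rank_vectors` mapping type, which the `maxSimDotProduct` scoring below relies on):\n", + "\n", + "```python\n", + "import os\n", + "\n", + "# Assumes INDEX_NAME exists with col_pali_vectors mapped as rank_vectors.\n", + "for image_name in os.listdir(DOCUMENT_DIR):\n", + "    if es.exists(index=INDEX_NAME, id=image_name):\n", + "        continue  # skip images indexed on a previous run\n", + "    vectors = create_col_pali_vectors(os.path.join(DOCUMENT_DIR, image_name))\n", + "    es.index(index=INDEX_NAME, id=image_name, document={\"col_pali_vectors\": vectors})\n", + "```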
" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "fcf55e15-6c4a-4003-b929-aab2931c2389", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-02T09:16:41.682259Z", + "start_time": "2025-03-02T09:14:22.244797Z" + } + }, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "76b9d003d76d49d1b82b25d124deddeb", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Create ColPali Vectors: 0%| | 0/500 [00:00\"image_104.jpg\"\"image_3.jpg\"\"image_2.jpg\"\"image_12.jpg\"\"image_92.jpg\"" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from IPython.display import display, HTML\n", + "import os\n", + "\n", + "query = \"What do companies use for recruiting?\"\n", + "es_query = {\n", + " \"_source\": False,\n", + " \"query\": {\n", + " \"script_score\": {\n", + " \"query\": {\"match_all\": {}},\n", + " \"script\": {\n", + " \"source\": \"maxSimDotProduct(params.query_vector, 'col_pali_vectors')\",\n", + " \"params\": {\"query_vector\": create_col_pali_query_vectors(query)},\n", + " },\n", + " }\n", + " },\n", + " \"size\": 5,\n", + "}\n", + "\n", + "results = es.search(index=INDEX_NAME, body=es_query)\n", + "image_ids = [hit[\"_id\"] for hit in results[\"hits\"][\"hits\"]]\n", + "\n", + "html = \"
\"\n", + "for image_id in image_ids:\n", + " image_path = os.path.join(DOCUMENT_DIR, image_id)\n", + " html += f'\"{image_id}\"'\n", + "html += \"
\"\n", + "\n", + "display(HTML(html))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "16997bc1-ea8d-413b-a312-00f08fca1d0a", + "metadata": {}, + "outputs": [], + "source": [ + "# We kill the kernel forcefully to free up the memory from the ColPali model.\n", + "print(\"Shutting down the kernel to free memory...\")\n", + "import os\n", + "\n", + "os._exit(0)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "dependecy-test-colpali-blog", + "language": "python", + "name": "dependecy-test-colpali-blog" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.6" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/supporting-blog-content/colpali/requirements.txt b/supporting-blog-content/colpali/requirements.txt new file mode 100644 index 000000000..0a8654be3 --- /dev/null +++ b/supporting-blog-content/colpali/requirements.txt @@ -0,0 +1,7 @@ +git+https://github.com/illuin-tech/colpali.git +elasticsearch +numpy +datasets +python-dotenv +jupyterlab +ipywidgets \ No newline at end of file