Merge branch 'langchain-ai:master' into GraphSparqlQAChainGraphDBFix
Showing 75 changed files with 2,908 additions and 1,522 deletions.
docs/docs/modules/data_connection/document_transformers/recursive_json_splitter.ipynb (225 additions, 0 deletions)
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a678d550",
   "metadata": {},
   "source": [
"# Recursively split JSON\n", | ||
"\n", | ||
"This json splitter traverses json data depth first and builds smaller json chunks. It attempts to keep nested json objects whole but will split them if needed to keep chunks between a min_chunk_size and the max_chunk_size. If the value is not a nested json, but rather a very large string the string will not be split. If you need a hard cap on the chunk size considder following this with a Recursive Text splitter on those chunks. There is an optional pre-processing step to split lists, by first converting them to json (dict) and then splitting them as such.\n", | ||
"\n", | ||
"1. How the text is split: json value.\n", | ||
"2. How the chunk size is measured: by number of characters." | ||
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a504e1e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "import requests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "3390ae1d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# This is a large nested json object and will be loaded as a python dict\n",
    "json_data = requests.get(\"https://api.smith.langchain.com/openapi.json\").json()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7bfe2c1e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.text_splitter import RecursiveJsonSplitter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "2833c409",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "splitter = RecursiveJsonSplitter(max_chunk_size=300)"
   ]
  },
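  {
   "cell_type": "markdown",
   "id": "f00dcafe",
   "metadata": {},
   "source": [
    "As a hedged aside: the lower bound mentioned in the intro can also be set explicitly. The cell below is a minimal sketch assuming the splitter accepts a `min_chunk_size` keyword; the value 100 is purely illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f00dcaff",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch (assumed keyword): set an explicit floor on chunk size\n",
    "splitter_with_floor = RecursiveJsonSplitter(max_chunk_size=300, min_chunk_size=100)"
   ]
  },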
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "f941aa56",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
"# Recursively split json data - If you need to access/manipulate the smaller json chunks\n", | ||
"json_chunks = splitter.split_json(json_data=json_data)" | ||
] | ||
}, | ||
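  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "A quick sanity check, on the assumption (suggested by the comment above) that `split_json` returns plain python dicts: each chunk can then be inspected or post-processed with ordinary dict tooling before it is serialized."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: chunks are expected to be dicts, so standard dict operations apply\n",
    "print(type(json_chunks[0]))\n",
    "print(list(json_chunks[0].keys()))"
   ]
  },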
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "0839f4f0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"openapi\": \"3.0.2\", \"info\": {\"title\": \"LangChainPlus\", \"version\": \"0.1.0\"}, \"paths\": {\"/sessions/{session_id}\": {\"get\": {\"tags\": [\"tracer-sessions\"], \"summary\": \"Read Tracer Session\", \"description\": \"Get a specific session.\", \"operationId\": \"read_tracer_session_sessions__session_id__get\"}}}}\n",
      "{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
     ]
    }
   ],
   "source": [
    "# The splitter can also output documents\n",
    "docs = splitter.create_documents(texts=[json_data])\n",
    "\n",
    "# or a list of strings\n",
    "texts = splitter.split_text(json_data=json_data)\n",
    "\n",
    "print(texts[0])\n",
    "print(texts[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "c34b1f7f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n",
      "{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
     ]
    }
   ],
   "source": [
"# Let's look at the size of the chunks\n", | ||
"print([len(text) for text in texts][:10])\n", | ||
"\n", | ||
"# Reviewing one of these chunks that was bigger we see there is a list object there\n", | ||
"print(texts[1])" | ||
] | ||
}, | ||
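  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "The 431-character chunk above exceeds max_chunk_size because the splitter keeps some values (lists, long strings) whole. If a hard cap is required, the intro suggests following up with a Recursive Text splitter; the cell below is a minimal sketch of that idea, with chunk_size=300 and chunk_overlap=0 as illustrative choices rather than recommendations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: re-split any oversized json chunk with a character-based splitter\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "char_splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=0)\n",
    "capped_texts = char_splitter.split_text(texts[1])\n",
    "print([len(t) for t in capped_texts])"
   ]
  },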
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "992477c2",
   "metadata": {},
   "outputs": [],
   "source": [
"# The json splitter by default does not split lists\n", | ||
"# the following will preprocess the json and convert list to dict with index:item as key:val pairs\n", | ||
"texts = splitter.split_text(json_data=json_data, convert_lists=True)" | ||
] | ||
}, | ||
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "2d23b3aa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n"
     ]
    }
   ],
   "source": [
"# Let's look at the size of the chunks. Now they are all under the max\n", | ||
"print([len(text) for text in texts][:10])" | ||
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d2c2773e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
     ]
    }
   ],
   "source": [
    "# The list has been converted to a dict, but retains all the needed contextual information even if split into many chunks\n",
    "print(texts[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "8963b01a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Document(page_content='{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}')"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# We can also look at the documents\n",
    "docs[1]"
   ]
  },
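  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "The Document objects carry the same chunk text in their `page_content` attribute, so they can be handed straight to downstream components such as vector stores; a trivial check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9d0e1f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The chunk text lives on page_content\n",
    "print(len(docs[1].page_content))"
   ]
  },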
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "168da4f0",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}