diff --git a/examplecode/notebooks.mdx b/examplecode/notebooks.mdx
index 748f6d10..e0d0a4be 100644
--- a/examplecode/notebooks.mdx
+++ b/examplecode/notebooks.mdx
@@ -6,12 +6,33 @@ description: "Notebooks contain complete working sample code for end-to-end solu
---
+
+
+ This notebook explores using the Unstructured API to process financial documents while preserving tabular structure in a way that's usable by downstream applications.
+
+ ``Unstructured API`` ``Workflows`` ``S3`` ``Astra DB``
+
+
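The linked notebook covers the full pipeline; as a quick illustration of what "preserving tabular structure" means in practice, the sketch below pulls `Table` elements out of Unstructured JSON output and rebuilds them as pandas DataFrames from the `text_as_html` metadata field. The file path is a placeholder, not the notebook's exact code.

```python
import json
from io import StringIO

import pandas as pd

# Placeholder path: a JSON file produced by an Unstructured workflow run.
ELEMENTS_PATH = "output/financial-report.json"

with open(ELEMENTS_PATH) as f:
    elements = json.load(f)

# Unstructured emits one element per structural unit; table elements carry
# an HTML rendering of the cell grid in metadata.text_as_html.
tables = [
    el["metadata"]["text_as_html"]
    for el in elements
    if el.get("type") == "Table" and "text_as_html" in el.get("metadata", {})
]

# Re-hydrate each HTML table into a DataFrame so downstream code can work
# with rows and columns instead of flattened text.
frames = [pd.read_html(StringIO(html))[0] for html in tables]

for df in frames:
    print(df.head())
```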

This notebook explores how you can use Unstructured to gather and process declassified historical records surrounding the assassination of Dr. Martin Luther King, Jr. These processed documents can then be analyzed by using Elasticsearch and RAG.
``Unstructured API`` ``Workflows`` ``S3`` ``VLM`` ``NER`` ``Elasticsearch`` ``MLK`` ``National Archives``
+
+
+ Learn how to build a RAG pipeline without any embedding models. Use Unstructured to preprocess documents, index them into Elasticsearch, and retrieve them using classic BM25 scoring.
+
+ ``Unstructured API`` ``Workflows`` ``Elasticsearch`` ``BM25``
+
+
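A minimal sketch of the retrieval half, assuming the preprocessed chunks have already been indexed into a `text` field (the index name and connection details are placeholders). Elasticsearch scores plain `match` queries with BM25 by default, so no embedding model is involved.

```python
from elasticsearch import Elasticsearch

# Placeholder connection details for an Elasticsearch deployment.
es = Elasticsearch("http://localhost:9200")

INDEX = "unstructured-elements"  # assumed index of partitioned text chunks

def bm25_search(query: str, size: int = 5):
    # A plain match query is scored with BM25 (Elasticsearch's default
    # similarity), so retrieval needs no embeddings at all.
    resp = es.search(
        index=INDEX,
        query={"match": {"text": query}},
        size=size,
    )
    return [
        (hit["_score"], hit["_source"]["text"])
        for hit in resp["hits"]["hits"]
    ]

for score, text in bm25_search("quarterly revenue growth"):
    print(f"{score:.2f}  {text[:80]}")
```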
+
+
+ Learn how to build data processing workflows using the Unstructured API and Python SDK to preprocess unstructured files from S3 and store the structured outputs in Redis Cloud for retrieval.
+
+ ``Unstructured API`` ``Workflows`` ``S3`` ``Redis``
+
+
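As a rough sketch of the storage step, assuming the workflow's JSON output has been exported locally (the path, connection details, and key layout are placeholders, not the notebook's exact code):

```python
import json

import redis

# Placeholder Redis Cloud connection details.
r = redis.Redis(
    host="YOUR-REDIS-CLOUD-HOST",
    port=12345,
    password="YOUR_PASSWORD",
    decode_responses=True,
)

# Elements produced by an Unstructured workflow, exported as JSON.
with open("output/elements.json") as f:
    elements = json.load(f)

for i, el in enumerate(elements):
    # One hash per element keeps the text and a little metadata together
    # so it can be fetched (or indexed) later for retrieval.
    r.hset(
        f"element:{i}",
        mapping={
            "text": el.get("text", ""),
            "type": el.get("type", ""),
            "filename": el.get("metadata", {}).get("filename", ""),
        },
    )

print(r.hgetall("element:0"))
```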

This notebook walks through using the Unstructured Workflow Endpoint to set up a complete pipeline that pulls documents from S3, processes them using Unstructured, and stores the resulting embeddings in Qdrant for fast vector search and retrieval.
@@ -19,6 +40,13 @@ description: "Notebooks contain complete working sample code for end-to-end solu
``Unstructured API`` ``Workflows`` ``S3`` ``Qdrant`` ``VLM`` ``Embeddings``
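Once the workflow has populated the collection, querying it takes only a few lines. A minimal sketch, assuming a collection named `unstructured-docs`, a `text` payload field, and that the query is embedded with the same model the workflow's embedding step used (all of these are placeholders):

```python
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

# Placeholder connection details and collection name.
client = QdrantClient(url="https://YOUR-CLUSTER.qdrant.io", api_key="YOUR_API_KEY")
COLLECTION = "unstructured-docs"

# The query must be embedded with the same model the workflow used;
# all-MiniLM-L6-v2 here is only a stand-in.
model = SentenceTransformer("all-MiniLM-L6-v2")

def search(query: str, limit: int = 5):
    vector = model.encode(query).tolist()
    hits = client.search(
        collection_name=COLLECTION,
        query_vector=vector,
        limit=limit,
    )
    return [(hit.score, (hit.payload or {}).get("text", "")) for hit in hits]

for score, text in search("What does the contract say about termination?"):
    print(f"{score:.3f}  {text[:80]}")
```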
+
+
+ Improve RAG precision with a two-stage retrieval pipeline: fast vector search followed by reranking with Cohere’s Rerank models.
+
+ ``Unstructured API`` ``Workflows`` ``Cohere`` ``Pinecone``
+
+
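The two-stage idea in brief: over-fetch candidates from the vector index, then let the reranker reorder them for precision. A minimal sketch with the Pinecone and Cohere clients, where the index name, metadata field, and embedding model are assumptions rather than the notebook's exact setup:

```python
import cohere
from pinecone import Pinecone

co = cohere.Client("YOUR_COHERE_API_KEY")        # placeholder key
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")   # placeholder key
index = pc.Index("unstructured-rag")             # assumed index name

def two_stage_retrieve(query: str, fetch_k: int = 25, top_n: int = 5):
    # Stage 1: fast, recall-oriented vector search. The embedding model
    # must match whatever was used to populate the index.
    query_vec = co.embed(
        texts=[query], model="embed-english-v3.0", input_type="search_query"
    ).embeddings[0]
    matches = index.query(
        vector=query_vec, top_k=fetch_k, include_metadata=True
    ).matches
    docs = [(m.metadata or {}).get("text", "") for m in matches]

    # Stage 2: precision-oriented reranking of the candidate set.
    reranked = co.rerank(
        model="rerank-english-v3.0", query=query, documents=docs, top_n=top_n
    )
    return [(r.relevance_score, docs[r.index]) for r in reranked.results]

for score, text in two_stage_retrieve("payment terms for late invoices"):
    print(f"{score:.3f}  {text[:80]}")
```

Over-fetching around 25 candidates and keeping the top 5 after reranking is a reasonable starting point; both numbers are worth tuning against your own corpus.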

Learn how to build an end-to-end document processing pipeline that processes PDFs from S3 and stores structured results in MongoDB. Features VLM-powered partitioning, semantic chunking, and vector embeddings using the Unstructured Workflows API.