diff --git a/platform-api/legacy-api/aws.mdx b/api/legacy-api/aws.mdx
similarity index 99%
rename from platform-api/legacy-api/aws.mdx
rename to api/legacy-api/aws.mdx
index 54e40fd4..79db79ce 100644
--- a/platform-api/legacy-api/aws.mdx
+++ b/api/legacy-api/aws.mdx
@@ -314,7 +314,7 @@ For example, run one of the following, setting the following environment variabl
- Set `UNSTRUCTURED_API_URL` to `http://`, followed by your load balancer's DNS name, followed by `/general/v0/general`.
You can now use this value (`http://`, followed by your load balancer's DNS name, followed by `/general/v0/general`) in place of
- calling the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview) URL as described elsewhere in the Unstructured API documentation.
+ calling the [Unstructured Partition Endpoint](/api/partition/overview) URL as described elsewhere in the Unstructured API documentation.
- Set `LOCAL_FILE_INPUT_DIR` to the path on your local machine to the files for the Unstructured API to process. If you do not have any input files available, you can download any of the ones from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in GitHub.
- Set `LOCAL_FILE_OUTPUT_DIR` to the path on your local machine for Unstructured API to send the processed output in JSON format:
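The URL construction described in the bullets above can be sketched in a few lines of Python; the load balancer DNS name shown is a hypothetical placeholder:

```python
import os

def build_api_url(dns_name):
    """Assemble the endpoint URL: http://, the load balancer's DNS name,
    then /general/v0/general, as described above."""
    return f"http://{dns_name}/general/v0/general"

# Hypothetical DNS name for illustration only.
os.environ["UNSTRUCTURED_API_URL"] = build_api_url(
    "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com"
)
print(os.environ["UNSTRUCTURED_API_URL"])
```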
diff --git a/platform-api/legacy-api/azure.mdx b/api/legacy-api/azure.mdx
similarity index 100%
rename from platform-api/legacy-api/azure.mdx
rename to api/legacy-api/azure.mdx
diff --git a/platform-api/legacy-api/free-api.mdx b/api/legacy-api/free-api.mdx
similarity index 82%
rename from platform-api/legacy-api/free-api.mdx
rename to api/legacy-api/free-api.mdx
index 72fddbc7..a5b5daa8 100644
--- a/platform-api/legacy-api/free-api.mdx
+++ b/api/legacy-api/free-api.mdx
@@ -5,7 +5,7 @@ title: Free Unstructured API
The Free Unstructured API is in the process of deprecation by April 4, 2025. It is no longer supported and is not being actively updated.
- Unstructured recommends that you use the [Unstructured Platform API](/platform-api/overview) instead, which provides new users with 14 days of free usage at up to 1000 pages per day during that period.
+ Unstructured recommends that you use the [Unstructured API](/api/overview) instead, which provides new users with 14 days of free usage at up to 1000 pages per day during that period.
This page is not being actively updated. It might contain out-of-date information. This page is provided for legacy reference purposes only.
@@ -32,7 +32,7 @@ The Free Unstructured API is designed for prototyping purposes, and not for prod
* Users of the Free Unstructured API do not get their own dedicated infrastructure.
* The data sent over the Free Unstructured API can be used for model training purposes, and other service improvements.
-If you require a production-ready API, consider using the [Unstructured Platform API](/platform-api/overview) instead.
+If you require a production-ready API, consider using the [Unstructured API](/api/overview) instead.
import SharedPagesBilling from '/snippets/general-shared-text/pages-billing.mdx';
@@ -55,7 +55,7 @@ To work with the Free Unstructured API by using the [Unstructured Ingest CLI](/i
- Set the `UNSTRUCTURED_API_KEY` environment variable to your Free Unstructured API key.
- Set the `UNSTRUCTURED_API_URL` environment variable to your Free Unstructured API URL, which is `https://api.unstructured.io/general/v0/general`
-- Have some compatible files on your local machine to be processed. [See the list of supported file types](/platform-api/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
+- Have some compatible files on your local machine to be processed. [See the list of supported file types](/api/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
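As a quick sanity check, the environment described in the bullets above can be verified from Python before invoking the CLI (the key value shown is a placeholder):

```python
import os

# Placeholder key for illustration; substitute your real Free API key.
os.environ["UNSTRUCTURED_API_KEY"] = "YOUR-API-KEY"
os.environ["UNSTRUCTURED_API_URL"] = "https://api.unstructured.io/general/v0/general"

# The CLI invocation then references these same variables.
api_url = os.environ["UNSTRUCTURED_API_URL"]
if not api_url.endswith("/general/v0/general"):
    raise ValueError(f"Unexpected endpoint path in {api_url}")
print("Environment looks good:", api_url)
```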
Now, use the CLI to call the API, replacing:
@@ -93,7 +93,7 @@ To work with Unstructured by using the [Unstructured Python library](/ingestion/
[Get your API key and API URL](#get-started).
-- Have some compatible files on your local machine to be processed. [See the list of supported file types](/platform-api/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub. If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
+- Have some compatible files on your local machine to be processed. [See the list of supported file types](/api/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
Now, use the CLI to call the API, replacing:
diff --git a/platform-api/legacy-api/overview.mdx b/api/legacy-api/overview.mdx
similarity index 63%
rename from platform-api/legacy-api/overview.mdx
rename to api/legacy-api/overview.mdx
index 6f0d879b..b744304b 100644
--- a/platform-api/legacy-api/overview.mdx
+++ b/api/legacy-api/overview.mdx
@@ -4,15 +4,15 @@ title: Overview
Unstructured has deprecated the following APIs:
-- The [Free Unstructured API](/platform-api/legacy-api/free-api) is in the process of deprecation by April 4, 2025.
+- The [Free Unstructured API](/api/legacy-api/free-api) is scheduled for full deprecation by April 4, 2025.
It is no longer supported and is not being actively updated. Unstructured recommends that you use the
- [Unstructured Platform API](/platform-api/overview) instead, which provides new users with 14 days of free usage at up to
+ [Unstructured API](/api/overview) instead, which provides new users with 14 days of free usage at up to
1000 pages per day during that period.
-- The [Unstructured API on AWS](/platform-api/legacy-api/aws) is deprecated. It is no longer supported and is not being actively updated.
+- The [Unstructured API on AWS](/api/legacy-api/aws) is deprecated. It is no longer supported and is not being actively updated.
Unstructured is now available on the AWS Marketplace as a private offering. To explore supported options
for running Unstructured within your virtual private cloud (VPC), email Unstructured Sales at
[sales@unstructured.io](mailto:sales@unstructured.io).
-- The [Unstructured API on Azure](/platform-api/legacy-api/azure) is deprecated. It is no longer supported and is not being actively updated.
+- The [Unstructured API on Azure](/api/legacy-api/azure) is deprecated. It is no longer supported and is not being actively updated.
  Unstructured is now available on the Azure Marketplace as a private offering. To explore supported options
for running Unstructured within your virtual private cloud (VPC), email Unstructured Sales at
[sales@unstructured.io](mailto:sales@unstructured.io).
diff --git a/platform-api/overview.mdx b/api/overview.mdx
similarity index 65%
rename from platform-api/overview.mdx
rename to api/overview.mdx
index 81065eb5..6eb9cec3 100644
--- a/platform-api/overview.mdx
+++ b/api/overview.mdx
@@ -2,21 +2,21 @@
title: Overview
---
-The Unstructured Platform API consists of two parts:
+The Unstructured API consists of two parts:
-- The [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) enables a full range of partitioning, chunking, embedding, and
+- The [Unstructured Workflow Endpoint](/api/workflow/overview) enables a full range of partitioning, chunking, embedding, and
enrichment options for your files and data. It is designed to batch-process files and data in remote locations; send processed results to
various storage, databases, and vector stores; and use the latest and highest-performing models on the market today. It has built-in logic
- to deliver the highest quality results at the lowest cost. [Learn more](/platform-api/api/overview).
-- The [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview) is intended for rapid prototyping of Unstructured's
+ to deliver the highest quality results at the lowest cost. [Learn more](/api/workflow/overview).
+- The [Unstructured Partition Endpoint](/api/partition/overview) is intended for rapid prototyping of Unstructured's
various partitioning strategies, with limited support for chunking. It is designed to work only with processing of local files, one file
- at a time. Use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) for production-level scenarios, file processing in
+ at a time. Use the [Unstructured Workflow Endpoint](/api/workflow/overview) for production-level scenarios, file processing in
batches, files and data in remote locations, generating embeddings, applying post-transform enrichments, using the latest and
- highest-performing models, and for the highest quality results at the lowest cost. [Learn more](/platform-api/partition-api/overview).
+ highest-performing models, and for the highest quality results at the lowest cost. [Learn more](/api/partition/overview).
# Benefits over open source
-The Unstructured Platform API provides the following benefits beyond the [Unstructured open source library](/open-source/introduction/overview) offering:
+The Unstructured API provides the following benefits beyond the [Unstructured open source library](/open-source/introduction/overview) offering:
* Designed for production scenarios.
* Significantly increased performance on document and table extraction.
@@ -33,4 +33,4 @@ The Unstructured Platform API provides the following benefits beyond the [Unstru
## Get support
-Should you require any assistance or have any questions regarding the Unstructured Platform API, please [contact us directly](https://unstructured.io/contact).
+Should you require any assistance or have any questions regarding the Unstructured API, please [contact us directly](https://unstructured.io/contact).
diff --git a/platform-api/partition-api/api-parameters.mdx b/api/partition/api-parameters.mdx
similarity index 92%
rename from platform-api/partition-api/api-parameters.mdx
rename to api/partition/api-parameters.mdx
index 3edfc586..d9e42370 100644
--- a/platform-api/partition-api/api-parameters.mdx
+++ b/api/partition/api-parameters.mdx
@@ -3,7 +3,7 @@ title: Platform Endpoint parameters
sidebarTitle: Endpoint parameters
---
-The Unstructured Platform Partition Endpoint provides parameters to customize the processing of documents. These parameters include:
+The Unstructured Partition Endpoint provides parameters to customize the processing of documents.
The only required parameter is `files`: the file you wish to process.
@@ -12,26 +12,26 @@ The only required parameter is `files` - the file you wish to process.
| POST, Python | JavaScript/TypeScript | Description |
|-------------------------------------------|------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `files` (_shared.Files_) | `files` (_File_, _Blob_, _shared.Files_) | The file to process. |
-| `chunking_strategy` (_str_) | `chunkingStrategy` (_string_) | Use one of the supported strategies to chunk the returned elements after partitioning. When no chunking strategy is specified, no chunking is performed and any other chunking parameters provided are ignored. Supported strategies: `basic`, `by_title`, `by_page`, and `by_similarity`. [Learn more](/platform-api/partition-api/chunking). |
+| `chunking_strategy` (_str_) | `chunkingStrategy` (_string_) | Use one of the supported strategies to chunk the returned elements after partitioning. When no chunking strategy is specified, no chunking is performed and any other chunking parameters provided are ignored. Supported strategies: `basic`, `by_title`, `by_page`, and `by_similarity`. [Learn more](/api/partition/chunking). |
| `content_type` (_str_) | `contentType` (_string_) | A hint to Unstructured about the content type to use (such as `text/markdown`), when there are problems processing a specific file. This value is a MIME type in the format `type/subtype`. For available MIME types, see [model.py](https://github.com/Unstructured-IO/unstructured/blob/main/unstructured/file_utils/model.py). |
-| `coordinates` (_bool_) | `coordinates` (_boolean_) | True to return bounding box coordinates for each element extracted with OCR. Default: false. [Learn more](/platform-api/partition-api/examples#saving-bounding-box-coordinates). |
+| `coordinates` (_bool_) | `coordinates` (_boolean_) | True to return bounding box coordinates for each element extracted with OCR. Default: false. [Learn more](/api/partition/examples#saving-bounding-box-coordinates). |
| `encoding` (_str_) | `encoding` (_string_) | The encoding method used to decode the text input. Default: `utf-8`. |
-| `extract_image_block_types` (_List[str]_) | `extractImageBlockTypes` (_string[]_) | The types of elements to extract, for use in extracting image blocks as Base64 encoded data stored in element metadata fields, for example: `["Image","Table"]`. Supported filetypes are image and PDF. [Learn more](/platform-api/partition-api/extract-image-block-types). |
+| `extract_image_block_types` (_List[str]_) | `extractImageBlockTypes` (_string[]_) | The types of elements to extract, for use in extracting image blocks as Base64 encoded data stored in element metadata fields, for example: `["Image","Table"]`. Supported filetypes are image and PDF. [Learn more](/api/partition/extract-image-block-types). |
| `gz_uncompressed_content_type` (_str_) | `gzUncompressedContentType` (_string_) | If file is gzipped, use this content type after unzipping. Example: `application/pdf` |
-| `hi_res_model_name` (_str_) | `hiResModelName` (_string_) | The name of the inference model used when strategy is `hi_res`. Options are `layout_v1.1.0` and `yolox`. Default: `layout_v1.1.0`. [Learn more](/platform-api/partition-api/examples#changing-partition-strategy-for-a-pdf). |
+| `hi_res_model_name` (_str_) | `hiResModelName` (_string_) | The name of the inference model used when strategy is `hi_res`. Options are `layout_v1.1.0` and `yolox`. Default: `layout_v1.1.0`. [Learn more](/api/partition/examples#changing-partition-strategy-for-a-pdf). |
| `include_page_breaks` (_bool_) | `includePageBreaks` (_boolean_) | True for the output to include page breaks if the filetype supports it. Default: false. |
-| `languages` (_List[str]_) | `languages` (_string[]_) | The languages present in the document, for use in partitioning and OCR. [View the list of available languages](https://github.com/tesseract-ocr/tessdata). [Learn more](/platform-api/partition-api/examples#specifying-the-language-of-a-document-for-better-ocr-results). |
+| `languages` (_List[str]_) | `languages` (_string[]_) | The languages present in the document, for use in partitioning and OCR. [View the list of available languages](https://github.com/tesseract-ocr/tessdata). [Learn more](/api/partition/examples#specifying-the-language-of-a-document-for-better-ocr-results). |
| `output_format` (_str_) | `outputFormat` (_string_) | The format of the response. Supported formats are `application/json` and `text/csv`. Default: `application/json`. |
| `pdf_infer_table_structure` (_bool_)      | `pdfInferTableStructure` (_boolean_)     | **Deprecated!** If true and `strategy` is `hi_res`, any `Table` elements extracted from a PDF will include an additional metadata field, `text_as_html`, where the value (string) is just a transformation of the data into an HTML table. |
| `skip_infer_table_types` (_List[str]_) | `skipInferTableTypes` (_string[]_) | The document types that you want to skip table extraction for. Default: `[]`. |
| `starting_page_number` (_int_)            | `startingPageNumber` (_number_)          | The page number to be assigned to the first page in the document. This information will be included in elements' metadata and can be especially useful when partitioning a document that is part of a larger document. |
-| `strategy` (_str_) | `strategy` (_string_) | The strategy to use for partitioning PDF and image files. Options are `auto`, `vlm`, `hi_res`, `fast`, and `ocr_only`. Default: `auto`. [Learn more](/platform-api/partition-api/partitioning). |
+| `strategy` (_str_) | `strategy` (_string_) | The strategy to use for partitioning PDF and image files. Options are `auto`, `vlm`, `hi_res`, `fast`, and `ocr_only`. Default: `auto`. [Learn more](/api/partition/partitioning). |
| `unique_element_ids` (_bool_) | `uniqueElementIds` (_boolean_) | True to assign UUIDs to element IDs, which guarantees their uniqueness (useful when using them as primary keys in database). Otherwise a SHA-256 of the element's text is used. Default: false. |
| `vlm_model` (_str_) | (Not yet available) | Applies only when `strategy` is `vlm`. The name of the vision language model (VLM) to use for partitioning. `vlm_model_provider` must also be specified. For a list of allowed values, see the end of this article. |
| `vlm_model_provider` (_str_) | (Not yet available) | Applies only when `strategy` is `vlm`. The name of the vision language model (VLM) provider to use for partitioning. `vlm_model` must also be specified. For a list of allowed values, see the end of this article. |
| `xml_keep_tags` (_bool_) | `xmlKeepTags` (_boolean_) | True to retain the XML tags in the output. Otherwise it will just extract the text from within the tags. Only applies to XML documents. |
-The following parameters only apply when a chunking strategy is specified. Otherwise, they are ignored. [Learn more](/platform-api/partition-api/chunking).
+The following parameters only apply when a chunking strategy is specified. Otherwise, they are ignored. [Learn more](/api/partition/chunking).
| POST, Python | JavaScript/TypeScript | Description |
|----------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -44,16 +44,16 @@ The following parameters only apply when a chunking strategy is specified. Other
| `overlap_all` (_bool_) | `overlapAll` (_boolean_) | True to have an overlap also applied to "normal" chunks formed by combining whole elements. Use with caution, as this can introduce noise into otherwise clean semantic units. Default: none. |
| `similarity_threshold` (_float_) | `similarityThreshold` (_number_)  | Applies only when the chunking strategy is set to `by_similarity`. The minimum similarity that text in consecutive elements must have to be included in the same chunk. Must be between 0.0 and 1.0, exclusive (0.01 to 0.99, inclusive). Default: 0.5. |
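As a sketch of how these chunking fields might be assembled into request parameters, here is a hypothetical helper (not part of any SDK); the bounds check mirrors the `similarity_threshold` description above:

```python
def chunking_params(strategy, similarity_threshold=None):
    """Build chunking-related form fields for a partition request."""
    params = {"chunking_strategy": strategy}
    if strategy == "by_similarity":
        threshold = 0.5 if similarity_threshold is None else similarity_threshold
        # Must fall within 0.01 to 0.99, inclusive, per the table above.
        if not 0.01 <= threshold <= 0.99:
            raise ValueError("similarity_threshold must be between 0.01 and 0.99")
        params["similarity_threshold"] = threshold
    return params

print(chunking_params("by_similarity", 0.7))
```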
-The following parameters are specific to the Python and JavaScript/TypeScript clients and are not sent to the server. [Learn more](/platform-api/partition-api/sdk-python#page-splitting).
+The following parameters are specific to the Python and JavaScript/TypeScript clients and are not sent to the server. [Learn more](/api/partition/sdk-python#page-splitting).
| POST, Python | JavaScript/TypeScript | Description |
|---------------------------------------|---------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `split_pdf_page` (_bool_) | `splitPdfPage` (_boolean_) | True to split the PDF file client-side. [Learn more](/platform-api/partition-api/sdk-python#page-splitting). |
+| `split_pdf_page` (_bool_) | `splitPdfPage` (_boolean_) | True to split the PDF file client-side. [Learn more](/api/partition/sdk-python#page-splitting). |
| `split_pdf_allow_failed` (_bool_) | `splitPdfAllowFailed` (_boolean_) | When `true`, a failed split request will not stop the processing of the rest of the document. The affected page range will be ignored in the results. When `false`, a failed split request will cause the entire document to fail. Default: `false`. |
| `split_pdf_concurrency_level` (_int_) | `splitPdfConcurrencyLevel` (_number_) | The number of split files to be sent concurrently. Default: 5. Maximum: 15. |
| `split_pdf_page_range` (_List[int]_)  | `splitPdfPageRange` (_number[]_)      | A list of 2 integers within the range `[1, length_of_pdf]`. When PDF splitting is enabled, this will send only the specified page range to the API. |
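To illustrate what these client-side splitting parameters control, here is a hypothetical helper (not SDK code) that divides a page range into contiguous batches, roughly how a client might fan out concurrent split requests:

```python
def split_page_batches(first_page, last_page, concurrency_level=5):
    """Divide [first_page, last_page] into up to concurrency_level
    contiguous (start, end) batches, distributing pages evenly."""
    total = last_page - first_page + 1
    n = min(concurrency_level, total)
    base, extra = divmod(total, n)
    batches, start = [], first_page
    for i in range(n):
        size = base + (1 if i < extra else 0)
        batches.append((start, start + size - 1))
        start += size
    return batches

print(split_page_batches(1, 12, 5))
```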
-Need help getting started? Check out the [Examples page](/platform-api/partition-api/examples) for some inspiration.
+Need help getting started? Check out the [Examples page](/api/partition/examples) for some inspiration.
Allowed values for `vlm_model_provider` and `vlm_model` pairs include the following:
diff --git a/platform-api/partition-api/api-validation-errors.mdx b/api/partition/api-validation-errors.mdx
similarity index 93%
rename from platform-api/partition-api/api-validation-errors.mdx
rename to api/partition/api-validation-errors.mdx
index 4afb0aab..f647b15a 100644
--- a/platform-api/partition-api/api-validation-errors.mdx
+++ b/api/partition/api-validation-errors.mdx
@@ -1,6 +1,6 @@
---
title: Endpoint validation errors
-description: This section details the structure of HTTP validation errors returned by the Unstructured Platform Partition Endpoint.
+description: This section details the structure of HTTP validation errors returned by the Unstructured Partition Endpoint.
---
## HTTPValidationError
diff --git a/platform-api/partition-api/chunking.mdx b/api/partition/chunking.mdx
similarity index 100%
rename from platform-api/partition-api/chunking.mdx
rename to api/partition/chunking.mdx
diff --git a/platform-api/partition-api/document-elements.mdx b/api/partition/document-elements.mdx
similarity index 100%
rename from platform-api/partition-api/document-elements.mdx
rename to api/partition/document-elements.mdx
diff --git a/platform-api/partition-api/examples.mdx b/api/partition/examples.mdx
similarity index 99%
rename from platform-api/partition-api/examples.mdx
rename to api/partition/examples.mdx
index e41ef17e..7921671e 100644
--- a/platform-api/partition-api/examples.mdx
+++ b/api/partition/examples.mdx
@@ -1,10 +1,10 @@
---
title: Examples
-description: This page provides some examples of accessing Unstructured Platform Partition Endpoint via different methods.
-description: This page provides some examples of accessing Unstructured Platform Partition Endpoint via different methods.
+description: This page provides some examples of accessing the Unstructured Partition Endpoint via different methods.
---
To use these examples, you'll first need to set an environment variable named `UNSTRUCTURED_API_KEY`,
-representing your Unstructured API key. [Get your API key](/platform-api/partition-api/overview).
+representing your Unstructured API key. [Get your API key](/api/partition/overview).
For the POST and Unstructured JavaScript/TypeScript SDK examples, you'll also need to set an environment variable named `UNSTRUCTURED_API_URL` to the
value `https://api.unstructuredapp.io/general/v0/general`
diff --git a/platform-api/partition-api/extract-image-block-types.mdx b/api/partition/extract-image-block-types.mdx
similarity index 78%
rename from platform-api/partition-api/extract-image-block-types.mdx
rename to api/partition/extract-image-block-types.mdx
index 9e601f8b..110c64e9 100644
--- a/platform-api/partition-api/extract-image-block-types.mdx
+++ b/api/partition/extract-image-block-types.mdx
@@ -15,7 +15,7 @@ and then show it.
## To run this example
You will need a document that is one of the document types supported by the `extract_image_block_types` argument.
-See the `extract_image_block_types` entry in [API Parameters](/platform-api/partition-api/api-parameters).
+See the `extract_image_block_types` entry in [API Parameters](/api/partition/api-parameters).
This example uses a PDF file with embedded images and tables.
import SharedAPIKeyURL from '/snippets/general-shared-text/api-key-url.mdx';
@@ -23,11 +23,11 @@ import ExtractImageBlockTypesPy from '/snippets/how-to-api/extract_image_block_t
## Code
-For the [Unstructured Python SDK](/platform-api/partition-api/sdk-python), you'll need:
+For the [Unstructured Python SDK](/api/partition/sdk-python), you'll need:
## See also
-- [Extract text as HTML](/platform-api/partition-api/text-as-html)
+- [Extract text as HTML](/api/partition/text-as-html)
- [Table extraction from PDF](/examplecode/codesamples/apioss/table-extraction-from-pdf)
\ No newline at end of file
diff --git a/platform-api/partition-api/generate-schema.mdx b/api/partition/generate-schema.mdx
similarity index 100%
rename from platform-api/partition-api/generate-schema.mdx
rename to api/partition/generate-schema.mdx
diff --git a/platform-api/partition-api/get-chunked-elements.mdx b/api/partition/get-chunked-elements.mdx
similarity index 93%
rename from platform-api/partition-api/get-chunked-elements.mdx
rename to api/partition/get-chunked-elements.mdx
index efaee4d7..1990151c 100644
--- a/platform-api/partition-api/get-chunked-elements.mdx
+++ b/api/partition/get-chunked-elements.mdx
@@ -55,11 +55,11 @@ You will need to chunk a document during processing. This example uses a PDF fil
import GetChunkedElementsPy from '/snippets/how-to-api/get_chunked_elements.py.mdx';
import SharedAPIKeyURL from '/snippets/general-shared-text/api-key-url.mdx';
-For the [Unstructured Python SDK](/platform-api/partition-api/sdk-python), you'll need:
+For the [Unstructured Python SDK](/api/partition/sdk-python), you'll need:
## See also
- [Recovering chunk elements](/open-source/core-functionality/chunking#recovering-chunk-elements)
-- [Chunking strategies](/platform-api/partition-api/chunking)
\ No newline at end of file
+- [Chunking strategies](/api/partition/chunking)
\ No newline at end of file
diff --git a/platform-api/partition-api/get-elements.mdx b/api/partition/get-elements.mdx
similarity index 91%
rename from platform-api/partition-api/get-elements.mdx
rename to api/partition/get-elements.mdx
index 89e75afb..4c9d8593 100644
--- a/platform-api/partition-api/get-elements.mdx
+++ b/api/partition/get-elements.mdx
@@ -4,7 +4,7 @@ title: Get element contents
## Task
-You want to get, manipulate, and print or save, the contents of the [document elements and metadata](/platform-api/partition-api/document-elements) from the processed data that Unstructured returns.
+You want to get, manipulate, and print or save the contents of the [document elements and metadata](/api/partition/document-elements) from the processed data that Unstructured returns.
## Approach
@@ -14,7 +14,7 @@ The programmatic approach you take to get these document elements will depend on
- For the [Unstructured Python SDK](/platform-api/partition-api/sdk-python), calling an `UnstructuredClient` object's `general.partition_async` method returns a `PartitionResponse` object.
+ For the [Unstructured Python SDK](/api/partition/sdk-python), calling an `UnstructuredClient` object's `general.partition_async` method returns a `PartitionResponse` object.
This `PartitionResponse` object's `elements` variable contains a list of key-value dictionaries (`List[Dict[str, Any]]`). For example:
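For instance, the element dictionaries can be grouped and inspected like this; the sample list below is illustrative, standing in for a real `PartitionResponse.elements` value:

```python
# Sample stand-in for PartitionResponse.elements (List[Dict[str, Any]]).
elements = [
    {"type": "Title", "text": "Quarterly Report", "metadata": {"page_number": 1}},
    {"type": "NarrativeText", "text": "Revenue grew 12%.", "metadata": {"page_number": 1}},
    {"type": "Table", "text": "Q1 Q2 Q3", "metadata": {"page_number": 2}},
]

# Group element text by element type, a common post-processing step.
by_type = {}
for el in elements:
    by_type.setdefault(el["type"], []).append(el["text"])

print(by_type["Title"])
```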
@@ -78,7 +78,7 @@ The programmatic approach you take to get these document elements will depend on
```
- For the [Unstructured JavaScript/TypeScript SDK](/platform-api/partition-api/sdk-jsts), calling an `UnstructuredClient` object's `general.partition` method returns a `Promise` object.
+ For the [Unstructured JavaScript/TypeScript SDK](/api/partition/sdk-jsts), calling an `UnstructuredClient` object's `general.partition` method returns a `Promise` object.
This `PartitionResponse` object's `elements` property contains an `Array` of string-value objects (`{ [k: string]: any; }[]`). For example:
diff --git a/api/partition/output-bounding-box-coordinates.mdx b/api/partition/output-bounding-box-coordinates.mdx
new file mode 100644
index 00000000..98152117
--- /dev/null
+++ b/api/partition/output-bounding-box-coordinates.mdx
@@ -0,0 +1,4 @@
+---
+title: "Output bounding box coordinates"
+url: "/api/partition/examples#saving-bounding-box-coordinates"
+---
\ No newline at end of file
diff --git a/platform-api/partition-api/overview.mdx b/api/partition/overview.mdx
similarity index 83%
rename from platform-api/partition-api/overview.mdx
rename to api/partition/overview.mdx
index 43d6d50d..9ac9cfec 100644
--- a/platform-api/partition-api/overview.mdx
+++ b/api/partition/overview.mdx
@@ -2,15 +2,15 @@
title: Overview
---
-The Unstructured Platform Partition Endpoint, part of the [Unstructured Platform API](/platform-api/overview), is intended for rapid prototyping of Unstructured's
+The Unstructured Partition Endpoint, part of the [Unstructured API](/api/overview), is intended for rapid prototyping of Unstructured's
various partitioning strategies, with limited support for chunking. It is designed to work only with processing of local files, one file
-at a time. Use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) for production-level scenarios, file processing in
+at a time. Use the [Unstructured Workflow Endpoint](/api/workflow/overview) for production-level scenarios, file processing in
batches, files and data in remote locations, generating embeddings, applying post-transform enrichments, using the latest and
highest-performing models, and for the highest quality results at the lowest cost.
## Get started
-To call the Unstructured Platform Partition Endpoint, you need an Unstructured account and an Unstructured API key:
+To call the Unstructured Partition Endpoint, you need an Unstructured account and an Unstructured API key:
@@ -29,8 +29,8 @@ To call the Unstructured Platform Partition Endpoint, you need an Unstructured a
To save money by switching from a pay-per-page to a subscribe-and-save plan, go to the
[Unstructured Subscribe & Save](https://unstructured.io/subscribeandsave) page and complete the on-screen instructions.
- By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run within the context of the Unstructured Platform on
- Unstructured's own hosted cloud resources. If you would rather run the Unstructured Platform within the context of your own virtual private cloud (VPC),
+ By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run within the context of the Unstructured API on
+ Unstructured's own hosted cloud resources. If you would rather run the Unstructured API within the context of your own virtual private cloud (VPC),
(or you want to save even more money by making a long-term billing commitment),
stop here and sign up through the [For Enterprise](https://unstructured.io/enterprise) page instead.
@@ -44,9 +44,9 @@ To call the Unstructured Platform Partition Endpoint, you need an Unstructured a
be different. For enterprise sign-in guidance, contact Unstructured Sales at [sales@unstructured.io](mailto:sales@unstructured.io).
- 1. After you have signed up for a pay-per-page plan, the Unstructured Platform sign-in page appears.
+ 1. After you have signed up for a pay-per-page plan, the Unstructured account sign-in page appears.
- 
+ 
2. Click **Google** or **GitHub** to sign in with the Google or GitHub account that you signed up with.
Or, enter the email address that you signed up with, and then click **Sign In**.
@@ -65,9 +65,9 @@ To call the Unstructured Platform Partition Endpoint, you need an Unstructured a
- 
+ 
- 
+ 
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -89,11 +89,11 @@ If you signed up for a pay-per-page plan, you can enjoy a free 14-day trial with
At the end of the 14-day free trial, or if you need to go past the trial's page processing limits during the 14-day free trial, you must set up your billing information to keep using
-the Unstructured Platform Partition API:
+the Unstructured Partition Endpoint:
-
+
-
+
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -113,7 +113,7 @@ import SharedPagesBilling from '/snippets/general-shared-text/pages-billing.mdx'
## Quickstart
-This example uses the [curl](https://curl.se/) utility on your local machine to call the Unstructured Platform Partition Endpoint. It sends a source (input) file from your local machine to the Unstructured Platform Partition Endpoint which then delivers the processed data to a destination (output) location, also on your local machine. Data is processed on Unstructured-hosted compute resources.
+This example uses the [curl](https://curl.se/) utility on your local machine to call the Unstructured Partition Endpoint. It sends a source (input) file from your local machine to the Unstructured Partition Endpoint which then delivers the processed data to a destination (output) location, also on your local machine. Data is processed on Unstructured-hosted compute resources.
If you do not have a source file readily available, you could use for example a sample PDF file containing the text of the United States Constitution,
available for download from [https://constitutioncenter.org/media/files/constitution.pdf](https://constitutioncenter.org/media/files/constitution.pdf).
@@ -122,7 +122,7 @@ available for download from [https://constitutioncenter.org/media/files/constitu
From your terminal or Command Prompt, set the following two environment variables.
- - Replace `` with the Unstructured Platform Partition Endpoint URL, which is `https://api.unstructuredapp.io/general/v0/general`
+ - Replace `` with the Unstructured Partition Endpoint URL, which is `https://api.unstructuredapp.io/general/v0/general`
- Replace `` with your Unstructured API key, which you generated earlier on this page.
```bash
@@ -159,7 +159,7 @@ available for download from [https://constitutioncenter.org/media/files/constitu
-You can also call the Unstructured Platform Partition Endpoint by using the [Unstructured Python SDK](/platform-api/partition-api/sdk-python) or the [Unstructured JavaScript/TypeScript SDK](/platform-api/partition-api/sdk-jsts).
+You can also call the Unstructured Partition Endpoint by using the [Unstructured Python SDK](/api/partition/sdk-python) or the [Unstructured JavaScript/TypeScript SDK](/api/partition/sdk-jsts).
## Telemetry
diff --git a/platform-api/partition-api/partitioning.mdx b/api/partition/partitioning.mdx
similarity index 100%
rename from platform-api/partition-api/partitioning.mdx
rename to api/partition/partitioning.mdx
diff --git a/platform-api/partition-api/pipeline-1.mdx b/api/partition/pipeline-1.mdx
similarity index 100%
rename from platform-api/partition-api/pipeline-1.mdx
rename to api/partition/pipeline-1.mdx
diff --git a/platform-api/partition-api/post-requests.mdx b/api/partition/post-requests.mdx
similarity index 82%
rename from platform-api/partition-api/post-requests.mdx
rename to api/partition/post-requests.mdx
index def219fe..253a9091 100644
--- a/platform-api/partition-api/post-requests.mdx
+++ b/api/partition/post-requests.mdx
@@ -3,17 +3,17 @@ title: Process an individual file by making a direct POST request
sidebarTitle: POST request
---
-To make POST requests to the Unstructured Platform Partition Endpoint, you will need:
+To make POST requests to the Unstructured Partition Endpoint, you will need:
import SharedAPIKeyURL from '/snippets/general-shared-text/api-key-url.mdx';
-[Get your API key](/platform-api/partition-api/overview).
+[Get your API key](/api/partition/overview).
The API URL is `https://api.unstructuredapp.io/general/v0/general`
-Let's start with a simple example in which you use [curl](https://curl.se/) to send a local PDF file (`*.pdf`) to partition via the Unstructured Platform Partition Endpoint.
+Let's start with a simple example in which you use [curl](https://curl.se/) to send a local PDF file (`*.pdf`) to partition via the Unstructured Partition Endpoint.
In this command, be sure to replace `` with the path to your local PDF file.
@@ -32,14 +32,14 @@ curl --request 'POST' \
```
In the example above we're representing the API endpoint with the environment variable `UNSTRUCTURED_API_URL`. Note, however, that you also need to authenticate yourself with
-your individual API Key, represented by the environment variable `UNSTRUCTURED_API_KEY`. Learn how to obtain an API URL and API key in the [Unstructured Platform Partition Endpoint guide](/platform-api/partition-api/overview).
+your individual API Key, represented by the environment variable `UNSTRUCTURED_API_KEY`. Learn how to obtain an API URL and API key in the [Unstructured Partition Endpoint guide](/api/partition/overview).
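For readers who prefer to see the request shape in code, here is a minimal Python sketch, standard library only, of the same multipart POST that the curl example issues. Nothing is actually sent; the file name, file bytes, and fallback key value are placeholders.

```python
import io
import os
import uuid
import urllib.request


def build_partition_request(api_url: str, api_key: str,
                            filename: str, file_bytes: bytes) -> urllib.request.Request:
    """Build (but do not send) a multipart/form-data POST for the Partition Endpoint."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    # One "files" form part, mirroring curl's --form 'files=@...'
    body.write((f"--{boundary}\r\n"
                f'Content-Disposition: form-data; name="files"; filename="{filename}"\r\n'
                "Content-Type: application/octet-stream\r\n\r\n").encode())
    body.write(file_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return urllib.request.Request(
        api_url,
        data=body.getvalue(),
        method="POST",
        headers={
            "accept": "application/json",
            "unstructured-api-key": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )


req = build_partition_request(
    os.environ.get("UNSTRUCTURED_API_URL",
                   "https://api.unstructuredapp.io/general/v0/general"),
    os.environ.get("UNSTRUCTURED_API_KEY", "YOUR_API_KEY"),
    "constitution.pdf",
    b"%PDF-1.4 placeholder bytes",
)
# urllib.request.urlopen(req) would actually send the request; omitted here.
```

In practice you would use the SDKs or curl directly; this sketch only makes the headers and body layout explicit.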
## Parameters & examples
-The API parameters are the same across all methods of accessing the Unstructured Platform Partition Endpoint.
+The API parameters are the same across all methods of accessing the Unstructured Partition Endpoint.
-* Refer to the [API parameters](/platform-api/partition-api/api-parameters) page for the full list of available parameters.
-* Refer to the [Examples](/platform-api/partition-api/examples) page for some inspiration on using the parameters.
+* Refer to the [API parameters](/api/partition/api-parameters) page for the full list of available parameters.
+* Refer to the [Examples](/api/partition/examples) page for some inspiration on using the parameters.
[//]: # (TODO: when we have the concepts page shared across products, link it from here for the users to learn about partition strategies, chunking strategies and other important shared concepts)
@@ -61,7 +61,7 @@ Unstructured offers a [Postman collection](https://learning.postman.com/docs/col
5. On the sidebar, click **Collections**.
6. Expand **Unstructured POST**.
-7. Click **(Platform Partition Endpoint) Basic Request**.
+7. Click **(Partition Endpoint) Basic Request**.
8. On the **Headers** tab, next to `unstructured-api-key`, enter your Unstructured API key in the **Value** column.
9. On the **Body** tab, next to `files`, click the **Select files** box in the **Value** column.
10. Click **New file from local machine**.
diff --git a/platform-api/partition-api/sdk-jsts.mdx b/api/partition/sdk-jsts.mdx
similarity index 95%
rename from platform-api/partition-api/sdk-jsts.mdx
rename to api/partition/sdk-jsts.mdx
index f5fead98..547bfc03 100644
--- a/platform-api/partition-api/sdk-jsts.mdx
+++ b/api/partition/sdk-jsts.mdx
@@ -2,10 +2,10 @@
title: JavaScript/TypeScript SDK
---
-The [Unstructured JavaScript/TypeScript SDK](https://github.com/Unstructured-IO/unstructured-js-client) client allows you to send one file at a time for processing by the Unstructured Platform Partition API.
+The [Unstructured JavaScript/TypeScript SDK](https://github.com/Unstructured-IO/unstructured-js-client) client allows you to send one file at a time for processing by the Unstructured Partition Endpoint.
To use the JavaScript/TypeScript SDK, you'll first need to set an environment variable named `UNSTRUCTURED_API_KEY`,
-representing your Unstructured API key. [Get your API key](/platform-api/partition-api/overview).
+representing your Unstructured API key. [Get your API key](/api/partition/overview).
## Installation
@@ -23,7 +23,7 @@ representing your Unstructured API key. [Get your API key](/platform-api/partiti
## Basics
- Let's start with a simple example in which you send a PDF document to the Unstructured Platform Parition Endpoint to be partitioned by Unstructured.
+ Let's start with a simple example in which you send a PDF document to the Unstructured Partition Endpoint to be partitioned by Unstructured.
The JavaScript/TypeScript SDK has the following breaking changes in v0.11.0:
@@ -286,6 +286,6 @@ The parameter names used in this document are for the JavaScript/TypeScript SDK,
convention. The Python SDK follows the `snake_case` convention. Other than this difference in naming convention,
the names used in the SDKs are the same across all methods.
-* Refer to the [API parameters](/platform-api/partition-api/api-parameters) page for the full list of available parameters.
-* Refer to the [Examples](/platform-api/partition-api/examples) page for some inspiration on using the parameters.
+* Refer to the [API parameters](/api/partition/api-parameters) page for the full list of available parameters.
+* Refer to the [Examples](/api/partition/examples) page for some inspiration on using the parameters.
diff --git a/platform-api/partition-api/sdk-python.mdx b/api/partition/sdk-python.mdx
similarity index 95%
rename from platform-api/partition-api/sdk-python.mdx
rename to api/partition/sdk-python.mdx
index 347eec8b..6553d820 100644
--- a/platform-api/partition-api/sdk-python.mdx
+++ b/api/partition/sdk-python.mdx
@@ -3,10 +3,10 @@ title: Python SDK
---
The [Unstructured Python SDK](https://github.com/Unstructured-IO/unstructured-python-client) client allows you to send one file at a time for processing by
-the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview).
+the [Unstructured Partition Endpoint](/api/partition/overview).
To use the Python SDK, you'll first need to set an environment variable named `UNSTRUCTURED_API_KEY`,
-representing your Unstructured API key. [Get your API key](/platform-api/partition-api/overview).
+representing your Unstructured API key. [Get your API key](/api/partition/overview).
## Installation
@@ -23,7 +23,7 @@ representing your Unstructured API key. [Get your API key](/platform-api/partiti
## Basics
- Let's start with a simple example in which you send a PDF document to the Unstructured Platform Parition Endpoint to be partitioned by Unstructured.
+ Let's start with a simple example in which you send a PDF document to the Unstructured Partition Endpoint to be partitioned by Unstructured.
```python Python
import os, json
@@ -250,8 +250,8 @@ The parameter names used in this document are for the Python SDK, which follow s
convention. Other than this difference in naming convention,
the names used in the SDKs are the same across all methods.
-* Refer to the [API parameters](/platform-api/partition-api/api-parameters) page for the full list of available parameters.
-* Refer to the [Examples](/platform-api/partition-api/examples) page for some inspiration on using the parameters.
+* Refer to the [API parameters](/api/partition/api-parameters) page for the full list of available parameters.
+* Refer to the [Examples](/api/partition/examples) page for some inspiration on using the parameters.
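The `snake_case`/`camelCase` correspondence between the two SDKs is mechanical. As an illustrative sketch (not part of either SDK):

```python
def snake_to_camel(name: str) -> str:
    """Convert a Python SDK parameter name to its JavaScript/TypeScript equivalent."""
    first, *rest = name.split("_")
    return first + "".join(part.title() for part in rest)


# For example, the Python SDK's split_pdf_page maps to splitPdfPage.
print(snake_to_camel("split_pdf_page"))
```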
## Migration guide
diff --git a/platform-api/partition-api/speed-up-large-files-batches.mdx b/api/partition/speed-up-large-files-batches.mdx
similarity index 82%
rename from platform-api/partition-api/speed-up-large-files-batches.mdx
rename to api/partition/speed-up-large-files-batches.mdx
index 9cd3704b..dcfedb6f 100644
--- a/platform-api/partition-api/speed-up-large-files-batches.mdx
+++ b/api/partition/speed-up-large-files-batches.mdx
@@ -4,13 +4,13 @@ title: Speed up processing of large files and batches
When you use Unstructured, here are some techniques that you can try to help speed up the processing of large files and large batches of files.
-Choose your partitioning strategy wisely. For example, if you have simple PDFs that don't have images and tables, you might be able to use the `fast` strategy. Try the `fast` strategy on a few of your documents before you try using the `hi_res` strategy. [Learn more](/platform-api/partition-api/partitioning).
+Choose your partitioning strategy wisely. For example, if you have simple PDFs that don't have images and tables, you might be able to use the `fast` strategy. Try the `fast` strategy on a few of your documents before you try using the `hi_res` strategy. [Learn more](/api/partition/partitioning).
-To speed up PDF file processing, the [Unstructured SDK for Python](/platform-api/partition-api/sdk-python) and the [Unstructured SDK for JavaScript/TypeScript](/platform-api/partition-api/sdk-jsts) provide the following parameters to help speed up processing a large PDF file:
+To speed up PDF file processing, the [Unstructured SDK for Python](/api/partition/sdk-python) and the [Unstructured SDK for JavaScript/TypeScript](/api/partition/sdk-jsts) provide the following parameters to help speed up processing a large PDF file:
- `split_pdf_page` (Python) or `splitPdfPage` (JavaScript/TypeScript), when set to true, splits the PDF file on the client side before sending it as batches to Unstructured for processing. The number of pages in each batch is determined internally. Batches can contain between 2 and 20 pages.
- `split_pdf_concurrency_level` (Python) or `splitPdfConcurrencyLevel` (JavaScript/TypeScript) is an integer that specifies the number of parallel requests. The default is 5. The maximum is 15. This behavior is ignored unless `split_pdf_page` (Python) or `splitPdfPage` (JavaScript/TypeScript) is also set to true.
- `split_pdf_allow_failed` (Python) or `splitPdfAllowFailed` (JavaScript/TypeScript), when set to true, allows partitioning to continue even if some pages fail.
- `split_pdf_page_range` (Python only) is a list of two integers that specify the beginning and ending page numbers of the PDF file to be sent. A `ValueError` is raised if the specified range is not valid. This behavior is ignored unless `split_pdf_page` is also set to true.
-[Learn more](/platform-api/partition-api/sdk-python#page-splitting).
+[Learn more](/api/partition/sdk-python#page-splitting).
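To make the client-side splitting idea concrete, here is an illustrative Python sketch of grouping a PDF's pages into batches of 2 to 20 pages. The SDK's actual batching logic is internal and may differ; this only demonstrates the concept described above.

```python
def split_pages(total_pages: int, max_batch: int = 20, min_batch: int = 2):
    """Return 1-based inclusive (start, end) page ranges for batched requests."""
    batches = []
    start = 1
    while start <= total_pages:
        end = min(start + max_batch - 1, total_pages)
        batches.append((start, end))
        start = end + 1
    # Merge an undersized trailing batch into its predecessor so every
    # batch (where possible) holds at least min_batch pages.
    if len(batches) > 1 and batches[-1][1] - batches[-1][0] + 1 < min_batch:
        _, last_end = batches.pop()
        prev_start, _ = batches[-1]
        batches[-1] = (prev_start, last_end)
    return batches


# A 45-page PDF yields three batches: pages 1-20, 21-40, and 41-45.
print(split_pages(45))
```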
diff --git a/platform-api/partition-api/text-as-html.mdx b/api/partition/text-as-html.mdx
similarity index 84%
rename from platform-api/partition-api/text-as-html.mdx
rename to api/partition/text-as-html.mdx
index 82f83b70..4e4d45f6 100644
--- a/platform-api/partition-api/text-as-html.mdx
+++ b/api/partition/text-as-html.mdx
@@ -21,11 +21,11 @@ import ExtractTextAsHTMLPy from '/snippets/how-to-api/extract_text_as_html.py.md
## Code
-For the [Unstructured Python SDK](/platform-api/partition-api/sdk-python), you'll need:
+For the [Unstructured Python SDK](/api/partition/sdk-python), you'll need:
## See also
-- [Extract images and tables from documents](/platform-api/partition-api/extract-image-block-types)
+- [Extract images and tables from documents](/api/partition/extract-image-block-types)
- [Table Extraction from PDF](/examplecode/codesamples/apioss/table-extraction-from-pdf)
\ No newline at end of file
diff --git a/platform-api/partition-api/transform-schemas.mdx b/api/partition/transform-schemas.mdx
similarity index 100%
rename from platform-api/partition-api/transform-schemas.mdx
rename to api/partition/transform-schemas.mdx
diff --git a/platform-api/supported-file-types.mdx b/api/supported-file-types.mdx
similarity index 100%
rename from platform-api/supported-file-types.mdx
rename to api/supported-file-types.mdx
diff --git a/platform-api/troubleshooting/api-key-url.mdx b/api/troubleshooting/api-key-url.mdx
similarity index 76%
rename from platform-api/troubleshooting/api-key-url.mdx
rename to api/troubleshooting/api-key-url.mdx
index e47c06bc..c5f596e0 100644
--- a/platform-api/troubleshooting/api-key-url.mdx
+++ b/api/troubleshooting/api-key-url.mdx
@@ -1,11 +1,11 @@
---
-title: Troubleshooting Unstructured Platform API keys and URLs
+title: Troubleshooting Unstructured API keys and URLs
sidebarTitle: API keys and URLs
---
## Issue
-When you run script or code to call an Unstructured Platform API, you get one of the following warnings or errors:
+When you run script or code to call an Unstructured API, you get one of the following warnings or errors:
```
UserWarning: If intending to use the paid API, please define `server_url` in your request.
@@ -37,20 +37,20 @@ API error occurred: Status 404
For the API URL, note the following:
-- For the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview), the API URL is typically `https://platform.unstructuredapp.io/api/v1`.
-- For the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview), the API URL is typically `https://api.unstructuredapp.io/general/v0/general`.
+- For the [Unstructured Workflow Endpoint](/api/workflow/overview), the API URL is typically `https://platform.unstructuredapp.io/api/v1`.
+- For the [Unstructured Partition Endpoint](/api/partition/overview), the API URL is typically `https://api.unstructuredapp.io/general/v0/general`.
-For the API key, the same API key works for both the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) key or [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview). This API key is in your Unstructured account dashboard. To access your dashboard:
+For the API key, the same API key works for both the [Unstructured Workflow Endpoint](/api/workflow/overview) and the [Unstructured Partition Endpoint](/api/partition/overview). This API key is in your Unstructured account dashboard. To access your dashboard:
- 
+ 
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
3. On the **API Keys** tab, click the copy icon next to your key.
-For the API URL, note the value of the the Unstructured **Platform API URL** (for the Unstructured Platform Workflow Endpoint) or the Unstructured **Serverless API URL** (for the Unstructured Platform Partition Endpoint).
+For the API URL, note the value of the Unstructured **API URL** (for the Unstructured Workflow Endpoint) or the Unstructured **Serverless API URL** (for the Unstructured Partition Endpoint).
- 
+ 
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
diff --git a/platform-api/api/destinations/astradb.mdx b/api/workflow/destinations/astradb.mdx
similarity index 100%
rename from platform-api/api/destinations/astradb.mdx
rename to api/workflow/destinations/astradb.mdx
diff --git a/platform-api/api/destinations/azure-ai-search.mdx b/api/workflow/destinations/azure-ai-search.mdx
similarity index 100%
rename from platform-api/api/destinations/azure-ai-search.mdx
rename to api/workflow/destinations/azure-ai-search.mdx
diff --git a/platform-api/api/destinations/couchbase.mdx b/api/workflow/destinations/couchbase.mdx
similarity index 100%
rename from platform-api/api/destinations/couchbase.mdx
rename to api/workflow/destinations/couchbase.mdx
diff --git a/platform-api/api/destinations/databricks-delta-table.mdx b/api/workflow/destinations/databricks-delta-table.mdx
similarity index 88%
rename from platform-api/api/destinations/databricks-delta-table.mdx
rename to api/workflow/destinations/databricks-delta-table.mdx
index 6cfb46da..8c512a12 100644
--- a/platform-api/api/destinations/databricks-delta-table.mdx
+++ b/api/workflow/destinations/databricks-delta-table.mdx
@@ -6,10 +6,10 @@ title: Delta Tables in Databricks
This article covers connecting Unstructured to Delta Tables in Databricks.
For information about connecting Unstructured to Delta Tables in Amazon S3 instead, see
- [Delta Tables in Amazon S3](/platform-api/api/destinations/delta-table).
+ [Delta Tables in Amazon S3](/api/workflow/destinations/delta-table).
For information about connecting Unstructured to Databricks Volumes instead, see
- [Databricks Volumes](/platform-api/api/destinations/databricks-volumes).
+ [Databricks Volumes](/api/workflow/destinations/databricks-volumes).
Send processed data from Unstructured to a Delta Table in Databricks.
diff --git a/platform-api/api/destinations/databricks-volumes.mdx b/api/workflow/destinations/databricks-volumes.mdx
similarity index 92%
rename from platform-api/api/destinations/databricks-volumes.mdx
rename to api/workflow/destinations/databricks-volumes.mdx
index 87ae3362..7170de5e 100644
--- a/platform-api/api/destinations/databricks-volumes.mdx
+++ b/api/workflow/destinations/databricks-volumes.mdx
@@ -6,7 +6,7 @@ title: Databricks Volumes
This article covers connecting Unstructured to Databricks Volumes.
For information about connecting Unstructured to Delta Tables in Databricks instead, see
- [Delta Tables in Databricks](/platform-api/api/destinations/databricks-delta-table).
+ [Delta Tables in Databricks](/api/workflow/destinations/databricks-delta-table).
Send processed data from Unstructured to Databricks Volumes.
diff --git a/platform-api/api/destinations/delta-table.mdx b/api/workflow/destinations/delta-table.mdx
similarity index 91%
rename from platform-api/api/destinations/delta-table.mdx
rename to api/workflow/destinations/delta-table.mdx
index ac968bd7..78e7eb42 100644
--- a/platform-api/api/destinations/delta-table.mdx
+++ b/api/workflow/destinations/delta-table.mdx
@@ -5,7 +5,7 @@ title: Delta Tables in Amazon S3
This article covers connecting Unstructured to Delta Tables in Amazon S3. For information about
connecting Unstructured to Delta Tables in Databricks instead, see
- [Delta Tables in Databricks](/platform-api/api/destinations/databricks-delta-table).
+ [Delta Tables in Databricks](/api/workflow/destinations/databricks-delta-table).
Send processed data from Unstructured to a Delta Table, stored in Amazon S3.
diff --git a/platform-api/api/destinations/elasticsearch.mdx b/api/workflow/destinations/elasticsearch.mdx
similarity index 100%
rename from platform-api/api/destinations/elasticsearch.mdx
rename to api/workflow/destinations/elasticsearch.mdx
diff --git a/platform-api/api/destinations/google-cloud.mdx b/api/workflow/destinations/google-cloud.mdx
similarity index 100%
rename from platform-api/api/destinations/google-cloud.mdx
rename to api/workflow/destinations/google-cloud.mdx
diff --git a/platform-api/api/destinations/kafka.mdx b/api/workflow/destinations/kafka.mdx
similarity index 100%
rename from platform-api/api/destinations/kafka.mdx
rename to api/workflow/destinations/kafka.mdx
diff --git a/platform-api/api/destinations/milvus.mdx b/api/workflow/destinations/milvus.mdx
similarity index 100%
rename from platform-api/api/destinations/milvus.mdx
rename to api/workflow/destinations/milvus.mdx
diff --git a/platform-api/api/destinations/mongodb.mdx b/api/workflow/destinations/mongodb.mdx
similarity index 100%
rename from platform-api/api/destinations/mongodb.mdx
rename to api/workflow/destinations/mongodb.mdx
diff --git a/platform-api/api/destinations/motherduck.mdx b/api/workflow/destinations/motherduck.mdx
similarity index 100%
rename from platform-api/api/destinations/motherduck.mdx
rename to api/workflow/destinations/motherduck.mdx
diff --git a/platform-api/api/destinations/neo4j.mdx b/api/workflow/destinations/neo4j.mdx
similarity index 100%
rename from platform-api/api/destinations/neo4j.mdx
rename to api/workflow/destinations/neo4j.mdx
diff --git a/platform-api/api/destinations/onedrive.mdx b/api/workflow/destinations/onedrive.mdx
similarity index 100%
rename from platform-api/api/destinations/onedrive.mdx
rename to api/workflow/destinations/onedrive.mdx
diff --git a/api/workflow/destinations/overview.mdx b/api/workflow/destinations/overview.mdx
new file mode 100644
index 00000000..64d296a0
--- /dev/null
+++ b/api/workflow/destinations/overview.mdx
@@ -0,0 +1,42 @@
+---
+title: Overview
+---
+
+To use the [Unstructured Workflow Endpoint](/api/workflow/overview) to manage destination connectors, do the following:
+
+- To get a list of available destination connectors, use the `UnstructuredClient` object's `destinations.list_destinations` function (for the Python SDK) or
+ the `GET` method to call the `/destinations` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-destination-connectors).
+- To get information about a destination connector, use the `UnstructuredClient` object's `destinations.get_destination` function (for the Python SDK) or
+ the `GET` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#get-a-destination-connector).
+- To create a destination connector, use the `UnstructuredClient` object's `destinations.create_destination` function (for the Python SDK) or
+ the `POST` method to call the `/destinations` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#create-a-destination-connector).
+- To update a destination connector, use the `UnstructuredClient` object's `destinations.update_destination` function (for the Python SDK) or
+ the `PUT` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#update-a-destination-connector).
+- To delete a destination connector, use the `UnstructuredClient` object's `destinations.delete_destination` function (for the Python SDK) or
+ the `DELETE` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#delete-a-destination-connector).
+
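As a rough illustration of the raw REST shape behind the calls listed above, the following Python sketch builds (but does not send) requests against the Workflow Endpoint base URL. The API key and the `CONNECTOR_ID` path segment are placeholders.

```python
import urllib.request

WORKFLOW_BASE_URL = "https://platform.unstructuredapp.io/api/v1"


def workflow_request(method: str, path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated Workflow Endpoint request without sending it."""
    return urllib.request.Request(
        f"{WORKFLOW_BASE_URL}{path}",
        method=method,
        headers={"accept": "application/json", "unstructured-api-key": api_key},
    )


# List destination connectors, and delete one by (placeholder) ID.
list_req = workflow_request("GET", "/destinations", "YOUR_API_KEY")
delete_req = workflow_request("DELETE", "/destinations/CONNECTOR_ID", "YOUR_API_KEY")
# urllib.request.urlopen(list_req) would send the request; omitted here.
```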
+To create or update a destination connector, you must also provide settings that are specific to that connector.
+For the list of specific settings, see:
+
+- [Astra DB](/api/workflow/destinations/astradb) (`ASTRADB` for the Python SDK or `astradb` for `curl` or Postman)
+- [Azure AI Search](/api/workflow/destinations/azure-ai-search) (`AZURE_AI_SEARCH` for the Python SDK or `azure_ai_search` for `curl` or Postman)
+- [Couchbase](/api/workflow/destinations/couchbase) (`COUCHBASE` for the Python SDK or `couchbase` for `curl` or Postman)
+- [Databricks Volumes](/api/workflow/destinations/databricks-volumes) (`DATABRICKS_VOLUMES` for the Python SDK or `databricks_volumes` for `curl` or Postman)
+- [Delta Tables in Amazon S3](/api/workflow/destinations/delta-table) (`DELTA_TABLE` for the Python SDK or `delta_table` for `curl` or Postman)
+- [Delta Tables in Databricks](/api/workflow/destinations/databricks-delta-table) (`DATABRICKS_VOLUME_DELTA_TABLES` for the Python SDK or `databricks_volume_delta_tables` for `curl` or Postman)
+- [Elasticsearch](/api/workflow/destinations/elasticsearch) (`ELASTICSEARCH` for the Python SDK or `elasticsearch` for `curl` or Postman)
+- [Google Cloud Storage](/api/workflow/destinations/google-cloud) (`GCS` for the Python SDK or `gcs` for `curl` or Postman)
+- [Kafka](/api/workflow/destinations/kafka) (`KAFKA_CLOUD` for the Python SDK or `kafka-cloud` for `curl` or Postman)
+- [Milvus](/api/workflow/destinations/milvus) (`MILVUS` for the Python SDK or `milvus` for `curl` or Postman)
+- [MongoDB](/api/workflow/destinations/mongodb) (`MONGODB` for the Python SDK or `mongodb` for `curl` or Postman)
+- [MotherDuck](/api/workflow/destinations/motherduck) (`MOTHERDUCK` for the Python SDK or `motherduck` for `curl` or Postman)
+- [Neo4j](/api/workflow/destinations/neo4j) (`NEO4J` for the Python SDK or `neo4j` for `curl` or Postman)
+- [OneDrive](/api/workflow/destinations/onedrive) (`ONEDRIVE` for the Python SDK or `onedrive` for `curl` or Postman)
+- [Pinecone](/api/workflow/destinations/pinecone) (`PINECONE` for the Python SDK or `pinecone` for `curl` or Postman)
+- [PostgreSQL](/api/workflow/destinations/postgresql) (`POSTGRES` for the Python SDK or `postgres` for `curl` or Postman)
+- [Qdrant](/api/workflow/destinations/qdrant) (`QDRANT_CLOUD` for the Python SDK or `qdrant-cloud` for `curl` or Postman)
+- [Redis](/api/workflow/destinations/redis) (`REDIS` for the Python SDK or `redis` for `curl` or Postman)
+- [Snowflake](/api/workflow/destinations/snowflake) (`SNOWFLAKE` for the Python SDK or `snowflake` for `curl` or Postman)
+- [S3](/api/workflow/destinations/s3) (`S3` for the Python SDK or `s3` for `curl` or Postman)
+- [Weaviate](/api/workflow/destinations/weaviate) (`WEAVIATE` for the Python SDK or `weaviate` for `curl` or Postman)
+
diff --git a/platform-api/api/destinations/pinecone.mdx b/api/workflow/destinations/pinecone.mdx
similarity index 100%
rename from platform-api/api/destinations/pinecone.mdx
rename to api/workflow/destinations/pinecone.mdx
diff --git a/platform-api/api/destinations/postgresql.mdx b/api/workflow/destinations/postgresql.mdx
similarity index 100%
rename from platform-api/api/destinations/postgresql.mdx
rename to api/workflow/destinations/postgresql.mdx
diff --git a/platform-api/api/destinations/qdrant.mdx b/api/workflow/destinations/qdrant.mdx
similarity index 100%
rename from platform-api/api/destinations/qdrant.mdx
rename to api/workflow/destinations/qdrant.mdx
diff --git a/platform-api/api/destinations/redis.mdx b/api/workflow/destinations/redis.mdx
similarity index 100%
rename from platform-api/api/destinations/redis.mdx
rename to api/workflow/destinations/redis.mdx
diff --git a/platform-api/api/destinations/s3.mdx b/api/workflow/destinations/s3.mdx
similarity index 100%
rename from platform-api/api/destinations/s3.mdx
rename to api/workflow/destinations/s3.mdx
diff --git a/platform-api/api/destinations/snowflake.mdx b/api/workflow/destinations/snowflake.mdx
similarity index 100%
rename from platform-api/api/destinations/snowflake.mdx
rename to api/workflow/destinations/snowflake.mdx
diff --git a/platform-api/api/destinations/weaviate.mdx b/api/workflow/destinations/weaviate.mdx
similarity index 100%
rename from platform-api/api/destinations/weaviate.mdx
rename to api/workflow/destinations/weaviate.mdx
diff --git a/platform-api/api/jobs.mdx b/api/workflow/jobs.mdx
similarity index 59%
rename from platform-api/api/jobs.mdx
rename to api/workflow/jobs.mdx
index 2e065d14..5ddd8fdc 100644
--- a/platform-api/api/jobs.mdx
+++ b/api/workflow/jobs.mdx
@@ -2,13 +2,13 @@
title: Jobs
---
-To use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) to manage jobs, do the following:
+To use the [Unstructured Workflow Endpoint](/api/workflow/overview) to manage jobs, do the following:
- To get a list of available jobs, use the `UnstructuredClient` object's `jobs.list_jobs` function (for the Python SDK) or
- the `GET` method to call the `/jobs` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#list-jobs).
+ the `GET` method to call the `/jobs` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-jobs).
- To get information about a job, use the `UnstructuredClient` object's `jobs.get_job` function (for the Python SDK) or
- the `GET` method to call the `/jobs/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#get-a-job).
-- A job is created automatically whenever a workflow runs on a schedule; see [Create a workflow](/platform-api/api/workflows#create-a-workflow).
- A job is also created whenever you run a workflow manually; see [Run a workflow](/platform-api/api/overview#run-a-workflow).
+ the `GET` method to call the `/jobs/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#get-a-job).
+- A job is created automatically whenever a workflow runs on a schedule; see [Create a workflow](/api/workflow/workflows#create-a-workflow).
+ A job is also created whenever you run a workflow manually; see [Run a workflow](/api/workflow/overview#run-a-workflow).
- To cancel a running job, use the `UnstructuredClient` object's `jobs.cancel_job` function (for the Python SDK) or
- the `POST` method to call the `/jobs//cancel` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#cancel-a-job).
\ No newline at end of file
+ the `POST` method to call the `/jobs//cancel` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#cancel-a-job).
\ No newline at end of file
diff --git a/platform-api/api/overview.mdx b/api/workflow/overview.mdx
similarity index 96%
rename from platform-api/api/overview.mdx
rename to api/workflow/overview.mdx
index 4ecee2f4..decde5ef 100644
--- a/platform-api/api/overview.mdx
+++ b/api/workflow/overview.mdx
@@ -2,20 +2,20 @@
title: Overview
---
-The [Unstructured Platform UI](/platform/overview) features a no-code user interface for transforming your unstructured data into data that is ready
+The [Unstructured UI](/ui/overview) features a no-code user interface for transforming your unstructured data into data that is ready
for Retrieval Augmented Generation (RAG).
-The Unstructured Platform Workflow Endpoint, part of the [Unstructured Platform API](/platform-api/overview), enables a full range of partitioning, chunking, embedding, and
+The Unstructured Workflow Endpoint, part of the [Unstructured API](/api/overview), enables a full range of partitioning, chunking, embedding, and
enrichment options for your files and data. It is designed to batch-process files and data in remote locations; send processed results to
various storage, databases, and vector stores; and use the latest and highest-performing models on the market today. It has built-in logic
to deliver the highest quality results at the lowest cost.
-This page provides an overview of the Unstructured Platform Workflow Endpoint. This endpoint enables Unstructured Platform UI automation usage
+This page provides an overview of the Unstructured Workflow Endpoint. This endpoint enables Unstructured UI automation usage
scenarios as well as documentation, reporting, and recovery needs.
## Getting started
-Choose one of the following options to get started with the Unstructured Platform Workflow Endpoint:
+Choose one of the following options to get started with the Unstructured Workflow Endpoint:
- Follow the [quickstart](#quickstart), which uses the Unstructured Python SDK from a remotely hosted Google Colab notebook.
- Start using the [Unstructured Python SDK](#unstructured-python-sdk).
@@ -30,7 +30,7 @@ import SharedPlatformAPI from '/snippets/quickstarts/platform-api.mdx';
## Unstructured Python SDK
The [Unstructured Python SDK](https://github.com/Unstructured-IO/unstructured-python-client), beginning with version 0.30.6,
-allows you to call the Unstructured Platform Workflow Endpoint through standard Python code.
+allows you to call the Unstructured Workflow Endpoint through standard Python code.
To install the Unstructured Python SDK, run the following command from within your Python virtual environment:
@@ -62,7 +62,7 @@ To get your Unstructured API key, do the following:
5. Click the **Copy** icon for your new API key. The API key's value is copied to your system's clipboard.
Calls made by the Unstructured Python SDK's `unstructured_client` functions for creating, listing, updating,
-and deleting connectors, workflows, and jobs in the Unstructured Platform UI all use the Unstructured Platform Workflow Endpoint URL (`https://platform.unstructuredapp.io/api/v1`) by default. You do not need to
+and deleting connectors, workflows, and jobs in the Unstructured UI all use the Unstructured Workflow Endpoint URL (`https://platform.unstructuredapp.io/api/v1`) by default. You do not need to
use the `server_url` parameter to specify this API URL in your Python code for these particular functions.
@@ -74,8 +74,8 @@ use the `server_url` parameter to specify this API URL in your Python code for t
To specify an API URL in your code, set the `server_url` parameter in the `UnstructuredClient` constructor to the target API URL.
-The Unstructured Platform Workflow Endpoint enables you to work with [connectors](#connectors),
-[workflows](#workflows), and [jobs](#jobs) in the Unstructured Platform UI.
+The Unstructured Workflow Endpoint enables you to work with [connectors](#connectors),
+[workflows](#workflows), and [jobs](#jobs) in the Unstructured UI.
- A _source connector_ ingests files or data into Unstructured from a source location.
- A _destination connector_ sends the processed data from Unstructured to a destination location.
@@ -84,9 +84,9 @@ The Unstructured Platform Workflow Endpoint enables you to work with [connectors
For general information about these objects, see:
-- [Connectors](/platform/connectors)
-- [Workflows](/platform/workflows)
-- [Jobs](/platform/jobs)
+- [Connectors](/ui/connectors)
+- [Workflows](/ui/workflows)
+- [Jobs](/ui/jobs)
Skip ahead to start learning about how to use the Unstructured Python SDK to work with
[connectors](#connectors),
@@ -94,13 +94,13 @@ Skip ahead to start learning about how to use the Unstructured Python SDK to wor
## REST endpoints
-The Unstructured Platform Workflow Endpoint is callable from a set of Representational State Transfer (REST) endpoints, which you can call through standard REST-enabled
-utilities, tools, programming languages, packages, and libraries. The examples, shown later on this page and on related pages, describe how to call the Unstructured Platform Workflow Endpoint with
+The Unstructured Workflow Endpoint is callable from a set of Representational State Transfer (REST) endpoints, which you can call through standard REST-enabled
+utilities, tools, programming languages, packages, and libraries. The examples, shown later on this page and on related pages, describe how to call the Unstructured Workflow Endpoint with
`curl` and Postman. You can adapt this information as needed for your preferred programming languages and libraries, for example by using the
`requests` library with Python.
- You can also use the [Unstructured Platform Workflow Endpoint - Swagger UI](https://platform.unstructuredapp.io/docs) to call the REST endpoints
+ You can also use the [Unstructured Workflow Endpoint - Swagger UI](https://platform.unstructuredapp.io/docs) to call the REST endpoints
that are available through `https://platform.unstructuredapp.io`. To use the Swagger UI, you must provide your Unstructured API key with each call. To
get this API key, see the [quickstart](#quickstart), earlier on this page.
@@ -168,8 +168,8 @@ To get your Unstructured API key, do the following:
API URL throughout the following examples.
-The Unstructured Platform Workflow Endpoint enables you to work with [connectors](#connectors),
-[workflows](#workflows), and [jobs](#jobs) in the Unstructured Platform UI.
+The Unstructured Workflow Endpoint enables you to work with [connectors](#connectors),
+[workflows](#workflows), and [jobs](#jobs) in the Unstructured UI.
- A _source connector_ ingests files or data into Unstructured from a source location.
- A _destination connector_ sends the processed data from Unstructured to a destination location.
@@ -178,9 +178,9 @@ The Unstructured Platform Workflow Endpoint enables you to work with [connectors
For general information about these objects, see:
-- [Connectors](/platform/connectors)
-- [Workflows](/platform/workflows)
-- [Jobs](/platform/jobs)
+- [Connectors](/ui/connectors)
+- [Workflows](/ui/workflows)
+- [Jobs](/ui/jobs)
Skip ahead to start learning about how to use the REST endpoints to work with
[connectors](#connectors),
@@ -188,15 +188,15 @@ Skip ahead to start learning about how to use the REST endpoints to work with
## Restrictions
-The following Unstructured SDKs, tools, and libraries do _not_ work with the Unstructured Platform Workflow Endpoint:
+The following Unstructured SDKs, tools, and libraries do _not_ work with the Unstructured Workflow Endpoint:
-- The [Unstructured JavaScript/TypeScript SDK](/platform-api/partition-api/sdk-jsts)
-- [Local single-file POST requests](/platform-api/partition-api/sdk-jsts) to the Unstructured Platform Partition Endpoint
+- The [Unstructured JavaScript/TypeScript SDK](/api/partition/sdk-jsts)
+- [Local single-file POST requests](/api/partition/sdk-jsts) to the Unstructured Partition Endpoint
- The [Unstructured open source Python library](/open-source/introduction/overview)
- The [Unstructured Ingest CLI](/ingestion/ingest-cli)
- The [Unstructured Ingest Python library](/ingestion/python-ingest)
-The following Unstructured API URL is also _not_ supported: `https://api.unstructuredapp.io/general/v0/general` (the Unstructured Platform Partition Endpoint URL).
+The following Unstructured API URL is also _not_ supported: `https://api.unstructuredapp.io/general/v0/general` (the Unstructured Partition Endpoint URL).
## Connectors
@@ -211,7 +211,7 @@ You can also [list](#list-destination-connectors),
[update](#update-a-destination-connector),
and [delete](#delete-a-destination-connector) destination connectors.
-For general information, see [Connectors](/platform/connectors).
+For general information, see [Connectors](/ui/connectors).
### List source connectors
@@ -222,7 +222,7 @@ To filter the list of source connectors, use the `ListSourcesRequest` object's `
or the query parameter `source_type=` (for `curl` or Postman),
replacing `` with the source connector type's unique ID
(for example, for the Amazon S3 source connector type, `S3` for the Python SDK or `s3` for `curl` or Postman).
-To get this ID, see [Sources](/platform-api/api/sources/overview).
+To get this ID, see [Sources](/api/workflow/sources/overview).
@@ -418,10 +418,10 @@ the `POST` method to call the `/sources` endpoint (for `curl` or Postman).
In the `CreateSourceConnector` object (for the Python SDK) or
the request body (for `curl` or Postman),
specify the settings for the connector. For the specific settings to include, which differ by connector, see
-[Sources](/platform-api/api/sources/overview).
+[Sources](/api/workflow/sources/overview).
For the Python SDK, replace `` with the source connector type's unique ID (for example, for the Amazon S3 source connector type, `S3`).
-To get this ID, see [Sources](/platform-api/api/sources/overview).
+To get this ID, see [Sources](/api/workflow/sources/overview).
@@ -547,10 +547,10 @@ the `PUT` method to call the `/sources/` endpoint (for `curl` or P
In the `UpdateSourceConnector` object (for the Python SDK) or
the request body (for `curl` or Postman), specify the settings for the connector. For the specific settings to include, which differ by connector, see
-[Sources](/platform-api/api/sources/overview).
+[Sources](/api/workflow/sources/overview).
For the Python SDK, replace `` with the source connector type's unique ID (for example, for the Amazon S3 source connector type, `S3`).
-To get this ID, see [Sources](/platform-api/api/sources/overview).
+To get this ID, see [Sources](/api/workflow/sources/overview).
You must specify all of the settings for the connector, even for settings that are not changing.
@@ -753,7 +753,7 @@ To filter the list of destination connectors, use the `ListDestinationsRequest`
the query parameter `destination_type=` (for `curl` or Postman),
replacing `` with the destination connector type's unique ID
(for example, for the Amazon S3 destination connector type, `S3` for the Python SDK or `s3` for `curl` or Postman).
-To get this ID, see [Destinations](/platform-api/api/destinations/overview).
+To get this ID, see [Destinations](/api/workflow/destinations/overview).
@@ -948,10 +948,10 @@ the `POST` method to call the `/destinations` endpoint (for `curl` or Postman).
In the `CreateDestinationConnector` object (for the Python SDK) or
the request body (for `curl` or Postman),
specify the settings for the connector. For the specific settings to include, which differ by connector, see
-[Destinations](/platform-api/api/destinations/overview).
+[Destinations](/api/workflow/destinations/overview).
For the Python SDK, replace `` with the destination connector type's unique ID (for example, for the Amazon S3 destination connector type, `S3`).
-To get this ID, see [Destinations](/platform-api/api/destinations/overview).
+To get this ID, see [Destinations](/api/workflow/destinations/overview).
@@ -1076,7 +1076,7 @@ the `PUT` method to call the `/destinations/` endpoint (for `curl`
In the `UpdateDestinationConnector` object (for the Python SDK) or
the request body (for `curl` or Postman), specify the settings for the connector. For the specific settings to include, which differ by connector, see
-[Destinations](/platform-api/api/destinations/overview).
+[Destinations](/api/workflow/destinations/overview).
You must specify all of the settings for the connector, even for settings that are not changing.
@@ -1278,7 +1278,7 @@ You can [list](#list-workflows),
[update](#update-a-workflow),
and [delete](#delete-a-workflow) workflows.
-For general information, see [Workflows](/platform/workflows).
+For general information, see [Workflows](/ui/workflows).
### List workflows
@@ -1541,7 +1541,7 @@ the `POST` method to call the `/workflows` endpoint (for `curl` or Postman).
In the `CreateWorkflow` object (for the Python SDK) or
the request body (for `curl` or Postman),
specify the settings for the workflow. For the specific settings to include, see
-[Create a workflow](/platform-api/api/workflows#create-a-workflow).
+[Create a workflow](/api/workflow/workflows#create-a-workflow).
@@ -1757,7 +1757,7 @@ the `POST` method to call the `/workflows//run` endpoint (for `curl
To run a workflow on a schedule instead, specify the `schedule` setting in the request body when you create or update a
-workflow. See [Create a workflow](/platform-api/api/workflows#create-a-workflow) or [Update a workflow](/platform-api/api/workflows#update-a-workflow).
+workflow. See [Create a workflow](/api/workflow/workflows#create-a-workflow) or [Update a workflow](/api/workflow/workflows#update-a-workflow).
### Update a workflow
@@ -1767,7 +1767,7 @@ the `PUT` method to call the `/workflows/` endpoint (for `curl` or
In the `UpdateWorkflow` object (for the Python SDK) or
the request body (for `curl` or Postman), specify the settings for the workflow. For the specific settings to include, see
-[Update a workflow](/platform-api/api/workflows#update-a-workflow).
+[Update a workflow](/api/workflow/workflows#update-a-workflow).
@@ -1993,7 +1993,7 @@ and [cancel](#cancel-a-job) jobs.
A job is created automatically whenever a workflow runs on a schedule; see [Create a workflow](#create-a-workflow).
A job is also created whenever you run a workflow; see [Run a workflow](#run-a-workflow).
-For general information, see [Jobs](/platform/jobs).
+For general information, see [Jobs](/ui/jobs).
### List jobs
diff --git a/platform-api/api/sources/azure-blob-storage.mdx b/api/workflow/sources/azure-blob-storage.mdx
similarity index 100%
rename from platform-api/api/sources/azure-blob-storage.mdx
rename to api/workflow/sources/azure-blob-storage.mdx
diff --git a/platform-api/api/sources/box.mdx b/api/workflow/sources/box.mdx
similarity index 100%
rename from platform-api/api/sources/box.mdx
rename to api/workflow/sources/box.mdx
diff --git a/platform-api/api/sources/confluence.mdx b/api/workflow/sources/confluence.mdx
similarity index 100%
rename from platform-api/api/sources/confluence.mdx
rename to api/workflow/sources/confluence.mdx
diff --git a/platform-api/api/sources/couchbase.mdx b/api/workflow/sources/couchbase.mdx
similarity index 100%
rename from platform-api/api/sources/couchbase.mdx
rename to api/workflow/sources/couchbase.mdx
diff --git a/platform-api/api/sources/databricks-volumes.mdx b/api/workflow/sources/databricks-volumes.mdx
similarity index 100%
rename from platform-api/api/sources/databricks-volumes.mdx
rename to api/workflow/sources/databricks-volumes.mdx
diff --git a/platform-api/api/sources/dropbox.mdx b/api/workflow/sources/dropbox.mdx
similarity index 100%
rename from platform-api/api/sources/dropbox.mdx
rename to api/workflow/sources/dropbox.mdx
diff --git a/platform-api/api/sources/elasticsearch.mdx b/api/workflow/sources/elasticsearch.mdx
similarity index 100%
rename from platform-api/api/sources/elasticsearch.mdx
rename to api/workflow/sources/elasticsearch.mdx
diff --git a/platform-api/api/sources/google-cloud.mdx b/api/workflow/sources/google-cloud.mdx
similarity index 100%
rename from platform-api/api/sources/google-cloud.mdx
rename to api/workflow/sources/google-cloud.mdx
diff --git a/platform-api/api/sources/google-drive.mdx b/api/workflow/sources/google-drive.mdx
similarity index 100%
rename from platform-api/api/sources/google-drive.mdx
rename to api/workflow/sources/google-drive.mdx
diff --git a/platform-api/api/sources/kafka.mdx b/api/workflow/sources/kafka.mdx
similarity index 100%
rename from platform-api/api/sources/kafka.mdx
rename to api/workflow/sources/kafka.mdx
diff --git a/platform-api/api/sources/mongodb.mdx b/api/workflow/sources/mongodb.mdx
similarity index 100%
rename from platform-api/api/sources/mongodb.mdx
rename to api/workflow/sources/mongodb.mdx
diff --git a/platform-api/api/sources/onedrive.mdx b/api/workflow/sources/onedrive.mdx
similarity index 100%
rename from platform-api/api/sources/onedrive.mdx
rename to api/workflow/sources/onedrive.mdx
diff --git a/platform-api/api/sources/outlook.mdx b/api/workflow/sources/outlook.mdx
similarity index 100%
rename from platform-api/api/sources/outlook.mdx
rename to api/workflow/sources/outlook.mdx
diff --git a/api/workflow/sources/overview.mdx b/api/workflow/sources/overview.mdx
new file mode 100644
index 00000000..bbee8b47
--- /dev/null
+++ b/api/workflow/sources/overview.mdx
@@ -0,0 +1,40 @@
+---
+title: Overview
+---
+
+To use the [Unstructured Workflow Endpoint](/api/workflow/overview) to manage source connectors, do the following:
+
+- To get a list of available source connectors, use the `UnstructuredClient` object's `sources.list_sources` function (for the Python SDK) or
+ the `GET` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-source-connectors).
+- To get information about a source connector, use the `UnstructuredClient` object's `sources.get_source` function (for the Python SDK) or
+ the `GET` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#get-a-source-connector).
+- To create a source connector, use the `UnstructuredClient` object's `sources.create_source` function (for the Python SDK) or
+ the `POST` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#create-a-source-connector).
+- To update a source connector, use the `UnstructuredClient` object's `sources.update_source` function (for the Python SDK) or
+ the `PUT` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#update-a-source-connector).
+- To delete a source connector, use the `UnstructuredClient` object's `sources.delete_source` function (for the Python SDK) or
+ the `DELETE` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#delete-a-source-connector).
+
+To create or update a source connector, you must also provide settings that are specific to that connector.
+For the list of specific settings, see:
+
+- [Azure](/api/workflow/sources/azure-blob-storage) (`AZURE` for the Python SDK or `azure` for `curl` and Postman)
+- [Box](/api/workflow/sources/box) (`BOX` for the Python SDK or `box` for `curl` and Postman)
+- [Confluence](/api/workflow/sources/confluence) (`CONFLUENCE` for the Python SDK or `confluence` for `curl` and Postman)
+- [Couchbase](/api/workflow/sources/couchbase) (`COUCHBASE` for the Python SDK or `couchbase` for `curl` and Postman)
+- [Databricks Volumes](/api/workflow/sources/databricks-volumes) (`DATABRICKS_VOLUMES` for the Python SDK or `databricks_volumes` for `curl` and Postman)
+- [Dropbox](/api/workflow/sources/dropbox) (`DROPBOX` for the Python SDK or `dropbox` for `curl` and Postman)
+- [Elasticsearch](/api/workflow/sources/elasticsearch) (`ELASTICSEARCH` for the Python SDK or `elasticsearch` for `curl` and Postman)
+- [Google Cloud Storage](/api/workflow/sources/google-cloud) (`GCS` for the Python SDK or `gcs` for `curl` and Postman)
+- [Google Drive](/api/workflow/sources/google-drive) (`GOOGLE_DRIVE` for the Python SDK or `google_drive` for `curl` and Postman)
+- [Kafka](/api/workflow/sources/kafka) (`KAFKA_CLOUD` for the Python SDK or `kafka-cloud` for `curl` and Postman)
+- [MongoDB](/api/workflow/sources/mongodb) (`MONGODB` for the Python SDK or `mongodb` for `curl` and Postman)
+- [OneDrive](/api/workflow/sources/onedrive) (`ONEDRIVE` for the Python SDK or `onedrive` for `curl` and Postman)
+- [Outlook](/api/workflow/sources/outlook) (`OUTLOOK` for the Python SDK or `outlook` for `curl` and Postman)
+- [PostgreSQL](/api/workflow/sources/postgresql) (`POSTGRES` for the Python SDK or `postgres` for `curl` and Postman)
+- [S3](/api/workflow/sources/s3) (`S3` for the Python SDK or `s3` for `curl` and Postman)
+- [Salesforce](/api/workflow/sources/salesforce) (`SALESFORCE` for the Python SDK or `salesforce` for `curl` and Postman)
+- [SharePoint](/api/workflow/sources/sharepoint) (`SHAREPOINT` for the Python SDK or `sharepoint` for `curl` and Postman)
+- [Snowflake](/api/workflow/sources/snowflake) (`SNOWFLAKE` for the Python SDK or `snowflake` for `curl` and Postman)
+
+
diff --git a/platform-api/api/sources/postgresql.mdx b/api/workflow/sources/postgresql.mdx
similarity index 100%
rename from platform-api/api/sources/postgresql.mdx
rename to api/workflow/sources/postgresql.mdx
diff --git a/platform-api/api/sources/s3.mdx b/api/workflow/sources/s3.mdx
similarity index 100%
rename from platform-api/api/sources/s3.mdx
rename to api/workflow/sources/s3.mdx
diff --git a/platform-api/api/sources/salesforce.mdx b/api/workflow/sources/salesforce.mdx
similarity index 100%
rename from platform-api/api/sources/salesforce.mdx
rename to api/workflow/sources/salesforce.mdx
diff --git a/platform-api/api/sources/sharepoint.mdx b/api/workflow/sources/sharepoint.mdx
similarity index 100%
rename from platform-api/api/sources/sharepoint.mdx
rename to api/workflow/sources/sharepoint.mdx
diff --git a/platform-api/api/sources/snowflake.mdx b/api/workflow/sources/snowflake.mdx
similarity index 100%
rename from platform-api/api/sources/snowflake.mdx
rename to api/workflow/sources/snowflake.mdx
diff --git a/platform-api/api/workflows.mdx b/api/workflow/workflows.mdx
similarity index 96%
rename from platform-api/api/workflows.mdx
rename to api/workflow/workflows.mdx
index 16f86492..cd9bd1e5 100644
--- a/platform-api/api/workflows.mdx
+++ b/api/workflow/workflows.mdx
@@ -2,23 +2,23 @@
title: Workflows
---
-To use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) to manage workflows, do the following:
+To use the [Unstructured Workflow Endpoint](/api/workflow/overview) to manage workflows, do the following:
- To get a list of available workflows, use the `UnstructuredClient` object's `workflows.list_workflows` function (for the Python SDK) or
- the `GET` method to call the `/workflows` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#list-workflows).
+ the `GET` method to call the `/workflows` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-workflows).
- To get information about a workflow, use the `UnstructuredClient` object's `workflows.get_workflow` function (for the Python SDK) or
- the `GET` method to call the `/workflows/` endpoint (for `curl` or Postman)use the `GET` method to call the `/workflows/` endpoint. [Learn more](/platform-api/api/overview#get-a-workflow).
+ the `GET` method to call the `/workflows/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#get-a-workflow).
- To create a workflow, use the `UnstructuredClient` object's `workflows.create_workflow` function (for the Python SDK) or
the `POST` method to call the `/workflows` endpoint (for `curl` or Postman). [Learn more](#create-a-workflow).
- To run a workflow manually, use the `UnstructuredClient` object's `workflows.run_workflow` function (for the Python SDK) or
- the `POST` method to call the `/workflows//run` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#run-a-workflow).
+ the `POST` method to call the `/workflows//run` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#run-a-workflow).
- To update a workflow, use the `UnstructuredClient` object's `workflows.update_workflow` function (for the Python SDK) or
the `PUT` method to call the `/workflows/` endpoint (for `curl` or Postman). [Learn more](#update-a-workflow).
- To delete a workflow, use the `UnstructuredClient` object's `workflows.delete_workflow` function (for the Python SDK) or
- the `DELETE` method to call the `/workflows/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#delete-a-workflow).
+ the `DELETE` method to call the `/workflows/` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#delete-a-workflow).
-The following examples assume that you have already met the [requirements](/platform-api/api/overview#requirements) and
-understand the [basics](/platform-api/api/overview#basics) of working with the Unstructured Platform Workflow Endpoint.
+The following examples assume that you have already met the [requirements](/api/workflow/overview#requirements) and
+understand the [basics](/api/workflow/overview#basics) of working with the Unstructured Workflow Endpoint.
## Create a workflow
@@ -269,10 +269,10 @@ Replace the preceding placeholders as follows:
- `` (_required_) - A unique name for this workflow.
- `` (_required_) - The ID of the target source connector. To get the ID,
use the `UnstructuredClient` object's `sources.list_sources` function (for the Python SDK) or
- the `GET` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#list-source-connectors).
+ the `GET` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-source-connectors).
- `` (_required_) - The ID of the target destination connector. To get the ID,
use the `UnstructuredClient` object's `destinations.list_destinations` function (for the Python SDK) or
- the `GET` method to call the `/destinations` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#list-destination-connectors).
+ the `GET` method to call the `/destinations` endpoint (for `curl` or Postman). [Learn more](/api/workflow/overview#list-destination-connectors).
- `` (for the Python SDK) or `` (for `curl` or Postman) (_required_) - The workflow type. Available values include `CUSTOM` (for the Python SDK) and `custom` (for `curl` or Postman).
If `` is set to `CUSTOM` (for the Python SDK), or if `` is set to `custom` (for `curl` or Postman), you must add a `workflow_nodes` array. For instructions, see [Custom workflow DAG nodes](#custom-workflow-dag-nodes).
@@ -281,7 +281,7 @@ Replace the preceding placeholders as follows:
The previously available workflow optimization types `ADVANCED`, `BASIC`, and `PLATINUM` (for the Python SDK) and
`advanced`, `basic`, and `platinum` (for `curl` or Postman) are non-operational and planned to be fully removed in a future release.
- The ability to create an [automatic workflow](/platform/workflows#create-an-automatic-workflow) type is currently not available but is planned to be added in a future release.
+ The ability to create an [automatic workflow](/ui/workflows#create-an-automatic-workflow) type is currently not available but is planned to be added in a future release.
- `` - The repeating automatic run schedule, specified as a predefined phrase. The available predefined phrases are:
@@ -307,7 +307,7 @@ the `PUT` method to call the `/workflows/` endpoint (for `curl` or
`` with the workflow's unique ID. To get this ID, see [List workflows](#list-workflows).
In the request body, specify the settings for the workflow. For the specific settings to include, see
-[Create a workflow](/platform-api/api/workflows#create-a-workflow).
+[Create a workflow](/api/workflow/workflows#create-a-workflow).
@@ -508,7 +508,7 @@ flowchart LR
A **Partitioner** node has a `type` of `WorkflowNodeType.PARTITION` (for the Python SDK) or `partition` (for `curl` and Postman).
-[Learn about the available partitioning strategies](/platform/partitioning).
+[Learn about the available partitioning strategies](/ui/partitioning).
#### Auto strategy
@@ -747,7 +747,7 @@ Allowed values for `provider` and `model` include:
A **Chunker** node has a `type` of `WorkflowNodeType.CHUNK` (for the Python SDK) or `chunk` (for `curl` and Postman).
-[Learn about the available chunking strategies](/platform/chunking).
+[Learn about the available chunking strategies](/ui/chunking).
#### Chunk by Character strategy
@@ -915,7 +915,7 @@ A **Chunker** node has a `type` of `WorkflowNodeType.CHUNK` (for the Python SDK)
An **Enrichment** node has a `type` of `WorkflowNodeType.PROMPTER` (for the Python SDK) or `prompter` (for `curl` and Postman).
-[Learn about the available enrichments](/platform/enriching/overview).
+[Learn about the available enrichments](/ui/enriching/overview).
#### Image Description task
@@ -1047,7 +1047,7 @@ Allowed values for `` include:
An **Embedder** node has a `type` of `WorkflowNodeType.EMBED` (for the Python SDK) or `embed` (for `curl` and Postman).
-[Learn about the available embedding providers and models](/platform/embedding).
+[Learn about the available embedding providers and models](/ui/embedding).
diff --git a/examplecode/codesamples/api/Unstructured-POST.postman_collection.json b/examplecode/codesamples/api/Unstructured-POST.postman_collection.json
index 0793aea1..06cdd379 100644
--- a/examplecode/codesamples/api/Unstructured-POST.postman_collection.json
+++ b/examplecode/codesamples/api/Unstructured-POST.postman_collection.json
@@ -7,7 +7,7 @@
},
"item": [
{
- "name": "(Platform Partition Endpoint) Basic Request",
+ "name": "(Partition Endpoint) Basic Request",
"request": {
"method": "POST",
"header": [
diff --git a/examplecode/codesamples/api/huggingchat.mdx b/examplecode/codesamples/api/huggingchat.mdx
index 992197c3..f4eef381 100644
--- a/examplecode/codesamples/api/huggingchat.mdx
+++ b/examplecode/codesamples/api/huggingchat.mdx
@@ -3,15 +3,15 @@ title: Query processed PDF with HuggingChat
---
This example uses the [Unstructured Ingest Python library](/ingestion/python-ingest) or the
-[Unstructured JavaScript/TypeScript SDK](/platform-api/partition-api/sdk-jsts) to send a PDF file to
-the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview) for processing. Unstructured processes the PDF and extracts the PDF's content.
+[Unstructured JavaScript/TypeScript SDK](/api/partition/sdk-jsts) to send a PDF file to
+the [Unstructured Partition Endpoint](/api/partition/overview) for processing. Unstructured processes the PDF and extracts the PDF's content.
This example then sends some of the content to [HuggingChat](https://huggingface.co/chat/), Hugging Face's open-source AI chatbot,
along with some queries about this content.
To run this example, you'll need:
- The [hugchat](https://pypi.org/project/hugchat/) package for Python, or the [huggingface-chat](https://www.npmjs.com/package/huggingface-chat) package for JavaScript/TypeScript.
-- Your Unstructured API key and API URL. [Get an API key and API URL](/platform-api/partition-api/overview).
+- Your Unstructured API key and API URL. [Get an API key and API URL](/api/partition/overview).
- Your Hugging Face account's email address and account password. [Get an account](https://huggingface.co/join).
- A PDF file for Unstructured to process. This example uses a sample PDF file containing the text of the United States Constitution,
available for download from [https://constitutioncenter.org/media/files/constitution.pdf](https://constitutioncenter.org/media/files/constitution.pdf).
diff --git a/examplecode/codesamples/apioss/table-extraction-from-pdf.mdx b/examplecode/codesamples/apioss/table-extraction-from-pdf.mdx
index ba992f69..88525226 100644
--- a/examplecode/codesamples/apioss/table-extraction-from-pdf.mdx
+++ b/examplecode/codesamples/apioss/table-extraction-from-pdf.mdx
@@ -4,7 +4,7 @@ description: This section describes two methods for extracting tables from PDF f
---
-This sample code utilizes the [Unstructured Open Source](/open-source/introduction/overview "Open Source") library and also provides an alternative method the utilizing the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview).
+This sample code utilizes the [Unstructured Open Source](/open-source/introduction/overview "Open Source") library and also provides an alternative method utilizing the [Unstructured Partition Endpoint](/api/partition/overview).
## Method 1: Using partition\_pdf
@@ -33,7 +33,7 @@ print(tables[0].metadata.text_as_html)
## Method 2: Using Auto Partition or Unstructured API
-By default, table extraction from all file types is enabled. To extract tables from PDFs and images using [Auto Partition](/open-source/core-functionality/partitioning#partition) or [Unstructured API parameters](/platform-api/partition-api/api-parameters) simply set `strategy` parameter to `hi_res`.
+By default, table extraction from all file types is enabled. To extract tables from PDFs and images using [Auto Partition](/open-source/core-functionality/partitioning#partition) or [Unstructured API parameters](/api/partition/api-parameters), simply set the `strategy` parameter to `hi_res`.
**Usage: Auto Partition**
diff --git a/examplecode/codesamples/oss/multi-files-api-processing.mdx b/examplecode/codesamples/oss/multi-files-api-processing.mdx
index 21bf7c20..83b52c6a 100644
--- a/examplecode/codesamples/oss/multi-files-api-processing.mdx
+++ b/examplecode/codesamples/oss/multi-files-api-processing.mdx
@@ -2,7 +2,7 @@
title: Multi-file API processing
---
-This sample code utilizes the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview).
+This sample code utilizes the [Unstructured Partition Endpoint](/api/partition/overview).
## Introduction
diff --git a/examplecode/notebooks.mdx b/examplecode/notebooks.mdx
index d49c3fc3..cf72317f 100644
--- a/examplecode/notebooks.mdx
+++ b/examplecode/notebooks.mdx
@@ -8,9 +8,9 @@ description: "Notebooks contain complete working sample code for end-to-end solu
- Build RAG with Databricks Vector Search with context preprocessed from multiple sources by Unstructured Platform.
+  Build RAG with Databricks Vector Search using context preprocessed from multiple sources by Unstructured.
- ``Unstructured Platform`` ``Databricks`` ``Introductory notebook``
+ ``Databricks`` ``Introductory notebook``
@@ -18,21 +18,21 @@ description: "Notebooks contain complete working sample code for end-to-end solu
Build Agentic RAG with `smolagents` library and compare the results with Vanilla RAG in pure Python
- ``Unstructured Platform UI`` ``GPT-4o`` ``smolagents`` ``Agents`` ``DataStax`` ``S3`` ``Advanced notebook``
+ ``GPT-4o`` ``smolagents`` ``Agents`` ``DataStax`` ``S3`` ``Advanced notebook``
- Evaluate Llama3.2 for your RAG system with Unstructured Platform, GPT-4o, Ragas, and LangChain
+ Evaluate Llama3.2 for your RAG system with Unstructured, GPT-4o, Ragas, and LangChain
- ``Unstructured Platform UI`` ``GPT-4o`` ``Ragas`` ``LangChain`` ``Llama3.2`` ``Pinecone`` ``S3`` ``Advanced notebook``
+ ``GPT-4o`` ``Ragas`` ``LangChain`` ``Llama3.2`` ``Pinecone`` ``S3`` ``Advanced notebook``
- Process a file in S3 with Unstructured Platform and return images in your RAG output
+ Process a file in S3 with Unstructured and return images in your RAG output
- ``Unstructured Platform UI`` ``S3`` ``FAISS`` ``GPT-4o-mini`` ``Advanced notebook``
+ ``S3`` ``FAISS`` ``GPT-4o-mini`` ``Advanced notebook``
diff --git a/examplecode/tools/langflow.mdx b/examplecode/tools/langflow.mdx
index d86a8f32..0b921b10 100644
--- a/examplecode/tools/langflow.mdx
+++ b/examplecode/tools/langflow.mdx
@@ -21,7 +21,7 @@ Also:
- [Sign up for an OpenAI account](https://platform.openai.com/signup), and [get your OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
- [Sign up for a free Langflow account](https://astra.datastax.com/signup?type=langflow).
-- [Get your Unstructured Platform Partition Endpoint key](/platform-api/partition-api/overview).
+- [Get your Unstructured Partition Endpoint key](/api/partition/overview).
## Create and run the demonstration project
@@ -32,7 +32,7 @@ Also:
3. Click **Blank Flow**.
- In this step, you add a component that instructs the Unstructured Platform Partition Endpoint to process a local file that you specify.
+ In this step, you add a component that instructs the Unstructured Partition Endpoint to process a local file that you specify.
1. On the sidebar, expand **Experimental (Beta)**, and then expand **Loaders**.
2. Drag the **Unstructured** component onto the designer area.
@@ -233,14 +233,14 @@ such as processing multiple files or using a different vector store.
In this demonstration, you pass to Unstructured a single local file. To pass multiple local or
non-local files to Unstructured instead, you can use the
-[Unstructured Platform](/platform/overview) or
+[Unstructured UI](/ui/overview), the [Unstructured API](/api/overview), or
[Unstructured Ingest](/ingestion/overview) outside of Langflow.
To do this, you can:
-- [Use the Unstructured Platform to create a workflow](/platform/quickstart) that relies on any available
- [source connector](/platform/sources/overview) to connect to
- [Astra DB](/platform/destinations/astradb). Run this workflow outside of Langflow anytime you have new documents in that source location that
+- [Use the Unstructured UI to create a workflow](/ui/quickstart) that relies on any available
+ [source connector](/ui/sources/overview) to connect to
+ [Astra DB](/ui/destinations/astradb). Run this workflow outside of Langflow anytime you have new documents in that source location that
you want Unstructured to process and then insert the new processed data into Astra DB. Then, back in the Langflow project,
use the **Playground** to ask additional questions, which will now include the new data when generating answers.
@@ -256,13 +256,13 @@ In this demonstration, you use Astra DB as the vector store. Langflow and Unstru
To do this, you can:
-[Use the Unstructured Platform to create a workflow](/platform/quickstart) that relies on any available
-[source connector](/platform/sources/overview) to connect to
+[Use the Unstructured UI to create a workflow](/ui/quickstart) that relies on any available
+[source connector](/ui/sources/overview) to connect to
one of the following available vector stores that Langflow also supports:
-- [Milvus](/platform/destinations/milvus)
-- [MongoDB](/platform/destinations/mongodb)
-- [Pinecone](/platform/destinations/pinecone)
+- [Milvus](/ui/destinations/milvus)
+- [MongoDB](/ui/destinations/mongodb)
+- [Pinecone](/ui/destinations/pinecone)
Run this workflow outside of Langflow anytime you have new documents in the source location that
you want Unstructured to process and then insert the new processed data into the vector store. Then, back in the Langflow project,
diff --git a/examplecode/tools/vectorshift.mdx b/examplecode/tools/vectorshift.mdx
index 488cc748..525bf054 100644
--- a/examplecode/tools/vectorshift.mdx
+++ b/examplecode/tools/vectorshift.mdx
@@ -43,20 +43,20 @@ Also:
- [Sign up for an OpenAI account](https://platform.openai.com/signup), and [get your OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
- [Sign up for a VectorShift Starter account](https://app.vectorshift.ai/api/signup).
-- [Sign up for an Unstructured Platform account through the For Developers page](/platform/quickstart).
+- [Sign up for an Unstructured account through the For Developers page](/ui/quickstart).
## Create and run the demonstration project
- Although you can use any [supported file type](/platform/supported-file-types) or data in any
- [supported source type](/platform/sources/overview) for the input into Pinecone, this demonstration uses [the text of the United States Constitution in PDF format](https://constitutioncenter.org/media/files/constitution.pdf).
-
- 1. Sign in to your Unstructured Platform account.
- 2. [Create a source connector](/platform/sources/overview), if you do not already have one, to connect Unstructured to the source location where the PDF file is stored.
- 3. [Create a Pinecone destination connector](/platform/destinations/pinecone), if you do not already have one, to connect Unstructured to your Pinecone serverless index.
- 4. [Create a workflow](/platform/workflows#create-a-workflow) that references this source connector and destination connector.
- 5. [Run the workflow](/platform/workflows#edit-delete-or-run-a-workflow).
+ Although you can use any [supported file type](/ui/supported-file-types) or data in any
+ [supported source type](/ui/sources/overview) for the input into Pinecone, this demonstration uses [the text of the United States Constitution in PDF format](https://constitutioncenter.org/media/files/constitution.pdf).
+
+ 1. Sign in to your Unstructured account.
+ 2. [Create a source connector](/ui/sources/overview), if you do not already have one, to connect Unstructured to the source location where the PDF file is stored.
+ 3. [Create a Pinecone destination connector](/ui/destinations/pinecone), if you do not already have one, to connect Unstructured to your Pinecone serverless index.
+ 4. [Create a workflow](/ui/workflows#create-a-workflow) that references this source connector and destination connector.
+ 5. [Run the workflow](/ui/workflows#edit-delete-or-run-a-workflow).
1. Sign in to your VectorShift account dashboard.
diff --git a/img/platform/APIKeyOnly.png b/img/ui/APIKeyOnly.png
similarity index 100%
rename from img/platform/APIKeyOnly.png
rename to img/ui/APIKeyOnly.png
diff --git a/img/platform/APIKeyURL.png b/img/ui/APIKeyURL.png
similarity index 100%
rename from img/platform/APIKeyURL.png
rename to img/ui/APIKeyURL.png
diff --git a/img/platform/AccountBilling.png b/img/ui/AccountBilling.png
similarity index 100%
rename from img/platform/AccountBilling.png
rename to img/ui/AccountBilling.png
diff --git a/img/platform/AccountBillingPayPerPage.png b/img/ui/AccountBillingPayPerPage.png
similarity index 100%
rename from img/platform/AccountBillingPayPerPage.png
rename to img/ui/AccountBillingPayPerPage.png
diff --git a/img/platform/AccountBillingPaymentMethod.png b/img/ui/AccountBillingPaymentMethod.png
similarity index 100%
rename from img/platform/AccountBillingPaymentMethod.png
rename to img/ui/AccountBillingPaymentMethod.png
diff --git a/img/platform/AccountBillingSubscribeAndSave.png b/img/ui/AccountBillingSubscribeAndSave.png
similarity index 100%
rename from img/platform/AccountBillingSubscribeAndSave.png
rename to img/ui/AccountBillingSubscribeAndSave.png
diff --git a/img/platform/AccountSettings.png b/img/ui/AccountSettings.png
similarity index 100%
rename from img/platform/AccountSettings.png
rename to img/ui/AccountSettings.png
diff --git a/img/platform/AccountSettingsNeedHelp.png b/img/ui/AccountSettingsNeedHelp.png
similarity index 100%
rename from img/platform/AccountSettingsNeedHelp.png
rename to img/ui/AccountSettingsNeedHelp.png
diff --git a/img/platform/AccountSettingsSidebar.png b/img/ui/AccountSettingsSidebar.png
similarity index 100%
rename from img/platform/AccountSettingsSidebar.png
rename to img/ui/AccountSettingsSidebar.png
diff --git a/img/platform/AccountSettingsSidebarMessageUs.png b/img/ui/AccountSettingsSidebarMessageUs.png
similarity index 100%
rename from img/platform/AccountSettingsSidebarMessageUs.png
rename to img/ui/AccountSettingsSidebarMessageUs.png
diff --git a/img/platform/AccountUsage.png b/img/ui/AccountUsage.png
similarity index 100%
rename from img/platform/AccountUsage.png
rename to img/ui/AccountUsage.png
diff --git a/img/platform/Choose-Workflow-Type.png b/img/ui/Choose-Workflow-Type.png
similarity index 100%
rename from img/platform/Choose-Workflow-Type.png
rename to img/ui/Choose-Workflow-Type.png
diff --git a/img/platform/Destinations-Sidebar.png b/img/ui/Destinations-Sidebar.png
similarity index 100%
rename from img/platform/Destinations-Sidebar.png
rename to img/ui/Destinations-Sidebar.png
diff --git a/img/platform/GoToPlatform.png b/img/ui/GoToPlatform.png
similarity index 100%
rename from img/platform/GoToPlatform.png
rename to img/ui/GoToPlatform.png
diff --git a/img/platform/Job-Complete.png b/img/ui/Job-Complete.png
similarity index 100%
rename from img/platform/Job-Complete.png
rename to img/ui/Job-Complete.png
diff --git a/img/platform/Job-Failed.png b/img/ui/Job-Failed.png
similarity index 100%
rename from img/platform/Job-Failed.png
rename to img/ui/Job-Failed.png
diff --git a/img/platform/Job-Finished-Fully.png b/img/ui/Job-Finished-Fully.png
similarity index 100%
rename from img/platform/Job-Finished-Fully.png
rename to img/ui/Job-Finished-Fully.png
diff --git a/img/platform/Job-Finished-Partially.png b/img/ui/Job-Finished-Partially.png
similarity index 100%
rename from img/platform/Job-Finished-Partially.png
rename to img/ui/Job-Finished-Partially.png
diff --git a/img/platform/Job-In-Progress.png b/img/ui/Job-In-Progress.png
similarity index 100%
rename from img/platform/Job-In-Progress.png
rename to img/ui/Job-In-Progress.png
diff --git a/img/platform/Job-Pending.png b/img/ui/Job-Pending.png
similarity index 100%
rename from img/platform/Job-Pending.png
rename to img/ui/Job-Pending.png
diff --git a/img/platform/Jobs-Sidebar.png b/img/ui/Jobs-Sidebar.png
similarity index 100%
rename from img/platform/Jobs-Sidebar.png
rename to img/ui/Jobs-Sidebar.png
diff --git a/img/platform/Node-Usage-Hints.png b/img/ui/Node-Usage-Hints.png
similarity index 100%
rename from img/platform/Node-Usage-Hints.png
rename to img/ui/Node-Usage-Hints.png
diff --git a/img/platform/PlatformAPIURL.png b/img/ui/PlatformAPIURL.png
similarity index 100%
rename from img/platform/PlatformAPIURL.png
rename to img/ui/PlatformAPIURL.png
diff --git a/img/platform/Python-Workflow-Code-Partial.png b/img/ui/Python-Workflow-Code-Partial.png
similarity index 100%
rename from img/platform/Python-Workflow-Code-Partial.png
rename to img/ui/Python-Workflow-Code-Partial.png
diff --git a/img/platform/Select-Job.png b/img/ui/Select-Job.png
similarity index 100%
rename from img/platform/Select-Job.png
rename to img/ui/Select-Job.png
diff --git a/img/platform/ServerlessAPIURL.png b/img/ui/ServerlessAPIURL.png
similarity index 100%
rename from img/platform/ServerlessAPIURL.png
rename to img/ui/ServerlessAPIURL.png
diff --git a/img/platform/ServerlessPlatformAPIURL.png b/img/ui/ServerlessPlatformAPIURL.png
similarity index 100%
rename from img/platform/ServerlessPlatformAPIURL.png
rename to img/ui/ServerlessPlatformAPIURL.png
diff --git a/img/platform/Signin.png b/img/ui/Signin.png
similarity index 100%
rename from img/platform/Signin.png
rename to img/ui/Signin.png
diff --git a/img/platform/Sources-Sidebar.png b/img/ui/Sources-Sidebar.png
similarity index 100%
rename from img/platform/Sources-Sidebar.png
rename to img/ui/Sources-Sidebar.png
diff --git a/img/platform/Start-Screen-Partial.png b/img/ui/Start-Screen-Partial.png
similarity index 100%
rename from img/platform/Start-Screen-Partial.png
rename to img/ui/Start-Screen-Partial.png
diff --git a/img/platform/Start-Screen.png b/img/ui/Start-Screen.png
similarity index 100%
rename from img/platform/Start-Screen.png
rename to img/ui/Start-Screen.png
diff --git a/img/platform/Workflow-Add-Node.png b/img/ui/Workflow-Add-Node.png
similarity index 100%
rename from img/platform/Workflow-Add-Node.png
rename to img/ui/Workflow-Add-Node.png
diff --git a/img/platform/Workflow-Designer.png b/img/ui/Workflow-Designer.png
similarity index 100%
rename from img/platform/Workflow-Designer.png
rename to img/ui/Workflow-Designer.png
diff --git a/img/platform/Workflow-Details.png b/img/ui/Workflow-Details.png
similarity index 100%
rename from img/platform/Workflow-Details.png
rename to img/ui/Workflow-Details.png
diff --git a/img/platform/Workflows-Sidebar.png b/img/ui/Workflows-Sidebar.png
similarity index 100%
rename from img/platform/Workflows-Sidebar.png
rename to img/ui/Workflows-Sidebar.png
diff --git a/ingestion/how-to/extract-image-block-types.mdx b/ingestion/how-to/extract-image-block-types.mdx
index 76741bd9..df700d97 100644
--- a/ingestion/how-to/extract-image-block-types.mdx
+++ b/ingestion/how-to/extract-image-block-types.mdx
@@ -15,7 +15,7 @@ and then show it.
## To run this example
You will need a document that is one of the document types supported by the `extract_image_block_types` argument.
-See the `extract_image_block_types` entry in [API Parameters](/platform-api/partition-api/api-parameters).
+See the `extract_image_block_types` entry in [API Parameters](/api/partition/api-parameters).
This example uses a PDF file with embedded images and tables.
import SharedAPIKeyURL from '/snippets/general-shared-text/api-key-url.mdx';
diff --git a/ingestion/ingest-cli.mdx b/ingestion/ingest-cli.mdx
index 7b5d2696..c475239c 100644
--- a/ingestion/ingest-cli.mdx
+++ b/ingestion/ingest-cli.mdx
@@ -6,9 +6,9 @@ sidebarTitle: Ingest CLI
The Unstructured Ingest CLI enables you to use command-line scripts to send files in batches to Unstructured for processing, and to tell Unstructured where to deliver the processed data. [Learn more](/ingestion/overview#unstructured-ingest-cli).
- The Unstructured Ingest CLI does not work with the Unstructured Platform API.
+ The Unstructured Ingest CLI does not work with the Unstructured API.
- For information about the Unstructured Platform API, see the [Unstructured Platform API Overview](/platform-api/api/overview).
+ For information about the Unstructured API, see the [Unstructured API Overview](/api/workflow/overview).
## Installation
diff --git a/ingestion/overview.mdx b/ingestion/overview.mdx
index fc1483c3..f5c2d1e4 100644
--- a/ingestion/overview.mdx
+++ b/ingestion/overview.mdx
@@ -3,14 +3,14 @@ title: Overview
---
- Unstructured recommends that you use the [Unstructured Platform API](/platform-api/overview) instead of the
+ Unstructured recommends that you use the [Unstructured API](/api/overview) instead of the
Unstructured Ingest CLI or the Unstructured Ingest Python library.
- The Unstructured Platform API provides a full range of partitioning, chunking, embedding, and enrichment options for your files and data.
+ The Unstructured API provides a full range of partitioning, chunking, embedding, and enrichment options for your files and data.
It also uses the latest and highest-performing models on the market today, and it has built-in logic to deliver the highest quality results
at the lowest cost.
- The Unstructured Ingest CLI and the Unstructured Ingest Python library are not being actively updated to include these and other Unstructured Platform API features.
+ The Unstructured Ingest CLI and the Unstructured Ingest Python library are not being actively updated to include these and other Unstructured API features.
You can send multiple files in batches to be ingested by Unstructured for processing.
@@ -131,5 +131,5 @@ import GeneratePythonCodeExamples from '/snippets/ingestion/code-generator.mdx';
## See also
-- The [Unstructured Platform UI](/platform/overview) enables you to send batches to Unstructured from remote locations, and to have Unstructured send the processed data to remote locations, all without using code or a CLI.
+- The [Unstructured UI](/ui/overview) enables you to send batches to Unstructured from remote locations, and to have Unstructured send the processed data to remote locations, all without using code or a CLI.
diff --git a/ingestion/python-ingest.mdx b/ingestion/python-ingest.mdx
index 3f1fe85f..6c5d3fcd 100644
--- a/ingestion/python-ingest.mdx
+++ b/ingestion/python-ingest.mdx
@@ -6,9 +6,9 @@ sidebarTitle: Ingest Python library
The Unstructured Ingest Python library enables you to use Python code to send files in batches to Unstructured for processing, and to tell Unstructured where to deliver the processed data.
- The Unstructured Ingest Python library does not work with the Unstructured Platform API.
+ The Unstructured Ingest Python library does not work with the Unstructured API.
- For information about the Unstructured Platform API, see the [Unstructured Platform API Overview](/platform-api/api/overview).
+ For information about the Unstructured API, see the [Unstructured API Overview](/api/workflow/overview).
The following 3-minute video shows how to use the Unstructured Ingest Python library to send multiple PDFs from a local directory in batches to be ingested by Unstructured for processing:
diff --git a/mint.json b/mint.json
index e84bec83..b2c93f98 100644
--- a/mint.json
+++ b/mint.json
@@ -74,12 +74,12 @@
},
"tabs": [
{
- "name": "Platform UI",
- "url": "platform"
+ "name": "UI",
+ "url": "ui"
},
{
- "name": "Platform API",
- "url": "platform-api"
+ "name": "API",
+ "url": "api"
},
{
"name": "Example code",
@@ -165,199 +165,199 @@
]
},
{
- "group": "Unstructured Platform UI",
+ "group": "Unstructured UI",
"pages": [
- "platform/overview",
- "platform/supported-file-types",
- "platform/connectors"
+ "ui/overview",
+ "ui/supported-file-types",
+ "ui/connectors"
]
},
{
- "group": "Getting started with Platform",
+ "group": "Getting started with the UI",
"pages": [
- "platform/quickstart"
+ "ui/quickstart"
]
},
{
- "group": "Using Platform",
+ "group": "Using the UI",
"pages": [
{
"group": "Sources",
"pages": [
- "platform/sources/overview",
- "platform/sources/azure-blob-storage",
- "platform/sources/box",
- "platform/sources/confluence",
- "platform/sources/couchbase",
- "platform/sources/databricks-volumes",
- "platform/sources/dropbox",
- "platform/sources/elasticsearch",
- "platform/sources/google-cloud",
- "platform/sources/google-drive",
- "platform/sources/kafka",
- "platform/sources/mongodb",
- "platform/sources/onedrive",
- "platform/sources/outlook",
- "platform/sources/postgresql",
- "platform/sources/s3",
- "platform/sources/salesforce",
- "platform/sources/sharepoint",
- "platform/sources/snowflake"
+ "ui/sources/overview",
+ "ui/sources/azure-blob-storage",
+ "ui/sources/box",
+ "ui/sources/confluence",
+ "ui/sources/couchbase",
+ "ui/sources/databricks-volumes",
+ "ui/sources/dropbox",
+ "ui/sources/elasticsearch",
+ "ui/sources/google-cloud",
+ "ui/sources/google-drive",
+ "ui/sources/kafka",
+ "ui/sources/mongodb",
+ "ui/sources/onedrive",
+ "ui/sources/outlook",
+ "ui/sources/postgresql",
+ "ui/sources/s3",
+ "ui/sources/salesforce",
+ "ui/sources/sharepoint",
+ "ui/sources/snowflake"
]
},
{
"group": "Destinations",
"pages": [
- "platform/destinations/overview",
- "platform/destinations/astradb",
- "platform/destinations/azure-ai-search",
- "platform/destinations/couchbase",
- "platform/destinations/databricks-volumes",
- "platform/destinations/delta-table",
- "platform/destinations/databricks-delta-table",
- "platform/destinations/elasticsearch",
- "platform/destinations/google-cloud",
- "platform/destinations/kafka",
- "platform/destinations/milvus",
- "platform/destinations/mongodb",
- "platform/destinations/motherduck",
- "platform/destinations/neo4j",
- "platform/destinations/onedrive",
- "platform/destinations/pinecone",
- "platform/destinations/postgresql",
- "platform/destinations/qdrant",
- "platform/destinations/redis",
- "platform/destinations/s3",
- "platform/destinations/snowflake",
- "platform/destinations/weaviate"
+ "ui/destinations/overview",
+ "ui/destinations/astradb",
+ "ui/destinations/azure-ai-search",
+ "ui/destinations/couchbase",
+ "ui/destinations/databricks-volumes",
+ "ui/destinations/delta-table",
+ "ui/destinations/databricks-delta-table",
+ "ui/destinations/elasticsearch",
+ "ui/destinations/google-cloud",
+ "ui/destinations/kafka",
+ "ui/destinations/milvus",
+ "ui/destinations/mongodb",
+ "ui/destinations/motherduck",
+ "ui/destinations/neo4j",
+ "ui/destinations/onedrive",
+ "ui/destinations/pinecone",
+ "ui/destinations/postgresql",
+ "ui/destinations/qdrant",
+ "ui/destinations/redis",
+ "ui/destinations/s3",
+ "ui/destinations/snowflake",
+ "ui/destinations/weaviate"
]
},
- "platform/workflows",
- "platform/jobs",
- "platform/billing"
+ "ui/workflows",
+ "ui/jobs",
+ "ui/billing"
]
},
{
"group": "Concepts",
"pages": [
- "platform/document-elements",
- "platform/partitioning",
- "platform/chunking",
+ "ui/document-elements",
+ "ui/partitioning",
+ "ui/chunking",
{
"group": "Enriching",
"pages": [
- "platform/enriching/overview",
- "platform/enriching/image-descriptions",
- "platform/enriching/table-descriptions",
- "platform/enriching/table-to-html",
- "platform/enriching/ner"
+ "ui/enriching/overview",
+ "ui/enriching/image-descriptions",
+ "ui/enriching/table-descriptions",
+ "ui/enriching/table-to-html",
+ "ui/enriching/ner"
]
},
- "platform/embedding"
+ "ui/embedding"
]
},
{
- "group": "Unstructured Platform API",
+ "group": "Unstructured API",
"pages": [
- "platform-api/overview",
- "platform-api/supported-file-types"
+ "api/overview",
+ "api/supported-file-types"
]
},
{
"group": "Workflow Endpoint",
"pages": [
- "platform-api/api/overview",
+ "api/workflow/overview",
{
"group": "Sources",
"pages": [
- "platform-api/api/sources/overview",
- "platform-api/api/sources/azure-blob-storage",
- "platform-api/api/sources/box",
- "platform-api/api/sources/confluence",
- "platform-api/api/sources/couchbase",
- "platform-api/api/sources/databricks-volumes",
- "platform-api/api/sources/dropbox",
- "platform-api/api/sources/elasticsearch",
- "platform-api/api/sources/google-cloud",
- "platform-api/api/sources/google-drive",
- "platform-api/api/sources/kafka",
- "platform-api/api/sources/mongodb",
- "platform-api/api/sources/onedrive",
- "platform-api/api/sources/outlook",
- "platform-api/api/sources/postgresql",
- "platform-api/api/sources/s3",
- "platform-api/api/sources/salesforce",
- "platform-api/api/sources/sharepoint",
- "platform-api/api/sources/snowflake"
+ "api/workflow/sources/overview",
+ "api/workflow/sources/azure-blob-storage",
+ "api/workflow/sources/box",
+ "api/workflow/sources/confluence",
+ "api/workflow/sources/couchbase",
+ "api/workflow/sources/databricks-volumes",
+ "api/workflow/sources/dropbox",
+ "api/workflow/sources/elasticsearch",
+ "api/workflow/sources/google-cloud",
+ "api/workflow/sources/google-drive",
+ "api/workflow/sources/kafka",
+ "api/workflow/sources/mongodb",
+ "api/workflow/sources/onedrive",
+ "api/workflow/sources/outlook",
+ "api/workflow/sources/postgresql",
+ "api/workflow/sources/s3",
+ "api/workflow/sources/salesforce",
+ "api/workflow/sources/sharepoint",
+ "api/workflow/sources/snowflake"
]
},
{
"group": "Destinations",
"pages": [
- "platform-api/api/destinations/overview",
- "platform-api/api/destinations/astradb",
- "platform-api/api/destinations/azure-ai-search",
- "platform-api/api/destinations/couchbase",
- "platform-api/api/destinations/databricks-volumes",
- "platform-api/api/destinations/delta-table",
- "platform-api/api/destinations/databricks-delta-table",
- "platform-api/api/destinations/elasticsearch",
- "platform-api/api/destinations/google-cloud",
- "platform-api/api/destinations/kafka",
- "platform-api/api/destinations/milvus",
- "platform-api/api/destinations/mongodb",
- "platform-api/api/destinations/motherduck",
- "platform-api/api/destinations/neo4j",
- "platform-api/api/destinations/onedrive",
- "platform-api/api/destinations/pinecone",
- "platform-api/api/destinations/postgresql",
- "platform-api/api/destinations/qdrant",
- "platform-api/api/destinations/redis",
- "platform-api/api/destinations/s3",
- "platform-api/api/destinations/snowflake",
- "platform-api/api/destinations/weaviate"
+ "api/workflow/destinations/overview",
+ "api/workflow/destinations/astradb",
+ "api/workflow/destinations/azure-ai-search",
+ "api/workflow/destinations/couchbase",
+ "api/workflow/destinations/databricks-volumes",
+ "api/workflow/destinations/delta-table",
+ "api/workflow/destinations/databricks-delta-table",
+ "api/workflow/destinations/elasticsearch",
+ "api/workflow/destinations/google-cloud",
+ "api/workflow/destinations/kafka",
+ "api/workflow/destinations/milvus",
+ "api/workflow/destinations/mongodb",
+ "api/workflow/destinations/motherduck",
+ "api/workflow/destinations/neo4j",
+ "api/workflow/destinations/onedrive",
+ "api/workflow/destinations/pinecone",
+ "api/workflow/destinations/postgresql",
+ "api/workflow/destinations/qdrant",
+ "api/workflow/destinations/redis",
+ "api/workflow/destinations/s3",
+ "api/workflow/destinations/snowflake",
+ "api/workflow/destinations/weaviate"
]
},
- "platform-api/api/workflows",
- "platform-api/api/jobs"
+ "api/workflow/workflows",
+ "api/workflow/jobs"
]
},
{
"group": "Partition Endpoint",
"pages": [
- "platform-api/partition-api/overview",
- "platform-api/partition-api/post-requests",
- "platform-api/partition-api/sdk-python",
- "platform-api/partition-api/sdk-jsts",
- "platform-api/partition-api/api-parameters",
- "platform-api/partition-api/api-validation-errors",
- "platform-api/partition-api/examples",
- "platform-api/partition-api/document-elements",
- "platform-api/partition-api/partitioning",
- "platform-api/partition-api/chunking",
- "platform-api/partition-api/speed-up-large-files-batches",
- "platform-api/partition-api/get-elements",
- "platform-api/partition-api/text-as-html",
- "platform-api/partition-api/extract-image-block-types",
- "platform-api/partition-api/get-chunked-elements",
- "platform-api/partition-api/transform-schemas",
- "platform-api/partition-api/generate-schema",
- "platform-api/partition-api/pipeline-1"
+ "api/partition/overview",
+ "api/partition/post-requests",
+ "api/partition/sdk-python",
+ "api/partition/sdk-jsts",
+ "api/partition/api-parameters",
+ "api/partition/api-validation-errors",
+ "api/partition/examples",
+ "api/partition/document-elements",
+ "api/partition/partitioning",
+ "api/partition/chunking",
+ "api/partition/speed-up-large-files-batches",
+ "api/partition/get-elements",
+ "api/partition/text-as-html",
+ "api/partition/extract-image-block-types",
+ "api/partition/get-chunked-elements",
+ "api/partition/transform-schemas",
+ "api/partition/generate-schema",
+ "api/partition/pipeline-1"
]
},
{
"group": "Legacy APIs",
"pages": [
- "platform-api/legacy-api/overview",
- "platform-api/legacy-api/free-api",
- "platform-api/legacy-api/aws",
- "platform-api/legacy-api/azure"
+ "api/legacy-api/overview",
+ "api/legacy-api/free-api",
+ "api/legacy-api/aws",
+ "api/legacy-api/azure"
]
},
{
"group": "Troubleshooting",
"pages": [
- "platform-api/troubleshooting/api-key-url"
+ "api/troubleshooting/api-key-url"
]
},
{
@@ -513,43 +513,43 @@
"redirects": [
{
"source": "/api-reference/api-services/accessing-unstructured-api",
- "destination": "/platform-api/overview"
+ "destination": "/api/overview"
},
{
"source": "/api-reference/api-services/api-parameters",
- "destination": "/platform-api/partition-api/api-parameters"
+ "destination": "/api/partition/api-parameters"
},
{
"source": "/api-reference/api-services/api-validation-errors",
- "destination": "/platform-api/partition-api/api-validation-errors"
+ "destination": "/api/partition/api-validation-errors"
},
{
"source": "/api-reference/api-services/aws",
- "destination": "/platform-api/legacy-api/aws"
+ "destination": "/api/legacy-api/aws"
},
{
"source": "/api-reference/api-services/azure",
- "destination": "/platform-api/legacy-api/azure"
+ "destination": "/api/legacy-api/azure"
},
{
"source": "/api-reference/api-services/chunking",
- "destination": "/platform-api/partition-api/chunking"
+ "destination": "/api/partition/chunking"
},
{
"source": "/api-reference/api-services/document-elements",
- "destination": "/platform-api/partition-api/document-elements"
+ "destination": "/api/partition/document-elements"
},
{
"source": "/api-reference/api-services/examples",
- "destination": "/platform-api/partition-api/examples"
+ "destination": "/api/partition/examples"
},
{
"source": "/api-reference/api-services/free-api",
- "destination": "/platform-api/legacy-api/free-api"
+ "destination": "/api/legacy-api/free-api"
},
{
"source": "/api-reference/api-services/overview",
- "destination": "/platform-api/overview"
+ "destination": "/api/overview"
},
{
"source": "/api-reference/api-services/partition-via-api",
@@ -557,39 +557,39 @@
},
{
"source": "/api-reference/api-services/partitioning",
- "destination": "/platform-api/partition-api/partitioning"
+ "destination": "/api/partition/partitioning"
},
{
"source": "/api-reference/api-services/post-requests",
- "destination": "/platform-api/partition-api/post-requests"
+ "destination": "/api/partition/post-requests"
},
{
"source": "/api-reference/api-services/saas-api-development-guide",
- "destination": "/platform-api/overview"
+ "destination": "/api/overview"
},
{
"source": "/api-reference/api-services/sdk-jsts",
- "destination": "/platform-api/partition-api/sdk-jsts"
+ "destination": "/api/partition/sdk-jsts"
},
{
"source": "/api-reference/api-services/sdk-python",
- "destination": "platform-api/partition-api/sdk-python"
+ "destination": "/api/partition/sdk-python"
},
{
"source": "/api-reference/api-services/supported-file-types",
- "destination": "/platform-api/supported-file-types"
+ "destination": "/api/supported-file-types"
},
{
"source": "/api-reference/best-practices/speed-up-large-files-batches",
- "destination": "/platform-api/partition-api/speed-up-large-files-batches"
+ "destination": "/api/partition/speed-up-large-files-batches"
},
{
"source": "/api-reference/general/pipeline-1",
- "destination": "/platform-api/partition-api/pipeline-1"
+ "destination": "/api/partition/pipeline-1"
},
{
"source": "/api-reference/how-to/:slug*",
- "destination": "/platform-api/partition-api/:slug*"
+ "destination": "/api/partition/:slug*"
},
{
"source": "/api-reference/ingest/:slug*",
@@ -597,7 +597,7 @@
},
{
"source": "/api-reference/troubleshooting/api-key-url",
- "destination": "/platform-api/troubleshooting/api-key-url"
+ "destination": "/api/troubleshooting/api-key-url"
},
{
"source": "/glossary/glossary",
@@ -607,26 +607,42 @@
"source": "/open-source/ingest/:slug*",
"destination": "/ingestion/:slug*"
},
+ {
+ "source": "/platform/:slug*",
+ "destination": "/ui/:slug*"
+ },
{
"source": "/platform/api/:slug*",
- "destination": "/platform-api/api/:slug*"
+ "destination": "/api/workflow/:slug*"
+ },
+ {
+ "source": "/platform-api/api/:slug*",
+ "destination": "/api/workflow/:slug*"
+ },
+ {
+ "source": "/platform-api/legacy-api/:slug*",
+ "destination": "/api/legacy-api/:slug*"
},
{
"source": "/platform-api/partition-api/choose-hi-res-model",
- "destination": "/platform-api/partition-api/partitioning"
+ "destination": "/api/partition/partitioning"
},
{
"source": "/platform-api/partition-api/choose-partitioning-strategy",
- "destination": "/platform-api/partition-api/partitioning"
+ "destination": "/api/partition/partitioning"
},
{
"source": "/platform-api/partition-api/embedding",
- "destination": "/ingestion/how-to/embedding"
+ "destination": "/api/partition/embedding"
},
{
"source": "/platform-api/partition-api/filter-files",
"destination": "/ingestion/how-to/filter-files"
- }
+ },
+ {
+ "source": "/platform-api/partition-api/:slug*",
+ "destination": "/api/partition/:slug*"
+ }
],
"analytics": {
"ga4": {
diff --git a/open-source/core-functionality/partitioning.mdx b/open-source/core-functionality/partitioning.mdx
index 2ddad7b0..9fbbd758 100644
--- a/open-source/core-functionality/partitioning.mdx
+++ b/open-source/core-functionality/partitioning.mdx
@@ -692,7 +692,7 @@ elements = partition_via_api(
```
-If you are using the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview), you can use the `api_url` kwarg to point the `partition_via_api` function at your Unstructured Platform Partition URL.
+If you are using the [Unstructured Partition Endpoint](/api/partition/overview), you can use the `api_url` kwarg to point the `partition_via_api` function at your Unstructured Partition URL.
```python
import os
diff --git a/open-source/introduction/overview.mdx b/open-source/introduction/overview.mdx
index 84da4a7f..5cbb4f8d 100644
--- a/open-source/introduction/overview.mdx
+++ b/open-source/introduction/overview.mdx
@@ -3,7 +3,7 @@ title: Unstructured Open Source
sidebarTitle: Overview
---
-The `unstructured` open source library is designed as a starting point for quick prototyping and has [limits](#limits). For production scenarios, see the [Unstructured Platform API](/platform-api/overview) instead.
+The `unstructured` open source library is designed as a starting point for quick prototyping and has [limits](#limits). For production scenarios, see the [Unstructured API](/api/overview) instead.
The `unstructured` [library](https://github.com/Unstructured-IO/unstructured) offers an open-source toolkit
designed to simplify the ingestion and pre-processing of diverse data formats, including images and text-based documents
@@ -44,7 +44,7 @@ and use cases.
## Limits
-The open source library has the following limits as compared to [Unstructured Platform API](/platform-api/overview) and the [Unstructured Platform](/platform/overview):
+The open source library has the following limits as compared to the [Unstructured UI](/ui/overview) and the [Unstructured API](/api/overview):
* Not designed for production scenarios.
* Significantly decreased performance on document and table extraction.
@@ -62,7 +62,7 @@ The open source library has the following limits as compared to [Unstructured Pl
## Telemetry
-The open source library allows you to make calls to the Unstructured Platform Partition Endpoint. If you do plan to make such calls, please note:
+The open source library allows you to make calls to the Unstructured Partition Endpoint. If you do plan to make such calls, please note:
import SharedTelemetry from '/snippets/general-shared-text/telemetry.mdx';
diff --git a/openapi.json b/openapi.json
index 687b9b9a..ff0a2def 100644
--- a/openapi.json
+++ b/openapi.json
@@ -7,7 +7,7 @@
"servers": [
{
"url": "https://api.unstructuredapp.io",
- "description": "Platform Partition Endpoint",
+ "description": "Partition Endpoint",
"x-speakeasy-server-id": "saas-api"
},
{
diff --git a/platform-api/api/destinations/overview.mdx b/platform-api/api/destinations/overview.mdx
deleted file mode 100644
index 483f262b..00000000
--- a/platform-api/api/destinations/overview.mdx
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Overview
----
-
-To use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) to manage destination connectors, do the following:
-
-- To get a list of available destination connectors, use the `UnstructuredClient` object's `destinations.list_destinations` function (for the Python SDK) or
- the `GET` method to call the `/destinations` endpoint (for `curl` or Postman).. [Learn more](/platform-api/api/overview#list-destination-connectors).
-- To get information about a destination connector, use the `UnstructuredClient` object's `destinations.get_destination` function (for the Python SDK) or
- the `GET` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#get-a-destination-connector).
-- To create a destination connector, use the `UnstructuredClient` object's `destinations.create_destination` function (for the Python SDK) or
- the `POST` method to call the `/destinations` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#create-a-destination-connector).
-- To update a destination connector, use the `UnstructuredClient` object's `destinations.update_destination` function (for the Python SDK) or
- the `PUT` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#update-a-destination-connector).
-- To delete a destination connector, use the `UnstructuredClient` object's `destinations.delete_destination` function (for the Python SDK) or
- the `DELETE` method to call the `/destinations/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#delete-a-destination-connector).
-
-To create or update a destination connector, you must also provide settings that are specific to that connector.
-For the list of specific settings, see:
-
-- [Astra DB](/platform-api/api/destinations/astradb) (`ASTRADB` for the Python SDK or `astradb` for `curl` or Postman)
-- [Azure AI Search](/platform-api/api/destinations/azure-ai-search) (`AZURE_AI_SEARCH` for the Python SDK or `azure_ai_search` for `curl` or Postman)
-- [Couchbase](/platform-api/api/destinations/couchbase) (`COUCHBASE` for the Python SDK or `couchbase` for `curl` or Postman)
-- [Databricks Volumes](/platform-api/api/destinations/databricks-volumes) (`DATABRICKS_VOLUMES` for the Python SDK or `databricks_volumes` for `curl` or Postman)
-- [Delta Tables in Amazon S3](/platform-api/api/destinations/delta-table) (`DELTA_TABLE` for the Python SDK or `delta_table` for `curl` or Postman)
-- [Delta Tables in Databricks](/platform-api/api/destinations/databricks-delta-table) (`DATABRICKS_VOLUME_DELTA_TABLES` for the Python SDK or `databricks_volume_delta_tables` for `curl` or Postman)
-- [Elasticsearch](/platform-api/api/destinations/elasticsearch) (`ELASTICSEARCH` for the Python SDK or `elasticsearch` for `curl` or Postman)
-- [Google Cloud Storage](/platform-api/api/destinations/google-cloud) (`GCS` for the Python SDK or `gcs` for `curl` or Postman)
-- [Kafka](/platform-api/api/destinations/kafka) (`KAFKA_CLOUD` for the Python SDK or `kafka-cloud` for `curl` or Postman)
-- [Milvus](/platform-api/api/destinations/milvus) (`MILVUS` for the Python SDK or `milvus` for `curl` or Postman)
-- [MongoDB](/platform-api/api/destinations/mongodb) (`MONGODB` for the Python SDK or `mongodb` for `curl` or Postman)
-- [MotherDuck](/platform-api/api/destinations/motherduck) (`MOTHERDUCK` for the Python SDK or `motherduck` for `curl` or Postman)
-- [Neo4j](/platform-api/api/destinations/neo4j) (`NEO4J` for the Python SDK or `neo4j` for `curl` or Postman)
-- [OneDrive](/platform-api/api/destinations/onedrive) (`ONEDRIVE` for the Python SDK or `onedrive` for `curl` or Postman)
-- [Pinecone](/platform-api/api/destinations/pinecone) (`PINECONE` for the Python SDK or `pinecone` for `curl` or Postman)
-- [PostgreSQL](/platform-api/api/destinations/postgresql) (`POSTGRES` for the Python SDK or `postgres` for `curl` or Postman)
-- [Qdrant](/platform-api/api/destinations/qdrant) (`QDRANT_CLOUD` for the Python SDK or `qdrant-cloud` for `curl` or Postman)
-- [Redis](/platform-api/api/destinations/redis) (`REDIS` for the Python SDK or `redis` for `curl` or Postman)
-- [Snowflake](/platform-api/api/destinations/snowflake) (`SNOWFLAKE` for the Python SDK or `snowflake` for `curl` or Postman)
-- [S3](/platform-api/api/destinations/s3) (`S3` for the Python SDK or `s3` for `curl` or Postman)
-- [Weaviate](/platform-api/api/destinations/weaviate) (`WEAVIATE` for the Python SDK or `weaviate` for `curl` or Postman)
-
diff --git a/platform-api/api/sources/overview.mdx b/platform-api/api/sources/overview.mdx
deleted file mode 100644
index fed53173..00000000
--- a/platform-api/api/sources/overview.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Overview
----
-
-To use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) to manage source connectors, do the following:
-
-- To get a list of available source connectors, use the `UnstructuredClient` object's `sources.list_sources` function (for the Python SDK) or
- the `GET` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#list-source-connectors).
-- To get information about a source connector, use the `UnstructuredClient` object's `sources.get_source` function (for the Python SDK) or
- the `GET` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#get-a-source-connector).
-- To create a source connector, use the `UnstructuredClient` object's `sources.create_source` function (for the Python SDK) or
- the `POST` method to call the `/sources` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#create-a-source-connector).
-- To update a source connector, use the `UnstructuredClient` object's `sources.update_source` function (for the Python SDK) or
- the `PUT` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#update-a-source-connector).
-- To delete a source connector, use the `UnstructuredClient` object's `sources.delete_source` function (for the Python SDK) or
- the `DELETE` method to call the `/sources/` endpoint (for `curl` or Postman). [Learn more](/platform-api/api/overview#delete-a-source-connector).
-
-To create or update a source connector, you must also provide settings that are specific to that connector.
-For the list of specific settings, see:
-
-- [Azure](/platform-api/api/sources/azure-blob-storage) (`AZURE` for the Python SDK or `azure` for `curl` and Postman)
-- [Box](/platform-api/api/sources/box) (`BOX` for the Python SDK or `box` for `curl` and Postman)
-- [Confluence](/platform-api/api/sources/confluence) (`CONFLUENCE` for the Python SDK or `confluence` for `curl` and Postman)
-- [Couchbase](/platform-api/api/sources/couchbase) (`COUCHBASE` for the Python SDK or `couchbase` for `curl` and Postman)
-- [Databricks Volumes](/platform-api/api/sources/databricks-volumes) (`DATABRICKS_VOLUMES` for the Python SDK or `databricks_volumes` for `curl` and Postman)
-- [Dropbox](/platform-api/api/sources/dropbox) (`DROPBOX` for the Python SDK or `dropbox` for `curl` and Postman)
-- [Elasticsearch](/platform-api/api/sources/elasticsearch) (`ELASTICSEARCH` for the Python SDK or `elasticsearch` for `curl` and Postman)
-- [Google Cloud Storage](/platform-api/api/sources/google-cloud) (`GCS` for the Python SDK or `gcs` for `curl` and Postman)
-- [Google Drive](/platform-api/api/sources/google-drive) (`GOOGLE_DRIVE` for the Python SDK or `google_drive` for `curl` and Postman)
-- [Kafka](/platform-api/api/sources/kafka) (`KAFKA_CLOUD` for the Python SDK or `kafka-cloud` for `curl` and Postman)
-- [MongoDB](/platform-api/api/sources/mongodb) (`MONGODB` for the Python SDK or `mongodb` for `curl` and Postman)
-- [OneDrive](/platform-api/api/sources/onedrive) (`ONEDRIVE` for the Python SDK or `onedrive` for `curl` and Postman)
-- [Outlook](/platform-api/api/sources/outlook) (`OUTLOOK` for the Python SDK or `outlook` for `curl` and Postman)
-- [PostgreSQL](/platform-api/api/sources/postgresql) (`POSTGRES` for the Python SDK or `postgres` for `curl` and Postman)
-- [S3](/platform-api/api/sources/s3) (`S3` for the Python SDK or `s3` for `curl` and Postman)
-- [Salesforce](/platform-api/api/sources/salesforce) (`SALESFORCE` for the Python SDK or `salesforce` for `curl` and Postman)
-- [SharePoint](/platform-api/api/sources/sharepoint) (`SHAREPOINT` for the Python SDK or `sharepoint` for `curl` and Postman)
-- [Snowflake](/platform-api/api/sources/snowflake) (`SNOWFLAKE` for the Python SDK or `snowflake` for `curl` and Postman)
-
-
diff --git a/platform-api/partition-api/output-bounding-box-coordinates.mdx b/platform-api/partition-api/output-bounding-box-coordinates.mdx
deleted file mode 100644
index e53a9763..00000000
--- a/platform-api/partition-api/output-bounding-box-coordinates.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
----
-title: "Output bounding box coordinates"
-url: "/platform-api/partition-api/examples#saving-bounding-box-coordinates"
----
\ No newline at end of file
diff --git a/platform/connectors.mdx b/platform/connectors.mdx
deleted file mode 100644
index 3f55e6ac..00000000
--- a/platform/connectors.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: Supported connectors
----
-
-The Unstructured Platform supports connecting to the following source and destination types.
-
-```mermaid
- flowchart LR
- Sources-->Unstructured-->Destinations
-```
-
-## Sources
-
-- [Azure](/platform/sources/azure-blob-storage)
-- [Box](/platform/sources/box)
-- [Confluence](/platform/sources/confluence)
-- [Couchbase](/platform/sources/couchbase)
-- [Databricks Volumes](/platform/sources/databricks-volumes)
-- [Dropbox](/platform/sources/dropbox)
-- [Elasticsearch](/platform/sources/elasticsearch)
-- [Google Cloud Storage](/platform/sources/google-cloud)
-- [Google Drive](/platform/sources/google-drive)
-- [Kafka](/platform/sources/kafka)
-- [MongoDB](/platform/sources/mongodb)
-- [OneDrive](/platform/sources/onedrive)
-- [Outlook](/platform/sources/outlook)
-- [PostgreSQL](/platform/sources/postgresql)
-- [S3](/platform/sources/s3)
-- [Salesforce](/platform/sources/salesforce)
-- [SharePoint](/platform/sources/sharepoint)
-- [Snowflake](/platform/sources/snowflake)
-
-If your source is not listed here, you might still be able to connect Unstructured to it through scripts or code by using the
-[Unstructured Ingest CLI](/ingestion/overview#unstructured-ingest-cli) or the
-[Unstructured Ingest Python library](/ingestion/python-ingest).
-[Learn more](/ingestion/source-connectors/overview).
-
-## Destinations
-
-- [Astra DB](/platform/destinations/astradb)
-- [Azure AI Search](/platform/destinations/azure-ai-search)
-- [Couchbase](/platform/destinations/couchbase)
-- [Databricks Volumes](/platform/destinations/databricks-volumes)
-- [Delta Tables in Amazon S3](/platform/destinations/delta-table)
-- [Delta Tables in Databricks](/platform/destinations/databricks-delta-table)
-- [Elasticsearch](/platform/destinations/elasticsearch)
-- [Google Cloud Storage](/platform/destinations/google-cloud)
-- [Kafka](/platform/destinations/kafka)
-- [Milvus](/platform/destinations/milvus)
-- [MotherDuck](/platform/destinations/motherduck)
-- [MongoDB](/platform/destinations/mongodb)
-- [Neo4j](/platform/destinations/neo4j)
-- [OneDrive](/platform/destinations/onedrive)
-- [Pinecone](/platform/destinations/pinecone)
-- [PostgreSQL](/platform/destinations/postgresql)
-- [Qdrant](/platform/destinations/qdrant)
-- [Redis](/platform/destinations/redis)
-- [S3](/platform/destinations/s3)
-- [Snowflake](/platform/destinations/snowflake)
-- [Weaviate](/platform/destinations/weaviate)
-
-If your destination is not listed here, you might still be able to connect Unstructured to it through scripts or code by using the
-[Unstructured Ingest CLI](/ingestion/overview#unstructured-ingest-cli) or the
-[Unstructured Ingest Python library](/ingestion/python-ingest).
-[Learn more](/ingestion/destination-connectors/overview).
-
diff --git a/platform/destinations/overview.mdx b/platform/destinations/overview.mdx
deleted file mode 100644
index cedc6dd5..00000000
--- a/platform/destinations/overview.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Overview
-description: Destination connectors in the Unstructured Platform are designed to specify the endpoint for data processed within the platform. These connectors ensure that the transformed and analyzed data is securely and efficiently transferred to a storage system for future use, often to a vector database for tasks that involve high-speed retrieval and advanced data analytics operations.
----
-
-
-
-To see your existing destination connectors, on the sidebar, click **Connectors**, and then click **Destinations**.
-
-To create a destination connector:
-
-1. In the sidebar, click **Connectors**.
-2. Click **Destinations**.
-3. Cick **New** or **Create Connector**.
-4. For **Name**, enter some unique name for this connector.
-5. In the **Provider** area, click the destination location type that matches yours.
-6. Click **Continue**.
-7. Fill in the fields according to your connector type. To learn how, click your connector type in the following list:
-
- - [Astra DB](/platform/destinations/astradb)
- - [Azure AI Search](/platform/destinations/azure-ai-search)
- - [Couchbase](/platform/destinations/couchbase)
- - [Databricks Volumes](/platform/destinations/databricks-volumes)
- - [Delta Tables in Amazon S3](/platform/destinations/delta-table)
- - [Delta Tables in Databricks](/platform/destinations/databricks-delta-table)
- - [Elasticsearch](/platform/destinations/elasticsearch)
- - [Google Cloud Storage](/platform/destinations/google-cloud)
- - [Kafka](/platform/destinations/kafka)
- - [Milvus](/platform/destinations/milvus)
- - [MongoDB](/platform/destinations/mongodb)
- - [MotherDuck](/platform/destinations/motherduck)
- - [Neo4j](/platform/destinations/neo4j)
- - [OneDrive](/platform/destinations/onedrive)
- - [Pinecone](/platform/destinations/pinecone)
- - [PostgreSQL](/platform/destinations/postgresql)
- - [Qdrant](/platform/destinations/qdrant)
- - [Redis](/platform/destinations/redis)
- - [S3](/platform/destinations/s3)
- - [Snowflake](/platform/destinations/snowflake)
- - [Weaviate](/platform/destinations/weaviate)
-
-8. If a **Continue** button appears, click it, and fill in any additional settings fields.
-9. Click **Save and Test**.
\ No newline at end of file
diff --git a/snippets/general-shared-text/azure-ai-search.mdx b/snippets/general-shared-text/azure-ai-search.mdx
index c89dbd07..6ef5080a 100644
--- a/snippets/general-shared-text/azure-ai-search.mdx
+++ b/snippets/general-shared-text/azure-ai-search.mdx
@@ -942,4 +942,4 @@ Here are some more details about these requirements:
- [Search indexes in Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-an-index)
- [Schema of a search index](https://learn.microsoft.com/azure/search/search-what-is-an-index#schema-of-a-search-index)
- [Example index schema](https://learn.microsoft.com/rest/api/searchservice/create-index#examples)
- - [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
\ No newline at end of file
+ - [Unstructured document elements and metadata](/api/partition/document-elements)
\ No newline at end of file
diff --git a/snippets/general-shared-text/couchbase.mdx b/snippets/general-shared-text/couchbase.mdx
index de581cea..b62bbe97 100644
--- a/snippets/general-shared-text/couchbase.mdx
+++ b/snippets/general-shared-text/couchbase.mdx
@@ -1,4 +1,4 @@
-- For the [Unstructured Platform](/platform/overview), only Couchbase Capella clusters are supported.
+- For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview), only Couchbase Capella clusters are supported.
- For [Unstructured Ingest](/ingestion/overview), Couchbase Capella clusters and local Couchbase server deployments are supported.
\ No newline at end of file
diff --git a/snippets/general-shared-text/milvus.mdx b/snippets/general-shared-text/milvus.mdx
index 7ca4c008..c0b151bc 100644
--- a/snippets/general-shared-text/milvus.mdx
+++ b/snippets/general-shared-text/milvus.mdx
@@ -1,4 +1,4 @@
-- For the [Unstructured Platform](/platform/overview), only Milvus cloud-based instances (such as Zilliz Cloud, and Milvus on IBM watsonx.data) are supported.
+- For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview), only Milvus cloud-based instances (such as Zilliz Cloud, and Milvus on IBM watsonx.data) are supported.
- For [Unstructured Ingest](/ingestion/overview), Milvus local and cloud-based instances are supported.
The following video shows how to fulfill the minimum set of requirements for Milvus cloud-based instances, demonstrating Milvus on IBM watsonx.data:
diff --git a/snippets/general-shared-text/neo4j-graph.mdx b/snippets/general-shared-text/neo4j-graph.mdx
index 05f2d7bf..a6b80370 100644
--- a/snippets/general-shared-text/neo4j-graph.mdx
+++ b/snippets/general-shared-text/neo4j-graph.mdx
@@ -61,7 +61,7 @@ In the preceding diagram:
- Each `UnstructuredElement` node has a `PART_OF_CHUNK` relationship with a `Chunk` element.
- Each `Chunk` node, except for the "last" `Chunk` node, has a `NEXT_CHUNK` relationship with its "next" `Chunk` node.
-Learn more about [document elements](/platform/document-elements) and [chunking](/platform/chunking).
+Learn more about [document elements](/ui/document-elements) and [chunking](/ui/chunking).
Some related example Neo4j graph queries include the following.
diff --git a/snippets/general-shared-text/neo4j.mdx b/snippets/general-shared-text/neo4j.mdx
index 54fca4c3..9a84a5be 100644
--- a/snippets/general-shared-text/neo4j.mdx
+++ b/snippets/general-shared-text/neo4j.mdx
@@ -1,6 +1,6 @@
- A [Neo4j deployment](https://neo4j.com/deployment-center/).
- - For the [Unstructured Platform](/platform/overview), local Neo4j deployments are not supported.
+ - For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview), local Neo4j deployments are not supported.
- For [Unstructured Ingest](/ingestion/overview), local and non-local Neo4j deployments are supported.
The following video shows how to set up a Neo4j Aura deployment:
diff --git a/snippets/general-shared-text/no-url-for-serverless-api.mdx b/snippets/general-shared-text/no-url-for-serverless-api.mdx
index 738ff113..229f431d 100644
--- a/snippets/general-shared-text/no-url-for-serverless-api.mdx
+++ b/snippets/general-shared-text/no-url-for-serverless-api.mdx
@@ -1,5 +1,5 @@
- If you do not specify the API URL, the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview) URL of `https://api.unstructuredapp.io/general/v0/general` is used by default. You must always specify your Platform Partition Endpoint key.
+ If you do not specify the API URL, the [Unstructured Partition Endpoint](/api/partition/overview) URL of `https://api.unstructuredapp.io/general/v0/general` is used by default. You must always specify your Partition Endpoint key.
To specify the API URL in your code if needed:
diff --git a/snippets/general-shared-text/opensearch.mdx b/snippets/general-shared-text/opensearch.mdx
index ca82113d..28371096 100644
--- a/snippets/general-shared-text/opensearch.mdx
+++ b/snippets/general-shared-text/opensearch.mdx
@@ -91,7 +91,7 @@
- [Mappings and field types](https://opensearch.org/docs/latest/field-types/)
- [Explicit mapping](https://opensearch.org/docs/latest/field-types/#explicit-mapping)
- [Dynamic mapping](https://opensearch.org/docs/latest/field-types/#dynamic-mapping)
- - [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
+ - [Unstructured document elements and metadata](/api/partition/document-elements)
- If you're using basic authentication to the instance, the user's name and password.
- If you're using certificates for authentication instead:
diff --git a/snippets/general-shared-text/platform-partitioning-strategies.mdx b/snippets/general-shared-text/platform-partitioning-strategies.mdx
index ed55b407..0d010a1d 100644
--- a/snippets/general-shared-text/platform-partitioning-strategies.mdx
+++ b/snippets/general-shared-text/platform-partitioning-strategies.mdx
@@ -7,5 +7,5 @@ strategies other than **Auto** for sets of documents of different types could pr
including reduction in transformation quality.
- **VLM**: For the highest-quality transformation of these file types: `.bmp`, `.gif`, `.heic`, `.jpeg`, `.jpg`, `.pdf`, `.png`, `.tiff`, and `.webp`.
-- **High Res**: For all other [supported file types](/platform/supported-file-types), and for the generation of bounding box coordinates.
+- **High Res**: For all other [supported file types](/ui/supported-file-types), and for the generation of bounding box coordinates.
- **Fast**: For text-only documents.
\ No newline at end of file
diff --git a/snippets/general-shared-text/postgresql.mdx b/snippets/general-shared-text/postgresql.mdx
index 77557242..2c904b19 100644
--- a/snippets/general-shared-text/postgresql.mdx
+++ b/snippets/general-shared-text/postgresql.mdx
@@ -1,4 +1,4 @@
-- For the [Unstructured Platform](/platform/overview), local PostgreSQL installations are not supported.
+- For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview), local PostgreSQL installations are not supported.
- For [Unstructured Ingest](/ingestion/overview), local and non-local PostgreSQL installations are supported.
The following video shows how to set up [Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql/):
@@ -115,7 +115,7 @@ import AllowIPAddressRanges from '/snippets/general-shared-text/ip-address-range
- [CREATE TABLE](https://www.postgresql.org/docs/current/sql-createtable.html) for PostgreSQL
- [CREATE TABLE](https://github.com/pgvector/pgvector) for PostgreSQL with pgvector
- - [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
+ - [Unstructured document elements and metadata](/api/partition/document-elements)
The following video shows how to use the `psql` utility to connect to PostgreSQL, list databases, and list and create tables:
diff --git a/snippets/general-shared-text/qdrant.mdx b/snippets/general-shared-text/qdrant.mdx
index d94c6ea3..9293c93a 100644
--- a/snippets/general-shared-text/qdrant.mdx
+++ b/snippets/general-shared-text/qdrant.mdx
@@ -1,4 +1,4 @@
-- For the [Unstructured Platform](/platform/overview), only [Qdrant Cloud](https://qdrant.tech/documentation/cloud-intro/) is supported.
+- For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview), only [Qdrant Cloud](https://qdrant.tech/documentation/cloud-intro/) is supported.
- For [Unstructured Ingest](/ingestion/overview), Qdrant Cloud,
[Qdrant local](https://github.com/qdrant/qdrant), and [Qdrant client-server](https://qdrant.tech/documentation/quickstart/) are supported.
diff --git a/snippets/general-shared-text/singlestore-schema.mdx b/snippets/general-shared-text/singlestore-schema.mdx
index 15c2a66e..e692d1fe 100644
--- a/snippets/general-shared-text/singlestore-schema.mdx
+++ b/snippets/general-shared-text/singlestore-schema.mdx
@@ -54,4 +54,4 @@ See also:
- [CREATE TABLE](https://docs.singlestore.com/cloud/reference/sql-reference/data-definition-language-ddl/create-table/)
in the SingleStore documentation
-- [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
\ No newline at end of file
+- [Unstructured document elements and metadata](/api/partition/document-elements)
\ No newline at end of file
diff --git a/snippets/general-shared-text/sql-sample-index-schema.mdx b/snippets/general-shared-text/sql-sample-index-schema.mdx
index 8f62ab8a..0d015f2d 100644
--- a/snippets/general-shared-text/sql-sample-index-schema.mdx
+++ b/snippets/general-shared-text/sql-sample-index-schema.mdx
@@ -14,7 +14,7 @@ See also:
- [CREATE TABLE](https://www.postgresql.org/docs/current/sql-createtable.html) for PostgreSQL
- [CREATE TABLE](https://github.com/pgvector/pgvector) for PostgreSQL with pgvector
- [CREATE TABLE](https://www.sqlite.org/lang_createtable.html) for SQLite
-- [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
+- [Unstructured document elements and metadata](/api/partition/document-elements)
```sql PostgreSQL
diff --git a/snippets/general-shared-text/sqlite.mdx b/snippets/general-shared-text/sqlite.mdx
index e8932aad..7c7f6f34 100644
--- a/snippets/general-shared-text/sqlite.mdx
+++ b/snippets/general-shared-text/sqlite.mdx
@@ -28,5 +28,5 @@
See also:
- [CREATE TABLE](https://www.sqlite.org/lang_createtable.html) for SQLite
- - [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
+ - [Unstructured document elements and metadata](/api/partition/document-elements)
diff --git a/snippets/general-shared-text/supported-file-types-platform.mdx b/snippets/general-shared-text/supported-file-types-platform.mdx
index 7f32dc99..7c7ed136 100644
--- a/snippets/general-shared-text/supported-file-types-platform.mdx
+++ b/snippets/general-shared-text/supported-file-types-platform.mdx
@@ -1,4 +1,4 @@
-The Unstructured Platform supports processing of the following file types:
+Unstructured supports processing of the following file types:
By file extension:
diff --git a/snippets/general-shared-text/use-ingest-or-platform-instead.mdx b/snippets/general-shared-text/use-ingest-or-platform-instead.mdx
index 920eee78..6ccb5bb2 100644
--- a/snippets/general-shared-text/use-ingest-or-platform-instead.mdx
+++ b/snippets/general-shared-text/use-ingest-or-platform-instead.mdx
@@ -1,5 +1,5 @@
- Unstructured recommends that you use the [Unstructured Platform API](/platform-api/api/overview),
+ Unstructured recommends that you use the [Unstructured API](/api/workflow/overview),
or the [Unstructured Ingest CLI](/ingestion/overview#unstructured-ingest-cli) or
[Unstructured Ingest Python library](/ingestion/python-ingest), if any of the following apply
to you:
diff --git a/snippets/general-shared-text/weaviate-sample-index-schema.mdx b/snippets/general-shared-text/weaviate-sample-index-schema.mdx
index d77c8e39..b0731dc4 100644
--- a/snippets/general-shared-text/weaviate-sample-index-schema.mdx
+++ b/snippets/general-shared-text/weaviate-sample-index-schema.mdx
@@ -12,7 +12,7 @@ any custom post-processing code that you run; and other factors.
See also:
- [Collection schema](https://weaviate.io/developers/weaviate/config-refs/schema)
-- [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
+- [Unstructured document elements and metadata](/api/partition/document-elements)
```json
{
diff --git a/snippets/general-shared-text/weaviate.mdx b/snippets/general-shared-text/weaviate.mdx
index d1d5c190..b8c707f8 100644
--- a/snippets/general-shared-text/weaviate.mdx
+++ b/snippets/general-shared-text/weaviate.mdx
@@ -1,4 +1,4 @@
-- For the [Unstructured Platform](/platform/overview): only [Weaviate Cloud](https://weaviate.io/developers/wcs) clusters are supported.
+- For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview): only [Weaviate Cloud](https://weaviate.io/developers/wcs) clusters are supported.
- For [Unstructured Ingest](/ingestion/overview): Weaviate Cloud clusters,
[Weaviate installed locally](https://weaviate.io/developers/weaviate/quickstart/local),
and [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded) are supported.
@@ -23,7 +23,7 @@
An existing collection is not required. At runtime, the collection behavior is as follows:
- For the [Unstructured Platform](/platform/overview):
+ For the [Unstructured UI](/ui/overview) or the [Unstructured API](/api/overview):
- If an existing collection name is specified, and Unstructured generates embeddings,
but the number of dimensions that are generated does not match the existing collection's embedding settings, the run will fail.
@@ -136,4 +136,4 @@ You can adapt the following collection schema example for your own specific sche
See also:
- [Collection schema](https://weaviate.io/developers/weaviate/config-refs/schema)
-- [Unstructured document elements and metadata](/platform-api/partition-api/document-elements)
\ No newline at end of file
+- [Unstructured document elements and metadata](/api/partition/document-elements)
\ No newline at end of file
diff --git a/snippets/ingest-configuration-shared/chunking-configuration.mdx b/snippets/ingest-configuration-shared/chunking-configuration.mdx
index effc14c4..df57b8a2 100644
--- a/snippets/ingest-configuration-shared/chunking-configuration.mdx
+++ b/snippets/ingest-configuration-shared/chunking-configuration.mdx
@@ -23,7 +23,7 @@ A common chunking configuration is a critical element in the data processing pip
* `chunk_overlap_all`: Applies overlap to chunks formed from whole elements as well as those formed by text-splitting oversized elements. The overlap length is taken from the `chunk_overlap` value.
-* `chunking_endpoint`: If `chunk_by_api` is set to `True`, chunking requests are sent to this Unstructured API URL. By default, this URL is the Unstructured Platform Partition Endpoint URL: `https://api.unstructuredapp.io/general/v0/general`.
+* `chunking_endpoint`: If `chunk_by_api` is set to `True`, chunking requests are sent to this Unstructured API URL. By default, this URL is the Unstructured Partition Endpoint URL: `https://api.unstructuredapp.io/general/v0/general`.
* `chunking_strategy`: One of `basic` or `by_title`. When omitted, no chunking is performed. The `basic` strategy maximally fills each chunk with whole elements, up to the size limits specified by `max_characters` and `new_after_n_chars`. A single element that exceeds this length is divided into two or more chunks by text-splitting. A `Table` element is never combined with any other element and appears either as a chunk of its own or as a sequence of `TableChunk` elements when splitting is required. The `by_title` behaviors are the same, except that section and (optionally) page boundaries are respected, so that two consecutive elements from different sections appear in separate chunks.
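The chunking settings above can be sketched as a set of key/value options. The following is a minimal illustration only: `build_chunking_params` is a hypothetical helper, and the field names simply mirror the configuration options described above rather than any specific library's API.

```python
from typing import Optional

def build_chunking_params(
    chunking_strategy: str = "by_title",
    max_characters: int = 500,
    new_after_n_chars: Optional[int] = None,
    chunk_overlap: int = 0,
    chunk_overlap_all: bool = False,
) -> dict:
    """Assemble chunking options as key/value pairs, mirroring the
    configuration settings described above. Hypothetical helper for
    illustration only."""
    if new_after_n_chars is None:
        # The soft limit defaults to the hard limit when not set.
        new_after_n_chars = max_characters
    return {
        "chunking_strategy": chunking_strategy,
        "max_characters": max_characters,
        "new_after_n_chars": new_after_n_chars,
        "overlap": chunk_overlap,
        "overlap_all": chunk_overlap_all,
    }

# Example: basic strategy, 500-character hard limit, 400-character soft
# limit, 50-character overlap applied to all chunks.
print(build_chunking_params("basic", 500, 400, 50, True))
```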
diff --git a/snippets/ingest-configuration-shared/partition-by-api-oss.mdx b/snippets/ingest-configuration-shared/partition-by-api-oss.mdx
index 04d517a9..18d86a45 100644
--- a/snippets/ingest-configuration-shared/partition-by-api-oss.mdx
+++ b/snippets/ingest-configuration-shared/partition-by-api-oss.mdx
@@ -8,7 +8,7 @@ For the Unstructured Ingest CLI and the Unstructured Ingest Python library, you
- `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
- The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`
-- To send files to the [Unstructured Platform Partition Endpoint](/platform-api/partition-api/overview) for processing, specify `--partition-by-api` (CLI) or `partition_by_api=True` (Python).
+- To send files to the [Unstructured Partition Endpoint](/api/partition/overview) for processing, specify `--partition-by-api` (CLI) or `partition_by_api=True` (Python).
Unstructured also requires an Unstructured API key and API URL, which you provide by adding the following:
@@ -16,4 +16,4 @@ For the Unstructured Ingest CLI and the Unstructured Ingest Python library, you
- `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
- The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`, representing your API key and API URL, respectively.
- [Get an API key and API URL](/platform-api/partition-api/overview).
\ No newline at end of file
+ [Get an API key and API URL](/api/partition/overview).
\ No newline at end of file
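Under the hood, `--partition-by-api` (CLI) and `partition_by_api=True` (Python) send files to the URL and key named by these environment variables. The following sketch shows the general shape of the equivalent raw request setup; it is illustrative only, and `partition_request_args` is a hypothetical helper, with the default URL being the Unstructured Partition Endpoint mentioned above.

```python
import os

def partition_request_args(file_path: str) -> dict:
    """Build the URL and auth header an HTTP client would use to call the
    partition endpoint directly. Hypothetical helper for illustration."""
    url = os.environ.get(
        "UNSTRUCTURED_API_URL",
        "https://api.unstructuredapp.io/general/v0/general",
    )
    return {
        "url": url,
        "headers": {
            "accept": "application/json",
            # The API key is read from the same environment variable the
            # Ingest CLI and Python library use.
            "unstructured-api-key": os.environ["UNSTRUCTURED_API_KEY"],
        },
        # The file itself would be attached as a multipart form field
        # named "files".
        "file_name": os.path.basename(file_path),
    }
```

An HTTP client such as `requests` could then POST the file to `args["url"]` with `args["headers"]`.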
diff --git a/snippets/ingestion/code-generator.mdx b/snippets/ingestion/code-generator.mdx
index 9b702546..f72e1a4e 100644
--- a/snippets/ingestion/code-generator.mdx
+++ b/snippets/ingestion/code-generator.mdx
@@ -16,7 +16,7 @@ do the following:
- **by_page** - Use the `basic` strategy and also preserve page boundaries.
- **by_similarity** - Use the `sentence-transformers/multi-qa-mpnet-base-dot-v1` embedding model to identify topically similar sequential elements and combine them into chunks. This strategy is available only when calling Unstructured.
- To learn more, see [Chunking strategies](/platform-api/partition-api/chunking) and [Chunking configuration](/ingestion/ingest-configuration/chunking-configuration).
+ To learn more, see [Chunking strategies](/api/partition/chunking) and [Chunking configuration](/ingestion/ingest-configuration/chunking-configuration).
5. For any chunking strategy other than **None**:
diff --git a/snippets/quickstarts/platform-api.mdx b/snippets/quickstarts/platform-api.mdx
index e523a638..736109f6 100644
--- a/snippets/quickstarts/platform-api.mdx
+++ b/snippets/quickstarts/platform-api.mdx
@@ -1,17 +1,17 @@
-This quickstart uses the Unstructured Python SDK to call the Unstructured Platform Workflow Endpoint to get your data RAG-ready. The Python code for this
+This quickstart uses the Unstructured Python SDK to call the Unstructured Workflow Endpoint to get your data RAG-ready. The Python code for this
quickstart is in a remotely hosted Google Colab notebook. Data is processed on Unstructured-hosted compute resources.
The requirements are as follows:
-- A compatible source (input) location that contains your data for Unstructured to process. [See the list of supported source types](/platform/connectors#sources).
+- A compatible source (input) location that contains your data for Unstructured to process. [See the list of supported source types](/ui/connectors#sources).
This quickstart uses an Amazon S3 bucket as the source location. If you use a different source type, you will need to modify the quickstart notebook accordingly.
-- For document-based source locations, compatible files in that location. [See the list of supported file types](/platform/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the `Unstructured-IO/unstructured-ingest` repository in GitHub.
-- A compatible destination (output) location for Unstructured to put the processed data. [See the list of supported destination types](/platform/connectors#destinations).
+- For document-based source locations, compatible files in that location. [See the list of supported file types](/ui/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the `Unstructured-IO/unstructured-ingest` repository in GitHub.
+- A compatible destination (output) location for Unstructured to put the processed data. [See the list of supported destination types](/ui/connectors#destinations).
For this quickstart's destination location, a different folder in the same Amazon S3 bucket as the source location is used. If you use a different destination S3 bucket or a different destination type, you will need to modify the quickstart notebook accordingly.
- To sign up for the Unstructured Platform, go to the [For Developers](https://unstructured.io/developers) page and choose one of the following plans:
+ To sign up for Unstructured, go to the [For Developers](https://unstructured.io/developers) page and choose one of the following plans:
- Sign up for a [pay-per-page plan](https://unstructured.io/developers#get-started).
- Save money by signing up for a [subscribe-and-save plan](https://unstructured.io/subscribeandsave) instead.
@@ -22,15 +22,15 @@ The requirements are as follows:
If you choose a pay-per-page plan, after your first 14 days of usage or more than 1000 processed pages per day,
whichever comes first, your account is then billed at Unstructured's standard service usage rates. To keep using the service,
you must
- [provide Unstructured with your payment details](/platform/billing#add-view-or-change-pay-per-page-payment-details).
+ [provide Unstructured with your payment details](/ui/billing#add-view-or-change-pay-per-page-payment-details).
To save money by switching from a pay-per-page to a subscribe-and-save plan, go to the
[Unstructured Subscribe & Save](https://unstructured.io/subscribeandsave) page and complete the on-screen instructions.
To save even more money by making a long-term billing commitment,
stop here and sign up through the [For Enterprise](https://unstructured.io/enterprise) page instead.
- By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run within the context of the Unstructured Platform on
- Unstructured's own hosted cloud resources. If you would rather run the Unstructured Platform within the context of your own virtual private cloud (VPC),
+ By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run on
+ Unstructured's own hosted cloud resources. If you would rather run Unstructured within the context of your own virtual private cloud (VPC),
stop here and sign up through the [For Enterprise](https://unstructured.io/enterprise) page instead.
@@ -43,9 +43,9 @@ The requirements are as follows:
be different. For enterprise sign-in guidance, contact Unstructured Sales at [sales@unstructured.io](mailto:sales@unstructured.io).
- 1. After you have signed up for a pay-per-page plan, the Unstructured Platform sign-in page appears.
+ 1. After you have signed up for a pay-per-page plan, the Unstructured account sign-in page appears.
- 
+ 
2. Click **Google** or **GitHub** to sign in with the Google or GitHub account that you signed up with.
Or, enter the email address that you signed up with, and then click **Sign In**.
@@ -60,9 +60,9 @@ The requirements are as follows:
- 
+ 
- 
+ 
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -84,7 +84,7 @@ The requirements are as follows:
is where Unstructured will put the processed data.
The S3 URI to the destination location will be `s3:///output`.
- Learn how to [create an S3 bucket and set it up for Unstructured](/platform-api/api/sources/s3). (Do not run the Python SDK code or REST commands at the end of those setup instructions.)
+ Learn how to [create an S3 bucket and set it up for Unstructured](/api/workflow/sources/s3). (Do not run the Python SDK code or REST commands at the end of those setup instructions.)
After your S3 bucket is created and set up, follow the instructions in this [quickstart notebook](https://colab.research.google.com/drive/13f5C9WtUvIPjwJzxyOR3pNJ9K9vnF4ww).
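Once the notebook's connectors and workflow exist, running the workflow comes down to one authenticated call to the Workflow Endpoint. The following is a rough sketch of that call's shape only; the notebook itself uses the Unstructured Python SDK, and `run_workflow_request` is a hypothetical helper.

```python
import os

# Base URL of the Unstructured Workflow Endpoint.
BASE = "https://platform.unstructuredapp.io/api/v1"

def run_workflow_request(workflow_id: str) -> dict:
    """Build the pieces of a 'run workflow' call. Illustrative sketch
    only; hypothetical helper, not the notebook's actual code."""
    return {
        "method": "POST",
        "url": f"{BASE}/workflows/{workflow_id}/run",
        "headers": {
            "accept": "application/json",
            "unstructured-api-key": os.environ["UNSTRUCTURED_API_KEY"],
        },
    }
```

An HTTP client would send this request and then poll the resulting job until its status is finished, mirroring the **Jobs** steps in the UI quickstart.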
diff --git a/snippets/quickstarts/platform.mdx b/snippets/quickstarts/platform.mdx
index 6c3d7c20..21e26b07 100644
--- a/snippets/quickstarts/platform.mdx
+++ b/snippets/quickstarts/platform.mdx
@@ -2,9 +2,9 @@ This quickstart uses a no-code, point-and-click user interface in your web brows
The requirements are as follows.
-- A compatible source (input) location that contains your data for Unstructured to process. [See the list of supported source types](/platform/connectors#sources).
-- For document-based source locations, compatible files in that location. [See the list of supported file types](/platform/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
-- A compatible destination (output) location for Unstructured to put the processed data. [See the list of supported destination types](/platform/connectors#destinations).
+- A compatible source (input) location that contains your data for Unstructured to process. [See the list of supported source types](/ui/connectors#sources).
+- For document-based source locations, compatible files in that location. [See the list of supported file types](/ui/supported-file-types). If you do not have any files available, you can download some from the [example-docs](https://github.com/Unstructured-IO/unstructured-ingest/tree/main/example-docs) folder in the Unstructured repo on GitHub.
+- A compatible destination (output) location for Unstructured to put the processed data. [See the list of supported destination types](/ui/connectors#destinations).
- To sign up for the Unstructured Platform, go to the [For Developers](https://unstructured.io/developers) page and choose one of the following plans:
+ To sign up for Unstructured, go to the [For Developers](https://unstructured.io/developers) page and choose one of the following plans:
- Sign up for a [pay-per-page plan](https://unstructured.io/developers#get-started).
- Save money by signing up for a [subscribe-and-save plan](https://unstructured.io/subscribeandsave) instead.
@@ -29,15 +29,15 @@ allowfullscreen
If you choose a pay-per-page plan, after your first 14 days of usage or more than 1000 processed pages per day,
whichever comes first, your account is then billed at Unstructured's standard service usage rates. To keep using the service,
you must
- [provide Unstructured with your payment details](/platform/billing#add-view-or-change-pay-per-page-payment-details).
+ [provide Unstructured with your payment details](/ui/billing#add-view-or-change-pay-per-page-payment-details).
To save money by switching from a pay-per-page to a subscribe-and-save plan, go to the
[Unstructured Subscribe & Save](https://unstructured.io/subscribeandsave) page and complete the on-screen instructions.
To save even more money by making a long-term billing commitment,
stop here and sign up through the [For Enterprise](https://unstructured.io/enterprise) page instead.
- By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run within the context of the Unstructured Platform on
- Unstructured's own hosted cloud resources. If you would rather run the Unstructured Platform within the context of your own virtual private cloud (VPC),
+ By signing up for a pay-per-page or subscribe-and-save plan, your Unstructured account will run on
+ Unstructured's own hosted cloud resources. If you would rather run Unstructured within the context of your own virtual private cloud (VPC),
stop here and sign up through the [For Enterprise](https://unstructured.io/enterprise) page instead.
@@ -50,9 +50,9 @@ allowfullscreen
be different. For enterprise sign-in guidance, contact Unstructured Sales at [sales@unstructured.io](mailto:sales@unstructured.io).
- 1. After you have signed up for a pay-per-page plan, the Unstructured Platform sign-in page appears.
+ 1. After you have signed up for a pay-per-page plan, the Unstructured account sign-in page appears.
- 
+ 
2. Click **Google** or **GitHub** to sign in with the Google or GitHub account that you signed up with.
Or, enter the email address that you signed up with, and then click **Sign In**.
@@ -67,31 +67,31 @@ allowfullscreen
- 
- 1. From your Unstructured Platform dashboard, in the sidebar, click **Connectors**.
+ 
+ 1. From your Unstructured dashboard, in the sidebar, click **Connectors**.
2. Click **Sources**.
3. Click **New** or **Create Connector**.
4. For **Name**, enter a unique name for this connector.
5. In the **Provider** area, click the source location type that matches yours.
6. Click **Continue**.
- 7. Fill in the fields with the appropriate settings. [Learn more](/platform/sources/overview).
+ 7. Fill in the fields with the appropriate settings. [Learn more](/ui/sources/overview).
8. If a **Continue** button appears, click it, and fill in any additional settings fields.
9. Click **Save and Test**.
- 
+ 
1. In the sidebar, click **Connectors**.
2. Click **Destinations**.
3. Click **New** or **Create Connector**.
4. For **Name**, enter a unique name for this connector.
5. In the **Provider** area, click the destination location type that matches yours.
6. Click **Continue**.
- 7. Fill in the fields with the appropriate settings. [Learn more](/platform/sources/overview).
+ 7. Fill in the fields with the appropriate settings. [Learn more](/ui/sources/overview).
8. If a **Continue** button appears, click it, and fill in any additional settings fields.
9. Click **Save and Test**.
- 
+ 
1. In the sidebar, click **Workflows**.
2. Click **New Workflow**.
3. Next to **Build it for Me**, click **Create Workflow**.
@@ -115,13 +115,13 @@ allowfullscreen
11. Click **Complete**.
- 
+ 
1. If you did not choose to run this workflow on a schedule in Step 5, you can run the workflow now: on the sidebar, click **Workflows**.
2. Next to your workflow from Step 5, click **Run**.
- 
- 
+ 
+ 
1. In the sidebar, click **Jobs**.
2. In the list of jobs, wait for the job's **Status** to change to **Finished**.
3. Click the row for the job.
diff --git a/platform/billing.mdx b/ui/billing.mdx
similarity index 82%
rename from platform/billing.mdx
rename to ui/billing.mdx
index 30ac4da6..d294f796 100644
--- a/platform/billing.mdx
+++ b/ui/billing.mdx
@@ -2,7 +2,7 @@
title: Billing
---
-To ensure that your Unstructured account has continued access to the Unstructured Platform, you must have one of the following plans in place with Unstructured:
+To ensure that your account has continued access to Unstructured, you must have one of the following plans in place:
- A [pay-per-page plan](https://unstructured.io/developers#get-started) with valid payment details provided.
- A [subscribe-and-save plan](https://unstructured.io/subscribeandsave) with a non-zero available budget.
@@ -20,9 +20,9 @@ you must provide Unstructured with your payment details to continue using the se
-
+
-
+
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -37,9 +37,9 @@ Go to the [Unstructured Subscribe & Save](https://unstructured.io/subscribeandsa
-
+
-
+
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -47,9 +47,9 @@ Go to the [Unstructured Subscribe & Save](https://unstructured.io/subscribeandsa
## View subscribe-and-save budget amounts
-
+
-
+
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
@@ -65,9 +65,9 @@ To view usage details for your Unstructured account, do the following:
-
+
-
+
1. Sign in to your Unstructured account, at [https://platform.unstructured.io](https://platform.unstructured.io).
2. At the bottom of the sidebar, click your user icon, and then click **Account Settings**.
diff --git a/platform/chunking.mdx b/ui/chunking.mdx
similarity index 98%
rename from platform/chunking.mdx
rename to ui/chunking.mdx
index 868e29d7..7d2c1de1 100644
--- a/platform/chunking.mdx
+++ b/ui/chunking.mdx
@@ -61,7 +61,7 @@ Here are a few examples:
The following sections provide information about the available chunking strategies and their settings.
-You can change a workflow's preconfigured strategy only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+You can change a workflow's preconfigured strategy only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
## Basic chunking strategy
diff --git a/ui/connectors.mdx b/ui/connectors.mdx
new file mode 100644
index 00000000..18a27691
--- /dev/null
+++ b/ui/connectors.mdx
@@ -0,0 +1,66 @@
+---
+title: Supported connectors
+---
+
+Unstructured supports connecting to the following source and destination types.
+
+```mermaid
+ flowchart LR
+ Sources-->Unstructured-->Destinations
+```
+
+## Sources
+
+- [Azure](/ui/sources/azure-blob-storage)
+- [Box](/ui/sources/box)
+- [Confluence](/ui/sources/confluence)
+- [Couchbase](/ui/sources/couchbase)
+- [Databricks Volumes](/ui/sources/databricks-volumes)
+- [Dropbox](/ui/sources/dropbox)
+- [Elasticsearch](/ui/sources/elasticsearch)
+- [Google Cloud Storage](/ui/sources/google-cloud)
+- [Google Drive](/ui/sources/google-drive)
+- [Kafka](/ui/sources/kafka)
+- [MongoDB](/ui/sources/mongodb)
+- [OneDrive](/ui/sources/onedrive)
+- [Outlook](/ui/sources/outlook)
+- [PostgreSQL](/ui/sources/postgresql)
+- [S3](/ui/sources/s3)
+- [Salesforce](/ui/sources/salesforce)
+- [SharePoint](/ui/sources/sharepoint)
+- [Snowflake](/ui/sources/snowflake)
+
+If your source is not listed here, you might still be able to connect Unstructured to it through scripts or code by using the
+[Unstructured Ingest CLI](/ingestion/overview#unstructured-ingest-cli) or the
+[Unstructured Ingest Python library](/ingestion/python-ingest).
+[Learn more](/ingestion/source-connectors/overview).
+
+## Destinations
+
+- [Astra DB](/ui/destinations/astradb)
+- [Azure AI Search](/ui/destinations/azure-ai-search)
+- [Couchbase](/ui/destinations/couchbase)
+- [Databricks Volumes](/ui/destinations/databricks-volumes)
+- [Delta Tables in Amazon S3](/ui/destinations/delta-table)
+- [Delta Tables in Databricks](/ui/destinations/databricks-delta-table)
+- [Elasticsearch](/ui/destinations/elasticsearch)
+- [Google Cloud Storage](/ui/destinations/google-cloud)
+- [Kafka](/ui/destinations/kafka)
+- [Milvus](/ui/destinations/milvus)
+- [MotherDuck](/ui/destinations/motherduck)
+- [MongoDB](/ui/destinations/mongodb)
+- [Neo4j](/ui/destinations/neo4j)
+- [OneDrive](/ui/destinations/onedrive)
+- [Pinecone](/ui/destinations/pinecone)
+- [PostgreSQL](/ui/destinations/postgresql)
+- [Qdrant](/ui/destinations/qdrant)
+- [Redis](/ui/destinations/redis)
+- [S3](/ui/destinations/s3)
+- [Snowflake](/ui/destinations/snowflake)
+- [Weaviate](/ui/destinations/weaviate)
+
+If your destination is not listed here, you might still be able to connect Unstructured to it through scripts or code by using the
+[Unstructured Ingest CLI](/ingestion/overview#unstructured-ingest-cli) or the
+[Unstructured Ingest Python library](/ingestion/python-ingest).
+[Learn more](/ingestion/destination-connectors/overview).
+
diff --git a/platform/destinations/astradb.mdx b/ui/destinations/astradb.mdx
similarity index 100%
rename from platform/destinations/astradb.mdx
rename to ui/destinations/astradb.mdx
diff --git a/platform/destinations/azure-ai-search.mdx b/ui/destinations/azure-ai-search.mdx
similarity index 100%
rename from platform/destinations/azure-ai-search.mdx
rename to ui/destinations/azure-ai-search.mdx
diff --git a/platform/destinations/chroma.mdx b/ui/destinations/chroma.mdx
similarity index 100%
rename from platform/destinations/chroma.mdx
rename to ui/destinations/chroma.mdx
diff --git a/platform/destinations/couchbase.mdx b/ui/destinations/couchbase.mdx
similarity index 100%
rename from platform/destinations/couchbase.mdx
rename to ui/destinations/couchbase.mdx
diff --git a/platform/destinations/databricks-delta-table.mdx b/ui/destinations/databricks-delta-table.mdx
similarity index 89%
rename from platform/destinations/databricks-delta-table.mdx
rename to ui/destinations/databricks-delta-table.mdx
index d6950ddb..56b05c28 100644
--- a/platform/destinations/databricks-delta-table.mdx
+++ b/ui/destinations/databricks-delta-table.mdx
@@ -6,10 +6,10 @@ title: Delta Tables in Databricks
This article covers connecting Unstructured to Delta Tables in Databricks.
For information about connecting Unstructured to Delta Tables in Amazon S3 instead, see
- [Delta Tables in Amazon S3](/platform/destinations/delta-table).
+ [Delta Tables in Amazon S3](/ui/destinations/delta-table).
For information about connecting Unstructured to Databricks Volumes instead, see
- [Databricks Volumes](/platform/destinations/databricks-volumes).
+ [Databricks Volumes](/ui/destinations/databricks-volumes).
Send processed data from Unstructured to a Delta Table in Databricks.
diff --git a/platform/destinations/databricks-volumes.mdx b/ui/destinations/databricks-volumes.mdx
similarity index 92%
rename from platform/destinations/databricks-volumes.mdx
rename to ui/destinations/databricks-volumes.mdx
index e448c44a..d5edabe9 100644
--- a/platform/destinations/databricks-volumes.mdx
+++ b/ui/destinations/databricks-volumes.mdx
@@ -6,7 +6,7 @@ title: Databricks Volumes
This article covers connecting Unstructured to Databricks Volumes.
For information about connecting Unstructured to Delta Tables in Databricks instead, see
- [Delta Tables in Databricks](/platform/destinations/databricks-delta-table).
+ [Delta Tables in Databricks](/ui/destinations/databricks-delta-table).
Send processed data from Unstructured to Databricks Volumes.
diff --git a/platform/destinations/delta-table.mdx b/ui/destinations/delta-table.mdx
similarity index 92%
rename from platform/destinations/delta-table.mdx
rename to ui/destinations/delta-table.mdx
index 88761305..21b450e5 100644
--- a/platform/destinations/delta-table.mdx
+++ b/ui/destinations/delta-table.mdx
@@ -5,7 +5,7 @@ title: Delta Tables in Amazon S3
This article covers connecting Unstructured to Delta Tables in Amazon S3. For information about
connecting Unstructured to Delta Tables in Databricks instead, see
- [Delta Tables in Databricks](/platform/destinations/databricks-delta-table).
+ [Delta Tables in Databricks](/ui/destinations/databricks-delta-table).
Send processed data from Unstructured to a Delta Table, stored in Amazon S3.
diff --git a/platform/destinations/elasticsearch.mdx b/ui/destinations/elasticsearch.mdx
similarity index 100%
rename from platform/destinations/elasticsearch.mdx
rename to ui/destinations/elasticsearch.mdx
diff --git a/platform/destinations/google-cloud.mdx b/ui/destinations/google-cloud.mdx
similarity index 100%
rename from platform/destinations/google-cloud.mdx
rename to ui/destinations/google-cloud.mdx
diff --git a/platform/destinations/kafka.mdx b/ui/destinations/kafka.mdx
similarity index 100%
rename from platform/destinations/kafka.mdx
rename to ui/destinations/kafka.mdx
diff --git a/platform/destinations/milvus.mdx b/ui/destinations/milvus.mdx
similarity index 100%
rename from platform/destinations/milvus.mdx
rename to ui/destinations/milvus.mdx
diff --git a/platform/destinations/mongodb.mdx b/ui/destinations/mongodb.mdx
similarity index 100%
rename from platform/destinations/mongodb.mdx
rename to ui/destinations/mongodb.mdx
diff --git a/platform/destinations/motherduck.mdx b/ui/destinations/motherduck.mdx
similarity index 100%
rename from platform/destinations/motherduck.mdx
rename to ui/destinations/motherduck.mdx
diff --git a/platform/destinations/neo4j.mdx b/ui/destinations/neo4j.mdx
similarity index 100%
rename from platform/destinations/neo4j.mdx
rename to ui/destinations/neo4j.mdx
diff --git a/platform/destinations/onedrive.mdx b/ui/destinations/onedrive.mdx
similarity index 100%
rename from platform/destinations/onedrive.mdx
rename to ui/destinations/onedrive.mdx
diff --git a/platform/destinations/opensearch.mdx b/ui/destinations/opensearch.mdx
similarity index 100%
rename from platform/destinations/opensearch.mdx
rename to ui/destinations/opensearch.mdx
diff --git a/ui/destinations/overview.mdx b/ui/destinations/overview.mdx
new file mode 100644
index 00000000..978b8883
--- /dev/null
+++ b/ui/destinations/overview.mdx
@@ -0,0 +1,43 @@
+---
+title: Overview
+description: Destination connectors in Unstructured are designed to specify the endpoint for data processed by Unstructured. These connectors ensure that the transformed and analyzed data is securely and efficiently transferred to a storage system for future use, often to a vector database for tasks that involve high-speed retrieval and advanced data analytics operations.
+---
+
+
+
+To see your existing destination connectors, on the sidebar, click **Connectors**, and then click **Destinations**.
+
+To create a destination connector:
+
+1. In the sidebar, click **Connectors**.
+2. Click **Destinations**.
+3. Click **New** or **Create Connector**.
+4. For **Name**, enter a unique name for this connector.
+5. In the **Provider** area, click the destination location type that matches yours.
+6. Click **Continue**.
+7. Fill in the fields according to your connector type. To learn how, click your connector type in the following list:
+
+ - [Astra DB](/ui/destinations/astradb)
+ - [Azure AI Search](/ui/destinations/azure-ai-search)
+ - [Couchbase](/ui/destinations/couchbase)
+ - [Databricks Volumes](/ui/destinations/databricks-volumes)
+ - [Delta Tables in Amazon S3](/ui/destinations/delta-table)
+ - [Delta Tables in Databricks](/ui/destinations/databricks-delta-table)
+ - [Elasticsearch](/ui/destinations/elasticsearch)
+ - [Google Cloud Storage](/ui/destinations/google-cloud)
+ - [Kafka](/ui/destinations/kafka)
+ - [Milvus](/ui/destinations/milvus)
+ - [MongoDB](/ui/destinations/mongodb)
+ - [MotherDuck](/ui/destinations/motherduck)
+ - [Neo4j](/ui/destinations/neo4j)
+ - [OneDrive](/ui/destinations/onedrive)
+ - [Pinecone](/ui/destinations/pinecone)
+ - [PostgreSQL](/ui/destinations/postgresql)
+ - [Qdrant](/ui/destinations/qdrant)
+ - [Redis](/ui/destinations/redis)
+ - [S3](/ui/destinations/s3)
+ - [Snowflake](/ui/destinations/snowflake)
+ - [Weaviate](/ui/destinations/weaviate)
+
+8. If a **Continue** button appears, click it, and fill in any additional settings fields.
+9. Click **Save and Test**.
\ No newline at end of file
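The click-through steps above can also be scripted. Unstructured provides an API for managing connectors, but the field names in this sketch are assumptions for illustration only, not a documented request schema:

```python
import json

def build_destination_payload(name: str, dest_type: str, config: dict) -> dict:
    """Assemble a create-destination-connector request body.

    The keys used here ("name", "type", "config") are illustrative
    assumptions, not Unstructured's documented request schema.
    """
    if not name:
        raise ValueError("a unique connector name is required")
    return {"name": name, "type": dest_type, "config": config}

# Example: an S3 destination, mirroring step 4 (name) and step 5 (provider).
payload = build_destination_payload(
    name="my-s3-destination",
    dest_type="s3",
    config={"remote_url": "s3://my-bucket/output/"},  # hypothetical field
)
print(json.dumps(payload, indent=2))
```

Sending the payload (for example, with an HTTP POST to the connectors endpoint) is omitted here; take the exact route and authentication header from the API reference.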
diff --git a/platform/destinations/pinecone.mdx b/ui/destinations/pinecone.mdx
similarity index 100%
rename from platform/destinations/pinecone.mdx
rename to ui/destinations/pinecone.mdx
diff --git a/platform/destinations/postgresql.mdx b/ui/destinations/postgresql.mdx
similarity index 100%
rename from platform/destinations/postgresql.mdx
rename to ui/destinations/postgresql.mdx
diff --git a/platform/destinations/qdrant.mdx b/ui/destinations/qdrant.mdx
similarity index 100%
rename from platform/destinations/qdrant.mdx
rename to ui/destinations/qdrant.mdx
diff --git a/platform/destinations/redis.mdx b/ui/destinations/redis.mdx
similarity index 100%
rename from platform/destinations/redis.mdx
rename to ui/destinations/redis.mdx
diff --git a/platform/destinations/s3.mdx b/ui/destinations/s3.mdx
similarity index 100%
rename from platform/destinations/s3.mdx
rename to ui/destinations/s3.mdx
diff --git a/platform/destinations/snowflake.mdx b/ui/destinations/snowflake.mdx
similarity index 100%
rename from platform/destinations/snowflake.mdx
rename to ui/destinations/snowflake.mdx
diff --git a/platform/destinations/weaviate.mdx b/ui/destinations/weaviate.mdx
similarity index 100%
rename from platform/destinations/weaviate.mdx
rename to ui/destinations/weaviate.mdx
diff --git a/platform/document-elements.mdx b/ui/document-elements.mdx
similarity index 98%
rename from platform/document-elements.mdx
rename to ui/document-elements.mdx
index 43b0511e..48b122d6 100644
--- a/platform/document-elements.mdx
+++ b/ui/document-elements.mdx
@@ -2,7 +2,7 @@
title: Document elements and metadata
---
-When Unstructured [partitions](/platform/partitioning) a file, the result is a list of _document elements_, sometimes referred to simply as _elements_. These elements represent different components of the source file.
+When Unstructured [partitions](/ui/partitioning) a file, the result is a list of _document elements_, sometimes referred to simply as _elements_. These elements represent different components of the source file.
## Element example
@@ -27,7 +27,7 @@ Here's an example of what an element might look like:
Every element has a [type](#element-type); an [element_id](#element-id); the extracted `text`; and some [metadata](#metadata) which might
vary depending on the element type, file structure, and some additional settings that are applied during
-[partitioning](/platform/partitioning), chunking, summarizing, and embedding.
+[partitioning](/ui/partitioning), chunking, summarizing, and embedding.
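Concretely, the four fields described above can be pictured as a small mapping (the values here are made up for illustration; real IDs and metadata vary by file and settings):

```python
# A minimal, made-up element showing the four fields described above.
element = {
    "type": "NarrativeText",       # the element type
    "element_id": "b7c2",          # illustrative placeholder ID
    "text": "An example sentence extracted from the source file.",
    "metadata": {                  # varies by element type, file, and settings
        "filetype": "application/pdf",
        "page_number": 1,
    },
}
```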
## Element type
diff --git a/platform/embedding.mdx b/ui/embedding.mdx
similarity index 98%
rename from platform/embedding.mdx
rename to ui/embedding.mdx
index c116978e..69715968 100644
--- a/platform/embedding.mdx
+++ b/ui/embedding.mdx
@@ -61,7 +61,7 @@ on Hugging Face:
To generate embeddings, choose one of the following embedding providers and models in the **Select Embedding Model** section of an **Embedder** node in a workflow:
-You can change a workflow's preconfigured provider only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+You can change a workflow's preconfigured provider only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- **Azure OpenAI**: Use [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service) to generate embeddings with one of the following models:
diff --git a/platform/enriching/image-descriptions.mdx b/ui/enriching/image-descriptions.mdx
similarity index 95%
rename from platform/enriching/image-descriptions.mdx
rename to ui/enriching/image-descriptions.mdx
index 9396561e..1a5d1af9 100644
--- a/platform/enriching/image-descriptions.mdx
+++ b/ui/enriching/image-descriptions.mdx
@@ -47,9 +47,9 @@ Any embeddings that are produced after these summaries are generated will be bas
To generate image descriptions, in the **Task** drop-down list of an **Enrichment** node in a workflow, specify the following:
- You can change a workflow's image description settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+ You can change a workflow's image description settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- Image summaries are generated only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/platform/partitioning).
+ Image summaries are generated only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/ui/partitioning).
Select **Image Description**, and then choose one of the following provider (and model) combinations to use:
diff --git a/platform/enriching/ner.mdx b/ui/enriching/ner.mdx
similarity index 97%
rename from platform/enriching/ner.mdx
rename to ui/enriching/ner.mdx
index ddc4a5a2..06c9cc31 100644
--- a/platform/enriching/ner.mdx
+++ b/ui/enriching/ner.mdx
@@ -90,9 +90,9 @@ Here is an example of a list of recognized entities and their types using GPT-4o
To generate a list of recognized entities and their types, in the **Task** drop-down list of an **Enrichment** node in a workflow, specify the following:
- You can change a workflow's NER settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+ You can change a workflow's NER settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- Entities are only recognized when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/platform/partitioning).
+ Entities are only recognized when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/ui/partitioning).
1. Select **Named Entity Recognition (NER)**. By default, OpenAI's GPT-4o will follow a default set of instructions (called a _prompt_) to perform NER using a set of predefined entity types.
diff --git a/platform/enriching/overview.mdx b/ui/enriching/overview.mdx
similarity index 60%
rename from platform/enriching/overview.mdx
rename to ui/enriching/overview.mdx
index 6ed307c5..6f05e1b6 100644
--- a/platform/enriching/overview.mdx
+++ b/ui/enriching/overview.mdx
@@ -4,22 +4,22 @@ title: Overview
_Enriching_ adds enhancements to the processed data that Unstructured produces. These enrichments include:
-- Providing a summarized description of the contents of a detected image. [Learn more](/platform/enriching/image-descriptions).
-- Providing a summarized description of the contents of a detected table. [Learn more](/platform/enriching/table-descriptions).
-- Providing a representation of a detected table in HTML markup format. [Learn more](/platform/enriching/table-to-html).
-- Providing a list of recognized entities and their types, through a process known as _named entity recognition_ (NER). [Learn more](/platform/enriching/ner).
+- Providing a summarized description of the contents of a detected image. [Learn more](/ui/enriching/image-descriptions).
+- Providing a summarized description of the contents of a detected table. [Learn more](/ui/enriching/table-descriptions).
+- Providing a representation of a detected table in HTML markup format. [Learn more](/ui/enriching/table-to-html).
+- Providing a list of recognized entities and their types, through a process known as _named entity recognition_ (NER). [Learn more](/ui/enriching/ner).
To add an enrichment, in the **Task** drop-down list of an **Enrichment** node in a workflow, select one of the following enrichment types:
- You can change enrichment settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+ You can change enrichment settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- Enrichments work only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/platform/partitioning).
+ Enrichments work only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/ui/partitioning).
-- **Image Description** to provide a summarized description of the contents of each detected image. [Learn more](/platform/enriching/image-descriptions).
-- **Table Description** to provide a summarized description of the contents of each detected table. [Learn more](/platform/enriching/table-descriptions).
-- **Table to HTML** to provide a representation of each detected table in HTML markup format. [Learn more](/platform/enriching/table-to-html).
-- **Named Entity Recognition (NER)** to provide a list of recognized entities and their types. [Learn more](/platform/enriching/ner).
+- **Image Description** to provide a summarized description of the contents of each detected image. [Learn more](/ui/enriching/image-descriptions).
+- **Table Description** to provide a summarized description of the contents of each detected table. [Learn more](/ui/enriching/table-descriptions).
+- **Table to HTML** to provide a representation of each detected table in HTML markup format. [Learn more](/ui/enriching/table-to-html).
+- **Named Entity Recognition (NER)** to provide a list of recognized entities and their types. [Learn more](/ui/enriching/ner).
To add multiple enrichments, create an additional **Enrichment** node for each enrichment type that you want to add.
\ No newline at end of file
diff --git a/platform/enriching/table-descriptions.mdx b/ui/enriching/table-descriptions.mdx
similarity index 96%
rename from platform/enriching/table-descriptions.mdx
rename to ui/enriching/table-descriptions.mdx
index d9ea5964..7ee9762c 100644
--- a/platform/enriching/table-descriptions.mdx
+++ b/ui/enriching/table-descriptions.mdx
@@ -54,9 +54,9 @@ Any embeddings that are produced after these summaries are generated will be bas
To generate table descriptions, in the **Task** drop-down list of an **Enrichment** node in a workflow, specify the following:
- You can change a workflow's table description settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+ You can change a workflow's table description settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- Table summaries are generated only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/platform/partitioning).
+ Table summaries are generated only when the **Partitioner** node in a workflow is also set to use the **High Res** partitioning strategy. [Learn more](/ui/partitioning).
Select **Table Description**, and then choose one of the following provider (and model) combinations to use:
diff --git a/platform/enriching/table-to-html.mdx b/ui/enriching/table-to-html.mdx
similarity index 96%
rename from platform/enriching/table-to-html.mdx
rename to ui/enriching/table-to-html.mdx
index ca9d8ca7..ab4210e2 100644
--- a/platform/enriching/table-to-html.mdx
+++ b/ui/enriching/table-to-html.mdx
@@ -65,7 +65,7 @@ Line breaks have been inserted here for readability. The output will not contain
To generate table-to-HTML output, in the **Task** drop-down list of an **Enrichment** node in a workflow, select **Table to HTML**.
- You can change a workflow's table description settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+ You can change a workflow's table description settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
- Table-to-HTML output is generated only when the **Partitioner** node in a workflow is set to use the **High Res** partitioning strategy. [Learn more](/platform/partitioning).
+ Table-to-HTML output is generated only when the **Partitioner** node in a workflow is set to use the **High Res** partitioning strategy. [Learn more](/ui/partitioning).
\ No newline at end of file
diff --git a/platform/jobs.mdx b/ui/jobs.mdx
similarity index 61%
rename from platform/jobs.mdx
rename to ui/jobs.mdx
index 754a37cd..c3d235b8 100644
--- a/platform/jobs.mdx
+++ b/ui/jobs.mdx
@@ -3,7 +3,7 @@ title: Jobs
---
## Jobs dashboard
-
+
To view the jobs dashboard, on the sidebar, click **Jobs**.
@@ -13,18 +13,18 @@ The jobs dashboard lists each job and its associated **Status**, **Job ID**, **C
Each job's status, shown in the **Status** column, can be:
- **Pending**: The job's data is currently not attempting to be processed.
- **In Progress**: The job's data is attempting to be processed.
- **Finished**: 100% of the job's data has been successfully processed.
- **Finished**: 90% to 99% of the job's data has been sucessfully processed.
- **Failed**: Less than 90% of the job's data has been successfully processed.
+ **Pending**: The job's data is currently not attempting to be processed.
+ **In Progress**: The job's data is attempting to be processed.
+ **Finished**: 100% of the job's data has been successfully processed.
+ **Finished**: 90% to 99% of the job's data has been successfully processed.
+ **Failed**: Less than 90% of the job's data has been successfully processed.
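The thresholds above reduce to a simple rule once a job has finished running. A sketch (Pending and In Progress depend on run state, so they are left out):

```python
def completed_job_status(succeeded: int, total: int) -> str:
    """Map a completed job's success rate to the dashboard status described
    above: >= 90% successfully processed -> Finished, otherwise Failed."""
    if total <= 0:
        raise ValueError("total must be positive")
    return "Finished" if succeeded / total >= 0.9 else "Failed"
```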
## Run a job
You must first have an existing workflow to run a job against.
- If you do not have an existing workflow, stop. [Create a workflow](/platform/workflows#create-a-workflow), and then return here.
+ If you do not have an existing workflow, stop. [Create a workflow](/ui/workflows#create-a-workflow), and then return here.
To see your existing workflows, on the sidebar, click **Workflows**.
@@ -33,7 +33,7 @@ To run a job, on the sidebar, click **Workflows**, and then click **Run** in the
## Monitor a job
-
+
The job details pane is a comprehensive section for monitoring the specific details of jobs executed within a particular workflow. To access this pane, click the specific job on the jobs dashboard.
diff --git a/platform/overview.mdx b/ui/overview.mdx
similarity index 54%
rename from platform/overview.mdx
rename to ui/overview.mdx
index 377d2fc5..50b99418 100644
--- a/platform/overview.mdx
+++ b/ui/overview.mdx
@@ -2,15 +2,15 @@
title: Overview
---
-The Unstructured Platform user interface (UI) is a no-code user interface, pay-as-you-go platform for transforming your unstructured data into data that is ready for Retrieval Augmented Generation (RAG).
+The Unstructured user interface (UI) is a no-code, pay-as-you-go platform for transforming your unstructured data into data that is ready for Retrieval Augmented Generation (RAG).
-To start using the Unstructured Platform UI right away, skip ahead to the [quickstart](/platform/quickstart).
+To start using the Unstructured UI right away, skip ahead to the [quickstart](/ui/quickstart).
-Here is a screenshot of the Unstructured Platform UI **Start** page:
+Here is a screenshot of the Unstructured UI **Start** page:
-
+
-This 90-second video provides a brief overview of the Unstructured Platform UI:
+This 90-second video provides a brief overview of the Unstructured UI:
- The Unstructured Platform offers multiple [source connectors](/platform/sources/overview) to connect to your data in its existing location.
+ Unstructured offers multiple [source connectors](/ui/sources/overview) to connect to your data in its existing location.
- Routing determines which strategy Unstructured Platform uses to transform your documents into Unstructured's canonical JSON schema. The Unstructured Platform provides four [partitioning](/platform/partitioning) strategies for document transformation, as follows.
+ Routing determines which strategy Unstructured uses to transform your documents into Unstructured's canonical JSON schema. Unstructured provides four [partitioning](/ui/partitioning) strategies for document transformation, as follows.
- Your source document is transformed into Unstructured's canonical JSON schema. Regardless of the input document, this JSON schema gives you a [standardized output](/platform/document-elements). It contains more than 20 elements, such as `Header`, `Footer`, `Title`, `NarrativeText`, `Table`, `Image`, and many more. Each document is wrapped in extensive metadata so you can understand languages, file types, sources, hierarchies, and much more.
+ Your source document is transformed into Unstructured's canonical JSON schema. Regardless of the input document, this JSON schema gives you a [standardized output](/ui/document-elements). It contains more than 20 elements, such as `Header`, `Footer`, `Title`, `NarrativeText`, `Table`, `Image`, and many more. Each document is wrapped in extensive metadata so you can understand languages, file types, sources, hierarchies, and much more.
- The Unstructured Platform provides these [chunking](/platform/chunking) strategies:
+ Unstructured provides these [chunking](/ui/chunking) strategies:
- **Basic** combines sequential elements up to specified size limits. Oversized elements are split, while tables are isolated and divided if necessary. Overlap between chunks is optional.
- **By Title** uses semantic chunking, understands the layout of the document, and makes intelligent splits.
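As a mental model only (not Unstructured's implementation), the **Basic** behavior of combining sequential elements up to a size limit, while isolating and splitting oversized ones, can be sketched as:

```python
def basic_chunk(texts, max_characters=500):
    """Combine sequential text elements until adding the next one would
    exceed max_characters; elements over the limit are split on their own."""
    chunks, current = [], ""
    for text in texts:
        # Split any single element that is itself over the limit.
        while len(text) > max_characters:
            if current:
                chunks.append(current)
                current = ""
            chunks.append(text[:max_characters])
            text = text[max_characters:]
        if current and len(current) + 1 + len(text) > max_characters:
            chunks.append(current)
            current = text
        else:
            current = (current + " " + text).strip() if current else text
    if current:
        chunks.append(current)
    return chunks
```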
@@ -61,14 +61,14 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
Images and tables can be optionally summarized. This generates enriched content around the images or tables that were parsed during the transformation process.
- The Unstructured Platform uses optional third-party [embedding](/platform/embedding) providers such as OpenAI.
+ Unstructured uses optional third-party [embedding](/ui/embedding) providers such as OpenAI.
- The Unstructured Platform offers multiple [destination connectors](/platform/destinations/overview), including all major vector databases.
+ Unstructured offers multiple [destination connectors](/ui/destinations/overview), including all major vector databases.
-To simplify this process and provide it as a no-code solution, the Unstructured Platform brings together these key concepts:
+To simplify this process and provide it as a no-code solution, Unstructured brings together these key concepts:
```mermaid
flowchart LR
@@ -85,16 +85,16 @@ flowchart LR
- [Source connectors](/platform/sources/overview) to ingest your data into the Unstructured Platform for transformation.
+ [Source connectors](/ui/sources/overview) to ingest your data into Unstructured for transformation.
- [Destination connectors](/platform/destinations/overview) tell the Unstructured Platform where to write your transformed data to.
+ [Destination connectors](/ui/destinations/overview) tell Unstructured where to write your transformed data to.
- A [workflow](/platform/workflows) connects sources to destinations and provide chunking, embedding, and scheduling options.
+ A [workflow](/ui/workflows) connects sources to destinations and provides chunking, embedding, and scheduling options.
- [Jobs](/platform/jobs) enable you to monitor data transformation progress.
+ [Jobs](/ui/jobs) enable you to monitor data transformation progress.
@@ -104,7 +104,7 @@ The platform is designed for global reach with SOC2 Type 1, SOC2 Type 2, and HIP
## How do I get started?
-Skip ahead to the [quickstart](/platform/quickstart).
+Skip ahead to the [quickstart](/ui/quickstart).
## How do I get help?
diff --git a/platform/partitioning.mdx b/ui/partitioning.mdx
similarity index 81%
rename from platform/partitioning.mdx
rename to ui/partitioning.mdx
index 51084787..8f8447e3 100644
--- a/platform/partitioning.mdx
+++ b/ui/partitioning.mdx
@@ -2,9 +2,9 @@
title: Partitioning
---
-_Partitioning_ extracts content from raw unstructured files and outputs that content as structured [document elements](/platform/document-elements).
+_Partitioning_ extracts content from raw unstructured files and outputs that content as structured [document elements](/ui/document-elements).
-For specific file types, such as image files and PDF files, the Unstructured Platform offers special strategies to partition them. Each of these
+For specific file types, such as image files and PDF files, Unstructured offers special strategies to partition them. Each of these
strategies has trade-offs for output speed, cost to output, and quality of output.
PDF files, for example, vary in quality and complexity. In simple cases, traditional natural language processing (NLP) extraction techniques might
@@ -17,7 +17,7 @@ For example, the **Fast** strategy can be about 100 times faster than leading im
To choose one of these strategies, select one of the following four **Partition Strategy** options in the **Partitioner** node of a workflow.
-You can change a workflow's preconfigured strategy only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+You can change a workflow's preconfigured strategy only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
import PlatformPartitioningStrategies from '/snippets/general-shared-text/platform-partitioning-strategies.mdx';
diff --git a/platform/quickstart.mdx b/ui/quickstart.mdx
similarity index 75%
rename from platform/quickstart.mdx
rename to ui/quickstart.mdx
index f988b786..5fd5a9f5 100644
--- a/platform/quickstart.mdx
+++ b/ui/quickstart.mdx
@@ -1,5 +1,5 @@
---
-title: Unstructured Platform quickstart
+title: Unstructured UI quickstart
sidebarTitle: Quickstart
---
diff --git a/platform/sources/azure-blob-storage.mdx b/ui/sources/azure-blob-storage.mdx
similarity index 100%
rename from platform/sources/azure-blob-storage.mdx
rename to ui/sources/azure-blob-storage.mdx
diff --git a/platform/sources/box.mdx b/ui/sources/box.mdx
similarity index 100%
rename from platform/sources/box.mdx
rename to ui/sources/box.mdx
diff --git a/platform/sources/confluence.mdx b/ui/sources/confluence.mdx
similarity index 100%
rename from platform/sources/confluence.mdx
rename to ui/sources/confluence.mdx
diff --git a/platform/sources/couchbase.mdx b/ui/sources/couchbase.mdx
similarity index 100%
rename from platform/sources/couchbase.mdx
rename to ui/sources/couchbase.mdx
diff --git a/platform/sources/databricks-volumes.mdx b/ui/sources/databricks-volumes.mdx
similarity index 100%
rename from platform/sources/databricks-volumes.mdx
rename to ui/sources/databricks-volumes.mdx
diff --git a/platform/sources/dropbox.mdx b/ui/sources/dropbox.mdx
similarity index 100%
rename from platform/sources/dropbox.mdx
rename to ui/sources/dropbox.mdx
diff --git a/platform/sources/elasticsearch.mdx b/ui/sources/elasticsearch.mdx
similarity index 100%
rename from platform/sources/elasticsearch.mdx
rename to ui/sources/elasticsearch.mdx
diff --git a/platform/sources/google-cloud.mdx b/ui/sources/google-cloud.mdx
similarity index 100%
rename from platform/sources/google-cloud.mdx
rename to ui/sources/google-cloud.mdx
diff --git a/platform/sources/google-drive.mdx b/ui/sources/google-drive.mdx
similarity index 100%
rename from platform/sources/google-drive.mdx
rename to ui/sources/google-drive.mdx
diff --git a/platform/sources/kafka.mdx b/ui/sources/kafka.mdx
similarity index 100%
rename from platform/sources/kafka.mdx
rename to ui/sources/kafka.mdx
diff --git a/platform/sources/mongodb.mdx b/ui/sources/mongodb.mdx
similarity index 100%
rename from platform/sources/mongodb.mdx
rename to ui/sources/mongodb.mdx
diff --git a/platform/sources/onedrive.mdx b/ui/sources/onedrive.mdx
similarity index 100%
rename from platform/sources/onedrive.mdx
rename to ui/sources/onedrive.mdx
diff --git a/platform/sources/opensearch.mdx b/ui/sources/opensearch.mdx
similarity index 100%
rename from platform/sources/opensearch.mdx
rename to ui/sources/opensearch.mdx
diff --git a/platform/sources/outlook.mdx b/ui/sources/outlook.mdx
similarity index 100%
rename from platform/sources/outlook.mdx
rename to ui/sources/outlook.mdx
diff --git a/platform/sources/overview.mdx b/ui/sources/overview.mdx
similarity index 52%
rename from platform/sources/overview.mdx
rename to ui/sources/overview.mdx
index 14b7ae17..0d0d5703 100644
--- a/platform/sources/overview.mdx
+++ b/ui/sources/overview.mdx
@@ -4,7 +4,7 @@ description: Source connectors are essential components in data integration syst
---
-
+
To see your existing source connectors, on the sidebar, click **Connectors**, and then click **Sources**.
@@ -18,24 +18,24 @@ To create a source connector:
6. Click **Continue**.
7. Fill in the fields according to your connector type. To learn how, click your connector type in the following list:
- - [Azure](/platform/sources/azure-blob-storage)
- - [Box](/platform/sources/box)
- - [Confluence](/platform/sources/confluence)
- - [Couchbase](/platform/sources/couchbase)
- - [Databricks Volumes](/platform/sources/databricks-volumes)
- - [Dropbox](/platform/sources/dropbox)
- - [Elasticsearch](/platform/sources/elasticsearch)
- - [Google Cloud Storage](/platform/sources/google-cloud)
- - [Google Drive](/platform/sources/google-drive)
- - [Kafka](/platform/sources/kafka)
- - [MongoDB](/platform/sources/mongodb)
- - [OneDrive](/platform/sources/onedrive)
- - [Outlook](/platform/sources/outlook)
- - [PostgreSQL](/platform/sources/postgresql)
- - [S3](/platform/sources/s3)
- - [Salesforce](/platform/sources/salesforce)
- - [SharePoint](/platform/sources/sharepoint)
- - [Snowflake](/platform/sources/snowflake)
+ - [Azure](/ui/sources/azure-blob-storage)
+ - [Box](/ui/sources/box)
+ - [Confluence](/ui/sources/confluence)
+ - [Couchbase](/ui/sources/couchbase)
+ - [Databricks Volumes](/ui/sources/databricks-volumes)
+ - [Dropbox](/ui/sources/dropbox)
+ - [Elasticsearch](/ui/sources/elasticsearch)
+ - [Google Cloud Storage](/ui/sources/google-cloud)
+ - [Google Drive](/ui/sources/google-drive)
+ - [Kafka](/ui/sources/kafka)
+ - [MongoDB](/ui/sources/mongodb)
+ - [OneDrive](/ui/sources/onedrive)
+ - [Outlook](/ui/sources/outlook)
+ - [PostgreSQL](/ui/sources/postgresql)
+ - [S3](/ui/sources/s3)
+ - [Salesforce](/ui/sources/salesforce)
+ - [SharePoint](/ui/sources/sharepoint)
+ - [Snowflake](/ui/sources/snowflake)
8. If a **Continue** button appears, click it, and fill in any additional settings fields.
9. Click **Save and Test**.
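For scripts that need to validate input against the connector types listed above, the list can be captured directly (the path segments mirror the links in step 7):

```python
# The source connector types documented above, keyed by docs path segment.
SUPPORTED_SOURCES = {
    "azure-blob-storage", "box", "confluence", "couchbase",
    "databricks-volumes", "dropbox", "elasticsearch", "google-cloud",
    "google-drive", "kafka", "mongodb", "onedrive", "outlook",
    "postgresql", "s3", "salesforce", "sharepoint", "snowflake",
}

def source_docs_path(source_type: str) -> str:
    """Return the documentation path for a supported source connector type."""
    if source_type not in SUPPORTED_SOURCES:
        raise ValueError(f"unsupported source type: {source_type!r}")
    return f"/ui/sources/{source_type}"
```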
diff --git a/platform/sources/postgresql.mdx b/ui/sources/postgresql.mdx
similarity index 100%
rename from platform/sources/postgresql.mdx
rename to ui/sources/postgresql.mdx
diff --git a/platform/sources/s3.mdx b/ui/sources/s3.mdx
similarity index 100%
rename from platform/sources/s3.mdx
rename to ui/sources/s3.mdx
diff --git a/platform/sources/salesforce.mdx b/ui/sources/salesforce.mdx
similarity index 100%
rename from platform/sources/salesforce.mdx
rename to ui/sources/salesforce.mdx
diff --git a/platform/sources/sftp-storage.mdx b/ui/sources/sftp-storage.mdx
similarity index 100%
rename from platform/sources/sftp-storage.mdx
rename to ui/sources/sftp-storage.mdx
diff --git a/platform/sources/sharepoint.mdx b/ui/sources/sharepoint.mdx
similarity index 100%
rename from platform/sources/sharepoint.mdx
rename to ui/sources/sharepoint.mdx
diff --git a/platform/sources/snowflake.mdx b/ui/sources/snowflake.mdx
similarity index 100%
rename from platform/sources/snowflake.mdx
rename to ui/sources/snowflake.mdx
diff --git a/platform/summarizing.mdx b/ui/summarizing.mdx
similarity index 98%
rename from platform/summarizing.mdx
rename to ui/summarizing.mdx
index bc21e7e9..ec607e6e 100644
--- a/platform/summarizing.mdx
+++ b/ui/summarizing.mdx
@@ -75,7 +75,7 @@ Line breaks have been inserted here for readability. The output will not contain
To summarize images or tables, in the **Task** drop-down list of an **Enrichment** node in a workflow, specify the following:
-You can change a workflow's summarization settings only through [Custom](/platform/workflows#create-a-custom-workflow) workflow settings.
+You can change a workflow's summarization settings only through [Custom](/ui/workflows#create-a-custom-workflow) workflow settings.
For image summarization, select **Image Description**, and then choose one of the following provider (and model) combinations to use:
diff --git a/platform/supported-file-types.mdx b/ui/supported-file-types.mdx
similarity index 100%
rename from platform/supported-file-types.mdx
rename to ui/supported-file-types.mdx
diff --git a/platform/workflows.mdx b/ui/workflows.mdx
similarity index 93%
rename from platform/workflows.mdx
rename to ui/workflows.mdx
index ff871385..b508244c 100644
--- a/platform/workflows.mdx
+++ b/ui/workflows.mdx
@@ -4,17 +4,17 @@ title: Workflows
## Workflows dashboard
-
+
To view the workflows dashboard, on the sidebar, click **Workflows**.
-A workflow in the Unstructured Platform is a defined sequence of processes that automate the data handling from source to destination. It allows users to configure how and when data should be ingested, processed, and stored.
+A workflow in Unstructured is a defined sequence of processes that automates data handling from source to destination. It allows users to configure how and when data should be ingested, processed, and stored.
Workflows are crucial for establishing a systematic approach to managing data flows within the platform, ensuring consistency, efficiency, and adherence to specific data processing requirements.
## Create a workflow
-The Unstructured Platform provides two types of workflow builders:
+Unstructured provides two types of workflow builders:
- [Automatic](#create-an-automatic-workflow) or **Build it For Me** workflows, which use sensible default workflow settings to enable you to get good-quality results faster.
- [Custom](#create-a-custom-workflow) or **Build it Myself** workflows, which enable you to fine-tune the workflow settings behind the scenes to get very specific results.
@@ -24,7 +24,7 @@ The Unstructured Platform provides two types of workflow builders:
You must first have an existing source connector and destination connector to add to the workflow.
- If you do not have an existing connector for either your target source (input) or destination (output) location, [create the source connector](/platform/sources/overview), [create the destination connector](/platform/destinations/overview), and then return here.
+ If you do not have an existing connector for either your target source (input) or destination (output) location, [create the source connector](/ui/sources/overview), [create the destination connector](/ui/destinations/overview), and then return here.
To see your existing connectors, on the sidebar, click **Connectors**, and then click **Sources** or **Destinations**.
@@ -63,7 +63,7 @@ By default, this workflow partitions, chunks, and generates embeddings as follow
- If the page or document has only a few tables or images with standard layouts and languages, **High Res** partitioning is used, and the page or document is billed at the **High Res** rate for processing.
- If the page or document has more than a few tables or images, **VLM** partitioning is used, and the page or document is billed at the **VLM** rate for processing.
- [Learn about partitioning strategies](/platform/partitioning).
+ [Learn about partitioning strategies](/ui/partitioning).
- **Chunker**: **Chunk by Title** strategy
@@ -76,20 +76,20 @@ By default, this workflow partitions, chunks, and generates embeddings as follow
- **Overlap**: 350
- **Overlap All**: Yes (checked)
- [Learn about chunking strategies](/platform/chunking).
+ [Learn about chunking strategies](/ui/chunking).
- **Embedder**:
- **Provider**: Azure OpenAI
- **Model**: text-embedding-3-large, with 3072 dimensions
- [Learn about embedding providers and models](/platform/embedding).
+ [Learn about embedding providers and models](/ui/embedding).
- **Enrichments**:
This workflow contains no enrichments.
- [Learn about available enrichments](/platform/enriching/overview).
+ [Learn about available enrichments](/ui/enriching/overview).
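The **Overlap** chunker setting above can be illustrated with a small sketch. This is not Unstructured's implementation, and the function name is hypothetical; it only shows the idea that each chunk is prefixed with the final characters of the chunk before it.

```python
def apply_overlap(chunks: list[str], overlap: int = 350) -> list[str]:
    """Sketch of the "Overlap: 350" default above: prefix each chunk
    with the last `overlap` characters of the previous chunk."""
    out: list[str] = []
    prev_tail = ""
    for chunk in chunks:
        out.append(prev_tail + chunk)
        prev_tail = chunk[-overlap:]  # carried into the next chunk
    return out
```

With **Overlap All** checked, this carry-over applies between every pair of adjacent chunks, which helps preserve context across chunk boundaries at the cost of some duplicated text.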
After this workflow is created, you can change any or all of its settings if you want to. This includes the workflow's
source connector, destination connector, partitioning, chunking, and embedding settings. You can also add enrichments
@@ -120,7 +120,7 @@ If you did not previously set the workflow to run on a schedule, you can [run th
You must first have an existing source connector and destination connector to add to the workflow.
- If you do not have an existing connector for either your target source (input) or destination (output) location, [create the source connector](/platform/sources/overview), [create the destination connector](/platform/destinations/overview), and then return here.
+ If you do not have an existing connector for either your target source (input) or destination (output) location, [create the source connector](/ui/sources/overview), [create the destination connector](/ui/destinations/overview), and then return here.
To see your existing connectors, on the sidebar, click **Connectors**, and then click **Sources** or **Destinations**.
@@ -130,7 +130,7 @@ If you did not previously set the workflow to run on a schedule, you can [run th
3. Click the **Build it Myself** option, and then click **Continue**.
4. In the **This workflow** pane, click the **Details** button.
- 
+ 
5. Next to **Name**, click the pencil icon, enter some unique name for this workflow, and then click the check mark icon.
6. If you want this workflow to run on a schedule, click the **Schedule** button. In the **Repeat Run** dropdown list, select one of the scheduling options, and fill in the scheduling settings.
@@ -170,12 +170,12 @@ If you did not previously set the workflow to run on a schedule, you can [run th
9. In the pipeline designer, click the **Source** node. In the **Source** pane, select the source location. Then click **Save**.
- 
+ 
10. Click the **Destination** node. In the **Destination** pane, select the destination location. Then click **Save**.
11. As needed, add more nodes by clicking the plus icon (recommended) or **Add Node** button:
- 
+ 
- Click **Connect** to add another **Source** or **Destination** node. You can add multiple source and destination locations. Files will be ingested from all of the source locations, and the processed data will be delivered to all of the destination locations. [Learn more](#custom-workflow-node-types).
- Click **Enrich** to add a **Chunker** or **Enrichment** node. [Learn more](#custom-workflow-node-types).
@@ -185,7 +185,7 @@ If you did not previously set the workflow to run on a schedule, you can [run th
Make sure to add nodes in the correct order. If you are unsure, see the usage hints in the blue note that appears
in the node's settings pane.
- 
+ 
To edit a node, click that node, and then change its settings.
@@ -234,7 +234,7 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
these files are processed. These errors typically occur when these larger PDF files have lots of tables and high-resolution images.
- [Learn more](/platform/partitioning).
+ [Learn more](/ui/partitioning).
For **Chunkers**, select one of the following:
@@ -273,7 +273,7 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
Learn more:
- - [Chunking overview](/platform/chunking)
+ - [Chunking overview](/ui/chunking)
- [Chunking for RAG: best practices](https://unstructured.io/blog/chunking-for-rag-best-practices)
@@ -287,7 +287,7 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
- **Amazon Bedrock (Claude 3.5 Sonnet)**. [Learn more](https://aws.amazon.com/bedrock/claude/).
- **Vertex AI (Gemini 2.0 Flash)**. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2).
- [Learn more](/platform/enriching/image-descriptions).
+ [Learn more](/ui/enriching/image-descriptions).
- **Table Description** to summarize tables. Also select one of the following provider (and model) combinations to use:
@@ -296,13 +296,13 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
- **Amazon Bedrock (Claude 3.5 Sonnet)**. [Learn more](https://aws.amazon.com/bedrock/claude/).
- **Vertex AI (Gemini 2.0 Flash)**. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2).
- [Learn more](/platform/enriching/table-descriptions).
+ [Learn more](/ui/enriching/table-descriptions).
- **Table to HTML** to convert tables to HTML. Also select one of the following provider (and model) combinations to use:
- **OpenAI (GPT-4o)**. [Learn more](https://openai.com/index/hello-gpt-4o/).
- [Learn more](/platform/enriching/table-to-html).
+ [Learn more](/ui/enriching/table-to-html).
@@ -347,7 +347,7 @@ import PlatformPartitioningStrategies from '/snippets/general-shared-text/platfo
Learn more:
- - [Embedding overview](/platform/embedding)
+ - [Embedding overview](/ui/embedding)
- [Understanding embedding models: make an informed choice for your RAG](https://unstructured.io/blog/understanding-embedding-models-make-an-informed-choice-for-your-rag).
diff --git a/welcome.mdx b/welcome.mdx
index 5345e50c..7add630d 100644
--- a/welcome.mdx
+++ b/welcome.mdx
@@ -29,17 +29,17 @@ This 40-second video demonstrates a simple use case that Unstructured helps solv
allowfullscreen
>
-Unstructured offers the Unstructured Platform user interface (UI) and the Unstructured Platform API. Read on to learn more.
+Unstructured offers the Unstructured user interface (UI) and the Unstructured API. Read on to learn more.
-## Unstructured Platform user interface (UI)
+## Unstructured user interface (UI)
-No-code UI. Production-ready. Pay as you go. [Learn more](/platform/overview).
+No-code UI. Production-ready. Pay as you go. [Learn more](/ui/overview).
-Here is a screenshot of the Unstructured Platform UI **Start** page:
+Here is a screenshot of the Unstructured UI **Start** page:
-
+
-This 90-second video provides a brief overview of the Unstructured Platform UI:
+This 90-second video provides a brief overview of the Unstructured UI:
-## Unstructured Platform API
+## Unstructured API
-Use scripts or code. Production-ready. Pay as you go. [Learn more](/platform-api/overview).
+Use scripts or code. Production-ready. Pay as you go. [Learn more](/api/overview).
-The Unstructured Platform API consists of two parts:
+The Unstructured API consists of two parts:
-- The [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) enables a full range of partitioning, chunking, embedding, and
+- The [Unstructured Workflow Endpoint](/api/overview) enables a full range of partitioning, chunking, embedding, and
enrichment options for your files and data. It is designed to batch-process files and data in remote locations; send processed results to
various storage, databases, and vector stores; and use the latest and highest-performing models on the market today. It has built-in logic
- to deliver the highest quality results at the lowest cost. [Learn more](/platform-api/api/overview).
-- The [Unstructured Platform Partition Endpoint](platform-api/partition-api/overview) is intended for rapid prototyping of Unstructured's
+ to deliver the highest quality results at the lowest cost. [Learn more](/api/overview).
+- The [Unstructured Partition Endpoint](/api/partition/overview) is intended for rapid prototyping of Unstructured's
various partitioning strategies, with limited support for chunking. It is designed to work only with processing of local files, one file
- at a time. Use the [Unstructured Platform Workflow Endpoint](/platform-api/api/overview) for production-level scenarios, file processing in
+ at a time. Use the [Unstructured Workflow Endpoint](/api/overview) for production-level scenarios, file processing in
batches, files and data in remote locations, generating embeddings, applying post-transform enrichments, using the latest and
- highest-performing models, and for the highest quality results at the lowest cost. [Learn more](/platform-api/partition-api/overview).
+ highest-performing models, and for the highest quality results at the lowest cost. [Learn more](/api/partition/overview).
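A call to the Partition Endpoint for a single local file can be sketched with Python's `requests` library. This is a minimal sketch under stated assumptions: the default URL, the `unstructured-api-key` header, and the `strategy` form field follow the patterns shown elsewhere in these docs, and the `element_texts` helper is a hypothetical convenience for working with the returned element dicts.

```python
import os

def partition_file(path: str) -> list[dict]:
    """Send one local file to the Partition Endpoint and return the
    list of element dicts from the JSON response."""
    import requests  # third-party; pip install requests

    with open(path, "rb") as f:
        resp = requests.post(
            os.environ.get(
                "UNSTRUCTURED_API_URL",
                "https://api.unstructuredapp.io/general/v0/general",
            ),
            headers={"unstructured-api-key": os.environ["UNSTRUCTURED_API_KEY"]},
            files={"files": f},
            data={"strategy": "auto"},
        )
    resp.raise_for_status()
    return resp.json()

def element_texts(elements: list[dict], element_type: str) -> list[str]:
    """Pure helper (hypothetical): pull the text of every element of
    one type from a partition response."""
    return [el["text"] for el in elements if el.get("type") == element_type]
```

For example, `element_texts(partition_file("report.pdf"), "NarrativeText")` would collect the narrative text from the processed file.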
-Here is a screenshot of some Python code that calls the Unstructured Platform Workflow Endpoint:
+Here is a screenshot of some Python code that calls the Unstructured Workflow Endpoint:
-
+
-To start using the Unstructured Platform Workflow Endpoint right away, skip ahead to the [quickstart](#quickstart-unstructured-platform-endpoint).
+To start using the Unstructured Workflow Endpoint right away, skip ahead to the [quickstart](#unstructured-workflow-endpoint-quickstart).
---
@@ -85,26 +85,26 @@ import SupportedFileTypes from '/snippets/general-shared-text/supported-file-typ
---
-## Quickstart: Unstructured Platform UI
+## Unstructured UI quickstart
import SharedPlatformUI from '/snippets/quickstarts/platform.mdx';
-[Learn more about the Unstructured Platform UI](/platform/overview).
+[Learn more about the Unstructured UI](/ui/overview).
---
import LocalToLocalPythonIngestLibrary from '/snippets/ingestion/local-to-local.v2.py.mdx';
import AdditionalIngestDependencies from '/snippets/general-shared-text/ingest-dependencies.mdx';
-## Quickstart: Unstructured Platform Workflow Endpoint
+## Unstructured Workflow Endpoint quickstart
import SharedPlatformAPI from '/snippets/quickstarts/platform-api.mdx';
-[Learn more about the Unstructured Platform API](/platform-api/overview).
+[Learn more about the Unstructured API](/api/overview).
---