Merged
6 changes: 3 additions & 3 deletions TRANSPARENCY_FAQ.md
Original file line number Diff line number Diff line change
@@ -2,15 +2,15 @@

- ### What is the Content Processing Solution Accelerator?

This solution accelerator is an open-source GitHub Repository to extract data from unstructured documents and transform the data into defined schemas with validation to enhance the speed of downstream data ingestion and improve quality. It enables the ability to efficiently automate extraction, validation, and structuring of information for event driven system-to-system workflows. The solution is built using Azure OpenAI, Azure AI Services, Content Understanding Services, CosmosDB, and Azure Containers.
This solution accelerator is an open-source GitHub repository for extracting data from unstructured documents and transforming them into defined schemas, with validation to speed downstream data ingestion and improve quality. It efficiently automates the extraction, validation, and structuring of information for event-driven, system-to-system workflows. The solution is built using Azure OpenAI Service, Azure AI Services, Azure AI Content Understanding Service, Azure Cosmos DB, and Azure Container Apps.



- ### What can the Content Processing Solution Accelerator do?

The sample solution is tailored for a Data Analyst at a property insurance company, who analyzes large amounts of claim-related data including forms, reports, invoices, and property loss documentation. The sample data is synthetically generated utilizing Azure OpenAI and saved into related templates and files, which are unstructured documents that can be used to show the processing pipeline. Any names and other personally identifiable information in the sample data is fictitious.
The sample solution is tailored for a data analyst at a property insurance company, who analyzes large amounts of claim-related data, including forms, reports, invoices, and property loss documentation. The sample data is synthetically generated using Azure OpenAI Service and saved into related templates and files, which are unstructured documents used to demonstrate the processing pipeline. Any names and other personally identifiable information in the sample data are fictitious.

The sample solution processes the uploaded documents by exposing an API endpoint that utilizes Azure OpenAI and Content Understanding Service for extraction. The extracted data is then transformed into a specific schema output based on the content type (ex: invoice), and validates the extraction and schema mapping through accuracy scoring. The scoring enables thresholds to dictate a human-in-the-loop review of the output if needed, allowing a user to review, update, and add comments.
The sample solution processes uploaded documents by exposing an API endpoint that uses Azure OpenAI Service and Azure AI Content Understanding Service for extraction. The extracted data is then transformed into a schema output specific to the content type (e.g., invoice), and the extraction and schema mapping are validated through accuracy scoring. Score thresholds can dictate a human-in-the-loop review of the output if needed, allowing a user to review, update, and add comments.
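As a rough illustration of the threshold-driven review described above, the routing decision might look like the following sketch; the function name, score names, and the 0.8 threshold are hypothetical, not taken from the accelerator.

```python
# Hypothetical sketch of threshold-driven human-in-the-loop routing.
# The names and the 0.8 default threshold are illustrative only.
def needs_review(extraction_score: float, schema_score: float,
                 threshold: float = 0.8) -> bool:
    """Flag output for manual review when either accuracy score
    falls below the configured threshold."""
    return min(extraction_score, schema_score) < threshold

print(needs_review(0.95, 0.62))  # True: weak schema mapping triggers review
print(needs_review(0.95, 0.91))  # False: both scores clear the threshold
```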

- ### What is/are the Content Processing Solution Accelerator’s intended use(s)?

2 changes: 1 addition & 1 deletion docs/CustomizingAzdParameters.md
@@ -11,7 +11,7 @@ Set the Environment Name Prefix
azd env set AZURE_ENV_NAME 'cps'
```

Change the Content Understanding Service Location (example: eastus2, westus2, etc.)
Change the Azure AI Content Understanding Service location (example: WestUS, SwedenCentral, etc.)
```shell
azd env set AZURE_ENV_CU_LOCATION 'West US'
```
3 changes: 1 addition & 2 deletions docs/DeploymentGuide.md
@@ -8,8 +8,7 @@ Check the [Azure Products by Region](https://azure.microsoft.com/en-us/explore/g

- [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/)
- [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/)
- [Azure AI Document Intelligence](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/)
- [Azure AI Content Understanding](https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/)
- [Azure AI Content Understanding Service](https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/)
- [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/)
- [Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/)
- [Azure Container Registry](https://learn.microsoft.com/en-us/azure/container-registry/)
Binary file modified docs/Images/ReadMe/approach.png
Binary file modified docs/Images/ReadMe/solution-architecture.png
10 changes: 5 additions & 5 deletions docs/ProcessingPipelineApproach.md
@@ -10,7 +10,7 @@ At the application level, when a file is processed a number of steps take place

3. Images are extracted from individual pages and included with the markdown content in a second call to Azure OpenAI Vision, which completes a second extraction using multiple extraction prompts related to the initially selected schema.

4. These two extracted datasets are compared and use system level logs from Azure AI Content Understanding and Azure OpenAI to determine the extraction score. This score is used to determine which extraction method is the most accurate for the schema and content and sent to be transformed and structured for finalization.
4. These two extracted datasets are compared, using system-level logs from Azure AI Content Understanding and Azure OpenAI Service to determine the extraction score. This score determines which extraction method is the most accurate for the schema and content; that result is sent on to be transformed and structured for finalization.

5. The top-performing data is used to transform the content into its selected schema. The result is saved as JSON along with the final extraction and schema mapping scores. These scores can be used to initiate human-in-the-loop review, allowing manual review, updates, and annotation of changes.

@@ -21,16 +21,16 @@ At the application level, when a file is processed a number of steps take place
1. **Extract Pipeline** – Text Extraction via Azure Content Understanding.

Uses Azure Content Understanding Service to detect and extract text from images and PDFs. This service also retrieves the coordinates of each piece of text, along with confidence scores, by leveraging built-in (pretrained) models.
Uses Azure AI Content Understanding Service to detect and extract text from images and PDFs. This service also retrieves the coordinates of each piece of text, along with confidence scores, by leveraging built-in (pretrained) models.

2. **Map Pipeline** – Mapping Extracted Text with Azure OpenAI GPT-4o
2. **Map Pipeline** – Mapping Extracted Text with Azure OpenAI Service GPT-4o

Takes the extracted text (as context) and the associated document images, then applies GPT-4o’s vision capabilities to interpret the content. It maps the recognized text to a predefined entity schema, providing structured data fields and confidence scores derived from model log probabilities.

3. **Evaluate Pipeline** – Merging and Evaluating Extraction Results

Combines confidence scores from both the Extract pipeline (Azure Content Understanding) and the Map pipeline (GPT-4o). It then calculates an overall confidence level by merging and comparing these scores, ensuring accuracy and consistency in the final extracted data.
Combines confidence scores from both the Extract pipeline (Azure AI Content Understanding) and the Map pipeline (GPT-4o). It then calculates an overall confidence level by merging and comparing these scores, ensuring accuracy and consistency in the final extracted data.

4. **Save Pipeline** – Storing Results in Azure Blob Storage and Cosmos DB
4. **Save Pipeline** – Storing Results in Azure Blob Storage and Azure Cosmos DB

Aggregates all outputs from the Extract, Map, and Evaluate steps. It finalizes and saves the processed data to Azure Blob Storage for file-based retrieval and updates or creates records in Azure Cosmos DB for structured, queryable storage. Confidence scoring is captured and saved with the results for downstream use, appearing, for example, in the web UI of the processing queue. It is surfaced as an "extraction score" and a "schema score" and is used to highlight the need for human-in-the-loop review if desired.
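The Evaluate step's score merging can be sketched roughly as follows; the dataclass and the simple averaging rule are assumptions for illustration, not the accelerator's actual scoring logic.

```python
from dataclasses import dataclass

# Illustrative sketch of the Evaluate step: compare the Content Understanding
# and GPT-4o extractions, keep the higher-confidence fields, and record a
# combined score. The types and averaging rule are hypothetical stand-ins.
@dataclass
class ExtractionResult:
    fields: dict       # schema fields mapped from the document
    confidence: float  # aggregate confidence score for this extraction

def evaluate(cu: ExtractionResult, gpt: ExtractionResult) -> ExtractionResult:
    best = cu if cu.confidence >= gpt.confidence else gpt
    overall = (cu.confidence + gpt.confidence) / 2  # merged confidence level
    return ExtractionResult(best.fields, overall)

cu_result = ExtractionResult({"invoice_total": "120.00"}, 0.90)
gpt_result = ExtractionResult({"invoice_total": "120.00"}, 0.84)
final = evaluate(cu_result, gpt_result)
```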
7 changes: 3 additions & 4 deletions docs/TechnicalArchitecture.md
@@ -20,7 +20,6 @@ Using Azure Container App, this includes API end points exposed to facilitate in
### Content Process Monitor Web
Using Azure Container App, this app acts as the UI for the process monitoring queue. The app is built with React and TypeScript. It acts as an API client to create an experience for uploading new documents, monitoring current and historical processes, and reviewing output results.


### App Configuration
Using Azure App Configuration, app settings and configurations are centralized and shared by the Content Processor, Content Process API, and Content Process Monitor Web.

@@ -30,11 +29,11 @@ Using Azure Storage Queue, pipeline work steps and processing jobs are added to
### Azure AI Content Understanding Service
Used to detect and extract text from images and PDFs. This service also retrieves the coordinates of each piece of text, along with confidence scores, by leveraging built-in (pretrained) models. It uses the prebuilt-layout model (2024-12-01-preview) for extraction.

### Azure OpenAI
Using Azure OpenAI, a deployment of the GPT-4o 2024-10-01-preview model is used during the content processing pipeline to extract content. GPT Vision is used for extraction and validation functions during processing. This model can be changed to a different Azure OpenAI model if desired, but this has not been thoroughly tested and may be affected by the output token limits.
### Azure OpenAI Service
Using Azure OpenAI Service, a deployment of the GPT-4o 2024-10-01-preview model is used during the content processing pipeline to extract content. GPT Vision is used for extraction and validation functions during processing. This model can be changed to a different Azure OpenAI Service model if desired, but this has not been thoroughly tested and may be affected by the output token limits.

### Blob Storage
Schema .py files, source files for processing, and final output JSON files are stored in Azure Blob Storage.

### Cosmos DB for MongoDB
### Azure Cosmos DB for MongoDB
Using Azure Cosmos DB for MongoDB, files submitted for processing are added to the database and their processing step history is saved. The processing queue stores each process's information and history for status and processing-step review, along with the final extraction and its transformation into JSON for the selected schema.
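A processing-queue record of the kind described above might be shaped roughly like this; every field name here is a hypothetical illustration, not the accelerator's actual document schema.

```python
# Hypothetical shape of a processing-queue record in Azure Cosmos DB for
# MongoDB; all field names and values are illustrative stand-ins.
process_record = {
    "process_id": "proc-0001",
    "file_name": "claim-invoice.pdf",
    "schema": "invoice",                              # selected content type
    "status": "completed",
    "steps": ["extract", "map", "evaluate", "save"],  # processing history
    "extraction_score": 0.91,                         # shown in the web UI
    "schema_score": 0.88,                             # shown in the web UI
    "result_json_path": "results/proc-0001.json",     # Blob Storage output
}
print(process_record["status"])  # completed
```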
4 changes: 2 additions & 2 deletions infra/deploy_role_assignments.bicep
@@ -6,8 +6,8 @@ param appConfigResourceId string // Resource ID of the App Configuration instanc
param storageResourceId string // Resource ID of the Storage account
param storagePrincipalId string // Principal ID used for the Storage account role assignment

param aiServiceCUId string // Resource ID of the Content Understanding Service
param aiServiceId string // Resource ID of the Open AI service
param aiServiceCUId string // Resource ID of the Azure AI Content Understanding Service
param aiServiceId string // Resource ID of the Azure OpenAI service

param containerRegistryReaderPrincipalId string

4 changes: 2 additions & 2 deletions infra/main.bicep
@@ -9,11 +9,11 @@ param environmentName string
var uniqueId = toLower(uniqueString(subscription().id, environmentName, resourceGroup().location))
var solutionPrefix = 'cps-${padLeft(take(uniqueId, 12), 12, '0')}'

@description('Location used for Cosmos DB, Container App deployment')
@description('Location used for Azure Cosmos DB, Azure Container App deployment')
param secondaryLocation string = 'EastUs2'

@minLength(1)
@description('Location for the Content Understanding service deployment:')
@description('Location for the Azure AI Content Understanding service deployment:')
@allowed(['WestUS', 'SwedenCentral', 'AustraliaEast'])
@metadata({
azd: {