
nlp-analysis-demo

The purpose of this demo is to build a stack that uses Amazon Comprehend and Amazon Textract to analyze unstructured data and generate insights and trends from it.

Overview

In this demonstration we are going to build a stack that extracts text from a PDF document uploaded to Amazon S3, then runs Amazon Comprehend against the extracted text to aggregate entities and generate insights using the start_entities_detection_job API call.

This demo was tested in us-east-1 with the pt (Portuguese) language code.
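A rough CLI equivalent of that call (the repository's Lambda uses the boto3 form; the role ARN and S3 prefixes below are illustrative placeholders, not values taken from this repo):

# Illustrative only: the deployed Lambda supplies the real role and prefixes.
aws comprehend start-entities-detection-job \
    --job-name nlp-demo-entities \
    --language-code pt \
    --data-access-role-arn arn:aws:iam::<ACCOUNT_ID>:role/<COMPREHEND_DATA_ACCESS_ROLE> \
    --input-data-config S3Uri=s3://<BUCKET_NAME>/comprehend/input/,InputFormat=ONE_DOC_PER_FILE \
    --output-data-config S3Uri=s3://<BUCKET_NAME>/comprehend/output/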

Prerequisites

Architecture Diagram

Setup instructions

First of all, we need to set up the foundation for our solution, which consists of creating the bucket to store our Lambda code and the ECR repository to store our worker Docker image.

A script was developed to help with that task; simply run:

./setup.sh

The output will look like the following:

Starting environment setup...


Creating ECR Repository...
Created ECR Repository: xxxxx.dkr.ecr.us-east-1.amazonaws.com/ai-comprehend-ml

Creating S3 Bucket...
Created S3 Bucket: lambdacode-sadasd

File zipped: lambda_comprehend.zip
File zipped: lambda_textract.zip
File zipped: lambda_data_wrangler.zip

Building docker image and pushing to ecr...

Uploading all required files to S3...

"Information that will be used in CloudFormation:":
BucketLambdaCode: lambdacode-sadasd
ImageUrl: xxxx.dkr.ecr.us-east-1.amazonaws.com/ai-comprehend-ml:latest

We are going to use the BucketLambdaCode and ImageUrl values later in the demonstration.
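For reference, the foundation that setup.sh automates boils down to creating those two resources. The equivalent CLI calls, with the bucket name as a placeholder (the script generates its own random suffix):

# Repository for the worker Docker image
aws ecr create-repository --repository-name ai-comprehend-ml

# Bucket for the zipped Lambda code
aws s3 mb s3://<BUCKET_LAMBDA_CODE>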

CloudFormation

In this repository we have two CloudFormation Templates that we are going to use to provision the stack.

Serverless Stack Template:

aws cloudformation create-stack --stack-name serverless-npl-stack --template-body file://cloudformation/serverless-stack.yaml --parameters ParameterKey=BucketName,ParameterValue=<BUCKET_NAME> ParameterKey=BucketLambdaCode,ParameterValue=<BUCKET_LAMBDA_CODE> ParameterKey=LanguageCode,ParameterValue=pt --capabilities CAPABILITY_IAM

Values to be replaced:

<BUCKET_NAME> - Name of the S3 bucket that the stack will create.

<BUCKET_LAMBDA_CODE> - Name of the bucket created by the setup.sh script (the BucketLambdaCode output).
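Stack creation takes a few minutes; to block until it finishes before moving on (a standard CloudFormation waiter, not something this repo provides), you can run:

aws cloudformation wait stack-create-complete --stack-name serverless-npl-stack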

ECS Worker Stack Template:

aws cloudformation create-stack --stack-name ecs-npl-stack --template-body file://cloudformation/ecs-stack.yaml --parameters ParameterKey=ClusterName,ParameterValue=ecs-cluster-demo ParameterKey=ServiceName,ParameterValue=textract-worker ParameterKey=ImageUrl,ParameterValue=<IMAGE_URL> ParameterKey=BucketName,ParameterValue=<BUCKET_NAME> ParameterKey=QueueName,ParameterValue=sqs_textract_messages ParameterKey=VpcId,ParameterValue=<VPC_ID> ParameterKey=VpcCidr,ParameterValue=<VPC_CIDR> ParameterKey=PubSubnet1Id,ParameterValue=<PUB_SUBNET_1_ID> ParameterKey=PubSubnet2Id,ParameterValue=<PUB_SUBNET_2_ID> --capabilities CAPABILITY_IAM

Values to be replaced:

<IMAGE_URL> - URI of the ECR image pushed by the setup.sh script (the ImageUrl output).

<BUCKET_NAME> - The same bucket name as above.

<VPC_ID> - ID of the VPC where the ECS cluster will be provisioned.

<VPC_CIDR> - CIDR block of that VPC.

<PUB_SUBNET_1_ID> - ID of the first public subnet used by the ECS cluster.

<PUB_SUBNET_2_ID> - ID of the second public subnet used by the ECS cluster.
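Once the service is running, the worker consumes the S3 event notifications delivered to sqs_textract_messages and starts asynchronous Textract jobs. The exact handler logic lives in the Docker image, but the call it issues per document is presumably equivalent to the following (placeholders as above):

# Start an asynchronous text-detection job for one uploaded PDF
aws textract start-document-text-detection \
    --document-location '{"S3Object":{"Bucket":"<BUCKET_NAME>","Name":"textract/input/<MY_PDF_FILE>"}}'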

Testing the solution

Now we need to upload a PDF file to our S3 bucket in a specific path (textract/input/) to trigger the workflow.

aws s3 cp <MY_PDF_FILE> s3://<BUCKET_NAME>/textract/input/

The uploaded file will then appear in the S3 console under textract/input/.

After that, all the components of the architecture will be triggered. The end result is a database created by AWS Glue, which we can query with AWS Athena to explore the information aggregated by our solution with Amazon Comprehend.

Access the Athena console and select the database npl_textract_comprehend.

Click on the table and select Preview Table.

The result will be the aggregation of the entities found by Amazon Comprehend (using the default entity types); check the default Comprehend entities for reference.
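Preview Table simply runs a LIMIT query; the same preview from the CLI, with the table name and a results location left as placeholders, would be:

aws athena start-query-execution \
    --query-string 'SELECT * FROM "npl_textract_comprehend"."<TABLE_NAME>" LIMIT 10;' \
    --query-execution-context Database=npl_textract_comprehend \
    --result-configuration OutputLocation=s3://<BUCKET_NAME>/athena-results/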

QuickSight (Optional)

You can also use Amazon QuickSight to create rich dashboards on top of the Athena data.

After all the setup, the dashboard you create may look like this:

Cleaning up:

  • Delete all the files inside the provisioned S3 bucket.
aws s3 rm s3://<BUCKET_NAME> --recursive
  • Delete the container image inside the ECR Repository.
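For example, assuming the ai-comprehend-ml repository created by setup.sh and the latest tag:

aws ecr batch-delete-image --repository-name ai-comprehend-ml --image-ids imageTag=latest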

  • Delete the CloudFormation stacks.

aws cloudformation delete-stack --stack-name serverless-npl-stack
aws cloudformation delete-stack --stack-name ecs-npl-stack
  • Delete the S3 bucket that we used to store the Lambda code and Lambda layer.
aws s3 rb s3://<BUCKET_LAMBDA_CODE> --force

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
