The noteless serverless demo is available online through this page and allows you to analyze pictures for certain command words ("go", "stop", "on", "off", "left", "right", "up", "down", "to"). First, you capture one or more pictures that contain some text, and then you can view the results of the analysis.
You capture pictures containing text, ideally using your phone's camera:
Once you have captured a few text fragments, you move on to the analysis stage.
You analyze by applying predefined OPA Rego rules, and if noteless recognizes a command such as up or go, it will list it:
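To make the analysis step concrete, here is a minimal sketch of how such a rule could be evaluated with OPA's Go API. The package name, the rule, and the input shape are illustrative assumptions; the actual rules in noteless may look different.

```go
package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
)

// Illustrative Rego module: flag any detected word that is a known command.
// This is an assumption, not the rule set shipped with noteless.
const module = `
package noteless

commands := {"go", "stop", "on", "off", "left", "right", "up", "down"}

detected[cmd] {
	some i
	cmd := lower(input.words[i])
	commands[cmd]
}
`

func main() {
	ctx := context.Background()

	// Compile the rule once and reuse the prepared query.
	query, err := rego.New(
		rego.Query("data.noteless.detected"),
		rego.Module("noteless.rego", module),
	).PrepareForEval(ctx)
	if err != nil {
		panic(err)
	}

	// Words as they might come back from text detection.
	input := map[string]interface{}{"words": []string{"please", "GO", "now"}}

	rs, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		panic(err)
	}
	fmt.Println(rs[0].Expressions[0].Value) // e.g. a set containing "go"
}
```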
This is a serverless end-to-end demo with an architecture as follows:
noteless uses the following serverless AWS services:
- Amazon Rekognition for detecting text in images (a minimal Lambda sketch using it follows this list)
- AWS Lambda for the capture/frontend processing
- Amazon EKS on AWS Fargate for the event-driven analytics part, with an Open Policy Agent Rego-based set of rules
- Amazon S3 for storing the captured images
- Amazon DynamoDB for persisting the detected text in JSON format
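To illustrate how these pieces fit together, below is a minimal sketch of a Lambda function that reacts to an image landing in the S3 data bucket, runs Rekognition text detection, and persists the detected words as JSON in DynamoDB. The table name noteless-detections and the item layout are assumptions for illustration; the actual functions in mhausenblas/noteless may be structured differently.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// handler is triggered for every image uploaded to the S3 data bucket:
// it runs Rekognition text detection and stores the detected words
// as a JSON document in DynamoDB.
func handler(ctx context.Context, e events.S3Event) error {
	sess := session.Must(session.NewSession())
	rek := rekognition.New(sess)
	ddb := dynamodb.New(sess)

	for _, rec := range e.Records {
		bucket, key := rec.S3.Bucket.Name, rec.S3.Object.Key

		// Let Rekognition read the image directly from S3.
		out, err := rek.DetectTextWithContext(ctx, &rekognition.DetectTextInput{
			Image: &rekognition.Image{
				S3Object: &rekognition.S3Object{
					Bucket: aws.String(bucket),
					Name:   aws.String(key),
				},
			},
		})
		if err != nil {
			return fmt.Errorf("detect text for %s/%s: %w", bucket, key, err)
		}

		// Collect word-level detections only (Rekognition also returns LINEs).
		var words []string
		for _, td := range out.TextDetections {
			if aws.StringValue(td.Type) == rekognition.TextTypesWord {
				words = append(words, aws.StringValue(td.DetectedText))
			}
		}

		doc, _ := json.Marshal(words)
		_, err = ddb.PutItemWithContext(ctx, &dynamodb.PutItemInput{
			TableName: aws.String("noteless-detections"), // assumed table name
			Item: map[string]*dynamodb.AttributeValue{
				"image": {S: aws.String(key)},
				"words": {S: aws.String(string(doc))},
			},
		})
		if err != nil {
			return fmt.Errorf("persist detections for %s: %w", key, err)
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```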
If you want to try it out yourself and deploy the demo in your own environment, the source code is available via mhausenblas/noteless. Kudos go out to Mike Rudolph for mikerudolph/aws_rekognition_demo, which served as a starting point for this demo.
First, create an S3 bucket for the Lambda code and provide it as an input for the Makefile as NOTELESS_BUCKET when you run make up. This sets up the Lambda functions, the DynamoDB table, and the S3 data bucket.
For the container part: first run create-eks-fargate-cluster.sh to set up the EKS on Fargate cluster, then create-alb.sh for the ALB Ingress controller. Finally, execute launch-backend.sh to launch the Kubernetes deployment and service. TBD: patching frontends …