This is code for an Azure Function. The v1 S3 bindings are modeled on https://github.com/lindydonna/SlackOutputBinding .
Once deployed, it creates a Function App linked to an S3 bucket and a Cosmos DB database.
This function runs every 30 seconds. It reads an image from the S3 bucket as a Stream and makes a copy of it in S3 and in Azure Blob Storage. It also calls the Cognitive Services OCR API and pushes the results to Cosmos DB. Finally, it pushes the GUID name to an Azure Storage queue.
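The pipeline above can be sketched in Python. Every argument here is a hypothetical callable standing in for one of the bindings (S3 input/output, Azure Blob output, the Cognitive Services call, Cosmos DB, and the storage queue) — this is not the actual binding API, just the order of operations:

```python
import uuid

def process_image(s3_read, s3_write, blob_write, ocr, cosmos_insert, queue_push):
    """Sketch of the timer-triggered flow described above.

    All parameters are hypothetical callables standing in for bindings:
    s3_read/s3_write for S3, blob_write for Azure Blob Storage, ocr for
    the Cognitive Services call, cosmos_insert for Cosmos DB, and
    queue_push for the Azure Storage queue.
    """
    data = s3_read("input/imagetoOCR.png")          # read the image as a stream
    name = str(uuid.uuid4()) + ".png"               # GUID-based name for the copies
    s3_write("output/" + name, data)                # copy back into S3
    blob_write(name, data)                          # copy into Azure Blob Storage
    cosmos_insert({"id": name, "ocr": ocr(data)})   # OCR results into Cosmos DB
    queue_push(name)                                # GUID name onto the storage queue
    return name
```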
Creates a pre-signed URL for any bucketName and objectKey in the S3 bucket you own
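In the sample this is presumably done through the AWS SDK's pre-signed-URL helper, but a pre-signed URL is just a SigV4-signed query string. The following standard-library-only Python sketch shows the scheme (the region and virtual-hosted URL style are assumptions, and the output is not guaranteed to match the SDK byte for byte):

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get_url(bucket, key, access_key, secret_key,
                    region="us-west-2", expires=3600):
    """Build an AWS SigV4 query-string pre-signed GET URL (stdlib only)."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted, fully URL-encoded key=value pairs.
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    # Canonical request: the signature covers only the host header here,
    # and the payload is left unsigned (standard for pre-signed GETs).
    canonical_request = "\n".join([
        "GET", f"/{key}", canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: date -> region -> service -> request.
    signing_key = _hmac(_hmac(_hmac(_hmac(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{canonical_query}&X-Amz-Signature={signature}"
```

Anyone holding the resulting URL can GET the object until the expiry elapses, without needing AWS credentials of their own.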
Reads from the Azure Storage container tos3 and makes a copy in the AWS S3 folder fromazure (all in one line)
The main limitations are:
1) Cannot trigger on S3 blob creation; S3 objects can only be used as inputs or outputs
2) Cannot use dynamic S3 path names for input; they are only available for output
From your AWS account, using these instructions: https://www.cloudberrylab.com/resources/blog/how-to-find-your-aws-access-key-id-and-secret-access-key/
From your account, or get a free one here: https://azure.microsoft.com/try/cognitive-services/
To run in Azure, deploy the ARM template found in azuredeploy.json and then fill in the app settings with the following values:
- FunctionApp Name : the name of the Function App you want to create
- AWSAccessKeyID : an access key ID with access to the AWS S3 bucket
- AWSSecretAccessKey : the secret for the AWS access key. Follow these instructions to get both values: https://www.cloudberrylab.com/resources/blog/how-to-find-your-aws-access-key-id-and-secret-access-key/
- BucketName : the name of the S3 bucket you plan to use, e.g. functions-demo
- OcpApiKey : a key from Microsoft Cognitive Services. Get yours here: https://azure.microsoft.com/try/cognitive-services/
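The OcpApiKey value is sent as the Ocp-Apim-Subscription-Key header on the Computer Vision OCR REST call. A minimal Python sketch of assembling that request follows; the region and the v2.0 API path are assumptions, so substitute the endpoint of your own Cognitive Services resource:

```python
def build_ocr_request(region, api_key, image_bytes):
    """Assemble the URL, headers, and body for a Cognitive Services OCR call.

    The region and the vision/v2.0 path are assumptions; adjust them to
    match your own Cognitive Services resource.
    """
    url = (f"https://{region}.api.cognitive.microsoft.com"
           "/vision/v2.0/ocr?language=unk&detectOrientation=true")
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,        # the OcpApiKey app setting
        "Content-Type": "application/octet-stream",  # raw image bytes in the body
    }
    return url, headers, image_bytes
```

POST those pieces with any HTTP client; the response is a JSON document describing the recognized regions, lines, and words.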
The deployment screen will look like this:
- Install Storage Explorer from https://azure.microsoft.com/en-us/features/storage-explorer/
- Add an Azure Storage account :
- Add a Cosmos DB account:
Check that you have access to the S3 bucket: https://s3.console.aws.amazon.com/s3/buckets//?region=us-west-2&tab=overview
For the demo, you can upload an image to input/imagetoOCR.png.
Here is a sample image. You can change or add text to it, or use your own PNG image.
Once this is uploaded, you can open the S3ImageOCR function in the portal and hit Run.
I've made it an HTTP trigger so it's easy to demo any changes.
The output can be seen in the portal and in Application Insights.
Once executed, you can also show the OCR output stored in Cosmos DB.
Note that you can also show the results in Storage Explorer.
A copy is also created in the S3 output/ folder and in Azure Storage:
You can also see the output of QueueTriggerOCRFromS3, which should show the same GUID read from the Azure Storage queue.
Using Storage Explorer, upload an image to the tos3 container:
Shows the execution of the "BlobTriggerToS3" function. The output file will be in the fromazure folder.
```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "",
    "BlobStorageConnectionString": "",
    "BlobStorageAccountName": "",
    "BlobStorageContainerName": "",
    "BlobStorageBlobName": "",
    "tenantId": "",
    "AzureServicesAuthConnectionString": "RunAs=App;AppId=<>;TenantId=<>;AppKey=<>",
    "AWSAccessKeyID": "",
    "AWSSecretAccessKey": "",
    "BucketName": "",
    "OcpApiKey": "",
    "CosmosDB": ""
  }
}
```