

Getting Started

This demo shows three generative AI use cases integrated into a single solution on a Miro board. It turns Python notebooks into a dynamic, interactive experience in which several team members can brainstorm, explore, and exchange ideas, empowered by privately hosted SageMaker generative AI models. The demo can easily be extended with new use cases to demonstrate new concepts and solutions.

Usage instructions:

| Use case | Details |
| --- | --- |
| 1. Image generation | To generate a new image: select one or several yellow sticky notes with prompts, then run the app. |
| 2. Image inpainting | To transform part of an image: mark the area to change with a round shape, add a sticky note with the change prompt, connect the image and the sticky note, then select all four items and run the app. |
| 3. Image transformation | To transform an image: select an image and a sticky note with the transformation prompt connected by a line, then run the app. |

Start with brainstorming, then develop your visual idea step by step.

💡 Tip: you can use the resulting image from a previous step as the input for the next.


Architecture overview

  • The Miro application runs on the board. It is loaded from an S3 bucket, accessed via a CloudFront distribution, and written in TypeScript.
  • Authorization and AIML proxy Lambdas. These are accessed via API Gateway deployed behind CloudFront and are written in Python.
    • The authorization function authorize grants access to the backend functions only to the authorized Miro application. It protects organization data and generated content in the AWS account.
    • The AIML proxy function mlInference handles API calls from the application and redirects them to the correct SageMaker inference endpoint. It can also be used for more complex use cases in which several AIML functions work together.
  • SageMaker inference endpoints. These run the inference.
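
The proxy pattern described above can be sketched as a small Lambda handler that maps a use-case name from the request to a SageMaker endpoint and forwards the payload. Everything here is illustrative, not the demo's actual code: the `USE_CASE_ENDPOINTS` mapping, the `use_case` field, and the endpoint names are all assumptions.

```python
import json

# Hypothetical mapping of use-case names to SageMaker endpoint names;
# the real demo derives these from its own deployment configuration.
USE_CASE_ENDPOINTS = {
    "create_image": "sd-2-1-txt2img-endpoint",
    "inpaint_image": "sd-2-inpainting-endpoint",
    "modify_image": "instruct-pix2pix-endpoint",
}

def resolve_endpoint(use_case: str) -> str:
    """Return the SageMaker endpoint name for a use case; raise on unknown ones."""
    try:
        return USE_CASE_ENDPOINTS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case}")

def handler(event, context):
    """Sketch of an mlInference-style proxy: route the request body to the
    chosen endpoint via the SageMaker runtime API."""
    body = json.loads(event["body"])
    endpoint = resolve_endpoint(body["use_case"])
    import boto3  # imported lazily so the routing logic is testable offline
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint,
        ContentType="application/json",
        Body=json.dumps(body["payload"]),
    )
    return {"statusCode": 200, "body": response["Body"].read().decode("utf-8")}
```

Keeping the routing in a separate function means new use cases only need a new mapping entry and a matching endpoint.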


The demo can be extended in two ways:

  1. by adding new AIML use cases;
  2. by changing or extending the interface on the Miro board or the web interface.

In both cases the existing environment can be used as a boilerplate. More details here



Prerequisites

  1. AWS account with permissions to create the required resources
  2. AWS CLI installed and configured
  3. NodeJS installed
  4. NPM installed
  5. AWS CDK installed (min. version 2.94.x is required)
  6. Docker installed

To set up the Generative AI demo in your AWS account, follow these steps:

Start by creating the Miro application

1. Familiarize yourself with Miro's Developer Platform:

Visit the Miro Developer Platform documentation to learn about the available APIs, SDKs, and other resources that can help you build your app.

2. Create a Miro Developer Team

💡 If you already have a Miro Developer Team in your account, skip this step.

Build App

3. Go to the Miro App management Dashboard and click "Create new app".

Fill in the necessary information about your app, such as its name, and select the Developer team. Note: you don't need to check the "Expire user authorization token" checkbox. Click "Create app" to create your app.


4. Copy the client secret from the app creation page


Now everything is ready to set up the backend

5. Configure CLI access to the AWS account via a profile or environment variables

👇 Demo operator user/role policies (the steps below were developed and tested in Cloud9 and SageMaker with a role holding the following policies):
IAMFullAccess, AmazonS3FullAccess, AmazonSSMFullAccess, CloudWatchLogsFullAccess,
CloudFrontFullAccess, AmazonAPIGatewayAdministrator, AWSCloudFormationFullAccess, 
AWSLambda_FullAccess, AmazonSageMakerFullAccess

6. Bootstrap CDK stack in the target account:

    cdk bootstrap aws://<account_id>/<region>

7. Docker buildx is required to build Graviton2 Lambda images on an x86 platform.

It can either come with the Docker Desktop package (no need for steps 7.1 and 7.2 in that case) or be installed separately (the steps below were developed and tested on AWS Cloud9):

  1. Binary installation manual

  2. On an x86 platform, enable the multi-arch build capability by launching

    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

8. Configure the Miro application client secret

Edit deployment-config.json to authorize the Miro application to access the backend. Find the corresponding parameter and set its value to the secret string you received in step 4.
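
The client secret configured here is what the authorization function checks on incoming requests. A minimal sketch of such a check, assuming the secret arrives in a request header; the header name and function below are illustrative, not the demo's actual code:

```python
import hmac

def is_authorized(request_headers: dict, expected_secret: str) -> bool:
    """Compare a client-supplied secret against the configured one in
    constant time, so string comparison timing leaks nothing."""
    # "x-miro-client-secret" is a hypothetical header name for this sketch.
    supplied = request_headers.get("x-miro-client-secret", "")
    return hmac.compare_digest(supplied.encode(), expected_secret.encode())
```

Using hmac.compare_digest rather than == is the standard way to compare secrets without creating a timing side channel.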


9. Deploy the backend

Run npm run deploy from the project root folder. You will be asked to provide the application client secret to continue the installation. When the installation completes, all the necessary resources are deployed as the CloudFormation DeployStack in the target account. Write down the CloudFront HTTPS distribution address:

    DeployStack.DistributionOutput =

10. Return to the Miro application creation dialog to complete the app configuration

Enter the CloudFront URL that you obtained in the previous step.


11. Add the necessary permissions.


12. Install the app to the team.


Back in Miro, click the "More apps" icon on the application bar, find your newly installed app in the list, and start working.


SageMaker endpoints

You need to run a dedicated SageMaker endpoint for each use case. Each use case is supported by a separate Jupyter notebook in the ./ml_services/<use_case> subdirectory:

  • 1-create_image: image generation (Stable Diffusion 2.1), based on this example
  • 2-inpaint_image: image inpainting (Stable Diffusion 2 Inpainting fp16), based on this example
  • 3-modify_image: image pix2pix modification (Hugging Face Instruct pix2pix), based on this example

💡 These steps were developed and tested in a SageMaker notebook. For cases 1 and 2 you can also use any other way to run the referenced JumpStart models, e.g. SageMaker Studio.

Starting SageMaker endpoints

  • Go to the ./ml_services/<use_case> directory and run all three SageMaker notebooks one by one.
  • After an endpoint has started and been successfully tested in the notebook, go to the Miro board, select the required items, and run the use case.
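
Text-to-image endpoints of this kind typically accept a JSON body carrying the prompt and generation parameters. The helper below sketches how a request body might be assembled; the field names are assumptions based on common Stable Diffusion examples, so verify them against the schema each use-case notebook actually uses.

```python
import json

def build_txt2img_request(prompt: str, num_images: int = 1,
                          guidance_scale: float = 7.5) -> bytes:
    """Assemble a JSON request body for a text-to-image endpoint.
    Field names here are assumptions; check the use-case notebook
    for the real schema expected by the deployed model."""
    payload = {
        "prompt": prompt,
        "num_images_per_prompt": num_images,
        "guidance_scale": guidance_scale,
    }
    return json.dumps(payload).encode("utf-8")
```

The resulting bytes would be sent as the Body of a sagemaker-runtime invoke_endpoint call with ContentType "application/json".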

Demo extension with additional use cases

🛸 Extension guidance --TBD--


This library is licensed under the MIT-0 license. For more details, please see the LICENSE file.

Legal disclaimer

Sample code, software libraries, command line tools, proofs of concept, templates, or other related technology are provided as AWS Content or Third-Party Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content or Third-Party Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content or Third-Party Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content or Third-Party Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

