Predictive models have powered the design and analysis of real-world systems such as jet engines, automobiles, and power plants for decades. However, their use for operational decisions has been limited by the lack of scalable tools and the difficulty of using distributed computing systems. For example, if an end user of a model (with tens of parameters) that predicts the performance of an industrial system wants to update the model with new observations, the end user at a minimum would need to understand the model (e.g., which parameters in the model need to be updated), know of the techniques that may be useful for updating the model (e.g., Kalman filters), deploy the model in a way that the technique can communicate with it, and deploy the technique at scale to update the model. The adoption of these techniques has thus been limited to small-scale problems, given the complexity of the process for even a task as basic as “updating models”. Our aim is to enable a broad group of end users to achieve their outcomes using predictive modeling without worrying about the underlying techniques, orchestration mechanisms, and the infrastructure required to run them.
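To make the “updating models” task concrete, the core of a Kalman-filter update (one of the techniques mentioned above) for a single model parameter can be sketched in a few lines of Python. The parameter, its prior uncertainty, and the noise values below are illustrative assumptions, not part of aws-do-pm:

```python
# Minimal scalar Kalman-filter update -- an illustrative sketch of the kind of
# technique an end user would otherwise wire up by hand. All numeric values
# below are made-up assumptions, not part of aws-do-pm.

def kalman_update(estimate, variance, observation, obs_variance):
    """One Kalman update: blend a prior estimate with a new observation."""
    gain = variance / (variance + obs_variance)        # Kalman gain
    new_estimate = estimate + gain * (observation - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Start from an uncertain prior for one model parameter, then fold in
# successive observations of the real system.
est, var = 10.0, 4.0
for obs in [11.2, 10.8, 11.0]:
    est, var = kalman_update(est, var, obs, obs_variance=1.0)

# The estimate moves toward the observations and the variance shrinks,
# i.e., the model becomes both more accurate and more confident.
```

Scaling this simple update loop across many parameters, many models, and many assets is exactly the orchestration burden the framework aims to take off the end user.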

Fig 1. - Predictive modeling system run graph

Predictive Modeling (aws-do-pm)

The aws-do-pm project is an all-in-one template for building a predictive modeling application. It is built on the Do framework, which provides a prescriptive project structure and a set of simple management scripts for building and running your application. An AWS implementation of the Do framework is provided by the aws-do-docker project, which was extended to create this code base. We demonstrate the power of aws-do-pm by modeling Electric Vehicles (EVs). The document below will walk you through the process of building, running, and scaling a demo application step by step, starting from a fresh clone of the aws-do-pm repository and ending with continuously updating predictive models, automatically built and deployed for a fleet of Electric Vehicles using Artificial Neural Networks trained with PyTorch. Like the included demo, the aws-do-pm framework can be used to develop predictive modeling applications for other use cases. The project can be built and deployed in any environment where Docker is available. It can run on your local machine, on a single server, or on a cluster of servers orchestrated by Kubernetes. For details on provisioning an Amazon EKS cluster, please refer to the aws-do-eks project.


This project provides an extensible framework for building predictive modeling and other applications. Its architectural layer diagram is shown below.

Fig 2. - aws-do-pm architecture

Please refer to the framework documentation for an architectural deep-dive.

Details about the process of building and running predictive modeling applications using the aws-do-pm framework are documented here. You can see an abbreviated walkthrough in this YouTube video.

YouTube Video - Predictive Modeling with aws-do-pm

Follow the steps below to execute the demos shown in the video or build your own predictive modeling application.

0. Prerequisites

Only two prerequisites are needed to complete all the steps in this project.

  1. Docker
  2. AWS User access keys

1. Setup

The aws-do-pm project is deployed through a one-time, container-centric setup process which involves the following steps: clone, configure, build, push, and run.

2. Use

The project root directory includes a number of scripts. A brief description of their purpose is included in this section.

  • ./pm - predictive modeling CLI
  • ./ - advanced project configuration
  • ./ - build aws-do-pm container images
  • ./ - authenticate with the container image registry
  • ./ - create ECR repositories for all container images in the project
  • ./ - push aws-do-pm container images to the registry
  • ./ - pull aws-do-pm container images from the registry if they are already present there
  • ./ - deploy and start all project containerized services
  • ./ - show current status of project services
  • ./ - open a shell into a running service container
  • ./ - show logs of running services
  • ./ - run service unit tests
  • ./ - stop and remove service containers
  • ./ - copy a file from a running service container to a local path
  • ./ - copy a file from a local path to a running service container
  • ./ - create a registry secret that can be used when pulling container images from the registry
  • ./ - expose a service running in Kubernetes on a local port
  • ./ - encode a local kube config and configure the aws-do-pm platform service with Kubernetes access
  • ./ev-demo - execute single electric vehicle demo showing aws-do-pm capabilities
  • ./ev-fleet-demo - execute electric vehicle fleet demo showing aws-do-pm scale

These scripts are available both in the project root directory and in the platform container /app/pm directory. It is preferable that scripts are executed from a platform container shell when possible. A shell can be opened by running ./ while the platform is up.

3. Demo

The project includes two demo scripts which showcase the capabilities of the framework to build and deploy predictive modeling applications at scale. The first script, ./ev-demo, demonstrates the capabilities of the framework by building and deploying a continuously updating model (a.k.a. Digital Twin) of a single electric vehicle. The second script, ./ev-fleet-demo, demonstrates the scalability of the framework by building and deploying digital twins for a fleet of electric vehicles.

Fig 3. - ev-fleet-demo screencast - 100 electric vehicles

The battery is the most important component in an electric vehicle. The demo scripts in this project use a phenomenological degradation model to generate data for the batteries of electric vehicles. All vehicles start with the “ideal” battery. Each vehicle is expected to travel up to a configured number of routes, and every route is assigned a specific distance, speed, load, rolling friction, and drag. The speed is assumed to be constant for the duration of each route. The built-in phenomenological damage depends on all the inputs and a random factor to mimic real-world damage and variability. The voltage drop as a function of time in each trip is calculated based on the inputs and the phenomenological model.
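The structure of such a data generator can be sketched as follows. The functional form, coefficients, and route inputs below are assumptions made for illustration only; the actual phenomenological model in aws-do-pm differs:

```python
import random

# Sketch of a phenomenological battery-degradation data generator in the
# spirit of the demo. The formulas and coefficients are illustrative
# assumptions, not the model used by aws-do-pm.

def route_voltage_drop(distance_km, speed_kmh, load_kg, rolling_friction,
                       drag, health, rng):
    """Voltage drop over a route: grows with demand, worsens as health decays."""
    demand = distance_km * (load_kg * rolling_friction + drag * speed_kmh ** 2)
    noise = 1.0 + 0.05 * rng.random()      # random factor mimicking variability
    return demand * noise / health

def degrade(health, distance_km, rng):
    """Battery health decays a little each route, with random variability."""
    return health * (1.0 - 0.001 * distance_km * (1.0 + 0.1 * rng.random()))

rng = random.Random(42)
health = 1.0                                # every vehicle starts "ideal"
drops = []
for _ in range(5):                          # five routes with fixed inputs
    drops.append(route_voltage_drop(50, 60, 1500, 0.01, 0.3, health, rng))
    health = degrade(health, 50, rng)

# Later routes show larger voltage drops as the battery degrades.
```

A continuously updating digital twin of a vehicle would re-estimate the (hidden) health state from observed voltage drops like these after each route.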

Please refer to the EV Demo and EV Fleet Demo walkthroughs for further details.

4. Cleanup

Regardless of the target orchestrator, cleanup can be done using the same management scripts.

5. Develop

If you wish to develop your own predictive modeling application by extending aws-do-pm, please review the framework documentation. Through the framework's data, model, and technique registration capabilities, its CLI, and its SDK, you can implement your own use case. If the included techniques do not fulfill your needs, you can develop and register your own custom techniques, following the EV example. The code for your techniques can be added to this project template or reside in a separate project. The only requirement of the framework is that your code runs in a Docker container and has an executor that accepts a --config argument. The framework comes with a number of pre-registered techniques. Please refer to each technique's documentation for details. Finally, in case you face any issues, check the troubleshooting document for known solutions.
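The one concrete contract stated above is an executor that accepts a --config argument. A minimal Python entry point honoring that contract might look like the sketch below; the JSON config schema and the technique body are placeholders, not the framework's actual API:

```python
import argparse
import json
import tempfile

# Minimal sketch of a custom-technique executor that honors the framework's
# stated requirement: accept a --config argument. The config keys and the
# technique logic are illustrative placeholders, not aws-do-pm's schema.

def run_technique(config):
    """Placeholder technique: scale each input value by a configured factor."""
    factor = config.get("factor", 1.0)
    return {key: value * factor for key, value in config.get("inputs", {}).items()}

def main(argv=None):
    parser = argparse.ArgumentParser(description="Custom technique executor")
    parser.add_argument("--config", required=True,
                        help="Path to a JSON configuration file")
    args = parser.parse_args(argv)
    with open(args.config) as f:
        config = json.load(f)
    return run_technique(config)

# Demonstrate by writing an illustrative config file and invoking the
# executor the way a container entry point would: executor --config <path>
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"factor": 2.0, "inputs": {"current_a": 1.5}}, f)
result = main(["--config", f.name])
```

Packaging a script like this in a Docker image and registering it with the framework would make it callable like any built-in technique.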


Below is an index of all documents included in the aws-do-pm project.


This repository is released under the MIT-0 License. See the LICENSE file for details.