A framework for measuring coldstart latency, io/network throughput, and more in AWS Lambda

Please check our paper at http://pages.cs.wisc.edu/~liangw/pub/atc18-final298.pdf


Setup

  1. Install boto3 using pip

  2. Update your AWS credentials in conf.py
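For reference, conf.py is where the framework reads its credentials and settings. The sketch below only illustrates the kind of values to fill in; the variable names are assumptions, not necessarily the names used in the real conf.py, so follow the comments in that file.

```python
# Illustrative sketch of the kind of settings conf.py holds.
# The variable names are assumptions; use the names in the actual conf.py.
AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID"         # from the AWS IAM console
AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY"
AWS_REGION = "us-east-1"                         # region the functions are deployed in
```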

How to use the measurement framework

The measurement framework only conducts measurements in AWS Lambda, and only supports Python and Node.js as function runtimes.

The code of the measurement functions is in code/python/. Please check the comments in the files.

The code of the measurement framework itself is in utils.py; it is intentionally simple. Please check the comments in the file.

coldstart_test.py gives an example of how to use the framework to measure cold-start and warm-start latency. It has detailed comments to help users understand how to use the framework. In this test we send two concurrent requests at a time. consistency_test.py shows another way to use the framework, and how to append extra information to the output.

The other *_test.py files are additional examples we developed.
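The pattern shared by these tests — fire concurrent requests from a vantage point and timestamp each one on send and receive — can be sketched as below. Here `invoke` is a placeholder for whatever actually calls the Lambda function (with boto3 it would wrap `client.invoke(...)`); it is not an API from this repository.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_invoke(invoke, payload):
    """Call `invoke` once and record send/receive timestamps (in ms)."""
    sent = time.time() * 1000
    response = invoke(payload)
    received = time.time() * 1000
    return {"response": response, "sent_ms": sent,
            "received_ms": received, "rtt_ms": received - sent}

def concurrent_invoke(invoke, payloads):
    """Send all requests at roughly the same time, as coldstart_test.py
    does with two concurrent requests per sub-round."""
    with ThreadPoolExecutor(max_workers=len(payloads)) as pool:
        return list(pool.map(lambda p: timed_invoke(invoke, p), payloads))
```

With boto3, `invoke` could be something like `lambda p: client.invoke(FunctionName=name, Payload=json.dumps(p))`; the stub keeps the timing logic separate from the AWS call so it can be tested locally.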

Understanding the output

The format of the output for non-performance related tests:

round ID, sub-round ID, worker No., worker ID, function region, function runtime, function memory, function name, instance ID generated by the function, request ID, VM ID, instance ID from procfs, instance VM IP, public VM IP, private instance IP, VM uptime, vCPU No. and model, request received time (from function), response sent time (from function), response processing time (response sent time - request received time, ms), request sent time (from vantage point), response received time (from vantage point), request rtt (response received time - request sent time, ms)

  • Round ID: for a given test, one might perform it multiple rounds. Each round needs to have a unique "round ID".

  • Sub-round ID: in a given round, one might want to perform a task multiple times. Each time, the framework automatically generates a "sub-round ID" to record how many times the task was repeated.
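The two derived fields at the end of a record follow directly from the four raw timestamps. A small helper (hypothetical, not part of utils.py) makes the arithmetic explicit:

```python
def derived_timings(req_received, resp_sent, req_sent, resp_received):
    """Reproduce the two derived fields of an output record (all inputs in ms).

    processing_ms: response sent time - request received time
                   (measured inside the function)
    rtt_ms:        response received time - request sent time
                   (measured at the vantage point)
    """
    return {
        "processing_ms": resp_sent - req_received,
        "rtt_ms": resp_received - req_sent,
    }
```

The difference rtt_ms - processing_ms then bounds the round-trip overhead outside the function (network plus platform).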

The format of the output for performance related tests:

The performance test results appear after the "vCPU No. and model" field.

Performance-related tests support only Python as the function runtime. For the I/O and network throughput tests we use dd and iperf directly.
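Inside the function, dd measures sequential write throughput against /tmp (the only writable path in Lambda). A pure-Python equivalent of the same measurement looks roughly like this — a sketch of the idea, not the repository's code:

```python
import os
import tempfile
import time

def write_throughput_mb_s(size_mb=64, block_kb=1024, path=None):
    """Sequential write throughput in MB/s, similar in spirit to
    `dd if=/dev/zero of=/tmp/out bs=1M count=<size_mb>` inside Lambda."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = size_mb * 1024 // block_kb
    path = path or os.path.join(tempfile.gettempdir(), "ddtest.bin")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk before stopping the clock
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed
```

For network throughput, the analogous step is running an iperf client in the function against an iperf server on an EC2 instance; there is no similarly compact pure-Python stand-in for that.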