FaaSMark is a benchmark for serverless compute platforms. It is designed to measure function invocation latency across different clouds, invocation methods, and function parameters.
The benchmark currently tests latency under different:
- Invocation methods (HTTP vs SDK)
- Programming languages
- Memory sizes
- Load conditions
- FaaS providers
This version supports aspect benchmarks (invocation method, programming language, and memory size) and concurrency benchmarks, on AWS Lambda only.
Benchmark code is divided into three parts:
- Client is the code that drives the benchmark process. It can run on your local machine or any cloud VM/container.
- Initiator is a FaaS function that is deployed on the platform being benchmarked. The initiator is responsible for repeatedly invoking empty functions (see below) and measuring their invocation latencies. There are two types of initiators: an aspect initiator and concurrencyInitiator. The first is used for measuring different aspects of function invocation (method, language, and memory size) and for comparing different cloud platforms. The second is used for load tests.
- Empty is a FaaS function that is invoked by the initiator, which measures its invocation latency. A variant named sleeper can sleep for a specified interval before returning; it is used for warming up multiple function containers before starting the benchmark itself.

Deploying both the initiator and empty on the same FaaS platform ensures that no factors outside the platform affect the measured invocation latency.
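The initiator's measurement loop can be sketched as follows. This is a minimal illustration only, not FaaSMark's actual code: the `measure_latencies` helper, its parameters, and the stand-in callable are hypothetical, and a real initiator would invoke the empty function over HTTP or via the provider SDK.

```python
import time
from statistics import median

def measure_latencies(invoke, repeat):
    """Invoke the (empty) function `repeat` times; return per-call latency in ms."""
    latencies = []
    for _ in range(repeat):
        start = time.perf_counter()
        invoke()  # e.g. an HTTP request or SDK call to the empty function
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

# Stand-in "empty function" that sleeps ~1 ms instead of making a real cloud call:
lat = measure_latencies(lambda: time.sleep(0.001), repeat=5)
print(f"median latency: {median(lat):.2f} ms")
```

Because the timing happens inside the initiator, which runs on the same platform as the empty function, the measured interval excludes the network path between your machine and the cloud.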
To perform the benchmark, you first need to deploy the functions to the different FaaS provider platforms; see the deployment instructions below.
Invoke the benchmark using the command
Benchmark behavior is controlled by the file settings.json, which has the following fields:

| Field | Type | Description |
|-------|------|-------------|
| | | Test once or loop forever |
| | Number | Milliseconds between the beginnings of consecutive tests |
| | | Change only if you change the deployment code |
| | Number | How many times Empty is invoked by Initiator |
| | Number | Same as repeat, for concurrency (load) tests |
| | Number | Maximum concurrent invocations (load level) |
| | Number | Maximum concurrent invocations from a single initiator function |
| | String | AWS region name |
| | String | Google Cloud region name |
| | String | Google Cloud project name |
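For illustration only, a settings.json could look like the sketch below. The field names used here are hypothetical placeholders (the authoritative names are whatever the client code reads); only the value types and meanings follow the table above.

```json
{
  "loop": false,
  "interval": 60000,
  "repeat": 100,
  "concurrencyRepeat": 10,
  "maxConcurrency": 500,
  "initiatorConcurrency": 50,
  "awsRegion": "us-east-1",
  "gcloudRegion": "us-central1",
  "gcloudProject": "my-faas-benchmark"
}
```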
Note that automatic deployment to Azure and Bluemix is not yet supported; their settings are currently hard-coded.
Deployment method varies between clouds.
Use the Serverless Framework (sls) to deploy functions to AWS Lambda. Set your AWS credentials, install the Serverless Framework, and run:

```
cd providers/aws
sls deploy
```
Automatic deployment is not yet supported on Azure. To deploy on Azure Functions you can use the web console to create two functions with the contents of
Use the file name (without the extension) as the function name and configure HTTP triggers for the functions.
See Azure above.
Once the gcloud utility is installed and configured, you can use it to create a project, configure it to run functions, and deploy the functions to that project. You can use an existing project or create a new one. Once you have a project name, you must set the GCLOUD_PROJECT variable accordingly.