*(pronounced as "cumuli": the plural of cumulus, a heap or accumulation)*
Qmuli is an experimental effort to create a unified environment in which one can specify both the resource configuration and the lambda behavior of a cloud architecture built on AWS, side by side in one language, without the artificial boundary imposed by the current generation of AWS tools. The advantages of such unification are numerous:
- a more convenient way to specify AWS resource configuration than with regular CloudFormation templates codified in JSON, using a language subset that is as declarative as JSON but much more powerful and expressive
- the ability to perform static analysis on both configuration and behaviors at the same time, which is more powerful and encompassing than analyzing each individually
- the possibility of automatically generating infrastructure diagrams/graphs from this high-level DSL code
- no need to duplicate resource configuration information in the lambda logic that drives the behavior, as it is available and easily accessible during the build phase
Let's say we want to create a simple, contrived example architecture that automatically copies the content of any new file uploaded to the incoming bucket into the outgoing bucket:
```
+-------------+          +-------------------+          +-------------+
|             |          |   Lambda that     |          |             |
|  Incoming   | S3 event | receives the S3   |          |  Outgoing   |
|  S3 bucket  +--------->| event and copies  +--------->|  S3 bucket  |
|             |          |  the S3 object    |          |             |
+-------------+          +-------------------+          +-------------+
```
In order to accomplish this with the regular tools provided by AWS, we would create these resources using one of three methods:
- AWS console
- AWS command line interface (CLI)
- CloudFormation template
Using the console is great for beginners learning how to provision resources, or for quick ad-hoc changes, but it involves lots of clicking around, is not very practical for non-trivial deployments, and is hard to replicate exactly.
Using the AWS CLI allows for better reproducibility and maintenance and is much more practical for non-trivial deployments. One can script the CLI commands to create reproducible provisioning sequences, but one then needs to somehow guarantee that the sequence satisfies the inter-dependencies among the resources, and that the delays are sufficient so that dependencies are fully provisioned before their dependents are attempted.
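The ordering problem above can be made concrete with a toy sketch: given each resource's dependencies, compute a provisioning order in which every dependency is created before its dependents. This is just an illustrative topological sort in plain Haskell, not part of any AWS tool or of qmuli:

```haskell
import Data.List (partition)

-- (resource name, names of resources it depends on)
type Res = (String, [String])

-- Repeatedly provision everything whose dependencies are already done.
provisionOrder :: [Res] -> [String]
provisionOrder [] = []
provisionOrder rs =
  case partition (null . snd) rs of
    ([], _)       -> error "dependency cycle"
    (ready, rest) ->
      let done  = map fst ready
          -- remove the just-provisioned resources from remaining dependency lists
          rest' = [ (n, filter (`notElem` done) ds) | (n, ds) <- rest ]
      in  done ++ provisionOrder rest'

main :: IO ()
main = print (provisionOrder
  [ ("lambda",    ["bucketIn", "bucketOut"])
  , ("bucketIn",  [])
  , ("bucketOut", [])
  ])
-- prints ["bucketIn","bucketOut","lambda"]
```

Hand-written CLI scripts have to encode this ordering (plus readiness polling) manually, which is exactly the bookkeeping a declarative specification can take over.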
Using CloudFormation is a step up in that it allows one to describe the infrastructure to be provisioned completely, in a declarative JSON document that can be version-controlled and used as a specification. However, writing and maintaining the JSON template can get unwieldy, and lambda behaviors still need to be specified separately, which may result in errors if the behavior specification assumes permissions or resources that do not match or exist in the CloudFormation template.
Qmuli tries to solve the potential problems stemming from mismatched resource and behavior specifications by unifying them. It then statically analyzes, at compile time and prior to actual deployment, whether all the pieces fit and would work correctly together once deployed.
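To get a feel for what such a static check can look like, here is a toy sketch in plain Haskell (the instruction names and the list-based "program" are illustrative assumptions, not qmuli's actual implementation): the same declarative description is interpreted once to collect declared resources and again to verify that every attachment refers to a declared resource.

```haskell
-- Instructions of a tiny config DSL (a real DSL would be a free/operational
-- monad, but the analysis idea is the same).
data Instr
  = DeclareBucket String         -- create an S3 bucket with this name
  | AttachLambda  String String  -- attach a lambda (name) to a bucket (name)
  deriving Show

type ConfigProgram = [Instr]

example :: ConfigProgram
example =
  [ DeclareBucket "incoming"
  , DeclareBucket "outgoing"
  , AttachLambda "copyS3Object" "incoming"
  ]

-- Static analysis: every AttachLambda must reference a declared bucket.
undeclaredRefs :: ConfigProgram -> [String]
undeclaredRefs prog =
  [ b | AttachLambda _ b <- prog, b `notElem` buckets ]
  where
    buckets = [ n | DeclareBucket n <- prog ]

main :: IO ()
main = do
  print (undeclaredRefs example)                       -- prints []
  print (undeclaredRefs [AttachLambda "f" "missing"])  -- prints ["missing"]
```

Because configuration and behavior live in one program, checks like this can run before anything is deployed, instead of failing at runtime in AWS.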
Below is an example of how one would express the above architecture as a "qmulus" (i.e. a unified specification for architecture resources and behavior) in a single file:
```haskell
main :: IO ()
main = withConfig config
  where
    config :: ConfigProgram ()
    config = do
      -- create an "input" s3 bucket
      incoming <- s3Bucket "incoming"

      -- create an "output" s3 bucket
      outgoing <- s3Bucket "outgoing"

      -- create a lambda, which will copy an s3 object from "incoming" to "outgoing" buckets
      -- upon an S3 "Put" event.
      -- Attach the lambda to the "incoming" bucket in such a way that each time a file is
      -- uploaded to the bucket, the lambda is called with the information about the newly
      -- uploaded file.
      -- The lambda creation function takes the lambda name, the s3BucketId to attach to,
      -- the lambda function itself, and a lambda profile that specifies attributes like
      -- memory size and timeout, and has meaningful defaults for those.
      void $ s3BucketLambda "copyS3Object" incoming (copyContentsLambda outgoing) $
        def & lpMemorySize .~ M1536

copyContentsLambda :: S3BucketId -> S3LambdaProgram
copyContentsLambda sinkBucketId = lbd
  where
    lbd event = do
      let incomingS3Obj = event ^. s3eObject
          outgoingS3Obj = s3oBucketId .~ sinkBucketId $ incomingS3Obj

      -- get the content of the newly uploaded file
      content <- getS3ObjectContent incomingS3Obj

      -- emit log messages that end up in the appropriate cloudwatch group/stream
      say "hello there!"

      -- write the content into a new file in the "output" bucket
      putS3ObjectContent outgoingS3Obj content

      success "lambda had executed successfully"
```
Compiling this qmulus results in a multi-purpose executable binary, which can be used both as a CLI tool for management tasks like provisioning, and as the executable that gets packaged and used in all the lambdas.
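A minimal sketch of this "one binary, two roles" idea (the mode names and dispatch logic here are illustrative assumptions, not qmuli's actual code): the same executable inspects its arguments and acts either as a management CLI or as a lambda entry point.

```haskell
import System.Environment (getArgs)

data Mode
  = CliDeploy            -- management task: provision/deploy
  | CliDescribe          -- management task: inspect the stack
  | LambdaHandler String -- run as the packaged lambda executable
  | Unknown
  deriving (Show, Eq)

-- Pure dispatch over argv, so both roles can live in one binary.
dispatch :: [String] -> Mode
dispatch ("cf"  : "deploy"   : _) = CliDeploy
dispatch ("cf"  : "describe" : _) = CliDescribe
dispatch ("lbd" : name       : _) = LambdaHandler name
dispatch _                        = Unknown

main :: IO ()
main = do
  args <- getArgs
  print (dispatch args)
```

Keeping dispatch pure makes the routing trivially testable, while the actual effects (talking to CloudFormation, handling S3 events) stay behind each mode.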
Note: for how to use the currently available functionality and AWS integrations, see the examples.
A qmulus does not need to be built on the Amazon Linux AMI in order to be compatible with running on AWS Lambda. One only needs a system with docker installed in order to build everything necessary for deployment.
Clone and build the library and examples
```
git clone --recursive -j8 https://github.com/qmuli/qmuli.git
cd qmuli
# <uncomment the examples you want to build in package.yaml>
stack install
```
Running an example
The above example is available as the "simple-s3-copy" qmulus.
The `simple-s3-copy cf deploy <globally-unique-app-name>` command does the following:
- generates the CloudFormation (CF) JSON template
- packages/zips up the executable to be used by lambda
- uploads those to the qmulus S3 bucket (named with
After that is deployed, just create a new CF stack:
```
simple-s3-copy cf create <globally-unique-app-name>
```
And voilà, you should now have the example deployed and working. Try uploading a small file into the 'incoming' bucket; you should see the same file copied automatically into the 'outgoing' bucket.
To monitor the status of the stack and view the stack outputs:
```
simple-s3-copy cf describe <globally-unique-app-name>
```
To deploy updates to the CF stack:
```
simple-s3-copy cf update <globally-unique-app-name>
```

(this also implicitly updates all lambdas with the current code)
To update all lambdas with the current code separately from updating the CF stack:
```
simple-s3-copy lbd update <globally-unique-app-name>
```
NOTE: the commands below are dangerous because they destroy all the CF stack resources, including the S3 buckets and their content
To destroy a stack:
```
simple-s3-copy cf destroy <globally-unique-app-name>
```
To cycle a stack (destroy it, rebuild, deploy and re-create it):
```
simple-s3-copy cf cycle <globally-unique-app-name>
```

This command is useful when doing iterative development on a qmulus.
If a lambda function suddenly starts timing out, pay close attention to the lambda's memory consumption. Sometimes a log message containing `c_poll: permission denied` may appear; try increasing the lambda function's provisioned memory size.
The idea is to use this project as an experimentation platform for designing a toolset that allows very rapid and painless development of serverless architectures, and that, of course, leverages all the great stuff Haskell has to offer. The plan is to add all the usual AWS SaaS puzzle pieces like ApiGateway, Cognito, Dynamo, SQS, SNS, etc., and make them easily composable. Furthermore, using free/operational monad based DSLs allows for various ways to statically analyze the architecture together with the lambda behaviors, and to infer properties enabling optimizations, correctness checking, and the generation of artifacts like visual diagrams, in addition to making code safer by not using IO directly.
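One of those artifacts, diagram generation, can be sketched in a few lines of plain Haskell (the edge list and names below are illustrative assumptions, not qmuli's API): a declarative description of resource connections is interpreted into a Graphviz DOT graph.

```haskell
-- An edge from a source resource to a target resource in the architecture.
data Edge = Edge String String

-- The spec for the example architecture above: bucket -> lambda -> bucket.
spec :: [Edge]
spec =
  [ Edge "incoming"     "copyS3Object"
  , Edge "copyS3Object" "outgoing"
  ]

-- Interpret the spec as a Graphviz DOT document.
toDot :: [Edge] -> String
toDot es = unlines $
     ["digraph qmulus {"]
  ++ [ "  \"" ++ a ++ "\" -> \"" ++ b ++ "\";" | Edge a b <- es ]
  ++ ["}"]

main :: IO ()
main = putStr (toDot spec)
```

Feeding the output to `dot -Tpng` would render the same picture as the hand-drawn ASCII diagram earlier, which is the point: with a unified spec, the diagram is derived from the code rather than maintained beside it.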
Big kudos to
- David Reaver (@jdreaver), the creator and maintainer of the stratosphere package
- Brendan Hay (@brendanhay), the creator and maintainer of the amazonka package
and to the wonderful Haskell community.