
My ideal DX for Rust on AWS Lambda #4

Closed

brainstorm opened this issue Oct 4, 2021 · 7 comments

Comments

@brainstorm
Contributor

brainstorm commented Oct 4, 2021

[Screenshot attached: 2021-10-04, 2:09 pm]

Challenge accepted, @nmoutschen! ;)

Here's my (first ever) attempt at an Amazon (less-than-)6-pager for an "ideal world DX for Rust and AWS lambda". My hope is that it gets circulated at AWS, reaches leaders willing to take ownership of the goals outlined below so that customers (like me) get to enjoy better Rust lambdas developer experience and unleash its untapped potential.

It is now a draft since "Strategic priorities" might not be in line with how reports like this are structured, but bear with me... happy to edit/improve if there's uptake ;)

Introduction

This document aims to outline the current state of AWS Lambda runtime support for Rust and to share strategic priorities for the next iterations of its developer tooling ecosystem. The AWS serverless teams' mission is to work with developers interested in migrating current idle-prone workloads into optimized, safe, GC-free Rust Lambdas.

Given the uptake of serverless and the recent move towards more cost-efficient ARM64 Lambdas, removing Rust developer friction can increase organic adoption and, in turn, attract other developers who can port high-performance computing workloads from on-premises HPC to Rust Lambdas (where applicable). The AWS serverless teams' focus for Q4 2021 should be to establish Rust as a first-class citizen of the Lambda ecosystem by improving the DevOps tooling around it, running benchmarks against other runtimes, and updating documentation, app templates, and demo repositories accordingly to ease onboarding.

Goals

In Q4 2021, the AWS Rust serverless effort should focus on achieving the following goals:

  1. Increase tooling quality: Better integration of sam-cli and/or sam-cli-cdk local runtime execution and deployment for all host systems and targets (e.g. Apple Silicon). Also, integrate with Rust's native cargo tool rather than relying on Makefiles.

  2. Measure and increase performance: Remove MUSL-wrapping misconceptions from the documentation. Through benchmarking and allocator tuning, Rust Lambdas could (should?) outperform other runtimes in wall time and memory consumption by at least 5-10% on average. Run those tests and add them to CI to spot performance regressions as best practices change.

  3. Change runtime naming: Currently, Rust Lambdas are offered under the provided.al2 runtime, unintentionally leaving Rust a "second-class citizen" next to the rest of the officially supported runtimes. Simply aliasing provided.al2 to rust-1.x, mirroring e.g. the python3.9 Lambdas, would signal "supported, quality runtime at AWS". No changes to the underlying runtime are needed, and experts could continue using provided.al2; it is just aliasing/naming... Easy win?
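The benchmarking idea in goal 2 can be sketched with nothing but the Rust standard library. This is a minimal local wall-time harness, not any existing AWS benchmark suite; the function and label names are illustrative only:

```rust
use std::time::{Duration, Instant};

/// Measure the total wall time of `iters` runs of a workload.
/// A stand-in for a real benchmark suite, not AWS tooling.
fn bench<F: FnMut()>(label: &str, iters: u32, mut f: F) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    let elapsed = start.elapsed();
    println!("{label}: {elapsed:?} over {iters} iterations");
    elapsed
}

fn main() {
    // Placeholder workload standing in for a Lambda handler body.
    let data: Vec<u64> = (0..10_000).collect();
    bench("sum", 100, || {
        let s: u64 = data.iter().sum();
        std::hint::black_box(s); // keep the optimizer from deleting the work
    });
}
```

A CI job could run such a harness against each runtime configuration and fail when a measured duration regresses past a threshold.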

Tenets

The following tenets are guiding principles we use to evaluate and prioritize Rust AWS lambda tooling improvement activities:

Quality over quantity

SAM CLI and/or the CDK should be the canonical way to deploy Rust Lambdas without interfering with the expected Rust developer build tool: cargo. Several third-party Rust crates are attempting to fill some of the DX gaps left by SAM and other official AWS tooling: cargo-aws-lambda, aws-build, minlambda, mu_runtime, rocket_lamb, warp_lambda. While these contributions are welcome, they also show that the shortcomings could be addressed upstream in the official tooling.

Reduce developer friction

We want to focus on educating developers through in-depth technical content and tool documentation instead of relying solely on community knowledge transfer.

Invent and Simplify

We want Rust developers to feel at home when they approach AWS Rust Lambdas, and that means an officially supported aws cargo subcommand (via an officially supported AWS Rust crate).

Strategic priorities?

Ideally, bring the SAM CLI, rust_lambda_runtime, and other relevant teams in the Rust tooling ecosystem together and build an example, officially supported by AWS via its documentation, that is deployable on Graviton2 instances with the following cargo commands (ideal scenario):

  1. cargo aws build
  2. cargo aws deploy

If the user wants to debug the lambda locally:

  1. cargo aws build
  2. cargo aws run [invoke | start-api]

The official cargo aws subcommand could potentially call other AWS services (as a simple AWS CLI and/or SAM CLI/CDK wrapper), but there's no need to wrap them all in the first iteration of this tooling.
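Mechanically, cargo already supports this: any binary named `cargo-aws` on PATH becomes a `cargo aws ...` subcommand, with cargo passing `aws` as the first argument. Below is a hedged sketch of what the dispatch layer of such a wrapper might look like; the command names and the SAM commands they map to are illustrative assumptions, not an existing tool:

```rust
use std::env;

/// Map a hypothetical `cargo aws <cmd>` invocation to the tool it
/// might wrap. Both sides of the mapping are illustrative.
fn dispatch(args: &[&str]) -> Option<&'static str> {
    match args {
        ["build"] => Some("sam build"),
        ["deploy"] => Some("sam deploy"),
        ["run", "invoke"] => Some("sam local invoke"),
        ["run", "start-api"] => Some("sam local start-api"),
        _ => None,
    }
}

fn main() {
    // Cargo executes a binary named `cargo-aws` found on PATH and
    // passes "aws" as the first argument, so `cargo aws run invoke`
    // arrives here as argv = [<binary>, "aws", "run", "invoke"].
    let owned: Vec<String> = env::args().skip(2).collect();
    let args: Vec<&str> = owned.iter().map(String::as_str).collect();
    match dispatch(&args) {
        Some(wrapped) => println!("would run: {wrapped}"),
        None => eprintln!("usage: cargo aws <build | deploy | run [invoke | start-api]>"),
    }
}
```

A real implementation would spawn the wrapped command with `std::process::Command` instead of printing it; the sketch only shows the subcommand plumbing.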

Mentions

Please let me know if you wish to be removed from this thread, I have no intention to inconvenience anybody. I just think that, if anyone, you have the means to make a difference on this topic :)

@softprops @davidbarsky @jonhoo @mjasay
@praneetap @jfuss @sapessi @singledigit @hoffa
@coltonweaver @bahildebrand
@jdisanti @rcoh

@nmoutschen
Contributor

Adding to this, a first step would be to get a better idea of Rust usage at the moment.

Also looking at what resources would be useful. If we can start putting out good documentation and incremental improvements, that'd be great. cargo aws would be awesome, but that would take some time, especially in terms of scoping what it would do and getting something out there.

On the rust-1.x runtime, I think I'd prefer to keep provided.al2, but I think we're probably lacking documentation and communication so people understand that Rust on Lambda is supported. I personally think Go should go the same way, as too many people use go-1.x today, which doesn't support the Lambda runtime API.

@rimutaka

What is DX?

@brainstorm
Contributor Author

brainstorm commented Nov 11, 2021 via email

@calavera
Contributor

calavera commented Mar 7, 2022

@brainstorm I've been playing with a cargo subcommand that abstracts away most of the boilerplate necessary to build and organize lambda binaries, check it out: https://crates.io/crates/cargo-lambda

I didn't call it cargo-aws because aws is too broad a name; I think lambda is more specific to this use case and makes it easier to understand what it does.

@brainstorm
Contributor Author

This is fantastic, thanks so much! I'll give it a spin once I'm back from holidays :)

@jkmogane

jkmogane commented Mar 9, 2022

> @brainstorm I've been playing with a cargo subcommand that abstracts away most of the boilerplate necessary to build and organize lambda binaries, check it out: https://crates.io/crates/cargo-lambda
>
> I didn't call it cargo-aws because aws is too broad a name; I think lambda is more specific to this use case and makes it easier to understand what it does.

This is awesome, David!

@brainstorm
Contributor Author

brainstorm commented Apr 3, 2024

Fast forward to 2024, I'm closing this issue since IMHO, all goals outlined have been accomplished:

Goal 1 has been mostly smashed by the work of @calavera, kudos! The cargo-lambda ecosystem has fixed it for good. We use it consistently and it works well for us at @umccr.

Goal 2 has seen the runtime move away from MUSL... It'd be interesting to have MUSL benchmarked on lambda-perf, but the current setup already stands very well on cold starts. Testing mimalloc, jemalloc, and other high-performance allocators on that public continuous benchmark would probably put this goal to bed for good?
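For context, trying out an alternative allocator in Rust is a one-attribute change. The sketch below uses the standard library's `System` allocator as a stand-in, since mimalloc and jemalloc bindings live in external crates; the crate names in the comment are the usual suspects but are not wired in here:

```rust
use std::alloc::System;

// `#[global_allocator]` selects the process-wide heap allocator.
// `System` is the platform/libc allocator; for an allocator benchmark
// you would instead point this at e.g. mimalloc::MiMalloc or
// tikv_jemallocator::Jemalloc (external crates, assumed, not shown).
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation below goes through the chosen allocator.
    let v: Vec<u8> = vec![42; 1024];
    println!("allocated {} bytes via the selected global allocator", v.len());
}
```

Swapping the static's type is the entire change, which is what makes an allocator sweep on a continuous benchmark cheap to set up.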

Goal 3 had a fun and unexpected twist: the Go Lambda runtime got renamed to provided.al2 instead of Rust getting its own runtime name... welp, good enough; no need to maintain runtime names for each language, I guess? 🤷🏻‍♂️

In any case: thanks for listening AWS!
