Lambda is rate limited by AWS parameter store #59
Comments
Have you tried turning on SSM Parameter Store's higher throughput setting? https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-throughput.html. It takes you out of the free tier but does provide higher throughput. Based on the numbers you estimated, I'm not sure it will meet your requirements, but it is worth measuring.

We used SSM Parameter Store over S3 because the API the Data Protection framework has for providers is basically a single call to go and get all protection keys: https://github.com/aws/aws-ssm-data-protection-provider-for-aspnet/blob/master/src/Amazon.AspNetCore.DataProtection.SSM/SSMXmlRepository.cs#L85

The problem with using S3 is there isn't a bulk way of getting a bunch of objects with unknown keys. For S3 I would need to make a list-objects call and then an individual get-object call for each data protection key. Imagine, once you accumulate quite a few data protection keys, how that would affect your Lambda function's cold start.

We could provide an alternative storage solution using DynamoDB, where we could do a single query to get the list of keys for an "application" in the table. How would you feel about using DynamoDB as a storage solution?
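For illustration, a DynamoDB-backed provider would presumably implement the same `IXmlRepository` interface the existing SSM provider does. Below is a minimal sketch, not a shipped API: the table name `DataProtectionKeys`, the `ApplicationName`/`FriendlyName` key schema, and the `Xml` attribute are all hypothetical choices.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Microsoft.AspNetCore.DataProtection.Repositories;

// Hypothetical sketch: table "DataProtectionKeys" with partition key
// "ApplicationName", sort key "FriendlyName", and an "Xml" attribute
// holding the serialized data protection key.
public class DynamoDBXmlRepository : IXmlRepository
{
    private readonly IAmazonDynamoDB _client;
    private readonly string _tableName;
    private readonly string _applicationName;

    public DynamoDBXmlRepository(IAmazonDynamoDB client, string tableName, string applicationName)
    {
        _client = client;
        _tableName = tableName;
        _applicationName = applicationName;
    }

    public IReadOnlyCollection<XElement> GetAllElements()
    {
        // A single Query retrieves every key for the application in one
        // round trip. (IXmlRepository is synchronous, hence the blocking
        // wait; pagination is omitted for brevity.)
        var response = _client.QueryAsync(new QueryRequest
        {
            TableName = _tableName,
            KeyConditionExpression = "ApplicationName = :app",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                [":app"] = new AttributeValue { S = _applicationName }
            }
        }).GetAwaiter().GetResult();

        return response.Items
            .Select(item => XElement.Parse(item["Xml"].S))
            .ToList();
    }

    public void StoreElement(XElement element, string friendlyName)
    {
        _client.PutItemAsync(new PutItemRequest
        {
            TableName = _tableName,
            Item = new Dictionary<string, AttributeValue>
            {
                ["ApplicationName"] = new AttributeValue { S = _applicationName },
                ["FriendlyName"] = new AttributeValue { S = friendlyName },
                ["Xml"] = new AttributeValue { S = element.ToString(SaveOptions.DisableFormatting) }
            }
        }).GetAwaiter().GetResult();
    }
}
```

The single query per cold start is the design point being described here: one round trip regardless of how many keys exist, versus S3's one round trip per key.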
Hi Norm, thanks for the response. I believe Kamran has fed back to you that the increased throughput does not meet our needs, so we will have to discount that option.

I have no objection in principle to using DynamoDB as an alternative storage solution. Presumably this would just manifest itself as a different method in the .NET SDK?

Thanks,
Chris
Would the newly announced batch API make Secrets Manager a viable alternative as a backing store?
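This presumably refers to Secrets Manager's BatchGetSecretValue API, announced in November 2023, which can fetch multiple secrets in one call by ID list or filter. A rough sketch of how a provider might use it, assuming the AWSSDK.SecretsManager package; the name-prefix convention is illustrative, and error entries and NextToken pagination are omitted:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Xml.Linq;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

public static class SecretsManagerKeyFetcher
{
    // Sketch only: fetch every data protection key stored as a secret
    // under a shared name prefix with a single batch call.
    public static async Task<List<XElement>> GetAllKeysAsync(
        IAmazonSecretsManager client, string namePrefix)
    {
        var response = await client.BatchGetSecretValueAsync(new BatchGetSecretValueRequest
        {
            Filters = new List<Filter>
            {
                new Filter { Key = "name", Values = new List<string> { namePrefix } }
            }
        });

        return response.SecretValues
            .Select(sv => XElement.Parse(sv.SecretString))
            .ToList();
    }
}
```

Whether this meets the 300-requests-per-second requirement would come down to Secrets Manager's own API throttling limits, which would need to be measured the same way Parameter Store's were.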
Describe the bug
Our UI application is served up by a .NET application hosted behind a Lambda function via API Gateway. This Lambda also provides a secure route to our APIs and uses this package for cookie authentication. Unfortunately, the current PersistKeysToAWSSystemsManager method is limited by Parameter Store's default throughput of 40 requests per second, and this does not scale to meet our needs on our key business day, when we need in the region of 300 requests per second.
We need an alternative store for the data protection keys that will solve this problem.
Expected Behavior
That we would be able to create 300 new Lambda instances every second to meet our concurrency needs.
Current Behavior
We are only able to create 40 Lambda instances per second and are unable to achieve the desired throughput.
Reproduction Steps
In Startup.cs we have the following code:
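(The exact snippet was not captured above; the following is a representative sketch of the documented usage of this package, with the parameter name prefix "/MyApplication/DataProtection" as a placeholder.)

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Persist ASP.NET Core Data Protection keys to SSM Parameter Store
        // under the given prefix (placeholder value shown here).
        services.AddDataProtection()
            .PersistKeysToAWSSystemsManager("/MyApplication/DataProtection");

        // ... other service registrations ...
    }
}
```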
This code is then deployed to a Lambda function that is exposed via API Gateway. When we make 1,000 requests per second, the Lambda instances fail to initialize due to rate limiting in Parameter Store.
Possible Solution
Offer an alternative method to PersistKeysToAWSSystemsManager. Perhaps the keys could be persisted to S3, which allows a greater throughput (see the sketch below).
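As Norm's comment above explains, the difficulty is that S3 has no bulk-get: a provider would have to list the key objects and then fetch each one individually on every cold start. A rough sketch of that N+1 pattern, with the bucket and prefix as placeholders and pagination omitted:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using System.Xml.Linq;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3KeyFetcher
{
    // Sketch only: one ListObjectsV2 call plus one GetObject call per key,
    // which is what makes S3 costly at Lambda cold start as keys accumulate.
    public static async Task<List<XElement>> GetAllKeysAsync(
        IAmazonS3 client, string bucket, string prefix)
    {
        var keys = new List<XElement>();
        var listing = await client.ListObjectsV2Async(new ListObjectsV2Request
        {
            BucketName = bucket,
            Prefix = prefix
        });

        foreach (var obj in listing.S3Objects)
        {
            using var response = await client.GetObjectAsync(bucket, obj.Key);
            using var reader = new StreamReader(response.ResponseStream);
            keys.Add(XElement.Parse(await reader.ReadToEndAsync()));
        }

        return keys;
    }
}
```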
Additional Information/Context
No response
AWS .NET SDK and/or Package version used
Targeted .NET Platform
.NET 6
Operating System and version
Windows 10