Unable to execute multiple requests in parallel through sam local start-api #301
The code is storing per-request information in a context that spans multiple requests, so there is weird/broken behavior if you issue multiple requests at the same time. I've put together a cursory PR (sorry, haven't written much Go) at #304
Would be really nice to have this functionality, or a queue system to handle multiple requests.
Yeah, it's really difficult to develop and test an API locally when it can only handle one request at a time. That, coupled with the fact that each request takes 4-8 seconds to complete (a container has to spin up for every request), makes the whole process painful.
Parallel requests are now fully supported as of v0.3.0. We even have integration tests that exercise this :) Closing, as this was addressed.
@jfuss: I am still unable to execute multiple requests using
@chetan-nandrajog I am experiencing the same. Do you use
@ben8p That is correct. I am using
@chetan-nandrajog I will... The error was in the middle of the SAM logs.
I'm also seeing this same error when running with
Eventually I worked around my problem by using a custom Docker network instead of the host network.
Hi @ben8p. I started using a custom Docker network, but every time I make a request from my front-end it tries to pull the Docker images again, which takes around 10 seconds each time. The way I created the Docker network was by running
@akomiqaia
To avoid re-building the container for each request you can use
I'm not trying to make concurrent connections, since I'm not sure how the logs would be handled. I like how Lambda provides a consistent log per request, with logs from several requests never shuffled together. But when adding this flag the container is built once, and the API becomes pretty snappy. The issue with this is that if you send multiple requests at once while another request is running, the call fails. To address that, I have this funny setup now: a shell script that just runs 10 instances, and then the UI picks the next port for each call when the URL from the config comes with
I know it's a hacky workaround, but it works fine; once all 10 ports have been hit at least once, they all start responding in milliseconds, not seconds.
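The client-side port rotation described above could be sketched as follows. This is a hypothetical illustration, not the commenter's actual code: the base port, instance count, and `nextApiUrl` helper are all assumptions.

```javascript
// Hypothetical sketch: N local `sam local start-api` instances listen on
// consecutive ports, and each outgoing call picks the next port
// round-robin so parallel requests land on different instances.
const BASE_PORT = 3001; // assumed first port of the instance pool
const INSTANCES = 10;   // assumed number of instances started by the script

let nextIndex = 0;

function nextApiUrl(path) {
  const port = BASE_PORT + (nextIndex % INSTANCES);
  nextIndex += 1;
  return `http://127.0.0.1:${port}${path}`;
}

// Consecutive calls rotate through the ports.
console.log(nextApiUrl('/checkout/donate')); // http://127.0.0.1:3001/checkout/donate
console.log(nextApiUrl('/checkout/donate')); // http://127.0.0.1:3002/checkout/donate
```

Each instance still handles only one request at a time, but with enough instances the pool as a whole serves parallel calls.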
I'm using @liesislukas's hacky solution for faster local development, but it would be nice if warm containers worked with parallel requests out of the box.
I do not use
It's suggested that this was fixed, but it appears not to be the case. There seems to be some suggestion that it's fixed, but also that it's not the intended use case of SAM and that it won't be fixed... which is it?
Hello, I'm encountering this error as well. If all the Lambda functions could run simultaneously and work as a local API, that would be great for local development.
Template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  CheckoutLambda:
    Type: "AWS::Serverless::Function"
    Properties:
      Handler: "CheckoutLambda/index.handler"
      Role: redacted
      Runtime: "nodejs6.10"
      Timeout: 300
      Environment:
        Variables:
          ENV: int
      Events:
        CheckoutApi:
          Type: Api
          Properties:
            Path: '/checkout/donate'
            Method: post
```
Executing two POSTs in parallel:

```shell
curl -X POST http://127.0.0.1:3000/checkout/donate -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' -d '5000' &
curl -X POST http://127.0.0.1:3000/checkout/donate -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' -d '4000'
```
Output from the AWS SAM CLI:

```
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
START RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Version: $LATEST
2018-02-12T22:42:57.276Z 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 4000
START RequestId: 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf Version: $LATEST
2018-02-12T22:42:57.539Z 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf 5000
END RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59
REPORT RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Duration: 4033.19 ms Billed Duration: 4100 ms Memory Size: 0 MB Max Memory Used: 31 MB
2018/02/12 14:42:59 Function returned an invalid response (must include one of: body, headers or statusCode in the response object): unexpected end of JSON input
```
Another thing to note is that the function then later times out:
```
2018/02/12 14:26:05 Function CheckoutLambda/index.handler timed out after 300 seconds
```
Handler (index.js):

```javascript
// Busy-wait for sleepDuration milliseconds (blocks the event loop).
function sleepFor(sleepDuration) {
  var now = new Date().getTime();
  while (new Date().getTime() < now + sleepDuration) { /* do nothing */ }
}

function formatResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    body: JSON.stringify(body)
  };
}

exports.handler = (event, context, callback) => {
  console.log(event.body);
  sleepFor(JSON.parse(event.body));
  callback(null, formatResponse(200, 'test'));
};
```
This is a contrived example that I'm able to share, but I'm running into this problem when trying to create integration tests for my Lambda function. Ideally I'd like to be able to execute many tests in parallel against the locally spun-up API so that it mirrors the actual API. Thanks to anyone who looks into this! :)
PS: Sometimes it works as expected. Race condition?