
Unable to execute multiple requests in parallel through sam local start-api #301

Closed
jdeon-classy opened this issue Feb 12, 2018 · 16 comments

Comments


jdeon-classy commented Feb 12, 2018

Template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  CheckoutLambda:
    Type: "AWS::Serverless::Function"
    Properties:
      Handler: "CheckoutLambda/index.handler"
      Role: redacted
      Runtime: "nodejs6.10"
      Timeout: 300
      Environment:
        Variables:
          ENV: int
      Events:
        CheckoutApi:
          Type: Api
          Properties:
            Path: '/checkout/donate'
            Method: post
```
Executing two POSTs in parallel:

```shell
curl -X POST http://127.0.0.1:3000/checkout/donate -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' -d '5000' & \
curl -X POST http://127.0.0.1:3000/checkout/donate -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' -d '4000'
```
Output from the AWS SAM CLI:

```
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
START RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Version: $LATEST
2018-02-12T22:42:57.276Z 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 4000
START RequestId: 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf Version: $LATEST
2018-02-12T22:42:57.539Z 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf 5000
END RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59
REPORT RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Duration: 4033.19 ms Billed Duration: 4100 ms Memory Size: 0 MB Max Memory Used: 31 MB
2018/02/12 14:42:59 Function returned an invalid response (must include one of: body, headers or statusCode in the response object): unexpected end of JSON input
```

Another thing to note is that the function then later times out:

```
2018/02/12 14:26:05 Function CheckoutLambda/index.handler timed out after 300 seconds
```

Handler (index.js):

```javascript
function sleepFor(sleepDuration) {
  // busy-wait for sleepDuration milliseconds
  var now = new Date().getTime();
  while (new Date().getTime() < now + sleepDuration) { /* do nothing */ }
}

function formatResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    body: JSON.stringify(body)
  };
}

exports.handler = (event, context, callback) => {
  console.log(event.body);
  sleepFor(JSON.parse(event.body));

  callback(null, formatResponse(200, 'test'));
};
```
This is a contrived example that I could share, but I'm running into this problem while trying to create integration tests for my Lambda function. Ideally I'd like to be able to execute many tests in parallel against the locally spun-up API so that it mirrors the actual API. Thanks to anyone who looks into this! :)

PS: Sometimes it works as expected. Race condition?
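For reference, a minimal sketch (requires Node 18+ for the built-in `fetch`) of the kind of parallel integration test described above; the URL and payloads are taken from the curl example, and `runParallel` is a hypothetical helper, not part of SAM:

```javascript
// Fire several POSTs at the local API in parallel and collect the status codes.
async function post(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body,
  });
  return res.status;
}

async function runParallel(url, payloads) {
  // Promise.all starts all requests before awaiting any of them,
  // so they hit the local API concurrently.
  return Promise.all(payloads.map((p) => post(url, p)));
}

// Usage (with `sam local start-api` running):
// runParallel('http://127.0.0.1:3000/checkout/donate', ['5000', '4000'])
//   .then((statuses) => console.log(statuses));
```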


brownjava commented Feb 14, 2018

The code is storing per-request information in a context that spans multiple requests, so there is weird/broken behavior if you issue multiple requests at the same time. I've put together a cursory PR (sorry, haven't written much go) at #304
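The failure mode can be illustrated with a toy example (this is not sam local's actual code, just a sketch of the pattern): two overlapping requests writing per-request state to one shared object clobber each other.

```javascript
// Illustration of the race: per-request data kept on a shared context object.
const shared = {};

async function handle(id, delayMs) {
  shared.requestId = id; // per-request state stored on a context shared by all requests
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return shared.requestId; // may have been overwritten by a later request
}

async function demo() {
  // 'B' starts after 'A' but overwrites shared.requestId before 'A' finishes
  return Promise.all([handle('A', 50), handle('B', 10)]);
}
// demo() resolves to ['B', 'B'] instead of ['A', 'B']
```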


araker commented Mar 23, 2018

Would be really nice to have this functionality, or a queue system to handle multiple requests.

@hobotroid

Yeah, it's really difficult to develop and test an API locally when it can only handle one request at a time. That, coupled with the fact that each request takes 4-8 seconds to complete (a container has to spin up for every request), makes the whole process painful.


jfuss commented May 30, 2018

Parallel requests are now fully supported as of v0.3.0. We even have integration tests that exercise this :)

Closing as this was addressed.

@chetan-nandrajog

@jfuss: I am still unable to execute multiple requests using sam local start-api.
Using SAM CLI, version 0.40.0.
Whenever I try concurrent API calls I get a 503 Bad Gateway.


ben8p commented Mar 18, 2020

@chetan-nandrajog I am experiencing the same. Do you use `--docker-network host`? That's what causes the issue for me.
Underlying error: `listen tcp :9001: bind: address already in use`


chetan-nandrajog commented Mar 18, 2020

@ben8p That is correct, I am using `--docker-network=host`. Please let me know if you find a solution for it.
Where did you see that error, though? For me the second API call was just stuck, and all I got was the 503 Bad Gateway in the browser.


ben8p commented Mar 18, 2020

@chetan-nandrajog I will...
All Linux users must be hitting the same issue.

The error was in the middle of the SAM logs.

@seawatts

I'm also seeing this same error when running with --docker-network host


ben8p commented May 19, 2020

Eventually I worked around my problem by using a custom Docker network instead of the host network.

@akomiqaia

Hi @ben8p. I started using a custom Docker network, but every time I make a request from my front end it pulls the Docker images again, which takes around 10 seconds each time. I created the network by running `docker network create lambda-local` and then used `sam local start-api --docker-network lambda-local`. Do you need to add extra flags to the sam invocation, or is there a way to make simultaneous requests that won't take this long?


ben8p commented Feb 8, 2021

@akomiqaia
Here is the command I use:

```shell
sam local start-api --docker-network=my-docker-network --host 0.0.0.0 --port 3001 --debug --template-file ./my.template.json --docker-volume-basedir ./build
```


liesislukas commented Nov 30, 2021

To avoid rebuilding the container for each request, you can use the `--warm-containers LAZY` flag:

```shell
sam local start-api --warm-containers LAZY
```

I'm not making concurrent connections, since I'm not sure how the logs would be handled. I like how Lambda provides a consistent log per request, with logs from several requests never shuffled together. But with this flag the container is built once and the API becomes pretty snappy.


The issue with this is that if you send multiple requests at once while another request is already running, the call fails. To address that I have this funny setup now: a shell script that runs 10 instances, and the UI picks the next port for each call, with the URL from config using `{{RANDOM_LAMBDA_PORT}}` as a port placeholder. For my app 10 is more than enough, so I basically have 10 concurrent Lambdas:

```shell
sam local start-api -p 11000 --warm-containers EAGER &
sam local start-api -p 11001 --warm-containers EAGER &
sam local start-api -p 11002 --warm-containers EAGER &
sam local start-api -p 11003 --warm-containers EAGER &
sam local start-api -p 11004 --warm-containers EAGER &
sam local start-api -p 11005 --warm-containers EAGER &
sam local start-api -p 11006 --warm-containers EAGER &
sam local start-api -p 11007 --warm-containers EAGER &
sam local start-api -p 11008 --warm-containers EAGER &
sam local start-api -p 11009 --warm-containers EAGER &
sam local start-api -p 11010 --warm-containers EAGER
```

I know it's a hacky workaround, but it works fine: once all 10 ports have been hit at least once, they all start responding in milliseconds rather than seconds.
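The UI side of this workaround can be sketched like so; the placeholder name and port range come from the script above, and `resolveUrl` is a hypothetical helper, not part of any library:

```javascript
// Pick one of the locally running sam instances at random and substitute
// it for the {{RANDOM_LAMBDA_PORT}} placeholder in a configured URL.
const PORTS = Array.from({ length: 11 }, (_, i) => 11000 + i); // 11000..11010

function resolveUrl(template) {
  const port = PORTS[Math.floor(Math.random() * PORTS.length)];
  return template.replace('{{RANDOM_LAMBDA_PORT}}', String(port));
}

// e.g. resolveUrl('http://127.0.0.1:{{RANDOM_LAMBDA_PORT}}/checkout/donate')
// returns the URL with one of the ports 11000-11010 filled in
```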


Pipas commented Jan 20, 2022

I'm using @liesislukas's workaround for faster local development, but it would be nice if warm containers worked with parallel requests out of the box.


CynanX commented Aug 31, 2023

I do not use `--docker-network host`, yet I'm experiencing the same issue: multiple calls to a Python Lambda fail.

It was suggested that this was fixed, but that appears not to be the case. Some comments say it's fixed, while others say concurrency isn't SAM's intended use case and won't be fixed... which is it?

@edgejohn

Hello, I'm running into this error as well. If all the Lambda functions could run simultaneously as a local API, that would be great for local development.
