
High memory consumption #403

Closed
edwmurph opened this issue Nov 20, 2017 · 7 comments
edwmurph commented Nov 20, 2017

I'm load testing an API that just responds with a tiny JSON snippet, using 3 virtual users at a rate of 200 requests/second. However, I'm seeing Artillery consume upwards of 2 GB of memory. I'm currently running experiments to collect more precise data, but generally speaking, is this amount of memory consumption normal for Artillery? What is Artillery using all this memory for?

hassy commented Nov 21, 2017

No, that sounds way too high. Can you share a bit more information?

  • Version of Artillery (artillery -V) and Node.js
  • Are you using any custom JS code?
  • Are you using a CSV payload file?

A reproducible test case would be ideal, but I know it can be challenging to create/extract one.

@edwmurph (Author)

UPDATE:
I tried setting the http.pool config to 10, and now when I load test at 200 requests/second the memory stays under 400 MiB. However, my load test has 2 phases:

[{
  "duration": "600",
  "arrivalRate": 1,
  "rampTo": "200"
},
{
  "duration": "600",
  "arrivalRate": "200"
}]  

and within a minute after the second phase starts, I see two interesting things happen:

  1. CPU usage immediately spikes to 100% and stays there until the end of the test.
  2. The number of concurrent users starts growing very fast (seemingly without bound; it ends up in the thousands by the end of the test).

To confirm that the start of the second phase triggered the problem, I tried a load test with only one long ramp-up phase, e.g.:

[{
  "duration": "1200",
  "arrivalRate": 1,
  "rampTo": "200"
}]

With this single-phase test, CPU stays below 50% and the number of concurrent users stays under 10 throughout the whole test.

Version of Artillery: 1.6.0-12

Custom JS code: Artillery is being triggered from within a Node app that spawns a child process calling the artillery CLI with the --overrides flag (I'm not using the artillery npm module because I noticed strange behavior with it). I use the overrides flag to run an artillery test with dynamically determined parameters.

CSV payload file: Yes, I'm using a .csv that contains 3 virtual users.
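For reference, here's roughly how the relevant pieces fit together in my config (a trimmed sketch; the target URL and scenario below are placeholders, not the real service):

{
  "config": {
    "target": "http://my-service.example.com",
    "http": { "pool": 10 },
    "phases": [
      { "duration": 600, "arrivalRate": 1, "rampTo": 200 },
      { "duration": 600, "arrivalRate": 200 }
    ]
  },
  "scenarios": [
    { "flow": [ { "get": { "url": "/health" } } ] }
  ]
}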

hassy commented Nov 22, 2017

Thanks @edwmurph! The behaviour you're observing makes sense. With an arrivalRate of 200/second you'd be creating 200 TCP connections per second (unless http.pool is used), which is very CPU-heavy. You'd also run out of ephemeral ports on your system very quickly - in under 3 minutes on most Linux boxes with default settings - which would cause queueing, showing up as a high and growing number of concurrent users reported by Artillery.
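(Rough back-of-the-envelope numbers, assuming a typical default ephemeral port range of 32768-60999, i.e. about 28k usable ports: at 200 new connections per second that range would be exhausted in roughly 28000 / 200 ≈ 140 seconds, and since closed sockets linger in TIME_WAIT for 60 seconds by default, ports aren't returned fast enough to keep up.)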

What sort of a machine are you running your tests from? (CPU/RAM and is it a bare-metal host or a VM)

@edwmurph (Author)

> What sort of a machine are you running your tests from? (CPU/RAM and is it a bare-metal host or a VM)

The machine is an Amazon VM - a c4.2xlarge - with 8 GB of RAM. It shares CPU resources with whatever other apps are running, but it can use up to 8 vCPUs depending on what those apps are doing.

Regarding my observation about the number of concurrent users spiking: since I set http.pool to 10, wouldn't Artillery only be using at most 10 ephemeral ports? In other words, the spike in concurrent users reported by Artillery shouldn't have been caused by running out of ephemeral ports, right? Can you think of any other explanations? Also, I thought concurrent users corresponded to the number of users in the payload .csv file?

hassy commented Nov 23, 2017

That makes sense too. 200 new arrivals per second means at least 200 new requests queued up per second, but each of the 10 connections can only carry one request at a time, i.e. there won't be more than 10 concurrent requests in-flight at any given time, so again you'd have a lot of virtual users queuing up.
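(For instance, if each request took 100 ms - a purely hypothetical figure - 10 pooled connections could complete at most around 100 requests per second, while 200 new virtual users arrive every second, so the backlog would grow by roughly 100 users per second.)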

I presume by users in the csv file you mean some user information such as an email/password? That would be a distinct concept of "user" from the one reported by Artillery. In Artillery each instance of an executing scenario is a virtual user (to mimic the real world). So the real world equivalent of the test I presume you're running at the moment is thousands of users using your system but all sharing the same set of credentials/other data that comes from the CSV file.

There's info on various concepts in Artillery here: https://artillery.io/docs/basic-concepts/
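As a rough illustration (the field names and endpoint here are made up, not taken from your test), a payload section like the one below means that every virtual user Artillery creates runs the scenario once, filling in the {{ username }} / {{ password }} templates from a row of the CSV - so thousands of concurrent virtual users can all be drawing on the same three rows of data:

{
  "config": {
    "payload": {
      "path": "users.csv",
      "fields": ["username", "password"]
    }
  },
  "scenarios": [
    {
      "flow": [
        {
          "post": {
            "url": "/login",
            "json": {
              "username": "{{ username }}",
              "password": "{{ password }}"
            }
          }
        }
      ]
    }
  ]
}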

@edwmurph (Author)

I'm load testing a microservice that is currently only called by a single source, so sockets should be reused in the real-world equivalent of my load test.

Also, you answered the original question I opened this issue with (high memory consumption), so this issue can be closed.

hassy commented Nov 27, 2017

> I'm load testing a microservice that is currently only called by a single source, so sockets should be reused in the real-world equivalent of my load test.

Makes sense! If the number of TCP connections to the microservice is going to be limited, then you definitely want to use the http.pool setting (and you've found that 10 connections wouldn't be enough at that level of load).

hassy closed this as completed Nov 27, 2017