Controlling requests per second #211

Closed
mambusskruj opened this Issue May 10, 2017 · 28 comments

mambusskruj commented May 10, 2017

Hi guys,
I think it might be very useful to run tests with a desired RPS (requests per second). In general it could look like this:

k6 run --vus 10 --duration 100s --rps 200 test_script.js

One way to add this functionality is to dynamically adjust the wait time between scenario executions for each VU.

For example, suppose one VU needs to execute 20 requests every second. We then need to calculate how long the VU should wait between executions of a group of URLs to achieve 20 requests per second. This has to be computed dynamically, taking the average response time of the group's URLs into account.

So, if the average response time for the group of URLs is 20 ms and the VU needs to achieve 20 RPS, the wait time is calculated like this:

1000 ms / 20 req = 50 ms (wait time between group executions if response time == 0),
and then 50 ms - 20 ms (average group response time) = 30 ms, so the VU pauses 30 ms between group executions to achieve 20 RPS.
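
A minimal sketch of that pacing idea as a k6 script, assuming the sleep-based workaround described above (the target rate and URL are illustrative; this is not a built-in k6 feature):

import http from "k6/http";
import { sleep } from "k6";

const TARGET_RPS_PER_VU = 20;                // desired rate for this VU
const PERIOD_MS = 1000 / TARGET_RPS_PER_VU;  // 50 ms budget per request

export default function() {
  const start = Date.now();
  http.get("http://test.loadimpact.com/");   // the "group" under test
  const elapsed = Date.now() - start;        // this iteration's response time
  const pauseMs = Math.max(PERIOD_MS - elapsed, 0);
  sleep(pauseMs / 1000);                     // k6's sleep() takes seconds
}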

Well, maybe I'm approaching this the wrong way, but I think more knowledgeable people will have ideas for this feature.
Let's start the conversation!

ppcano (Member) commented May 11, 2017

@mambusskruj, thanks for the input.

Could you give more info about the type of testing you would like to perform using the RPS option?

mambusskruj commented May 11, 2017

Well, one situation can be like this:

We have some stats about user behavior on our production system. We have 400K active users, and from analytics we see that each user does something every 10 minutes (like changing the live video channel).
Each such action pushes a few requests (get the channel list, then play the channel).

So we can run a load test with 400,000 users / 600 sec (10 min) ≈ 667 users/sec, each pushing 2 requests, and set the --rps parameter to 667 * 2 = 1334 req/sec for this two-URL test script.

Another situation for me is simply finding out how my server does its job under a constant RPS: what the response time will be with --rps 100, then with --rps 400, and so on.
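
Restating that arithmetic as code for clarity (the numbers are the ones above; the variable names are purely illustrative):

const activeUsers = 400000;      // active users on the production system
const actionPeriodSec = 600;     // each user acts roughly once per 10 minutes
const requestsPerAction = 2;     // get channel list, then play channel

const usersPerSec = activeUsers / actionPeriodSec;   // ~667 users/sec
const targetRps = usersPerSec * requestsPerAction;   // ~1334 req/sec
console.log(usersPerSec.toFixed(0), targetRps.toFixed(0));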

micsjo (Collaborator) commented May 11, 2017

Your 1334 RPS test would not be accurate, since you would not be testing how users impact your system; you would be testing how one user impacts your system at 1334 RPS. The premise of testing for RPS and thinking it corresponds to users is wrong. It can be correct when you are testing parts of SOA architectures, individual endpoints, API endpoints and so on. There are a number of very valid test cases where RPS is a good requirement.

In user simulations of E2E (end-to-end) testing, however, it is not a good one.

In your E2E setup, RPS would be a poor requirement - it is one of many metrics in your test result.

Different purposes for different test goals.

On a complete sidebar, I have actively tested exactly your use case - viewers changing channels for Sweden's largest IP-TV provider - so I have seen the issues related to this.

mambusskruj commented May 11, 2017

@micsjo Thank you for your reply. Well, in fact we don't want one user making 1334 requests per second; we want 667 users making that number of requests per second together. So the logic is:

We create 10 VUs, and these 10 VUs make 1334 requests in total (each creating 2 req/sec) throughout the test. The idea is that each user executes the test group exactly once per second (rather than once per response time, or once per second plus response time).

micsjo (Collaborator) commented May 12, 2017

Then you'd be testing 10 users doing 1334 req/s in total.

The only way that tests your server capacity is if your channel change request implies that these all happen at the same time:

  • session establishment
  • authorization
  • authentication
  • channel change (if this is the actual action)
  • session termination

It might well be the case: if clients have an STB, they can be authenticated by MAC address. But I don't think any owner of intellectual property would allow such a lax method except in a closed network - not for an OTT service.

The theoretical extension of this would be that you need 400K VUs. That's not true. You could have a subset thereof establishing 400K sessions and then a reasonable subset (such as your 667) doing the channel flipping.

But I am making a ridiculous number of assumptions about the System Under Test here.

Establishing a meaningful model would require insight into the SUT, how it works, the actual requirements and how they are derived, in order to map them to a suitable test setup. So please consider this to be some input on how to set up tests - not the be-all and end-all of "the one true way".

aidylewis (Collaborator) commented Sep 1, 2017

A lot of k6 users will be bashing URLs with k6 and hitting backend APIs. Setting RPS is useful here, but users should be encouraged to calculate concurrent users for complex usage scenarios, and a link to @ragnarlonn's article may be useful: http://support.loadimpact.com/knowledgebase/articles/265461-calculating-the-number-of-virtual-users-concurren

cyberw commented Oct 25, 2017

Hi! New k6 user here. I just wanted to weigh in on this: this is a really important feature. Mainly to find changes in resource utilization when a change has been made in the system under test, but also to get more predictable long-running tests (e.g. if your system breaks after 10000 requests, you want that to happen at the same time every time, to help with analysis). Just about every load testing tool I've seen supports this (JMeter, Gatling, Locust, LoadRunner, etc.).

It is useful to be able to set both a global throughput ceiling and a per-VU ceiling, but the first one is more important, and the harder one to build yourself (using sleeps in your test or something).
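
A minimal sketch of that do-it-yourself approach, splitting a global budget evenly across a fixed VU count (the numbers, the URL, and the requirement that options.vus matches VUS are all illustrative assumptions):

import http from "k6/http";
import { sleep } from "k6";

const GLOBAL_RPS = 200;                      // desired overall ceiling
const VUS = 20;                              // must match options.vus below
const PER_VU_PERIOD_SEC = VUS / GLOBAL_RPS;  // each VU gets 1/VUS of the budget

export let options = { vus: VUS, duration: '1m' };

export default function() {
  http.get("http://test.loadimpact.com/");
  sleep(PER_VU_PERIOD_SEC);  // crude: ignores response time, so the actual
                             // rate lands somewhat below the ceiling
}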

cyberw commented Oct 25, 2017

Also, there are two things you might want to limit: the number of requests per second, or the number of iterations per second. I think the second one is more useful, but either one works.

aidylewis (Collaborator) commented Oct 25, 2017

Instead of iterating, shouldn't we be firing off requests per second and shutting them down, unless a connection pool is specified?

cyberw commented Oct 25, 2017

I didn't get that. How does any of this relate to connection pools?

Just to be clear, when I said "iterations" I meant whole-scenario iterations (calls to the default function).

aidylewis (Collaborator) commented Oct 25, 2017

So as far as I understand it, the VU iterates for a specified time range unless we explicitly specify iterations.

cyberw commented Oct 26, 2017

How does that relate to running a constant-throughput test? Are you talking about how to implement the feature, or about a workaround? I don't get it.

ragnarlonn commented Oct 26, 2017

@liclac How hard would this be to implement? Maybe we should up the prio on this, as several people want it and it seems a very useful feature from an API endpoint regression testing perspective (which is why so many tools have it, as @cyberw wrote).

ragnarlonn commented Oct 26, 2017

...or we could make it a bounty, maybe? @robingustafsson?

robingustafsson (Member) commented Oct 26, 2017

@ragnarlonn We could perhaps make it a bounty. @liclac would need to weigh in on the complexity of implementing the functionality and whether it's suitable as a bounty. In our priority list it's currently scheduled for December.

aidylewis (Collaborator) commented Oct 26, 2017

I believe we should be able to run requests per second (or maybe users per second, with one request per user). I don't think this can be implemented if we have a VU infinite loop that is stopped under certain conditions. However, I believe RPS is a terrible measurement for UI systems, RPS being a consequence of a system and not its cause. I think RPS is valid for APIs. As a side point, I think we also need to be able to control connections.

liclac (Collaborator) commented Oct 26, 2017

Iterations per second would be easy, because there's already a flow channel in core/local that controls execution rate. Requests per second would most easily be implemented by rate limiting js/modules/http/(*HTTP).request(), which is also fairly easy (you can store the rate limiter itself in js/common/*State). Someone just needs to do it, and then think of a way to explain the relationship between RPS and VUs to users.
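
For readers unfamiliar with the mechanism being described, here is the general shape of such a rate limiter, sketched in plain JavaScript purely for illustration (k6's actual limiter would live in its Go runtime; nothing below is k6 API):

// Each call reserves the next free time slot and returns how long the
// caller should wait before issuing its request.
function makeRateLimiter(ratePerSec) {
  const intervalMs = 1000 / ratePerSec;
  let nextFree = Date.now();
  return function reserve() {
    const now = Date.now();
    const startAt = Math.max(nextFree, now); // earliest slot for this request
    nextFree = startAt + intervalMs;         // push the window forward
    return startAt - now;                    // milliseconds to wait
  };
}

const reserve = makeRateLimiter(200); // cap request starts at 200/second
// before each request: wait reserve() ms, then fire the request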

ppcano (Member) commented Dec 20, 2017

@liclac @robingustafsson

I am not sure the rate limit in a59abc6 solves this issue.

I think users want to configure the test based on RPS instead of VUs. Here is another question about this on Stack Overflow.

export let options = {
  rps: 100,
  duration: '10s'
};

Correct me if I am wrong, but the current feature configures a limit to avoid exceeding X requests per second; it does not control the test load based on the requests-per-second configuration.

I am not sure if this is needed, but I suggest renaming the config option to rpsLimit to avoid confusion, and re-opening this issue.

ragnarlonn commented Dec 20, 2017

@ppcano I agree that the current limit may be better named rpsLimit. However, I have another take on reopening this issue: I think the current implementation, where rpsLimit limits the rate of each VU, is not at all as convenient as being able to limit the total RPS rate for the whole k6 instance. I would argue that this is what should be the highest priority to implement, because it should not be too hard and it would provide value to users.

Implementing an RPS target mode, however, where you specify an RPS rate instead of a number of VUs, is probably hard to do well. That is why we haven't bothered doing it so far. The reason it is hard is that k6 would have to figure out what level of concurrency to use to get the desired RPS rate, and that is very hard for a program to do. The naive implementation would be to start at a concurrency level of 1 VU, note the RPS rate, and then increase the concurrency (number of VUs) until you reach the desired RPS rate. This may work in some instances, but what if you're already at max RPS with 1 VU? What if you run out of CPU, memory or some other resource on the load generator machine? What if the target system cannot keep up when the number of connections becomes too high? There are tons of interrelated, moving parts that can affect the RPS rate in a load test. Vegeta has an RPS setting that tries to push through a certain number of requests/second, and in my experience it fails to get it right half of the time.

So I would argue that the first thing to build would be a better, global RPS rate limiter. If someone wants to experiment with RPS rate target functionality, great - it would be a super thing to have, but I expect it to be difficult to do well enough that it is actually usable.
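
To make those failure modes concrete, here is the naive approach described above as an illustrative sketch (entirely hypothetical; measureRpsAt stands in for "run a calibration burst at that concurrency" and is not a real function):

// Naive RPS-target controller: add VUs until the observed rate reaches
// the target. The break below is exactly where things get hard - once
// the target system (or the load generator) saturates, adding VUs no
// longer raises RPS and only piles on connections.
function naiveController(targetRps, measureRpsAt) {
  let vus = 1;
  let observed = measureRpsAt(vus);
  while (observed < targetRps) {
    vus += 1;
    const next = measureRpsAt(vus);
    if (next <= observed) break;  // saturated: more VUs won't help
    observed = next;
  }
  return { vus, observed };
}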

cyberw commented Dec 20, 2017

An RPS target rate with auto thread spawning (as opposed to an RPS limit) is a useless feature IMHO. Some tools have it, but what usually happens is that you reach the maximum throughput the system under test is capable of, and any additional threads just end up DoS-ing the system, which doesn't really give you any valuable information.

I guess it is a little convenient for the user not to have to think about setting the thread count, but that is a small gain...

ppcano (Member) commented Dec 20, 2017

@ragnarlonn I think a59abc6 implements a global RPS rate limiter.

Then we should document that, to control requests per second, the script has to specify rps (or rpsLimit) and a sufficiently high number of vus?

import http from "k6/http";

export let options = {
  rps: 10,
  vus: 200,
  duration: '20s'
};

export default function() {
  console.log(`VU: ${__VU}  -  ITER: ${__ITER}`);
  http.get("https://google.es");
}

This approach has worked for me so far ;)

http_reqs.............: 200    9.999879/s

Therefore, I wonder what the complexity would be if k6 skipped vus and stages when using rps and just continuously added new VUs based on the rps value. Just thinking out loud - does it make sense?

ragnarlonn commented Dec 20, 2017

@ppcano Oh, sorry. I just saw @robingustafsson's comment about it being per-VU, but missed @liclac's response where she said it was global. Well, that's great!

About the number of VUs to recommend when people want to use this limiter: yes, I would say a default recommendation of 100 or 200 VUs might be good. We don't want people to run out of memory or file descriptors, so it's best not to recommend too much concurrency (just enough to make sure network delay and server processing time don't set too low a boundary on the max RPS rate).
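
As a rough way to reason about "just enough" concurrency (an editorial aside, not something stated in the thread): by Little's law, sustaining a given rate needs at least rate × average response time concurrent VUs, plus headroom. A hypothetical sizing helper:

// Little's law: concurrency = arrival rate * time in system.
// The 2x headroom factor is an arbitrary illustrative choice.
function estimateVUs(targetRps, avgResponseTimeMs, headroom = 2) {
  const minConcurrency = targetRps * (avgResponseTimeMs / 1000);
  return Math.ceil(minConcurrency * headroom);
}

console.log(estimateVUs(100, 200)); // 100 rps * 0.2 s = 20 VUs minimum -> 40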

liclac (Collaborator) commented Dec 20, 2017

@ragnarlonn makes a good point about why RPS targets make no sense from a purely algorithmic perspective, but there's also a practical aspect to it.

Firstly, by nature, there's no way we can hit our RPS target at the start of the test; the RPS graph for the early part of the test would over- and undershoot it for several seconds until we find the sweet spot.

Second, k6 by design needs VUs preinstanced (with -m/--max), because that means we can precompute expensive calculations and improve runtime performance. The only way to feasibly do dynamic scaling without a preallocated pool of VUs would be to somehow make instantiation cheap, at the cost of runtime performance - and instantiating a new JS runtime will never be free, no matter what we do.

In other words, we could try to design an algorithm that dynamically scales the test in response to the current RPS, but it'd mean writing a lot of needlessly complicated code for something that would end up underwhelming. If we instead simply use an RPS limiter, a human can decide how many VUs their particular hardware, network and latency need to hit their target RPS, and k6 will conform to it to the best of its ability.

ppcano (Member) commented Dec 21, 2017

@liclac Thank you for the explanation.

What do you think about renaming rps to rpsLimit?

liclac (Collaborator) commented Dec 21, 2017

Why? That just seems like more to type for no good reason.

ppcano (Member) commented Dec 22, 2017

@liclac IMO, rpsLimit or rps-limit describes the option better. The name would avoid some confusion (I was confused myself).

I don't expect all users to read the option description:

limit requests per second

Users will find examples on the internet, assume the option is an RPS target, try it, and then realize something is not working.

marklagendijk referenced this issue Dec 22, 2017: SubmitForm method #437 (Closed)

danron commented Dec 29, 2017

@ppcano If you set the VUs and the rps, then rps becomes self-explanatory IMHO.

bofm commented Apr 14, 2018

Is it possible to rate-limit a non-HTTP load test?
