Quiz about best way to define SLAs is potentially ambiguous #10

Open
MattDodsonEnglish opened this issue Sep 16, 2022 · 5 comments

@MattDodsonEnglish
Contributor

"Setting load profiles" has a quiz question that asks about "The best way to define baseline metrics". Is this question too vague? Won't different SLAs have different key metrics? For example, if I wanted define an SLA on latency, wouldn't it also make sense to use ramping-vus with thresholds?

But even for one metric, how confidently can we say that one executor is "best" for it?

We're looking to establish some baseline metrics to define an SLA for an existing service. What is the quickest way to achieve this?

A: Use the shared-iterations executor to run through 1,000 requests.

B: Use the constant-arrival-rate executor to see how many virtual users it takes to maintain a constant 50 requests per second (RPS).

C: Use the externally-controlled executor to start k6 in server mode to have your sweet Bash script ramp up virtual users.
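
For concreteness, the kind of latency-oriented run I have in mind would look roughly like this. It's only a sketch: the URL, the stage shape, and the p(95)<500 target are placeholder values I made up, not anything from the course.

```javascript
import http from 'k6/http';

// Sketch only: placeholder URL and numbers.
export const options = {
  scenarios: {
    latency_baseline: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 50 }, // ramp up to 50 VUs
        { duration: '5m', target: 50 }, // hold steady
        { duration: '2m', target: 0 },  // ramp back down
      ],
    },
  },
  thresholds: {
    // Candidate latency SLO: 95% of requests finish in under 500ms.
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder target
}
```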

@javaducky
Contributor

javaducky commented Sep 16, 2022

This may be splitting hairs, but the question is about the quickest way to establish some baseline. I believe my thinking was that shared-iterations is the simplest (and, IIRC, the default) executor, and therefore the easiest one to get some kind of test going with, which makes it the quickest. I definitely didn't intend to imply it's the best executor for establishing SLAs for any particular metric.

Call it a "trick question" 😉
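
For context, the quick baseline script I had in mind is roughly the following. It's only a sketch, and the URL, VU count, and iteration count are arbitrary placeholders.

```javascript
import http from 'k6/http';

// Sketch only: arbitrary placeholder values.
export const options = {
  scenarios: {
    baseline: {
      // shared-iterations splits a fixed number of iterations across the VUs,
      // which makes it quick to script a first "get some numbers" run.
      executor: 'shared-iterations',
      vus: 10,
      iterations: 1000,
      maxDuration: '10m',
    },
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder target
}
```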

@MattDodsonEnglish
Contributor Author

Hmm, true, but either way we should be careful with absolutes: perhaps the quickest way is to ask your neighbor, or to query your server logs. And even if we keep it to k6, can we be sure that shared-iterations is the quickest executor? For which SLA metric?

I propose making a new question that's grounded in a specific problem. "SLA" is too big and general a concept.

@dgzlopes
Member

Just passing by 👀

Given that the people reading this will probably be users/devs who want to learn more about it, it may make sense to talk about SLOs instead. SLAs are very business-related, and the numbers you put in them are usually made up (in the sense that they're somewhat arbitrary numbers that you don't expect to break).
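
To make that concrete, an SLO-style objective maps fairly directly onto k6 thresholds, something like the following (the numeric targets are made up for illustration):

```javascript
// Sketch only: the numeric targets are illustrative, not recommendations.
export const options = {
  thresholds: {
    // SLO: 95% of requests complete in under 500ms.
    http_req_duration: ['p(95)<500'],
    // SLO: fewer than 1% of requests fail.
    http_req_failed: ['rate<0.01'],
  },
};
```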

@MattDodsonEnglish
Contributor Author

Yes, that's a good point. Anyway, shouldn't an SLA have a higher error budget than the true SLO? So I guess the tester is more interested in finding the SLO to set a baseline.

Since we're talking about Service level *s, in the context of k6 tests, is there any appreciable difference between a metric and an SLI?

@dgzlopes
Member

dgzlopes commented Nov 7, 2022

> Anyway, shouldn't an SLA have a higher error budget than the true SLO? So I guess the tester is more interested in finding the SLO to set a baseline.

Yup 👍

> Since we're talking about Service level *s, in the context of k6 tests, is there any appreciable difference between a metric and an SLI?

The metric is the indicator in our case, yeah 🤔
