
[RFE] Standalone measure subcommand #138

Closed
rsevilla87 opened this issue Oct 20, 2021 · 9 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

@rsevilla87
Member

rsevilla87 commented Oct 20, 2021

Is your feature request related to a problem? Please describe.
It would be nice to have a measure subcommand so that kube-burner can execute measurements (the currently available ones are podLatency and pprof) without actually running a benchmark.

Describe the solution you'd like
Adding a new measure subcommand to do something like:

kube-burner measure -c configFile --for 1h

This new feature would read the kube-burner configuration as usual and then execute the measurements defined there for the duration given by the "--for" flag.
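
A minimal sketch of how such a subcommand could be wired, assuming the CLI keeps using cobra; runMeasurements and everything apart from the proposed -c and --for flags are hypothetical placeholders, not kube-burner's actual code:

package main

import (
	"time"

	"github.com/spf13/cobra"
)

// runMeasurements is a placeholder for the real measurement runner: it would
// read the configuration, start the configured measurements (podLatency,
// pprof, ...), wait for the requested duration, then stop and index the results.
func runMeasurements(configFile string, d time.Duration) error {
	time.Sleep(d) // stand-in for "keep measuring for the requested time"
	return nil
}

func newMeasureCmd() *cobra.Command {
	var configFile string
	var duration time.Duration
	cmd := &cobra.Command{
		Use:   "measure",
		Short: "Run the configured measurements without executing a benchmark",
		RunE: func(cmd *cobra.Command, args []string) error {
			return runMeasurements(configFile, duration)
		},
	}
	cmd.Flags().StringVarP(&configFile, "config", "c", "", "kube-burner configuration file")
	cmd.Flags().DurationVar(&duration, "for", time.Hour, "how long to run the measurements")
	return cmd
}

func main() {
	root := &cobra.Command{Use: "kube-burner"}
	root.AddCommand(newMeasureCmd())
	if err := root.Execute(); err != nil {
		panic(err)
	}
}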

@rsevilla87 rsevilla87 added the enhancement New feature or request label Oct 20, 2021
@rsevilla87 rsevilla87 changed the title [RFE] Standalone measurement subcommand [RFE] Standalone measure subcommand Oct 20, 2021
@rsevilla87
Member Author

cc: @chaitanyaenr @paigerube14

@smalleni
Contributor

smalleni commented Jul 18, 2023

@rsevilla87 @shashank-boyapally @vishnuchalla I thought the purpose of this command was to set up measurements for the next time interval, not to grab measurements from the past. The way I am guessing this would be used is to run kube-burner measurements on a customer's environment for the next hour or so to identify the pod ready latency, etc.

@vishnuchalla
Collaborator

Currently, kube-burner listens to the pod creation event and records a creation timestamp in program memory using time.Now().UTC() (instead of setting this manually, we could also fetch the exact pod creationTimestamp using the client-go library). It then keeps watching the update events and captures a timestamp as each stage is updated, until the pod reaches the Ready state:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-07-15T23:45:46Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-07-17T19:27:07Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-07-17T19:27:07Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-07-15T23:45:46Z"
    status: "True"
    type: PodScheduled

Once it has all those timestamps, it calculates the latency of each step. This happens while the workload is running. Now that we know how these times are being fetched, we can tweak the current code to capture them for previously run pods, or for currently running pods as they transition between stages over time.

So instead of an option like --for 1h, which I believe targets future runs, we could have options like --start_time and --end_time to get these metrics for a workload that ran within that time range. That would also let us run measurements on previously run workloads, as long as their pods are still present, and it isolates the measurement calculation from a workload run.
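
For illustration, a rough sketch of the "measure from the past" idea under the assumption that the pods are still present: list pods with client-go and derive per-stage latencies from each condition's lastTransitionTime relative to the pod's creationTimestamp. The namespace is a made-up example and this is not kube-burner's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "my-benchmark-ns" is a hypothetical namespace holding the workload's pods.
	pods, err := client.CoreV1().Pods("my-benchmark-ns").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		created := pod.CreationTimestamp.Time
		for _, cond := range pod.Status.Conditions {
			// For PodScheduled, Initialized, ContainersReady and Ready, the
			// condition's lastTransitionTime minus the pod's creationTimestamp
			// approximates the latency of that stage.
			if cond.Status == corev1.ConditionTrue {
				fmt.Printf("%s %s latency: %v\n",
					pod.Name, cond.Type, cond.LastTransitionTime.Time.Sub(created))
			}
		}
	}
}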

@vishnuchalla
Collaborator

@Sai, @chaitanyaenr Please feel free to share your thoughts on this. Thank you!

@rsevilla87
Member Author

rsevilla87 commented Jul 18, 2023

(quoting @vishnuchalla's previous comment in full)

It's technically possible to measure the podLatency numbers from the past, but I personally don't see the point of it: it would require a lot of coding, and I think it's more reasonable to take the measurements from upcoming events, a change that shouldn't be very complex. In addition to the pod latency measurement, there is also the pprof measurement, and that one cannot be obtained from the past.
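
For context, since the pprof measurement samples live profiling data from running components, the data only exists while the collection happens. A rough, hypothetical illustration of such a live collection, with a made-up endpoint and output path, not kube-burner's actual code:

package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	// A CPU profile has to be sampled while the target process is running,
	// so it cannot be reconstructed after the fact. The endpoint below is a
	// hypothetical net/http/pprof endpoint; seconds=30 samples the next 30s.
	resp, err := http.Get("http://localhost:6060/debug/pprof/profile?seconds=30")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("cpu.pprof")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}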

@vishnuchalla
Collaborator

cc @shashank-boyapally
Sounds good to me. Assuming we calculate metrics for the future, we can start working on the implementation for collecting both podLatency and pprof in a standalone measure command, and we can think about extending it to other use cases later, only if required. Thank you!

@shashank-boyapally
Collaborator

Yes @vishnuchalla, we should be able to implement the metrics collection for future runs.

@smalleni
Contributor

(quoting @vishnuchalla's comment above)

I agree on the direction.

@github-actions

This issue has become stale and will be closed automatically within 7 days.

@github-actions github-actions bot added the stale Stale issue label Oct 17, 2023
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 25, 2023
@vishnuchalla vishnuchalla reopened this Oct 25, 2023
@github-actions github-actions bot removed the stale Stale issue label Oct 26, 2023