-
Summary from last discussion on some of the queries (a polling sketch for the first item follows this list):
Test with a valid input JSON to EM, check results using the experiment_results API (after the specified duration? / loop / poll for status?)
How do we pass the input JSON to EM?
Test with an invalid JSON (Is all error checking done at RM module itself? Are there any error checks at EM?)
What is the significance of app_version?
Test the REST API (experiment_results) for valid/invalid app_names (Is querying based on autotune id supported?)
Do we just check for non-blank/non-zero values for score, mean, etc.?
What is the acceptable spike range?
Values for "trial_result", "trial_result_info", "trial_result_error" are blank in the output JSON?
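A minimal polling sketch for the first item above, assuming the GET /experiment_result?application_name=<APP_NAME> endpoint quoted later in this thread; the base URL, application name, timeout, and interval are placeholders, not values from the EM spec.

```python
import time

import requests

EM_BASE_URL = "http://localhost:8080"   # placeholder; point at the EM service under test
APP_NAME = "petclinic-sample"           # hypothetical application name

def poll_experiment_results(app_name, timeout_s=600, interval_s=30):
    """Poll experiment_result until a non-empty result JSON is returned or the timeout expires."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(f"{EM_BASE_URL}/experiment_result",
                            params={"application_name": app_name})
        if resp.ok:
            try:
                body = resp.json()
            except ValueError:
                body = None   # result not ready / not JSON yet
            if body:
                return body
        time.sleep(interval_s)
    raise TimeoutError(f"No experiment result for {app_name} within {timeout_s}s")

print(poll_experiment_results(APP_NAME))
```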
-
Summary from today's design discussion:
Post invalid Prometheus queries for some of the tunables (a sketch follows this list)
Abort the data source (for one of the tunables / for multiple tunables / all tunables) from which metrics need to be gathered; what is the behaviour? Are there retries with a timeout?
Input JSON with memory values other than Mi
Specify different data sources for different tunables
Move all test-related technical discussions in squad-issues to the Autotune project
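A rough sketch of the "invalid Prometheus query" case: take a valid trial JSON and corrupt the query for a single tunable before sending it to EM. The field names here (metrics, datasource, query) are illustrative assumptions, not the exact EM input schema.

```python
import copy
import json

# Illustrative trial fragment; field names and values are assumptions, not the exact EM schema.
base_trial = {
    "application_name": "petclinic-sample",
    "metrics": [
        {"name": "memoryRequest",
         "datasource": "prometheus",
         "query": 'container_memory_working_set_bytes{container="petclinic"}'},
        {"name": "cpuRequest",
         "datasource": "prometheus",
         "query": 'rate(container_cpu_usage_seconds_total{container="petclinic"}[1m])'},
    ],
}

# Negative case: corrupt the Prometheus query for one tunable and observe how EM reacts.
bad_trial = copy.deepcopy(base_trial)
bad_trial["metrics"][0]["query"] = "not_a_valid_promql_metric{{{"

print(json.dumps(bad_trial, indent=2))
```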
-
EM API doc - https://github.com/kruize/autotune/blob/b1292a974d376bad41cff15766285ca15ba4766b/design/API.md
New deployment / rolling update validation
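For the new deployment / rolling update validation, one option is to read the Deployment object after EM applies the trial and check that a new generation has rolled out and is available. This is a sketch using the Kubernetes Python client; the namespace and deployment name are placeholders.

```python
from kubernetes import client, config

# Placeholders; the actual namespace and deployment name depend on the trial under test.
NAMESPACE = "default"
DEPLOYMENT = "petclinic-sample"

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)

# A rolling update bumps the generation/observedGeneration and should keep replicas available.
print("generation:", dep.metadata.generation)
print("observedGeneration:", dep.status.observed_generation)
print("updatedReplicas:", dep.status.updated_replicas)
print("availableReplicas:", dep.status.available_replicas)
```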
-
The input JSON to the Experiment Manager specifies the experiment trial details. The Experiment Manager creates a new deployment for the specified application with the updated config & env parameters.
The output JSON is returned from the Experiment Manager; the REST API to query for it:
GET /experiment_result?application_name=<APP_NAME>
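A hedged sketch of that round trip: post the trial JSON to EM, then query the result. Only the GET query string matches what is quoted above; the POST path, field names, and values are illustrative assumptions, and the real schema is in the EM API doc linked earlier.

```python
import requests

EM_BASE_URL = "http://localhost:8080"   # placeholder for the EM service

# Illustrative input JSON; field names/values are assumptions, see the linked API doc for the real schema.
trial_input = {
    "application_name": "petclinic-sample",
    "app_version": "v1",
    "trial_number": 1,
    "config": {"requests": {"cpu": "500m", "memory": "512Mi"},
               "limits":   {"cpu": "1",    "memory": "1Gi"}},
    "env": {"JAVA_OPTS": "-XX:MaxRAMPercentage=70"},
}

# Hypothetical POST path; substitute the endpoint documented in the EM API doc.
requests.post(f"{EM_BASE_URL}/createExperimentTrial", json=trial_input)

# The GET below matches the query string quoted in this comment.
resp = requests.get(f"{EM_BASE_URL}/experiment_result",
                    params={"application_name": trial_input["application_name"]})
print(resp.json())
```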
Test Scenarios:
Test with a valid input JSON to EM, check results using experiment_results API (after the specified duration? / loop / poll for status?)
Test with an invalid JSON (Is all error checking done at RM module itself? Are there any error checks at EM?)
Validate that the deployment configuration has been updated with the specified config and env (see the validation sketch after this list)
Specify another input JSON with the same id/trial number and the same config
Specify another input JSON with the same id/trial number/deployment but with a change in metrics (a new tunable included/deleted)
Specify another input JSON with the same id/trial number/deployment but with a change in env
Specify another input JSON with the trial measurement run value greater than the run value
Test with multiple input JSONS (experiments) with different configs (How many experiments can be queued?)
Specify CPU and Memory request values greater than the actual available resources and check the behaviour
If CPU/Mem Req > CPU/Mem limits, what is the behavior?
What is the significance of app_version?
Test the REST API (experiment_results) for valid/invalid app_names (Is querying based on autotune id supported?)
Validate the autotune id, trial details against the input JSON
Validate the output JSON returned and check it has returned values for all the metrics passed in the input JSON
Do we just check for non-blank values for score, mean, etc.?
Is there a way to simulate interrupting the experiment in between, say while it is running the load?
Query the EM multiple times and see if the output JSON values vary (check for consistency of results)
Post invalid Prometheus queries for some of the tunables
Abort the data source (for one of the tunables / for multiple tunables / all tunables) from which metrics need to be gathered; what is the behaviour? Are there retries with a timeout?
Input JSON with memory values other than Mi
Specify different data sources for different tunables
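For the "validate the deployment configuration" scenario above, a check along these lines could compare the live Deployment spec against the config and env sent in the trial JSON. The namespace, deployment, container name, and expected values are placeholders taken from the illustrative input above, not from the EM spec.

```python
from kubernetes import client, config

NAMESPACE = "default"            # placeholder
DEPLOYMENT = "petclinic-sample"  # placeholder
CONTAINER = "petclinic"          # placeholder

# Expected values: whatever was sent in the trial input JSON (illustrative here).
expected_requests = {"cpu": "500m", "memory": "512Mi"}
expected_env = {"JAVA_OPTS": "-XX:MaxRAMPercentage=70"}

config.load_kube_config()
dep = client.AppsV1Api().read_namespaced_deployment(DEPLOYMENT, NAMESPACE)

# Locate the application container in the pod template and compare resources/env.
container = next(c for c in dep.spec.template.spec.containers if c.name == CONTAINER)

assert container.resources.requests == expected_requests, container.resources.requests
actual_env = {e.name: e.value for e in (container.env or [])}
for key, value in expected_env.items():
    assert actual_env.get(key) == value, f"{key}={actual_env.get(key)}"

print("Deployment config and env match the trial input JSON")
```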