Set up a system for automatically running load tests after deploy to YT
Description
We want to track the performance of our solution and be able to quickly identify whether changes to our code or new features degrade the platform's performance.
One way to do this is to have benchmark load tests that can be run with each change / deploy to YT.
Consideration
Test results from a single run give us little insight. We will need somewhere to persist and present the test results so we can easily identify if and when the performance changes.
Should the number of replicas match what we are running in production at the time, or be scaled up to a maximum? Either way, what is the best way to scale the environment up and down with minimal effort?
Tasks
Identify relevant tests for the platform components' test suites
Set up a pipeline (DevOps or GitHub) to trigger relevant tests
Set up dashboard to present test results
Automatically run the RF-0002 tests after deploy to YT01
Automatically scale up before the test run and scale down afterwards
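One way to wire up the tasks above is a GitHub Actions workflow chained to the deploy workflow. This is only a sketch: the deploy workflow name ("Deploy to YT01"), the `scripts/scale.sh` helper, and the k6 test file path are all hypothetical placeholders, not existing names in the repository.

```yaml
name: Load tests after deploy to YT01

on:
  workflow_run:
    # Hypothetical name of the existing deploy workflow
    workflows: ["Deploy to YT01"]
    types: [completed]

jobs:
  load-test:
    # Only run the tests if the deploy itself succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Scale up test environment
        # Hypothetical helper script wrapping the actual scaling mechanism
        run: ./scripts/scale.sh up

      - name: Run RF-0002 load tests
        # Assumes the RF-0002 tests are k6 scripts checked into the repo
        run: docker run --rm -v "$PWD/tests:/tests" grafana/k6 run /tests/rf-0002.js

      - name: Scale down test environment
        # always() ensures we scale back down even if the tests fail
        if: always()
        run: ./scripts/scale.sh down
```

Note that `workflow_run` triggers only fire for workflows on the default branch, and the `if: always()` condition on the scale-down step keeps the environment from being left scaled up after a failed run.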
Acceptance criteria
Issues are created describing what tests are missing for the test suite to be complete
There is a dashboard available to the team showing the performance of the solution over time