Aggregate and sub-aggregate tests dashboard #16
Additional discussions with @llxia and @adamfarley have led me to think about this more... jotting down the ideas here: we want to be able to provide both a highly granular AND an aggregate view, of more than just the external tests; this approach can and should be applied to all test types. Thanks to @adamfarley for this picture:

Putting some thought to what data gets us that: that is the granular seed data that then allows us to build up an aggregate view, displaying lastPass + lastFail + lastExcluded = lastTotal. Or you can filter by platform, by impl, by version, or by testGroup to show the same type of info, but for jdk8, or for openj9, or for xLinux, or combinations of those filters.
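As a sketch of the filtering idea above: the same granular seed records can be rolled up under any combination of filters (by platform, by impl, by version, by testGroup), with lastPass + lastFail + lastExcluded summing to lastTotal. All field names and sample values here are hypothetical, not TRSS's actual schema.

```javascript
// Hypothetical granular seed records, one per platform/impl/version/testGroup
// combination, each carrying its last-run counts.
const records = [
  { platform: "xLinux", impl: "openj9", jdkVersion: 8, testGroup: "openjdk",
    lastPass: 120, lastFail: 3, lastExcluded: 7 },
  { platform: "xLinux", impl: "hotspot", jdkVersion: 8, testGroup: "openjdk",
    lastPass: 118, lastFail: 5, lastExcluded: 7 },
  { platform: "win", impl: "openj9", jdkVersion: 11, testGroup: "system",
    lastPass: 40, lastFail: 0, lastExcluded: 2 },
];

// Aggregate all records matching an arbitrary set of filters, so one function
// serves "all of jdk8", "all of openj9", "xLinux + openj9", and so on.
function aggregate(recs, filters = {}) {
  const matched = recs.filter(r =>
    Object.entries(filters).every(([k, v]) => r[k] === v));
  return matched.reduce((acc, r) => ({
    lastPass: acc.lastPass + r.lastPass,
    lastFail: acc.lastFail + r.lastFail,
    lastExcluded: acc.lastExcluded + r.lastExcluded,
    lastTotal: acc.lastTotal + r.lastPass + r.lastFail + r.lastExcluded,
  }), { lastPass: 0, lastFail: 0, lastExcluded: 0, lastTotal: 0 });
}

const jdk8 = aggregate(records, { jdkVersion: 8 }); // rolls up both jdk8 rows
```

The same call with `{ platform: "xLinux", impl: "openj9" }` (or no filters at all, for the top-level view) reuses the identical code path, which is what makes the granular seed data sufficient for every aggregate view.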
To achieve the aggregate view in the image above, here's a task breakdown of what needs to be done (WIP). If something has already been done, or is being done right now, a link to the relevant issue should be added next to the task.

Note: whenever we "retrieve" data, it should be processed and stored in the database, and the GUI should retrieve it from there at runtime. None of the automated steps below should run every time the user loads the widget.

Phase 1: Pass/Fail for OpenJDK Regression Testing on xLinux (Numbers only)
Phase 2: Expand functionality to include parameters and aggregate data
Phase 3: Adding other test types
Phase 4: Add GUI and Depth
Phase 5: Exclusions
For identifying which tests have been excluded for a given run, either:
Phase 6: Test Plan
Phase 7: Advanced exclusions
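Following the note above that data should be processed once, stored in the database, and only read by the GUI at runtime, a Phase 1 "numbers only" summary might be precomputed by a background job roughly like this. All field names are hypothetical, not an actual TRSS schema.

```javascript
// Sketch (assumptions): build one precomputed summary document per
// platform/testGroup from raw per-run Jenkins results, so the GUI widget
// only ever reads the stored document and never re-crunches raw data.
function buildSummary(runs) {
  const summary = {
    testGroup: "openjdk",   // hypothetical labels for the Phase 1 scope
    platform: "xLinux",
    pass: 0,
    fail: 0,
    updatedAt: new Date().toISOString(), // lets the GUI show data freshness
  };
  for (const run of runs) {
    summary.pass += run.pass;
    summary.fail += run.fail;
  }
  return summary;
}

// Raw per-run counts as they might be gathered from Jenkins.
const summary = buildSummary([{ pass: 10, fail: 1 }, { pass: 5, fail: 0 }]);
```

The job would upsert this document into the database on a schedule (or after each build), keeping the widget's load path to a single cheap read.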
Initial thoughts upon reading the comment above: to make sure it is clear, this test-tools repo is not intended to be a rewrite of Axxon. We need to assess what is truly needed, not what was once used (and which, in the past, put us into a state of technical debt). Some of the list above (disturbingly) implies the adoption of a heavier process. Stripping down our processes (and subsequently our tools) allows us to move quickly and avoid owning/maintaining tools that go rapidly stale.

That being said, I would like to introduce features to this test-tools repo from an MVP perspective: what is the minimal amount of information we can gather to better the tasks we need to do? So, for starters, a clear listing of what we need to do.

Needed: an aggregate view of test results. The above list is well beyond what is needed to accomplish an MVP view of aggregate test results. I have strong opinions on what functionality we should support, and will share more upon my return later this week.
The Action Plan above was based on two principles:
The combination of these functionalities into a single widget was intended to meet the requirements of the profile in my use case, which is:

User: managers, team leaders, and others seeking a high-level quality overview leading up to a release.

I broke these requirements down into phases, and broke those down into the functionalities I thought we'd need to complete those phases. Requirement 1: Phases 1-4.

Also note that the phases are phases: MVP is Phase 1, then we go for the MVP of the next phase. Since this is all proving very wordy, it sounds like it'd be good to get Shelley and Lan Xia into a room (metaphorically) to chew through this. I will readily admit that they know far more about TRSS than I do, and it'd be good to either improve the action plan or replace it with a better plan.
Just leaving this post #37 (comment) here in case it might be of some help.
This epic will get split into 2 high-level aspects (separating server & client work), each of which can be further broken down into consumable/actionable tasks.

Server
Client
The API definition and implementation are the most important pieces, as the client/UI can and will change depending on which 3rd-party chartware we select. The initial MVP implementation can be a headless listing of totals that focuses on data we already have (Pass, Fail, Skipped). Excludes info will need to be tackled in a second phase: we do not currently collect and share this info via Jenkins jobs, the different types of tests handle excludes in very different ways, and many changes to other tools and repos will be needed to gather the information required to include in Jenkins job reports. Any further details can be discussed in the child issues of this epic.

Reminder:
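As a sketch of the API-first approach described above, the headless MVP response could be a plain listing of totals built from data we already have (Pass, Fail, Skipped). The endpoint path and field names below are hypothetical, not the actual TRSS server API.

```javascript
// Sketch (assumptions): shape the server's aggregate-totals response,
// e.g. for a hypothetical GET /api/aggregateTotals?platform=xLinux.
// Keeping the API independent of the UI means the chartware can change
// later without touching this contract.
function totalsResponse(rows) {
  return {
    totals: rows.map(r => ({
      testGroup: r.testGroup,
      pass: r.pass,
      fail: r.fail,
      skipped: r.skipped,
      // Excludes are deliberately absent: that data is deferred to phase 2.
      total: r.pass + r.fail + r.skipped,
    })),
  };
}

const resp = totalsResponse([
  { testGroup: "openjdk", pass: 100, fail: 2, skipped: 5 },
  { testGroup: "system", pass: 30, fail: 1, skipped: 0 },
]);
```

A headless client (or a quick curl check) can consume this directly, which makes the API testable before any chartware is chosen.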
Nice enhancement idea from Martijn:
Use the external tests to track Java version support amongst popular libraries and frameworks so we can also identify which ones need help.
Display a name-of-project-java-version-tested-against-<pass|fail> matrix such as this (with hyperlinks to actual builds):
application     status  jdkversion  implementation  platform
scala-jdk8_j9   pass    8           j9              x64_linux_docker
scala-jdk9_hs   pass    etc...
where we have columns for application, jdkversion, implementation, platform, and status (so the rows can be sorted by any of them), covering all apps (elasticsearch, wildfly, etc.) for jdk8, 9, 10, and 11.
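The sortable matrix described above can be kept as flat row objects so the UI can order them by any column. The row data and `sortBy` helper here are illustrative, not part of any existing tool.

```javascript
// Hypothetical matrix rows: one per application/version/impl/platform combo,
// with a pass/fail status (each would hyperlink to its actual build).
const rows = [
  { application: "scala", status: "pass", jdkversion: 9,
    implementation: "hs", platform: "x64_linux_docker" },
  { application: "scala", status: "pass", jdkversion: 8,
    implementation: "j9", platform: "x64_linux_docker" },
];

// Generic single-column sort, so every column header can be clickable.
// Copies the array first so the original row order is preserved.
function sortBy(data, column) {
  return [...data].sort((a, b) =>
    a[column] < b[column] ? -1 : a[column] > b[column] ? 1 : 0);
}

const byVersion = sortBy(rows, "jdkversion"); // jdk8 row first
```

Because every row carries the same flat set of keys, the same helper sorts by application, implementation, platform, or status without extra code.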