# Roadmap

The roadmap is a tentative plan for the core development team. Priorities shift constantly as pull requests and paid engagements come in, but it should give you an idea of our current vision and plan.

From the very beginning, ReportPortal was built from scratch as a tool to accelerate reporting for test automation. We started to use it at scale with hundreds of clients and dozens of platforms and frameworks, which led ReportPortal to the position of a standardized reporting approach across projects and organizations. Once more and more teams and projects started to aggregate test automation results in one centralized storage, they started to see the actual status of the entire regression suite for all types and levels of tests. But seeing only passed and failed reports was never enough, so we introduced failure categorization (known as Product Bugs, Automation Issues, and System Issues), which gave test leads and managers a clear picture of the actual value of automated testing. Product bugs found by tests are the value of test automation on a project.

For the next stage, since we had more and more testing results that had to be triaged by the team, we started to look for options to minimize the human effort required to work with failed reports. That was the time for the ML-based Auto-Analysis feature to be created: it uses historical data on human decisions about previous failures to draw a conclusion for newly arrived reports. Along with all this data, we added the opportunity to represent it all in a graphical view, with widgets and dashboards, to make it possible to focus on insights at a glance.
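
The production analyzer is, of course, much more involved than this, but a minimal sketch of the idea, assuming a triaged history of (log message, verdict) pairs and using nothing but plain string similarity, could look like the following (all data and names are illustrative, not the real API):

```python
from difflib import SequenceMatcher
from typing import Optional

# Historical failures already triaged by a human: (log message, verdict).
# Verdicts mirror ReportPortal's defect types. Data here is made up.
HISTORY = [
    ("AssertionError: expected 200 but got 500", "Product Bug"),
    ("NoSuchElementException: locator #submit-btn not found", "Automation Issue"),
    ("ConnectionError: test environment unreachable", "System Issue"),
]

def auto_analyze(log: str, threshold: float = 0.6) -> Optional[str]:
    """Return the verdict of the most similar historical failure,
    or None if nothing is similar enough to decide automatically."""
    best_verdict, best_score = None, 0.0
    for past_log, verdict in HISTORY:
        score = SequenceMatcher(None, log, past_log).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    return best_verdict if best_score >= threshold else None

print(auto_analyze("AssertionError: expected 200 but got 503"))  # Product Bug
```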

These days ReportPortal has become a recognized standard for thousands of teams and organizations, from small start-ups to Fortune 100 companies. And we invest heavily in the technical capabilities that make it a flexible, scalable, reliable, and integral tool in a project's CI/CD pipeline.

For the last 4 years the roadmap has been driven by the community and clients, and based on the collected feedback and requests we have been looking for options to set the direction of product growth. The number one ask from the first days was the capability to do the same for manual test cases. Along with this ask, we started to think about how we could bring more value to the reports and analytics of test automation results, which brought us to the idea of the Test Library: a set of unique tests collected from test automation executions. This unique set might be a perfect prism through which to understand the actual status of your application: how a particular test case behaves on different environments, what the overall status of all executed tests is against a particular build, and so on. Along with a Test Library of automated test cases, it is pretty easy to add manual test cases, since they are the same.
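
To make the idea concrete, here is a minimal sketch of what such a library could be: unique tests collected from execution reports, keyed by a stable identity, with their history kept per environment. The field names are hypothetical, not ReportPortal's actual data model:

```python
from collections import defaultdict

# Each execution report carries a stable test identity plus context.
reports = [
    {"test": "checkout.test_pay_by_card", "env": "staging", "build": "1.4.2", "status": "failed"},
    {"test": "checkout.test_pay_by_card", "env": "prod-like", "build": "1.4.2", "status": "passed"},
    {"test": "login.test_sso", "env": "staging", "build": "1.4.2", "status": "passed"},
    {"test": "checkout.test_pay_by_card", "env": "staging", "build": "1.4.1", "status": "passed"},
]

# The Test Library: one entry per unique test, with history per environment.
library = defaultdict(lambda: defaultdict(list))
for r in reports:
    library[r["test"]][r["env"]].append((r["build"], r["status"]))

for test, envs in library.items():
    print(test, dict(envs))
```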

A test case by itself is a fundamental object for testing. It has steps (a script) to complete and a verification at the very end. It has statuses such as Passed, Failed, Skipped, or Untested (not yet tested). The only difference between a manual and an automated test case is who executes it, who conducts the steps in the script. Thus, a test case has a characteristic (or parameter) which can identify its type as manual or automated.
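
In code, such a model could be sketched roughly like this (names and fields are illustrative, not a committed design):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PASSED = "passed"
    FAILED = "failed"
    SKIPPED = "skipped"
    UNTESTED = "untested"  # not yet tested

class Kind(Enum):
    MANUAL = "manual"
    AUTOMATED = "automated"

@dataclass
class TestCase:
    name: str
    steps: list[str]      # the script to complete
    verification: str     # the check at the very end
    kind: Kind            # the only structural difference: manual vs automated
    status: Status = Status.UNTESTED

smoke = TestCase(
    name="User can log in",
    steps=["Open login page", "Enter valid credentials", "Submit"],
    verification="Dashboard is displayed",
    kind=Kind.MANUAL,
)
```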

Once you have the entire library of your test cases available, you can start to look differently at an application build and its passing rate. It is not just the percentage of failed reports in your latest execution; now it is an aggregated status of the set of unique test cases which have been executed against a specific build on various environments, with all possible combinations of re-tries, re-executions, and manual re-checks. Spiced on top with the attributes of tests, which associate them with critical features or specific components, you now have everything you need to take a GO or NO GO decision for your build from the testing perspective.
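
A minimal sketch of this aggregation, under the assumption that the latest result for a test case wins over its earlier re-tries (the data shape below is illustrative):

```python
executions = [  # (test id, timestamp, status) for one build
    ("checkout.test_pay_by_card", 1, "failed"),
    ("checkout.test_pay_by_card", 2, "passed"),  # re-try fixed it
    ("login.test_sso", 1, "passed"),
    ("search.test_filters", 1, "failed"),
]

# Keep only the latest result per unique test case, so re-tries,
# re-executions and manual re-checks override earlier outcomes.
latest = {}
for test, ts, status in executions:
    if test not in latest or ts > latest[test][0]:
        latest[test] = (ts, status)

passed = sum(1 for ts, status in latest.values() if status == "passed")
print(f"{passed}/{len(latest)} unique test cases passed "
      f"({passed / len(latest):.0%})")  # 2/3 unique test cases passed (67%)
```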

Having all this information in one place, there is an option to create a Quality Gate based on rules and conditions. Like: the job should fail if it has more than 2% of failed test cases. Easy? Yes. But what if the job should now fail only if there are test cases with priority:critical within those 2% of failed reports? Huh? Or if there are new, unknown, and unrecognized issues? This gives the ultimate possibility for testing pipeline automation and a huge leap forward toward Continuous Testing on your project.
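
As a rough illustration (the rule set and field names here are hypothetical, not a real Quality Gate configuration), such gate conditions could be expressed like this:

```python
results = [
    {"test": "checkout.test_pay_by_card", "status": "failed",
     "priority": "critical", "defect": "Product Bug"},
    {"test": "login.test_sso", "status": "passed",
     "priority": "critical", "defect": None},
    {"test": "search.test_filters", "status": "failed",
     "priority": "minor", "defect": "To Investigate"},  # not yet triaged
]

failed = [r for r in results if r["status"] == "failed"]
fail_rate = len(failed) / len(results)

# Rule 1: fail the job on more than 2% of failed test cases.
gate_simple = fail_rate > 0.02
# Rule 2: ...but only if critical test cases are among the failures.
gate_critical = gate_simple and any(r["priority"] == "critical" for r in failed)
# Rule 3: or if there are new, not-yet-recognized issues.
gate_untriaged = any(r["defect"] == "To Investigate" for r in failed)

print("NO GO" if gate_critical or gate_untriaged else "GO")
```

The point of rules 2 and 3 is that the gate reasons about test attributes and defect types, not just the raw failure percentage.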

There are still a lot of things to do. Please don't hesitate to contribute, or support us with paid subscriptions and services.