This repository has been archived by the owner on Jan 29, 2019. It is now read-only.
A feature I miss in relation to automated tools is reliability benchmarking. There are big differences in reliability between automated tools, so knowing how many tests a tool runs and how reliable its findings are can be important. When a tool is used to monitor large numbers of web pages, it matters most that its results are reliable. When you are developing a website, however, it is more important that the tool reports as many potential issues as it can find and lets the developer work out which are real issues and which are false positives.
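The tradeoff described above can be made concrete with the standard precision/recall measures: a monitoring tool wants high precision (few false positives), while a development tool can favour recall (few missed issues). Below is a minimal sketch of how a reliability benchmark could score a tool against a set of test cases with known issues; the issue names and tool results are invented for illustration and are not part of any real benchmark.

```python
def reliability(expected, reported):
    """Score a tool's reported issues against a benchmark of known issues.

    Returns (precision, recall): precision is the share of reported
    issues that are real; recall is the share of real issues found.
    """
    expected, reported = set(expected), set(reported)
    true_positives = expected & reported
    precision = len(true_positives) / len(reported) if reported else 1.0
    recall = len(true_positives) / len(expected) if expected else 1.0
    return precision, recall


# Hypothetical benchmark: issues known to exist on a test page...
expected = {"img-missing-alt", "empty-heading", "low-contrast"}
# ...and what an imaginary tool reported (one miss, one false positive).
reported = {"img-missing-alt", "empty-heading", "duplicate-id"}

precision, recall = reliability(expected, reported)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

A monitoring-oriented benchmark would weight precision more heavily; a developer-oriented one would weight recall.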
No change. We agree that this is an important aspect, but the only way to benchmark reliability would be a shared set of test cases, combined with a test case description language.
Reference
http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0001