This repository has been archived by the owner on Jan 29, 2019. It is now read-only.

New feature - Reliability benchmarking (Wilco Fiers) #2

Closed
darthcav opened this issue Aug 19, 2014 · 2 comments

Comments

darthcav commented Aug 19, 2014

Reference

http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0001

Original Comment

A feature I miss in relation to automated tools is reliability benchmarking. There are big differences in reliability between automated tools. Knowing how many tests a tool runs and how reliable its findings are can be important. For a tool that monitors large numbers of web pages, reliable results matter most. But when you are developing a website, it is more important that a tool reports as many potential issues as it can find and lets the developer decide which are real issues and which are false positives.

@darthcav darthcav changed the title Reliability benchmarking Reliability benchmarking (Wilco Fiers) Aug 27, 2014
@nitedog nitedog changed the title Reliability benchmarking (Wilco Fiers) New feature - Reliability benchmarking (Wilco Fiers) Sep 1, 2014
nitedog (Contributor) commented Sep 1, 2014

Proposed Resolution

Accept - See also Issue #38

darthcav (Author) commented
Resolution

No change. We agree that this is an important aspect, but the only practical way to benchmark reliability is a set of test cases combined with a test case description language.
