
[Bug] Automated tests confusion #27

@CBenghi

Description


Hello all,

I've seen the thread at #22, but I'm not convinced by the presentation.

Is your feature request related to a problem? Please describe.
Tests are supposed to support deployment decisions with clear binary pass/fail information; the current setup seems unclear in that sense.

Taken at face value, the automated tests seem to fail.

At the moment your text says, for instance, "should be identified as non-clickbait/green", but then all of the links are red.

Describe the solution you'd like
Accepted classification errors should be clarified: if a headline returns "red" and "red" is acceptable, that acceptance should be clearly stated in the document.
Otherwise the output should be considered a test failure.

As a minimal intervention I'd advise splitting the test file so that you have two sets:

  1. tests that are known to pass (you would still have some text for each category, but clearly marked and grouped according to your expectations)
  2. a second set of tests that serves your target aspirations and helps improve the detection (tolerable failures, if you will)

In this scenario you know you have a regression as soon as even one test in the first set fails.
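For illustration, a minimal pytest-style sketch of that split; `classify_headline`, the `detector` module, and the TSV file names are hypothetical placeholders, not your actual API:

```python
# Sketch only: classify_headline() is assumed to return "green"
# (non-clickbait) or "red" (clickbait); adapt names to the real project.
import pytest

from detector import classify_headline  # hypothetical import


def load_cases(path):
    """Each line: expected label, a tab, then the headline text."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            expected, headline = line.rstrip("\n").split("\t", 1)
            yield expected, headline


# Set 1: known-good cases -- any failure here is a regression.
@pytest.mark.parametrize("expected,headline", load_cases("tests/known_good.tsv"))
def test_regressions(expected, headline):
    assert classify_headline(headline) == expected


# Set 2: aspirational cases -- failures are tolerated (xfail), but
# cases that start passing show up as XPASS and can be promoted to set 1.
@pytest.mark.parametrize("expected,headline", load_cases("tests/aspirational.tsv"))
@pytest.mark.xfail(strict=False, reason="target aspiration, not yet required")
def test_aspirations(expected, headline):
    assert classify_headline(headline) == expected
```

With `strict=False`, an aspirational case that begins to pass is reported rather than treated as an error, which gives you a natural promotion path into the first set.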

Describe alternatives you've considered
Alternatively you might want to rethink the test harness so that it operates on threshold levels.

E.g. 95% of the headlines expected to be green are classified green, and so on.

This would be a more complete approach. While it might need a more formalised test harness to count the headlines and outcomes, it should not take long to implement.
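As a rough sketch of what that could look like, again assuming the same hypothetical `classify_headline` and TSV file layout as above:

```python
# Sketch of a threshold-based check; function, module, and file names
# are placeholders, and 0.95 is just the example threshold from above.
from detector import classify_headline  # hypothetical import


def load_cases(path):
    """Each line: expected label, a tab, then the headline text."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            expected, headline = line.rstrip("\n").split("\t", 1)
            yield expected, headline


def pass_rate(path):
    """Fraction of cases whose predicted label matches the expected one."""
    cases = list(load_cases(path))
    hits = sum(1 for expected, headline in cases
               if classify_headline(headline) == expected)
    return hits / len(cases)


def test_green_threshold():
    # at least 95% of headlines expected to be green must come back green
    assert pass_rate("tests/expected_green.tsv") >= 0.95


def test_red_threshold():
    assert pass_rate("tests/expected_red.tsv") >= 0.95
```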
Let me know if you need more help on this front.
