Conversation

@firewave
Collaborator

No description provided.

@firewave firewave marked this pull request as draft November 22, 2024 16:49
@firewave firewave marked this pull request as ready for review November 22, 2024 17:46
Owner

@danmar danmar left a comment


I am not sure if this shouldn't be tested in the testrunner instead.

Isn't there a "danger" that we will fix one of these tickets later and not see the test you added here? It would be more or less stale then. Or we might want to tweak the test somehow, for instance by moving it to testrunner.

@firewave
Collaborator Author

firewave commented Dec 5, 2024

> I am not sure if this shouldn't be tested in the testrunner instead.

These are blackbox tests which show the overall issues the user experiences. The issues are multi-layered and need fixes in lots of places; each part that has been resolved will get individual unit tests.

@danmar
Owner

danmar commented Dec 5, 2024

I still feel a bit skeptical about adding tests for unfixed tickets... it seems better to add tests when the tickets are fixed.

> The issues are multi-layered

ok. I don't know off the top of my head what we need to do exactly. I trust you.

> which need fixes in lots of places which will get individual unit tests for each part which has been resolved.

sounds good.

@firewave
Collaborator Author

firewave commented Dec 5, 2024

> I still feel a bit skeptical about adding tests for unfixed tickets... it seems better to add tests when the tickets are fixed.

You mean like the 200 TODOs we have in the unit tests? 😉

It is just a dozen in the Python code, and I am working semi-actively on most of them; they are just not easy to fix.

It is also good to have the coverage already, to show more cases to consider while working on the code. If I have a related case I try to add most mutations I can think of to see what is going on. Some of this is now done implicitly by the CI, so there are fewer tests to write and only overrides to add when they fail.

@danmar
Owner

danmar commented Dec 6, 2024

> You mean like the 200 TODOs we have in the unit tests?

The difference is that we don't have tickets for those. So if I were to fix a TODO I would ensure the test is working. What happens if we fix the tickets later and don't see your tests?

Sometimes this happens for me:

  1. I start working on a ticket.
  2. I make a test that fails.
  3. I fix the problem.
  4. While fixing the ticket I see that my test should be tweaked.

@danmar
Owner

danmar commented Dec 6, 2024

If you will fix these tests in the near future it's more OK imho... but personally I would still prefer that the tests are added together with the fixes.

@firewave
Collaborator Author

firewave commented Dec 6, 2024

> the difference is that we don't have tickets for those.

That is actually bad, because TODOs are essentially invisible. Especially if they are false positives or regressions they should be tracked. But that is how we did it in the past, although we have since improved tremendously there.

> So if I would fix a todo I would ensure the test is working. what happens if we fix the tickets later and don't see your tests?

I always add a link to the PRs in the ticket - hence the reference to the ticket. It is no different from when you fix a ticket.

Also, that's why I xfail them: if you accidentally fix them they will no longer xfail, which causes the CI to fail (essentially what a TODO is in our unit tests).
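The mechanism being described here is pytest's expected-failure marker; with `pytest.mark.xfail(strict=True)`, a test that unexpectedly passes is reported as a failure and turns the CI red. A minimal stdlib-only sketch of that logic (the check and test names are hypothetical, not cppcheck's actual test code):

```python
# Sketch of the "strict xfail" idea: a test for an unfixed ticket
# documents the *intended* behavior, and an accidental fix becomes
# visible because the test unexpectedly passes.
def run_xfail_strict(test_fn):
    """Run a test that is expected to fail and return the CI-visible
    outcome:
      'xfail' - test still fails, as expected (CI stays green)
      'xpass' - test unexpectedly passes -> CI goes red, so an
                accidental fix cannot go unnoticed."""
    try:
        test_fn()
    except AssertionError:
        return "xfail"
    return "xpass"

def broken_check():
    # stand-in for a check that an open ticket says should report
    # "uninitvar" but currently does not
    assert "uninitvar" in []

def fixed_check():
    # stand-in for the same check after the ticket is fixed
    assert "uninitvar" in ["uninitvar"]

print(run_xfail_strict(broken_check))  # xfail: expected, CI green
print(run_xfail_strict(fixed_check))   # xpass: surprise fix, CI red
```

In real pytest code the harness above is replaced by the decorator `@pytest.mark.xfail(strict=True, reason="...")` on the test function.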

I did write some tests in the past which asserted the current (wrong) results and only had TODO comments saying that they should fail. That is actually really bad because they do not reflect the actual intent at all. I should have rectified this, but I will give it another look to see whether there are such tests.

@danmar
Owner

danmar commented Dec 12, 2024

> I always add a link to the PRs to the ticket. Hence the reference to the ticket. It is not different from when you fix a ticket.

ok that solves my fear.

> Also that's why I xfail them. So if you accidentally fix them they will no longer xfail and cause the CI to fail as well (essentially what a TODO is in our unit tests).

That does not solve my fear at all... it's more or less this that I'm afraid of. If you make a mistake when writing the test, let's say a variable name is misspelled in the code or something, then the issue can be solved in cppcheck and the test can still "silently fail"... so the test is just redundant clutter.
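The failure mode described here can be sketched as follows (all names are hypothetical; cppcheck's real tests use `pytest.mark.xfail`). The check is now fixed, but the test itself contains a typo, so it keeps failing for the wrong reason and the expected-failure marker keeps the CI green:

```python
# Self-contained sketch of a "silently failing" expected-failure test.
def check(code):
    # pretend the ticket is fixed: uninitialized 'x' is now reported
    return ["uninitvar"] if "int x;" in code else []

def mistyped_test():
    # typo: tests "int y;" instead of "int x;", so nothing is
    # reported and the assertion fails for the wrong reason
    assert "uninitvar" in check("int y; return y;")

def outcome(test_fn):
    # crude stand-in for an expected-failure test runner
    try:
        test_fn()
        return "xpass"   # would make the CI red
    except AssertionError:
        return "xfail"   # CI stays green despite the fix

print(outcome(mistyped_test))  # xfail - the fix goes unnoticed
```

This is why a test for an unfixed ticket should fail for exactly the reason the ticket describes; a typo in the test input masks the eventual fix.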

@firewave
Collaborator Author

> that does not solve my fear at all.. it's more or less this I'm afraid of. If you make a mistake when writing the test lets say a variable name is misspelled in the code or something then the issue can be solved in cppcheck and the test can still "silently fail".. so the test is just redundant clutter..

The same can occur if you write any non-xfail test. You also have to fear that you introduced an error whenever we make a change to code we have insufficient coverage for. So adding such tests at least increases the coverage, and any mistakes will be discovered down the road. But that is just software development - you will never write perfect code.

Owner

@danmar danmar left a comment


I thought I had approved this already.

Yes I agree we can never write perfect code.

@firewave firewave merged commit b3ca26e into danmar:main Dec 15, 2024
60 checks passed
@firewave firewave deleted the undef-test branch December 15, 2024 13:21