Expand the use of Python unittest to all our unit testing #255
This will increase our testing capabilities and coverage, and ease the coding of new tests. With issue #240 I began that work with sr_config and the results are promising. Here is a peek of what we could do:
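(A minimal sketch of the kind of unittest-based check meant here, assuming sarracenia's `sr_config` class with its `defaults()` and `option(self, words)` methods as named in this thread; the import path and the `debug` option are assumptions, and the real test from #240 may differ.)

```python
# Hypothetical sketch, not the actual test from #240: it assumes
# sarra.sr_config exposes an sr_config class with defaults() and
# option(self, words), as discussed in this thread.
import unittest

from sarra.sr_config import sr_config   # assumed import path (sarracenia v2)

class SrConfigOptionTest(unittest.TestCase):

    def test_debug_option(self):
        cfg = sr_config()
        cfg.defaults()
        cfg.option(['debug', 'True'])   # parse the config line "debug True"
        self.assertTrue(cfg.debug)

if __name__ == '__main__':
    unittest.main()
```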
What to do:
Comments
The first task also involved figuring out how to integrate it into the flow test, and that part has been completed. Integrating the rest will mean propagating the call to each new suite within the Python unittest framework... maybe with the creation of a main module, we'll see (a sketch of that idea follows).
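(As a rough illustration of the "main module" idea, a minimal, hypothetical sketch using unittest discovery; the `tests/` directory and `test_*.py` naming are assumptions, not the actual sarracenia layout.)

```python
# Hypothetical "main module" that aggregates all unit test suites.
# Assumes tests live under tests/ and follow the test_*.py convention.
import sys
import unittest

def main():
    loader = unittest.TestLoader()
    # Discover every test_*.py module under tests/ and build one suite.
    suite = loader.discover(start_dir='tests', pattern='test_*.py')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Exit non-zero on failure so a caller (e.g. the flow test) can
    # detect that the unit suites did not pass.
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == '__main__':
    main()
```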
I see this issue as a first step toward getting rid of the flow test code in our sarracenia codebase; that code has grown in capability (static flow, dynamic flow, flakey broker) and is now located in sr_insect.
Just a thought: I think we should write down what motivated us to segregate the flow test from the codebase, so we have a record of why we are doing it, can understand why it makes sense today, and can see in the future whether it continues to make sense...
#243 has a lot of discussion about the rationale, but I'm not sure it specifically addresses the separation.
#243 seems to describe (very well, though) the need for static flow and the possibility of adding more test sets. As I understand it, you decided to separate the flow test through the implementation of static flow. But I still don't get it... Is it that the test sets could be a lot of data and cumbersome to keep with the codebase? Is that it? Is there any other rationale?
Yes, that was the idea... when the test data is bigger than the source code, it should not be in the same source tree. So far it is not that big, but the idea is to add more tests over time, in an unbounded fashion, so it should get bigger. I also think the versions, releases, and changes to testing clutter up the source repo. In February, something like 90% of the changes were testing and only a couple were changes to the code, so documenting that in the release changelogs is a bit odd.

Also, for git bisection, you want to test the application version independently of the test version. If changing releases also changes the tests, it's not right... so it was those things as well; it just seemed to make sense on balance. Now the changes to the tests will be in the test repo, and it should be clearer. We could even do releases of the test suite... though it isn't immediately obvious why we would do that.

The separation of the dynamic tests from the static ones was mostly motivated by watching your struggles. It was clear that if the dynamic test passed, great; if it failed, it probably meant something was wrong, but not certainly, and finding out what was really wrong was an ordeal every time. So stripping complexity out to get simpler, earlier tests that are completely repeatable would make devs happier >90% of the time. The dynamic tests do include additional behaviours that are not covered by static, so simpler tests are good, but more separate test suites are also good. That is all I can think of for now.
I have a cool testing capability to propose that is fairly simple and easy to do. We could extract all the options documented in the manuals (starting with sr_subscribe.1.rst) and write a simple unit test that automatically exercises every documented option. I already wrote the right pattern to extract everything easily (a sketch of the extraction idea is below). One big part will be reviewing the docs, which will ensure we standardize option documentation in a way that is easily parsable.
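(The actual pattern is not shown in this thread; the following is a hypothetical sketch of the extraction idea. It assumes options are documented on lines like `**name <type> (default: value)**`; the real layout of sr_subscribe.1.rst may differ.)

```python
# Hypothetical parser sketch: extract documented options from an .rst
# man page. The regex is illustrative, not the pattern actually used.
import re

OPTION_RE = re.compile(
    r'\*\*(?P<name>\w+)\s+<(?P<type>[^>]+)>'        # **name <type>
    r'(?:\s+\(default:\s*(?P<default>[^)]*)\))?'    # optional (default: ...)
    r'\*\*'
)

def extract_options(rst_path):
    """Return a list of (name, type, default) tuples found in the doc."""
    options = []
    with open(rst_path) as f:
        for line in f:
            m = OPTION_RE.search(line)
            if m:
                options.append((m.group('name'),
                                m.group('type'),
                                m.group('default')))
    return options

if __name__ == '__main__':
    for name, kind, default in extract_options('sr_subscribe.1.rst'):
        print(name, kind, default)
```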
That does sound cool. It sounds like the extraction process will help us debug the documentation.
I have already done this part (the parser) and updated the doc. Here is part of the 74 results:
I counted 141 matches for the search 'if words0', so I got about half of what we parse in option(self, words)... I will start by writing boolean option test cases (a sketch of the shape is below); that will give a basis for handling the remaining ones...
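(A hypothetical sketch of what such a boolean-option test case could look like; it assumes each option name matches the attribute it sets on `sr_config`, which may not hold for every option, and the option list below is taken from this thread, not verified.)

```python
# Sketch of boolean-option test cases for sr_config.option(self, words).
# Assumes option name == attribute name, which may not always hold.
import unittest

from sarra.sr_config import sr_config   # assumed import path (sarracenia v2)

BOOLEAN_OPTIONS = ['report_back', 'inplace', 'durable']

class BooleanOptionTest(unittest.TestCase):

    def setUp(self):
        self.cfg = sr_config()
        self.cfg.defaults()

    def test_true_and_false(self):
        for name in BOOLEAN_OPTIONS:
            for word, expected in (('True', True), ('False', False)):
                self.cfg.option([name, word])
                self.assertEqual(getattr(self.cfg, name), expected,
                                 "%s did not parse %s" % (name, word))

if __name__ == '__main__':
    unittest.main()
```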
Look at this commit (b41b58c) and give me some feedback :)
That looks cool! Things I noticed...
Other than that, as far as I can tell, the changes are improvements to the docs!
What do the failures mean?
Default values are incorrect... except for report_back and retry:
For now I will concentrate my effort on testing and leave the interpretation and correction for later.
OK, but the defaults change per component, and there is overwrite_defaults in each component to set them properly... so you need to run the tests per component before deciding the defaults are wrong (see the sketch below).
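(A hypothetical sketch of a per-component defaults check, assuming each component class, e.g. `sr_subscribe`, applies `overwrite_defaults()` on top of the shared `sr_config` defaults; the import path, no-argument construction, and the expected value below are placeholders, not verified sarracenia defaults.)

```python
# Sketch: verify a component's defaults after overwrite_defaults().
import unittest

from sarra.sr_subscribe import sr_subscribe   # assumed import path

class SubscribeDefaultsTest(unittest.TestCase):

    # Placeholder expectations: fill in from the sr_subscribe man page.
    EXPECTED = {'inplace': True}

    def test_component_defaults(self):
        cfg = sr_subscribe()
        cfg.defaults()
        cfg.overwrite_defaults()   # component-specific defaults applied here
        for name, value in self.EXPECTED.items():
            self.assertEqual(getattr(cfg, name), value)

if __name__ == '__main__':
    unittest.main()
```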
There aren't that many overwrites; there is certainly a way to take them into account. I will sort that out today. I will test the defaults in the unit tests on the components, then...
OK, inplace was indeed an overwrite, but durable is a problem... As I said, I am not going into details for now; I am currently laying the groundwork and designing the right test suite for our needs.