
Expand the use of python unittest to all our unit testing. #255

Open
benlapETS opened this issue Sep 6, 2019 · 19 comments
Labels
Developer: not a problem, more of a note to self for devs about work to do.

Comments

@benlapETS
Contributor

benlapETS commented Sep 6, 2019

This will increase our testing capabilities and coverage, and ease the coding of new tests. With issue #240 I began that work on sr_config and the results are promising.
Here is a peek at what we could do:

  • Detect problems that we wouldn't have thought of (like resource leaks)
  • Be more flexible with the verbosity:
  1. the verbose flag -v
Unit tests (Fri Sep 6 11:20:48 EDT 2019)
======= Testing: sr_config
test_checksum_0 (unit_tests.sr_config_unit_test.ChecksumTestCase) ... ok
test_checksum_N (unit_tests.sr_config_unit_test.ChecksumTestCase) ... skipped 'test_checksum_N commented  # TODO why this has been commented'
test_checksum_d (unit_tests.sr_config_unit_test.ChecksumTestCase) ... ok
test_checksum_n (unit_tests.sr_config_unit_test.ChecksumTestCase) ... ok
test_checksum_s (unit_tests.sr_config_unit_test.ChecksumTestCase) ... ok
test_find_script (unit_tests.sr_config_unit_test.PluginScriptTestCase) ... ok
...
----------------------------------------------------------------------
Ran 74 tests in 0.179s

OK (skipped=6)
  2. the default
.s.................................s.......................s......sss.....
----------------------------------------------------------------------
Ran 74 tests in 0.127s

OK (skipped=6)
  • More modular code (each test case as a class, each unit test as a function with a name and one assert), better handling of setup, teardown, and test skipping, plus a well-structured suite with a strong, well-defined API. A real test framework; for more info see unittest.
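A minimal sketch of the structure described above (the names mirror the sr_config suite shown in the output, but the bodies are illustrative, not the actual test code):

```python
import hashlib
import unittest

class ChecksumTestCase(unittest.TestCase):
    def setUp(self):
        # shared fixture, rebuilt before every test
        self.data = b"hello"

    def tearDown(self):
        # release the fixture so resource leaks surface early
        self.data = None

    def test_checksum_0(self):
        # one named test, one assert
        self.assertEqual(len(hashlib.md5(self.data).hexdigest()), 32)

    @unittest.skip("example of a deliberately skipped test")
    def test_checksum_N(self):
        pass

if __name__ == '__main__':
    unittest.main(verbosity=2)  # produces the "-v" style listing shown above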
@benlapETS benlapETS added the Developer not a problem, more of a note to self for devs about work to do. label Sep 6, 2019
@benlapETS
Contributor Author

benlapETS commented Sep 6, 2019

What to do:

  • MERGED tests/test_sr_amqp.py
  • MERGING tests/unit_tests/sr_cache_unit_test.py
  • HOLD (completed in #240, "sr_subscribe does not return cleanly when run through ssh/dsh") tests/unit_tests/sr_config_unit_test.py
  • tests/unit_tests/sr_consumer_unit_test.py
  • tests/unit_tests/sr_credentials_unit_test.py
  • tests/unit_tests/sr_ftp_unit_test.py
  • tests/unit_tests/sr_http_unit_test.py
  • tests/unit_tests/sr_instances_unit_test.py
  • tests/unit_tests/sr_pattern_match_unit_test.py
  • tests/unit_tests/sr_retry_unit_test.py
  • tests/unit_tests/sr_sftp_unit_test.py
  • tests/unit_tests/sr_util_unit_test.py

Part of the first task was also figuring out how to integrate it into the flow test, which has been completed. Integrating the remaining suites will mean propagating the call to each new suite within the python framework... maybe with the creation of a main module, we'll see.
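The "main module" idea could be sketched with unittest's built-in discovery; the directory and filename pattern below are assumptions based on the file list above, not an existing module:

```python
import unittest

def load_all_suites(start_dir="tests/unit_tests"):
    # discover every *_unit_test.py module under start_dir and
    # combine them into a single TestSuite
    return unittest.defaultTestLoader.discover(
        start_dir, pattern="*_unit_test.py")

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(load_all_suites())
```

With this in place, one `python -m` invocation runs every suite, and new suites are picked up automatically as long as they follow the naming convention.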

@benlapETS benlapETS self-assigned this Sep 6, 2019
@petersilva petersilva added bug Something isn't working Priority 5 - Defect Missing and/or broken functionality - do not forget and removed Priority 5 - Defect Missing and/or broken functionality - do not forget labels Sep 21, 2019
@petersilva petersilva removed the bug Something isn't working label Oct 8, 2019
@benlapETS
Contributor Author

I see this issue as a first step before getting rid of the flow test code in our sarracenia codebase; the flow test has grown in capability (static flow, dynamic flow, flakey broker) and is now located in sr_insect.

@benlapETS
Contributor Author

Just a thought: I think we should write down what motivated us to segregate the flow test from the codebase, so that we have something in writing on why we are doing this, understand why it makes sense today, and can see in the future whether it continues to make sense...

@petersilva
Contributor

#243 has a lot of discussion about the rationale, but I'm not sure if it addresses the separation specifically.

@benlapETS
Contributor Author

#243 seems to describe (very well, though) the need for static flow and the possibility of adding more test sets.

As I understand it, you decided to separate the flow test through the implementation of static flow. But I still don't get it... or is it that the test sets could amount to a lot of data and be cumbersome to keep with the codebase? Is that it? Is there any other rationale?

@petersilva
Contributor

petersilva commented Mar 27, 2020

Yes, that was the idea... when the test data is bigger than the source code, it should not be in the same source tree. So far it is not that big, but the idea is to add more tests over time, in an unbounded fashion... so it should get bigger.

I also think the versions, releases, and changes to testing clutter up the source repo. In February, for example, something like 90% of the changes were testing, and only about two changes were to the code... so documenting that in the release changelogs is a bit odd.

Also, for git bisection, you want to test the application version independently of the test version. If changing releases also changes the tests, it's not right... so it was those things too. It just seemed to make sense on balance. Now the changes to the tests will be in the test repo, and it should be clearer. We could even do releases of the test suite... though it isn't immediately obvious why we would do that.

The separation of the dynamic from the static was mostly motivated by watching your struggles. It was clear that if the dynamic test passed... great. If it failed, it probably meant there was something wrong, but not certainly, and finding out what was really wrong was an ordeal every time. So stripping complexity out to get simpler, earlier tests that are completely repeatable would make devs happier >90% of the time... but the dynamic stuff does include additional behaviours that are not tested by the static tests. So simpler tests are good, but more separate test suites are also good.

That is all I can think of for now.

@benlapETS
Contributor Author

benlapETS commented Mar 30, 2020

I have a cool testing capability to propose that is fair enough and easy to do. We could extract all options that are documented in the manuals (starting with sr_subscribe.1.rst) and write a simple unit test that automatically tests all documented options. I already wrote the right pattern to extract everything easily. One big part will be reviewing the doc, which will ensure that we standardize option documentation in a way that is easily parsable.
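The extraction idea could be sketched as follows; the regex and sample text here are illustrative assumptions, not the actual pattern written for the issue:

```python
import re

# matches lines of the form "name (default: value)", optionally with a
# "<type>" hint in between, e.g. "admin <amqp_url> (default: None)"
OPTION_RE = re.compile(
    r'^(?P<name>\w+)\s*(?:<[^>]+>\s*)?\(default:\s*(?P<default>[^)]*)\)')

sample = """\
accept_unmatch (default: False)
admin <amqp_url> (default: None)
attempts (default: 3)
"""

def parse_options(text):
    opts = {}
    for line in text.splitlines():
        m = OPTION_RE.match(line.strip())
        if m:
            opts[m.group('name')] = m.group('default')
    return opts

print(parse_options(sample))
# {'accept_unmatch': 'False', 'admin': 'None', 'attempts': '3'}
```

Standardizing the man pages on one such line shape is what makes the single regex sufficient.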

@petersilva
Contributor

That does sound cool. It sounds like the extraction process will help us debug the documentation.
I suspect we will end up iterating... you will write a parser, get stuck on something, we will consider how to change the docs to make them more easily parseable, then on to the next issue... By the way, if you do sr_subscribe.1.rst first, that is going to be... oh... 90% of the work. We had options in different pages, but it ended up being more confusing when options were common, so almost all options are defined in sr_subscribe; only a small subset of options specific to components will be in the other man pages.

@benlapETS
Contributor Author

benlapETS commented Mar 31, 2020

I have already done this part: the parser is written and the doc updated. Here is part of the 74 results:

accept_unmatch (default: False)
admin <amqp_url> (default: None)
attempts (default: 3)
base_dir (default: /)
batch (default: 100)
blocksize (default: 0)
chmod_log (default: 0600)
debug (default: False)
default_dir_mode (default: 0755)
default_mode (default: 0000)
...

I counted 141 matches for the search 'if words0', so I got about half of what we parse in option(self, words)... I will start with writing Boolean option test cases. That will give a basis for what to do for the remaining ones...
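A self-contained sketch of the subTest pattern for checking boolean defaults; the Config class and isTrue helper are simplified stand-ins for sr_config, not the real implementation (the real assertion appears in the tracebacks later in this thread):

```python
import unittest

class Config:
    # hypothetical stand-in for sr_config, with two boolean defaults
    accept_unmatch = False
    debug = False

    @staticmethod
    def isTrue(s):
        # minimal truthy-string helper
        return s.strip().lower() in ('true', 'yes', 'on', '1')

# documented defaults as parsed from the man page (illustrative values)
DOCUMENTED_DEFAULTS = {'accept_unmatch': 'False', 'debug': 'False'}

class BooleanDefaultTestCase(unittest.TestCase):
    def test_default_value(self):
        cfg = Config()
        for name, default_value in DOCUMENTED_DEFAULTS.items():
            # subTest reports each failing option separately instead of
            # stopping the whole test at the first mismatch
            with self.subTest(option=name):
                self.assertEqual(getattr(cfg, name),
                                 cfg.isTrue(default_value))
```

The subTest context manager is why a single test can report several failures and errors at once, as in the output below.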

@benlapETS
Contributor Author

Look at this commit (b41b58c) and give me some feedback :)

@petersilva
Contributor

petersilva commented Mar 31, 2020

that looks cool! Things I noticed...

  • retry_ttl None... lost some information in the documentation... if you remove the text, it needs to show up in the paragraph.

Other than that, as far as I can tell, the changes are improvements to the docs!

@benlapETS
Contributor Author

benlapETS commented Mar 31, 2020

first results of boolean default value tests:
[screenshot: test run output with several failures]

@petersilva
Contributor

what do the failures mean?

@benlapETS
Contributor Author

Default values are incorrect... except for report_back and retry:

SubTest failure: Traceback (most recent call last):
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 582, in subTest
    yield
  File "/home/benoit/git/sarracenia/tests/unit_tests/sr_config_unit_test.py", line 71, in test_default_value
    self.assertEqual(getattr(cfg, name), cfg.isTrue(default_value.strip()))
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 39, in _patched_equals
    raise native_error
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 32, in _patched_equals
    old(self, first, second, msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 912, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 905, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: None != False

SubTest failure: Traceback (most recent call last):
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 582, in subTest
    yield
  File "/home/benoit/git/sarracenia/tests/unit_tests/sr_config_unit_test.py", line 71, in test_default_value
    self.assertEqual(getattr(cfg, name), cfg.isTrue(default_value.strip()))
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 39, in _patched_equals
    raise native_error
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 32, in _patched_equals
    old(self, first, second, msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 912, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 905, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: True != False


SubTest failure: Traceback (most recent call last):
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 582, in subTest
    yield
  File "/home/benoit/git/sarracenia/tests/unit_tests/sr_config_unit_test.py", line 71, in test_default_value
    self.assertEqual(getattr(cfg, name), cfg.isTrue(default_value.strip()))
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 39, in _patched_equals
    raise native_error
  File "/opt/pycharm/pycharm-community-2019.3.4/plugins/python-ce/helpers/pycharm/teamcity/diff_tools.py", line 32, in _patched_equals
    old(self, first, second, msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 912, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 905, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: False != True


Ran 1 test in 0.052s

FAILED (failures=3, errors=2)
SubTest error: Traceback (most recent call last):
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 582, in subTest
    yield
  File "/home/benoit/git/sarracenia/tests/unit_tests/sr_config_unit_test.py", line 71, in test_default_value
    self.assertEqual(getattr(cfg, name), cfg.isTrue(default_value.strip()))
AttributeError: 'sr_config' object has no attribute 'report_back'

SubTest error: Traceback (most recent call last):
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/home/benoit/anaconda3/envs/sarracenia/lib/python3.8/unittest/case.py", line 582, in subTest
    yield
  File "/home/benoit/git/sarracenia/tests/unit_tests/sr_config_unit_test.py", line 71, in test_default_value
    self.assertEqual(getattr(cfg, name), cfg.isTrue(default_value.strip()))
AttributeError: 'sr_config' object has no attribute 'retry'

@benlapETS
Contributor Author

For now I will concentrate my effort on testing and leave the interpretation and correction for later.

@petersilva
Contributor

OK, but the defaults change per component, and there is overwrite_defaults in each component to set them properly... so you need to run the tests per component before you decide the defaults are wrong.
The defaults in the sr_subscribe page are for that component. For example, in sr_shovel the no_download setting is always true.
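The per-component override described here can be sketched like this; the classes are simplified stand-ins for the real sarracenia components, and only the no_download example from the comment above is shown:

```python
class sr_config:
    # simplified stand-in: documented base defaults live here
    def __init__(self):
        self.no_download = False
        self.overwrite_defaults()

    def overwrite_defaults(self):
        # base class keeps the documented defaults unchanged
        pass

class sr_shovel(sr_config):
    def overwrite_defaults(self):
        # shovel never downloads, so it overrides the base default
        self.no_download = True

assert sr_config().no_download is False
assert sr_shovel().no_download is True
```

So a default-value test has to instantiate the component under test, not just the base class, before comparing against the documented value.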

@benlapETS
Contributor Author

benlapETS commented Apr 1, 2020

There aren't that many overwrites; there is surely a way to take them into account. I will fix that today.

So I will test the defaults in the unit tests on the components, then...

@benlapETS
Contributor Author

OK, inplace was effectively an overwrite, but durable is a problem... As I said, I am not going into details for now. I am currently setting the basis and designing the right test suite for our needs.

@benlapETS benlapETS mentioned this issue May 1, 2020
@petersilva
Contributor

petersilva commented Mar 22, 2022

Found a problem where all configurations were accumulating all masks, instead of being re-initialized for each one.
Exciting... patches: 7f84fd5 and 89ea037.
Which would have been detected if we had unit tests of some kind?
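A sketch of the kind of unit check that would have caught this: each configuration instance should start with a fresh mask list rather than inheriting accumulated state. The Config class here is a stand-in, not sarracenia's actual sr_config:

```python
class Config:
    # masks must be created per instance, not shared at class level
    def __init__(self):
        self.masks = []

    def accept(self, pattern):
        self.masks.append(('accept', pattern))

cfg1 = Config()
cfg1.accept(r'.*\.grib2$')

cfg2 = Config()
# a buggy class-level `masks = []` shared across instances would fail here,
# because cfg2 would start with cfg1's accumulated masks
assert cfg2.masks == []
assert len(cfg1.masks) == 1
```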
