
Unit Testing Library #23

Closed
jawrainey opened this issue May 5, 2020 · 7 comments · Fixed by #34
Labels: testing (Tests or test improvements to increase the rigidity of the repo)


jawrainey commented May 5, 2020

Add unit testing for all library and CLI methods

Unit testing infrastructure is now in place. This issue tracks creating unit tests for the library and the edge cases outlined below.

@davidverweij

Originally posted by @jawrainey in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion

  • Discussion needed: remove the print statement and raise an assert instead, since running pytest won't output the print statement, but an assert will. Should we maybe add a 'cleanup' fixture to take care of this, as it'll likely be required when testing the library?

This also raises an interesting question worth discussing around reading/writing files in tests, although the arguments against this tend to come from projects with many tests, where writing to disk slows down the test run (see here). This is unlikely to impact us 👍

The alternative is creating temporary files (as you've done below) or using mocking with unittest.mock. I think our current approach (reading/writing files) is fine (more functional testing than unit, though?), and there is no need to use mocks, as they'd introduce complexity that's not worth the effort imho.
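
For context, a minimal sketch of the file-based approach, assuming the library is importable as c2d and that convert takes (data, template, name, path, delimiter) in that order, as in the parametrize example further down this thread:

import c2d  # assumption: the library under test


def test_convert_writes_docx(tmp_path):
    # tmp_path is a fresh per-test pathlib.Path, so no explicit cleanup is needed
    c2d.convert(
        "tests/data/example.csv",
        "tests/data/example.docx",
        "NAME",
        str(tmp_path),
        ";",
    )
    # assert on the side effect: at least one .docx was written
    assert any(tmp_path.glob("*.docx"))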

@davidverweij

Originally posted by @davidverweij in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion

... the cleanup is only needed for the standard output, as tmp_path and tmpdir are automatically cleaned - they are kept for at most 3 test invocations. See here. So I do think we might not need the fixture.

Regarding unit tests vs functional tests, yes, I would like to test whether the output is correct (e.g. the naming of the .docx) but didn't get to it, so that would be more of a unit test, don't you think?

@davidverweij

Originally posted by @jawrainey in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion

I was thinking that when we test the library by invoking convert directly, it would also create the /output/<files> and need the same cleanup. We can sort that out once we write tests for the library, but imho in a different issue, since you have set up the infrastructure here and we can progress type annotations etc. alongside testing.

Yes, you're right about it being a unit test, and great point about checking the output names of files. We should unit test create_unique_name specifically to verify naming, but of course we might also want to test the default arguments/logic of the convert method.
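
A hedged sketch of such a naming test, with the create_unique_name signature assumed (it may well take different arguments):

import c2d  # assumption: the library under test


def test_create_unique_name_uses_name_value():
    # assumption: create_unique_name receives the value from the name column
    # and returns the target filename
    result = c2d.create_unique_name("NAME")
    assert result.endswith(".docx")
    assert "NAME" in result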

@davidverweij

Note that we still need to unit-test whether the output actually exists at the intended location (which we currently only infer from the program exiting gracefully, which is not the same imho).
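
A minimal sketch of what that could look like via the CLI, assuming the runner fixture (a click.testing.CliRunner) and the cli.main entry point used in the snippets further down:

def test_cli_writes_output_to_path(runner, tmp_path):
    # the options mirror the defaults used in the options_gen fixture below
    result = runner.invoke(
        cli.main,
        [
            "--template", "tests/data/example.docx",
            "--data", "tests/data/example.csv",
            "--name", "NAME",
            "--path", str(tmp_path),
        ],
    )
    assert result.exit_code == 0
    # assert on the side effect, not only on a graceful exit
    assert any(tmp_path.glob("*.docx"))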

@davidverweij

An alternative approach to changing the options in each test is to use parametrized fixtures. Open to debate as to which approach to implement 👍

Edited:

re: parametrizing -- I looked into this and it is not as simple to implement for the CLI, as pytest.mark.parametrize does not accept fixtures. See this PyCon 2020 video for an overview of parameterized testing. As an example, here's what it might look like for testing the library under successful conditions:

import pytest

import c2d  # assumption: the library under test

test_cases = [
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "output", ";"),
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "nested/folder", ";"),
]


@pytest.mark.parametrize("data, template, name, path, delimiter", test_cases)
def test_library_convert_success(data, template, name, path, delimiter):
    result = c2d.convert(data, template, name, path, delimiter)
    # Note: not sure what to assert, as the convert success case returns None;
    # asserting on None feels unpythonic.
    assert result is None

Originally posted by @jawrainey in #28
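
As an aside on the fixture side of that trade-off: pytest can parametrize the fixture itself via params=, which sidesteps parametrize not accepting fixtures. A rough sketch, reusing the same option tuples as above:

import pytest

import c2d  # assumption: the library under test


@pytest.fixture(params=[
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "output", ";"),
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "nested/folder", ";"),
])
def convert_args(request):
    # every test using this fixture runs once per entry in params
    return request.param


def test_library_convert_success_via_fixture(convert_args):
    data, template, name, path, delimiter = convert_args
    assert c2d.convert(data, template, name, path, delimiter) is None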

@jawrainey

Shall we keep this open and tweak it to focus on unit testing of the library changes?

@jawrainey changed the title from Unit Testing to Unit Testing Library May 9, 2020
@davidverweij added the testing label and removed the enhancement label May 12, 2020

davidverweij commented Oct 7, 2020

I am exploring pytest-mock to test whether the code handles empty rows in the .csv and the like. It might be worth applying to more cases, especially regarding:

 # Note: not sure what to assert, as the convert success case returns None;

I've looked at this blog post on How to Unit Test Functions Without Return Statements in Python and at the Hypermodern Python blog.
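
In that spirit, a hedged sketch of asserting on a side effect with pytest-mock rather than on the (None) return value; the patch target assumes the library writes output via python-docx and would need adjusting if the internals differ:

import c2d  # assumption: the library under test


def test_convert_saves_document(mocker, tmp_path):
    # patch the python-docx save step so the test observes the write
    # without touching disk (the patch target is an assumption)
    save = mocker.patch("docx.document.Document.save")
    c2d.convert(
        "tests/data/example.csv",
        "tests/data/example.docx",
        "NAME",
        str(tmp_path),
        ";",
    )
    assert save.called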

-- edit

RE:

as pytest.mark.parametrize does not accept fixtures.

I've rewritten the options fixture as a factory (a fixture that returns a function), which allows passing arguments to it, and it now looks like this:

from pathlib import Path
from typing import Callable, List

import pytest
from click.testing import CliRunner

# cli (the project's Click entry point) and the runner fixture are assumed to
# come from the project's tests/conftest.py (not shown here)


@pytest.fixture
def options_gen(tmp_path: Path) -> Callable:

    def _options_gen(**kwargs) -> List[str]:
        default = {
            "template": "tests/data/example.docx",
            "data": "tests/data/example.csv",
            "name": "NAME",
            "path": str(tmp_path.resolve()),
        }

        # override default values if provided
        for key, value in kwargs.items():
            if key in default:
                default[key] = value

        # flatten the dict into a sequential list, adding '--' to the keys
        # so they become CLI options
        return [x for key, value in default.items() for x in ("--" + key, value)]

    return _options_gen


def test_name_not_found(runner: CliRunner, options_gen: Callable) -> None:
    result = runner.invoke(cli.main, options_gen(name="notNAME"))
    assert result.exit_code == 0
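
For completeness, a purely illustrative usage sketch showing how further cases could reuse the same factory (the test name, override value, and expected exit code are assumptions, not existing tests):

def test_template_not_found(runner: CliRunner, options_gen: Callable) -> None:
    # override only the option under test; the other defaults stay intact
    result = runner.invoke(cli.main, options_gen(template="tests/data/missing.docx"))
    # assumption: the CLI exits non-zero when the template file is missing
    assert result.exit_code != 0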

@davidverweij linked a pull request Oct 23, 2020 that will close this issue