
Turn as many tests into unittests as possible #52

Open
jonashaag opened this issue Jan 12, 2012 · 6 comments · May be fixed by #164


jonashaag (Owner)

No description provided.

Varbin (Contributor) commented Jan 31, 2018

As I said, I will try to work on it.

hoefling commented Jul 28, 2019

As noted in #121, I have started porting scripts under tests/ to automated unit tests, but am not quite happy with the result yet. Right now I am unsure about

  1. The library to choose: unittest (sample tests) vs pytest (sample tests). I'm a big fan of pytest, but haven't yet found a way to avoid writing boilerplate like

    with run_bjoern(app) as url:
        resp = requests.get(url)
        assert resp.content == b'hello'

    in each test.

  2. The control over the server process - right now I fork the server in a multiprocessing.Process. This has some disadvantages: I can't assert inside app functions or monkeypatch things, and code coverage isn't calculated properly by gcov (I tried playing with __gcov_flush, but didn't see any significant difference). I would rather stay in the same process and use threading, but I don't see any way to shut down bjoern.run_server other than sending a SIGINT/SIGTERM.

Aside from that, I have configured tox for automated testing against multiple environments (related tox.ini config); it's also partially used for CI builds.

jonashaag (Owner, Author)

That's great! Thanks for working on this.

For 1 you can use pytest fixtures.

For 2 you can use threads, and send signals to the server process. Or you can wrap a try...except around all the code and put any exceptions into a multiprocessing queue. Or you can use concurrent.futures which has nice child exception support built in.

I think the Python coverage tool works well with multiple processes or threads.
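The try/except-plus-queue pattern suggested above could be sketched roughly like this; `run_in_child` and `failing_app` are hypothetical names for illustration, not part of bjoern's test suite:

```python
import multiprocessing
import traceback


def _guarded(target, errors):
    # Run the target and ship any exception's traceback text back to the parent.
    try:
        target()
    except Exception:
        errors.put(traceback.format_exc())


def run_in_child(target):
    # Hypothetical helper: run target in a child process and re-raise its
    # failure in the parent, so assertions made inside the child are not
    # silently lost.
    errors = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_guarded, args=(target, errors))
    proc.start()
    proc.join()
    if not errors.empty():
        raise AssertionError(errors.get())


def failing_app():
    # Stand-in for a WSGI app whose assertion should fail the test.
    assert False, "assertion made inside the child process"
```

`run_in_child(failing_app)` then raises AssertionError in the parent, carrying the child's traceback text. concurrent.futures.ProcessPoolExecutor gives similar exception propagation out of the box, at the cost of less control over the worker process.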

hoefling added a commit to hoefling/bjoern that referenced this issue Aug 1, 2019
Signed-off-by: Oleg Höfling <oleg.hoefling@gmail.com>
@hoefling hoefling linked a pull request Aug 1, 2019 that will close this issue
hoefling commented Aug 1, 2019

I'm not sure I can use threads - signals can only be delivered to the main thread, not to spawned ones. As for the suggestion of piping data between processes - great suggestion, thanks! This is what I used in the end. I didn't have any success with concurrent.futures, as it's pretty limited in accessing the underlying spawned processes (although I like the concept of futures and pool executors).

As for coverage, coverage.py can't record coverage of C extensions, only pure Python modules; I'm already using it for bjoern.py. Aside from gcov, I don't know of any other way of collecting coverage over C code.
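For reference, a gcov workflow for a C extension might look roughly like the following. The flags and paths are assumptions for illustration, not the project's actual build setup:

```shell
# Hypothetical sketch: rebuild the extension with gcov instrumentation,
# run the test suite, then generate reports from the emitted .gcda files.
CFLAGS="-O0 --coverage" LDFLAGS="--coverage" python setup.py build_ext --inplace
python -m pytest tests/
gcov -o build/temp.*/bjoern bjoern/*.c
```

One caveat that may explain the __gcov_flush trouble above: gcov counters are normally written when the process exits cleanly, so a server process killed by an unhandled signal may never flush them; installing a signal handler that exits normally (or calls __gcov_flush first) is one way around that.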


I think I'm ready for the initial unit tests proposal (#164). It would be awesome if you could find time to look at it! I tried to keep the ported test scripts as close to the original code as possible, but had to improvise in some spots. I'm also interested in your opinion on the Client class; I took inspiration from the Werkzeug and aiohttp testing modules. The main idea is to reduce the boilerplate for adding a new test to something like

def test_fizz(httpclient):
    def app(e, s):
        s('200 ok', [])
        return b'hello world'
    httpclient.host = '0.0.0.0'  # optional configuration
    httpclient.start(app)  # server is started in another process
    response = httpclient.get()
    response.raise_for_status()

The server is queried for errors on each request, so assertions in app function are possible:

def test_buzz(httpclient):
    def app(e, s):
        s('200 ok', [])
        value = e['foo']  # 'foo' not available in the WSGI environ, raises KeyError
        return b'hello world'
    httpclient.start(app)
    with pytest.raises(KeyError):
        httpclient.get()

Also, each test using the client fixture will be executed twice - once with the httpclient and once with the unixclient (the one with the unix socket).
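To make the fixture idea above concrete, here is a rough, self-contained sketch of what such a client helper could look like. It uses wsgiref in a thread purely as a stand-in (the actual proposal in #164 runs bjoern in a separate process), and the class body is hypothetical, not the PR's implementation:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server


class HTTPClient:
    # Stand-in for the proposed httpclient fixture. The real proposal starts
    # bjoern in another process; wsgiref in a daemon thread is used here only
    # to keep the sketch runnable without bjoern installed.
    def __init__(self, host="127.0.0.1"):
        self.host = host
        self._server = None
        self._thread = None

    def start(self, app):
        # Port 0 asks the OS for a free port.
        self._server = make_server(self.host, 0, app)
        self._thread = threading.Thread(
            target=self._server.serve_forever, daemon=True)
        self._thread.start()

    def get(self, path="/"):
        url = "http://%s:%d%s" % (self.host, self._server.server_port, path)
        return urllib.request.urlopen(url)

    def stop(self):
        if self._server is not None:
            self._server.shutdown()


def app(environ, start_response):
    # Minimal WSGI app used by the sketch.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello world"]
```

A pytest fixture would then just yield a fresh client and call stop() during teardown, which is what keeps the per-test boilerplate down to start() plus a request.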

To execute the tests locally, the easiest way is to use tox. Once installed (e.g. via pip install tox), run tox -e ENV to test against a particular Python version, e.g. tox -e py27 or tox -e py36, or test several environments at once with tox -e py27,py36.

Another thing - if you take a look at the latest test run on Travis, you will notice that some tests fail with Python 2.7 but pass with Python 3.x: trailing whitespace in header values is not stripped under py2, but is stripped under py3. I'm not sure whether this is an issue to fix in bjoern or an edge case that should be handled differently in the tests.
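If it turns out to be a test-side edge case, one option is to normalize header values before comparing, so the same assertion passes under both behaviors. `normalize_headers` is a hypothetical helper, not part of the proposal:

```python
def normalize_headers(headers):
    # Lower-case names and strip surrounding whitespace from values, so the
    # py2 result (trailing whitespace kept) and the py3 result (trailing
    # whitespace stripped) compare equal.
    return sorted((name.lower(), value.strip()) for name, value in headers)


# Hypothetical example: the same response header as py2 and py3 would report it.
py2_headers = [("Content-Type", "text/plain ")]  # trailing space preserved
py3_headers = [("Content-Type", "text/plain")]   # trailing space stripped
```

This only papers over the difference in the tests; whether bjoern itself should strip the whitespace is the open question above.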

blaggacao commented Dec 17, 2019

Sidenote: I'd suggest considering poetry once #164 is merged. I might even open a PR to make this project "poetry hackable"; I personally prefer it over tox for convenience...

jonashaag (Owner, Author)

Please don't waste your time coming up with a PR unless we have discussed and agreed on it beforehand. I have no plans to move away from tox at the moment.
