Run tests asynchronously #69

Closed
philpep opened this issue Oct 9, 2017 · 20 comments

@philpep

philpep commented Oct 9, 2017

Hi,

I'm not sure this is the purpose of this library but I want to run pytest tests asynchronously.

Consider this example:

import asyncio
import pytest


@pytest.mark.asyncio
async def test_1():
    await asyncio.sleep(2)


@pytest.mark.asyncio
async def test_2():
    await asyncio.sleep(2)
$ py.test -q
..
2 passed in 4.01 seconds

It would be nice to run the test suite in ~2 seconds instead of 4. Is this currently possible with pytest-asyncio or with another library? I guess we would need to asyncio.gather() all async tests and run them on the same event loop.
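
For illustration, the kind of scheduling I have in mind looks roughly like this outside of pytest (a minimal sketch that simply gathers the two coroutines on one event loop):

import asyncio

async def test_1():
    await asyncio.sleep(2)

async def test_2():
    await asyncio.sleep(2)

async def run_all():
    # Both sleeps overlap, so total wall time is ~2 seconds instead of ~4.
    await asyncio.gather(test_1(), test_2())

asyncio.run(run_all())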

Thanks !

@asvetlov
Contributor

No.
Running two tests on the same loop concurrently breaks the test isolation principle (testA can be broken by a side effect of testB, and neither asyncio nor pytest-asyncio itself can detect the situation).

@nicoddemus
Member

nicoddemus commented Dec 24, 2017 via email

@bjoernpollex-sc

I think this would be a very useful feature. I often use pytest for writing integration tests, and currently I want to run tests that each execute a small job in the cloud. Each test submits a job and waits for it to complete; asyncio is perfect for modeling this. xdist could be used, but it's overkill here.

Is there any chance that such a feature could be added?

@dimaqq

dimaqq commented Dec 10, 2018

I too would like a limited feature like this.
I think this could be useful for functional testing,
but not for unit testing that uses mock.patch.

In fact, I'll try to hack up an ugly demo and see how far this takes me...

@nicoddemus
Member

As a starting point, you could override pytest_runtestloop:

def pytest_runtestloop(session):
    if session.testsfailed and not session.config.option.continue_on_collection_errors:
        raise session.Interrupted("%d errors during collection" % session.testsfailed)

    if session.config.option.collectonly:
        return True

    for i, item in enumerate(session.items):
        nextitem = session.items[i + 1] if i + 1 < len(session.items) else None
        item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
        if session.shouldfail:
            raise session.Failed(session.shouldfail)
        if session.shouldstop:
            raise session.Interrupted(session.shouldstop)
    return True

You would separate the items into two lists: async and sync. The sync items would continue to run like this, but the async items could be run by creating an event loop and submitting all of them to it.

This is just to give you an initial idea, I'm sure there are a lot more details you will encounter as you dig into this. 😁
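
A very rough sketch of that idea in a conftest.py, assuming async tests are the ones carrying the asyncio marker. It gathers item.function() directly, which bypasses fixtures and the normal reporting protocol, so it is illustrative only:

import asyncio

def pytest_runtestloop(session):
    if session.config.option.collectonly:
        return True

    sync_items = [i for i in session.items if i.get_closest_marker("asyncio") is None]
    async_items = [i for i in session.items if i.get_closest_marker("asyncio") is not None]

    # Run synchronous tests through the normal protocol, one by one.
    for i, item in enumerate(sync_items):
        nextitem = sync_items[i + 1] if i + 1 < len(sync_items) else None
        item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)

    # Run all async test functions concurrently on a single event loop.
    # NOTE: this skips fixtures and result reporting; a real implementation
    # would have to hook much deeper into pytest.
    async def run_async_items():
        await asyncio.gather(*(item.function() for item in async_items))

    if async_items:
        asyncio.run(run_async_items())

    return True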

@bsamuel-ui

bsamuel-ui commented Jan 23, 2019

pytest-yield runs tests concurrently, though via generators rather than asyncio (and it seems to be incompatible with asyncio), and it looks like the removal of yield tests breaks it.

Notably, it doesn't try to address the isolation problem, so it's better for running a batch of integration tests.

@f0ff886f

@Tinche is there any feedback from your end on whether or not this could be something exposed by pytest-asyncio?

I am asking because, if one were to prototype a solution that would not be accepted on philosophical grounds, then forking to something like pytest-asyncio-concurrent could be a viable strategy.

It seems like a very useful feature, especially for longer-running tests. As the author of the tests, I don't mind ensuring isolation across tests by enforcing it in fixtures.
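
For example, isolation could be enforced with a fixture that gives every test its own resource. A minimal sketch, where db, create_table and drop_table are hypothetical helpers:

import uuid
import pytest

@pytest.fixture
def isolated_table(db):
    # Each test gets a uniquely named table, so concurrent tests cannot
    # step on each other's data. `db` is a hypothetical database fixture.
    name = f"test_{uuid.uuid4().hex}"
    db.create_table(name)
    try:
        yield name
    finally:
        db.drop_table(name)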

@elijahbenizzy

Is there any progress on this? The side-effect argument doesn't convince me. One could add a pure_function marker that allows such tests to run in tandem. Also, tests really shouldn't have side effects in the first place... The contract for most testing software doesn't guarantee that the order of the tests will be respected.

@dimaqq

dimaqq commented Jun 13, 2019

Wow, Elijah, tone it down :)

Not so easy

I was thinking of hacking something up, but I realised that my tests use mock.patch a lot, which means they cannot naively be run concurrently.

Likewise, most tests share fixtures; there needs to be a way to specify that a given fixture is pure, either by virtue of having no side effects, or because side effects are explicitly accounted for, like a common database with a unique table per test, or a common table with a unique record per test.

This first problem is harder than appears at first:

  • patching is prevalent, it's a great way to test, I wouldn't want to ban that;
  • I could track which test patches what and only allow non-intersecting patch sets in parallel, but...
  • a patch's effects are wider than its target:
    • imagine call chain test()->foo()->requests.get()->socket.socket() and
    • one test patches requests.get,
    • while another patches socket.socket

I'm still considering tests with non-overlapping coverage, but that would kill any @parametrize parallelism, so it doesn't seem trivial either.

Regarding side-effects

I think there are two common kinds of tests. One is a unit test of a pure function: input is sent to the code under test, and the output is examined. Such a test can be parallelised. Then again, the code under test is typically small, synchronous (or, if technically async, doesn't actually wait) and would hardly benefit from concurrent async testing.

The other is a functional test: typically the global environment is set up, the code under test is called, and the global environment is examined afterwards. Such a test is built on side effects; it is slow if async (database, network, ...), and this, I think, is where concurrent testing would shine.

Edit: re: test execution order

Let's not conflate test order with concurrency. Here's a simple example:

import pytest
from unittest.mock import patch

@pytest.mark.asyncio
@pytest.mark.parametrize("now", (42, 99))
async def test_request_start_time(now):
    with patch("time.time", return_value=now):
        assert (await create_request()).start_time == now

These two tests are order-independent: patch x, test ok, unpatch; patch y, test ok, unpatch.
Yet they cannot be run concurrently: patch x, patch y; test ok, test fail; unpatch, unpatch.

@elijahbenizzy

Sorry for the tone! I had not had my coffee yet when I posted. Will post less tone-y in the future -- I understand that you folks have thought a lot more about this than me :) Thanks for the detailed response.

You have a very good point about side-effects -- if you're testing for them specifically, then running in parallel might be a little messy. I can also see that order can be specified (https://stackoverflow.com/questions/17571438/test-case-execution-order-in-pytest). I was just making the assumption that most tests were pure unit tests (as you described), but that won't be the case for all users. To be honest I'm not familiar with the mocking you're doing, but it seems to me that passing the burden of determining "function purity" to the user as much as possible is the best way to solve this.

From an API perspective, one could easily imagine:
pytest.mark.asyncio(pure=True)

That way you have fairly trivial backwards compatibility (it defaults to False), and if the user sets it to True, purity becomes part of the contract. Then, if they're doing something fancy, it's their fault if it breaks. I think the complexity of measuring patch overlaps is far too magical to implement, but we could pretty easily expose this as a power-option.

If you have a stateful service and you're looking at side effects, I think it should only matter when the ordering of queries matters. So if that's not the case, then you can pass pure=True, though something like concurrent=True might be a better name for the option... Again, it could be left up to the user.
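
A hypothetical sketch of how a plugin could read such an opt-in flag from the asyncio marker (neither pure nor concurrent exists in pytest-asyncio today; the names are illustrative only):

def _is_concurrent(item):
    # The "concurrent" keyword on the asyncio marker is hypothetical.
    marker = item.get_closest_marker("asyncio")
    return bool(marker) and marker.kwargs.get("concurrent", False)

def pytest_collection_modifyitems(config, items):
    # Group the tests that opted in to concurrency at the end of the run;
    # actually executing them on one event loop would still need a custom runner.
    serial = [item for item in items if not _is_concurrent(item)]
    concurrent = [item for item in items if _is_concurrent(item)]
    items[:] = serial + concurrent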

Cheers,
Elijah

@bglmmz

bglmmz commented Oct 8, 2019

I've got the same problem, so is there any approach for this?
BTW: I want async, but not parallel.

@bglmmz

bglmmz commented Oct 9, 2019

(Quoting @philpep's original issue in full.)

Here, this is what you wanted! Much simpler:

https://github.com/reverbc/pytest-concurrent

@dimaqq

dimaqq commented Oct 9, 2019

Use pytest_collection_modifyitems :)
Needs a call to config.hook.pytest_deselected to report tests correctly.
I'll ask if we can open-source our crud, but it's pretty simple.

After that, run several pytest processes, each executing its own subset of tests.

Open-source example: https://github.com/micktwomey/pytest-circleci/blob/master/pytest_circleci/plugin.py
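
A minimal sketch of that approach in a conftest.py, in the spirit of the pytest-circleci plugin linked above. SHARD_INDEX and SHARD_COUNT are hypothetical environment variables:

import os

def pytest_collection_modifyitems(config, items):
    index = int(os.environ.get("SHARD_INDEX", "0"))
    count = int(os.environ.get("SHARD_COUNT", "1"))

    selected = [item for i, item in enumerate(items) if i % count == index]
    deselected = [item for i, item in enumerate(items) if i % count != index]

    if deselected:
        # Report deselected items so pytest's summary stays accurate.
        config.hook.pytest_deselected(items=deselected)
    items[:] = selected

Each process then runs its own slice, e.g. SHARD_INDEX=0 SHARD_COUNT=4 pytest in one shell and SHARD_INDEX=1 SHARD_COUNT=4 pytest in another.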

@willemt

willemt commented Mar 22, 2020

For anyone who needs this, I got cooperative multitasking working here:
https://github.com/willemt/pytest-asyncio-cooperative

@baiyyee

baiyyee commented Apr 11, 2021

Ah, I want this feature!!!

@RonnyPfannschmidt
Member

This issue as proposed should be closed; it's simply not sensible to have concurrent SetupState in a pytest session.

Having pytest itself be async-supportive is a different can of worms, and starts with making pluggy async-native (which would also be appreciated by projects like datasette).

@seifertm
Contributor

@RonnyPfannschmidt Thanks for getting involved :) Could you elaborate on your comment?

What problems do you see with running tests concurrently in general? Could they not share a single SetupState instance?

What's the connection between running async tests concurrently and making pluggy async-native?

@RonnyPfannschmidt
Member

Async pluggy is needed to manage function coloring within pytest.

Concurrent pytest would have to create multiple sessions, each maintaining a distinct SetupState.

Technically, it wouldn't even matter whether threads or async tasks were used.

@RonnyPfannschmidt
Member

Distinct sessions/collections are necessary, as SetupStates currently taint nodes (although it can kind of work in trivial cases).

@seifertm
Contributor

I see, thanks for the explanation. As far as I understand, it's pretty much impossible to run tests concurrently with the way pluggy and pytest currently work. This is not something we can solve in pytest-asyncio.

If someone would like to push this feature forward, please get in touch with https://github.com/pytest-dev/pluggy to discuss ways to make pluggy async-native.

I'm closing this issue for now. As always, feel free to add to the discussion anyway if any new information pops up.

@seifertm closed this as not planned on Oct 8, 2022.