Make network based tests as concurrent as possible #4271

Draft
wants to merge 6 commits into base: master

Conversation

@harshil21 harshil21 commented May 22, 2024

Ideas implemented:

  • ... WIP

Benchmarks:

  • WIP

@harshil21 harshil21 marked this pull request as draft May 22, 2024 21:03
@Bibo-Joshi
Member

  • Introduce pytest-socket and completely disable network access for *WithoutRequest tests

Just a question for my understanding: is this intended to surface any tests that should instead go into WithRequest?
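
For context, a minimal sketch of how the class-name-based blocking could be wired up in conftest.py via pytest-socket's marker API (an assumed implementation, not necessarily what this PR does):

import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        test_class = item.getparent(pytest.Class)
        if test_class is not None and test_class.name.endswith("WithoutRequest"):
            # pytest-socket raises SocketBlockedError as soon as such a test
            # (or one of its fixtures) tries to open a network connection
            item.add_marker(pytest.mark.disable_socket)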

  • Extend that naming scheme to telegram.ext tests

Instead of splitting test classes for tg.ext, I would rather just disable network access for tg.ext-tests completely and mock every interaction by TG.
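
A minimal sketch of what such mocking could look like (offline_bot is a hypothetical fixture name; the idea is to patch the post method of the bot's request object so nothing reaches Telegram):

import pytest

@pytest.mark.asyncio
async def test_send_message_is_mocked(monkeypatch, offline_bot):
    async def mocked_post(url, request_data=None, *args, **kwargs):
        # Bare Bot API "result" payload for sendMessage; nothing leaves the process
        return {"message_id": 1, "date": 1716400000, "chat": {"id": 1, "type": "private"}}

    monkeypatch.setattr(offline_bot.request, "post", mocked_post)
    message = await offline_bot.send_message(chat_id=1, text="hi")
    assert message.message_id == 1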

  • Introduce pyfakefs to test suite

Which problem does this solve?
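
(For readers unfamiliar with pyfakefs, a generic sketch: its fs fixture swaps the real filesystem for an in-memory fake, so tests touch no actual disk.)

def test_writes_no_real_files(fs):
    # fs is pyfakefs' fixture; the paths below exist only in memory
    fs.create_file("/fake/voice.ogg", contents=b"ogg-bytes")
    with open("/fake/voice.ogg", "rb") as f:
        assert f.read() == b"ogg-bytes"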

  • Check if using uvloop in tests provides any measurable speed up.

Sounds like an interesting thing to check out; however, I see great value in testing on the default setups on the different OSes.
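
If it were tried, opting in could be as small as a session-scoped policy fixture (a sketch assuming pytest-asyncio >= 0.23; note that uvloop does not support Windows, which is part of the non-default-setup concern):

import pytest
import uvloop

@pytest.fixture(scope="session")
def event_loop_policy():
    # pytest-asyncio uses this fixture to choose the event loop
    # implementation for all async tests in its scope
    return uvloop.EventLoopPolicy()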

  • Change CI's pytest --dist to worksteal.
  • Isolate tests from other tests
  • Continue above point and introduce pytest-randomly (need to check if pytest-xdist effectively does the job of randomly)

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like sticker_set that currently rely on the order :D
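
For reference, the option change might look like this in the pytest configuration (assumed file and section; the actual location in this repo may differ):

# setup.cfg (sketch)
[tool:pytest]
# worksteal rebalances remaining tests between xdist workers as they finish;
# pytest-randomly, once installed, shuffles test order automatically
# (disable with -p no:randomly if needed)
addopts = -n auto --dist worksteal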

  • Profile all tests and see what clogs up and fix that.
  • solve the TODO in test_official.exceptions
  • dirty-equals sounds like it could be quite useful for us!

sounds interesting :) feel free to try it out!
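
A generic dirty-equals sketch of the kind of assertion it enables (example data, not PTB test code):

from dirty_equals import IsInt, IsStr

def test_user_payload_shape():
    user = {"id": 42, "first_name": "Ada", "is_bot": False}
    # dirty-equals matchers compare equal to any value of the declared shape
    assert user == {"id": IsInt, "first_name": IsStr, "is_bot": False}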

@harshil21
Member Author

question for my understanding: is this intended to surface any tests that should instead go into WithRequest?

Yes, and also there are currently some tests in there which don't access the network, but whose fixtures do access the network.

Instead of splitting test classes for tg.ext, I would rather just disable network access for tg.ext-tests completely and mock every interaction by TG.

good idea. Ideally one should be able to run the tests locally without a network connection (<1 min with pytest-xdist) and be sure that the ones which do use the network will pass.

Which problem does this solve?

hm, it is supposed to be faster (since there's no disk I/O) and safer, but we don't do much I/O, so it's probably not worth it.

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around tests cases like sticker_set that currently rely on the order :D

it is one of the possible reasons for the flakiness, and it is good practice to keep tests isolated from each other

@Bibo-Joshi
Member

question for my understanding: is this intended to surface any tests that should instead go into WithRequest?

Yes, and also there are currently some tests in there which don't access the network, but whose fixtures do access the network.

👍

Instead of splitting test classes for tg.ext, I would rather just disable network access for tg.ext-tests completely and mock every interaction by TG.

good idea. Ideally one should be able to run the tests locally without a network connection (<1 min with pytest-xdist) and be sure that the ones which do use the network will pass.

👍

Which problem does this solve?

hm, it is supposed to be faster (since there's no disk I/O) and safer, but we don't do much I/O, so it's probably not worth it.

I was at least wondering, since we e.g. already use pytest's tmp_path fixture.
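
(tmp_path, for comparison, is pytest's built-in way to get a fresh per-test directory on the real disk:)

def test_download_to_disk(tmp_path):
    # tmp_path is a pathlib.Path to a unique temporary directory
    target = tmp_path / "data.bin"
    target.write_bytes(b"\x00\x01")
    assert target.read_bytes() == b"\x00\x01"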

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like sticker_set that currently rely on the order :D

it is one of the possible reasons for the flakiness, and it is good practice to keep tests isolated from each other

👍

@harshil21 harshil21 added the WIP For PR's which are still a WIP label May 31, 2024
Member

@Bibo-Joshi Bibo-Joshi left a comment


Hey Harshil, I had a look at the changes that are here so far.

What would you think of converting the wishlist in the comments into an issue and addressing the points in multiple separate PRs? That would give us the benefit of the individual points earlier, would probably make reviewing easier, and would also make it easier to distribute the open points among different people :)

@@ -180,6 +180,23 @@ async def tz_bot(timezone, bot_info):
return default_bot


@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
Member


  1. I recommend adding a comment in this fixture about why we use a list here instead of properly parametrizing (speed); see the sketch after this list
  2. do we still need the tz_bot fixture for anything else, or can it be removed, then?
  3. Can we make sure that the tests at least point out the timezone on failure?
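
A sketch of what the fixture body with such a comment could look like (make_bot and Defaults usage are assumptions inferred from the surrounding conftest, not the PR's exact code):

@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
    # Deliberately a plain list instead of a parametrized fixture:
    # parametrizing would multiply every dependent network test, while one
    # session-scoped list of bots keeps the suite fast.
    return [make_bot(bot_info, defaults=Defaults(tzinfo=tz)) for tz in tzinfos]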

Member Author

@harshil21 harshil21 Jun 18, 2024


do we still need the tz_bot fixture for anything else, or can it be removed, then?

we do still use it in the NonRequest tests, where the time penalty is negligible.

Can we make sure that the tests at least point out the timezone on failure?

that would be hard to do without making it even more complex, but let me have another look

@@ -309,6 +326,18 @@ def tzinfo(request):
return BasicTimezone(offset=datetime.timedelta(hours=hours_offset), name=request.param)


@pytest.fixture(scope="session")
def tzinfos():
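
(Presumably something along these lines, mirroring how the tzinfo fixture above builds its BasicTimezone objects; the concrete offsets/names here are assumptions:)

@pytest.fixture(scope="session")
def tzinfos():
    return [
        BasicTimezone(offset=datetime.timedelta(hours=h), name=name)
        for h, name in [(0, "UTC"), (1, "Europe/Berlin"), (8, "Asia/Singapore")]
    ]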
Member


what's the benefit of having this as a fixture? If I see it correctly, it's used only in tz_bots and could also just be hardcoded there?

Member Author


mm yeah, a bit of a premature optimization at this stage of the PR, but yes, I'll keep it in mind

@harshil21
Member Author

What would you think of converting the wishlist in the comments into an issue and addressing the points in multiple separate PRs? That would give us the benefit of the individual points earlier, would probably make reviewing easier, and would also make it easier to distribute the open points among different people :)

okay, though some of the points might be better done sequentially rather than concurrently; otherwise one would have to rewrite and resolve many conflicts. I constantly have to test and rewrite many times before I get the iteration right. Anyway, it's probably for the best that more than one person works on this.

@harshil21 harshil21 changed the title Improve tests structure, performance, and flakiness Make network based tests as concurrent as possible Jun 26, 2024
@harshil21
Member Author

created a new issue for the other points. I will have to rebase this branch and remove commits which are not relevant to the purpose of this PR
