
this plugin doesn't work with unittest #38

Merged: 1 commit, May 3, 2021

Conversation

stas00
Contributor

@stas00 stas00 commented May 1, 2021

Hope this note will save the next person time when considering trying this module with unittest tests. I spent half an hour trying to figure out whether I had done something wrong.
@flzara
Collaborator

flzara commented May 2, 2021

IMHO, this clarification does not belong in the requirements section.

The fact that this is a pytest plugin is explicit in the very first statement of the file. If emphasis is needed on this point, I would

  • add a link to https://pytest.org on the word pytest;
  • add it, in italics, at the end of the Features section.

@stas00
Contributor Author

stas00 commented May 2, 2021

Based on your reply, it sounds like you're suggesting that I did something wrong.

Yet I've attempted to use this plugin exactly as prescribed, on exactly the prescribed platform, which is pytest.
The whole test suite of https://github.com/huggingface/transformers/ runs under pytest, yet this plugin is not working.

Could you please clarify how emphasizing that this is a pytest plugin will tell users that it doesn't support unittest-style tests?

Please refer to https://docs.pytest.org/en/6.2.x/unittest.html, which clearly states that while unittest is a distinct type of test, it can still be run with pytest like a native pytest test. Quote:

pytest supports running Python unittest-based tests out of the box.

So perhaps, to follow suit, the requirements of this plugin could say:

While pytest supports running Python unittest-based tests, this plugin will ignore this type of tests.

Or perhaps another variation of an additional requirement entry:

one of the requirements for this plugin to work is that the tests are written as pure pytest tests; pytest-monitor doesn't support other variants, such as Python unittest tests, even if they are run under pytest.

I'm not at all attached to the wording; I'm just asking that you fairly warn potential users that unittest is not supported.
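For concreteness, here is a minimal unittest-style test (the class and test names are made up for illustration): pytest collects and runs it out of the box, but, per this discussion, pytest-monitor would silently skip it.

```python
import unittest

# A unittest-style test: pytest will collect and run it natively,
# but (per this discussion) pytest-monitor does not monitor it.
class TestAddition(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)
```

Running `pytest` on a file containing this class reports the test as passed, yet no monitoring entry is produced for it.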

Thank you!

@js-dieu
Collaborator

js-dieu commented May 3, 2021

Hello

I was not aware of this pytest feature (pretty neat, IMHO).
After giving it a quick try, the plugin nearly does the job: no crashes, tests run, session found, and so on.
The sticking point is the test handler itself, which is pretty tricky. As with DocTestItem, there is another dedicated, and not so clear, event sequence.

I think the best for now is:

  • to clearly state that this plugin is not designed to work with unittest suites, as you propose;
  • to file a ticket for this need, and we will give it a deeper look.

@stas00 / @flzara Your opinion ?

@flzara
Collaborator

flzara commented May 3, 2021

  • to clearly state that this plugin is not designed to work with unittest suites, as you propose;

+1. But I would do it in the Features section of the README.

  • to file a ticket for this need, and we will give it a deeper look.

+1

@stas00
Contributor Author

stas00 commented May 3, 2021

I trust that you will know best where and how to document this, now that you know about this pytest-almost-100%-unittest-supported nuance ;)

The sticking point is the test handler itself, which is pretty tricky.

I'm trying to identify a possible memory leak in a ~5K-test suite; that's how I stumbled upon your creation.

Meanwhile, I'm trying this approach to scope out the memory usage:

# conftest.py
import psutil  # helper dependency; cpu_mem_used was not shown in the original comment
import pytest

def cpu_mem_used():
    # resident set size (RSS) of the current process, in bytes
    return psutil.Process().memory_info().rss

before = None

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    global before
    outcome = yield
    res = outcome.get_result()
    if res.when == "setup" and res.passed:
        before = cpu_mem_used()
    elif res.when == "teardown":
        after = cpu_mem_used()
        ...

but I haven't had enough time to sort it out yet. Measuring Linux memory usage is tricky, and we can't use tracemalloc since we have to include non-Python memory allocators too.

@js-dieu
Collaborator

js-dieu commented May 3, 2021

pytest does lots of (too many?) things! Hard to know every feature :)

You are absolutely right about memory measurements.
I use the same approach as yours: I start the session with a dummy test to identify the current Python footprint, which is then subtracted for each test being run.

I'll try to find some time this week to check if I can identify the sequence involved for this use case.

@js-dieu js-dieu closed this May 3, 2021
@js-dieu js-dieu reopened this May 3, 2021
@sonarcloud

sonarcloud bot commented May 3, 2021

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A) · 0 Vulnerabilities (A) · 0 Security Hotspots (A) · 0 Code Smells (A)
No coverage information · No duplication information

@stas00
Contributor Author

stas00 commented May 3, 2021

As requested, I opened an issue: #39

@stas00
Contributor Author

stas00 commented May 3, 2021

You are absolutely right about memory measurements.
I use the same approach as yours: I start the session with a dummy test to identify the current Python footprint, which is then subtracted for each test being run.

Would it be beneficial to discuss the nuances / exchange notes in another issue? For example, a few possible things to discuss:

  • I noticed you aren't doing gc.collect() before taking a new measurement, without which you may get incorrect results, since the gc isn't run immediately on object deletion. At least in all the profilers I have been developing, I've been using this approach.
  • memory_profiler, which you use, relies on RSS. I also mostly use RSS, but it's not super-reliable if memory gets paged out. Recently I discovered PSS (proportional set size) via smem, through https://unix.stackexchange.com/a/169129/291728, and it really helped to solve a "process being killed because it used up its cgroups quota" problem. I used it in the analysis of pytest being killed on CI here: [CI] solving the pytest crashing and hanging CI job huggingface/transformers#11408 (please scroll down to the smem discussion in item 4).
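To illustrate the gc.collect() point above: objects caught in reference cycles are not freed when the last external reference is dropped, only when the collector runs, so a memory reading taken right after a test can overcount. A minimal, stdlib-only sketch (the Node class is made up for the demonstration):

```python
import gc

class Node:
    def __init__(self):
        self.ref = self  # reference cycle: only the gc can free this

# Create cycles, then drop the only external reference to them.
nodes = [Node() for _ in range(1000)]
before = len(gc.get_objects())
del nodes
# Without an explicit collection the cycles may still be alive and
# counted; collecting first gives a truthful "current memory" reading.
gc.collect()
after = len(gc.get_objects())
assert after < before
```

The same reasoning applies to RSS/PSS readings: collect first, then measure, or freed-but-uncollected objects inflate the numbers.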

I'll try to find some time this week to check if I can identify the sequence involved for this use case.

Amazing - thank you!

@js-dieu
Collaborator

js-dieu commented May 3, 2021

I am sure that all these details would make more sense in a separate issue. Thanks for that :)

A few notes on your remarks:

  • gc.collect can be added as an option for more precise measurements. It clearly makes sense to add this enhancement; it simply slipped my mind during the development phase (did I rely too much on memory_profiler? ;) ).
  • Got the point. PSS is an interesting metric. The question now is how to collect it... but that is a matter of details.
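On the "how to collect it" point: on Linux (kernel 4.14+), a process's summed PSS can be read from /proc/&lt;pid&gt;/smaps_rollup without shelling out to smem. A minimal sketch (the function name is made up; Linux-only by assumption):

```python
def pss_kib(pid="self"):
    # Linux-only: total proportional set size in KiB, as summed by
    # the kernel in smaps_rollup (available since kernel 4.14).
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            # The relevant line looks like: "Pss:        12345 kB"
            if line.startswith("Pss:"):
                return int(line.split()[1])
    raise RuntimeError("no Pss entry found in smaps_rollup")

print(pss_kib())  # PSS of the current process, in KiB
```

On older kernels the same value can be obtained by summing the Pss lines of /proc/&lt;pid&gt;/smaps, at a higher parsing cost.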

I'll merge the PR.

@js-dieu js-dieu self-assigned this May 3, 2021
@js-dieu js-dieu merged commit 2fe25ac into CFMTech:master May 3, 2021
@stas00 stas00 deleted the patch-1 branch May 3, 2021 23:36