
Don't measure coverage from certain lines #668

Open
nedbat opened this issue Jun 20, 2018 · 13 comments
Labels
enhancement New feature or request

Comments


nedbat commented Jun 20, 2018

Originally reported by pckroon (Bitbucket: pckroon, GitHub: pckroon)


Hi all,

first off, thanks for the great work you've been doing so far with this :)

This is a duplicate of the issue I filed on pytest-cov earlier (pytest-dev/pytest-cov#207).
I have a project that parses data files on import, which means the parser code is always considered covered, even though it is not actually tested. Is there any way either to record a "background" that is later subtracted, or to specifically exclude coverage coming from certain lines (i.e. the import statements)?

Minimal example:

project
|-setup.py
|-project
||-data.py
||-func.py
||-__init__.py
|-tests
||-test_x.py
||-test_data.py

project/__init__.py:

from .func import f

project/data.py:

def parse_datafile():
    return 8

a = parse_datafile()

project/func.py:

def f(x): return x**2

tests/test_x.py

from project.data import a  # pragma: no cover                                  
from project import f                                                           
                                                                                
def test_f():                                                                   
    assert f(a) == a**2                                                         

tests/test_data.py:

from project.data import parse_datafile  # pragma: no cover

def test_parse():
    assert parse_datafile() == 8
> coverage erase && pytest --cov=project tests/test_x.py
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/peterkroon/python/coverage_meuk, inifile:
plugins: cov-2.5.1
collected 1 item                                                               

tests/test_x.py .                                                        [100%]

----------- coverage: platform linux, python 3.5.2-final-0 -----------
Name                  Stmts   Miss  Cover
-----------------------------------------
project/__init__.py       1      0   100%
project/data.py           3      0   100%
project/func.py           2      0   100%
-----------------------------------------
TOTAL                     6      0   100%

I expect project/data.py to be 0% covered, unless I also run test_data.py. I can't make that work with a .coveragerc or pragmas. Any advice is highly welcome :)



nedbat commented Jun 20, 2018

Hmm, this seems like the opposite of the nocover pragma: a line that is executed, but you don't want counted as covered. There is no way to do that now. I'm not sure how you would use it in your scenario, since running test_data.py should count it as covered.

Perhaps if #170 (who tests what) is ever implemented, it will give you the information you need?

Can you say more how you would use this in a real project? Are you trying to make the total percentage more accurate, or are you trying to make the red/green line markers more accurate? Or something else?


nedbat commented Jun 20, 2018

Original comment by pckroon (Bitbucket: pckroon, GitHub: pckroon)


Thanks for the lightning-fast reply. And indeed, running test_data should count it as covered, but currently running test_x also covers it.

What I want to use it for is to make sure the parser code is also actually tested and produces the expected output, so it's mostly about the percentages.


nedbat commented Jun 20, 2018

How do you envision running your tests so that you can run test_data and get the lines counted, and also run other tests and not get the lines counted?


nedbat commented Jun 20, 2018

Original comment by pckroon (Bitbucket: pckroon, GitHub: pckroon)


I think (from coverage's perspective) I could make a file like this:

background.py

import project.data

and then do
coverage run --background=background.py
where background.py is run first and recorded, and the lines covered there are subtracted from the actual coverage results of the tests. pytest-cov could wrap this by making the background equivalent to test discovery.
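The subtraction idea above can be sketched with plain dicts standing in for coverage data (a real implementation would go through coverage.py's data API; the function and data here are purely illustrative):

```python
def subtract_background(test_cov, background_cov):
    """Remove lines hit by the background run from the test run's coverage.

    Both arguments map filename -> set of covered line numbers.
    """
    result = {}
    for filename, lines in test_cov.items():
        # Lines also hit in the background run don't count as tested.
        result[filename] = lines - background_cov.get(filename, set())
    return result

# Lines covered just by importing the package (the "background"):
background = {"project/data.py": {1, 2, 4}}
# Lines covered while running test_x.py (imports included):
tests = {"project/data.py": {1, 2, 4}, "project/func.py": {1}}

print(subtract_background(tests, background))
# project/data.py keeps no lines: its coverage came only from imports.
```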

@nedbat nedbat added minor enhancement New feature or request labels Jun 23, 2018
@nedbat nedbat removed the 4.5 label Aug 17, 2018

nedbat commented Oct 9, 2018

@pckroon The new context feature in 5.0a3 might be usable for this: https://nedbatchelder.com/blog/201810/who_tests_what_is_here.html

@nedbat nedbat removed the minor label Oct 9, 2018

pckroon commented Oct 9, 2018

Cheers. I skimmed it and it looks good.

As it is now I see two ways of implementing what I need:

  1. Coverage keeps track of how often a line is hit. Record this once for the test suite and once for a file that just does the imports, then subtract the two coverages.
  2. Trash all coverage that did not come from a test function. That way coverage that came from imports is not reported.

With the new feature, option 2 seems easier to implement.
What is your view on this?


nedbat commented Oct 9, 2018

Definitely option 2 could be done now. If you use 5.0a3, you can delete the recorded data for the empty context, and then report on what is left.
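Deleting the empty-context data can be done directly against the SQLite data file. This is a sketch, assuming coverage.py 5.x's schema (a `context` table whose `id` is referenced by `line_bits` and `arc` rows as `context_id`); check the schema of your actual data file before relying on it:

```python
import sqlite3

def drop_context(db_path, context_name=""):
    """Delete all measurement rows recorded under the given context.

    Assumes coverage.py 5.x's SQLite layout: context(id, context),
    with line_bits and arc rows referencing context.id as context_id.
    """
    con = sqlite3.connect(db_path)
    try:
        ids = [row[0] for row in con.execute(
            "SELECT id FROM context WHERE context = ?", (context_name,))]
        for cid in ids:
            con.execute("DELETE FROM line_bits WHERE context_id = ?", (cid,))
            con.execute("DELETE FROM arc WHERE context_id = ?", (cid,))
            con.execute("DELETE FROM context WHERE id = ?", (cid,))
        con.commit()
    finally:
        con.close()
```

After dropping the empty context, reporting on the remaining data shows only lines hit inside test functions.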


pckroon commented Oct 9, 2018

Then, from a practical point of view: do I try to convince someone from pytest-cov to implement this, or write my own plugin for coverage.py, pytest-cov, or pytest?

I guess in the short term, writing my own plugin for coverage.py would be the quickest test/implementation. For long-term adoption I should make/revive an issue on pytest-cov. I'll have a look at that soon.


fleuryc commented Jan 27, 2023

What's the status here?


nedbat commented Jan 27, 2023

There's been no progress on this. Do you have a new scenario that could help us find a solution?


nedbat commented Jan 27, 2023

With static contexts, you could run the test suite once with no tests and --context=background. Then run the tests with --context=tests. Then there are two options:

  • Run a separate program to delete the "background" context data from the SQLite data file, and report as usual.
  • Add an option to the coverage reporting commands to report on only certain contexts (or to exclude certain contexts).


nedbat commented Jan 27, 2023

Run a separate program to delete the "background" context data from the SQLite data file, and report as usual.

In commit 6a1c275 I threw together a quick program to do this: select_contexts.py.

I'm not sure it does what we need yet. Maybe one of these two scenarios is what you want... Try it out and let me know:

Excluding code outside of any test

In your .coveragerc file, add:

[run]
dynamic_context = test_function

This will record which test function was running for each data point. Run your test suite as usual. The code running outside the test functions will have an empty context recorded.
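To verify which contexts ended up in the data file (the import-time code shows up under the empty string), a direct query works; again, this is a sketch assuming the 5.x SQLite schema:

```python
import sqlite3

def recorded_contexts(db_path):
    """Return the distinct context names stored in a coverage data file.

    Assumes the coverage.py 5.x SQLite schema with a `context` table.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT context FROM context ORDER BY context").fetchall()
    finally:
        con.close()
    return [name for (name,) in rows]
```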

Use select_contexts.py to subset the data file, then report on the resulting data:

% python select_contexts.py --exclude='^$'   # regex to exclude the empty string
% coverage html --data-file=output.data

Excluding code run when no tests are run

% coverage run --parallel --context=background  ...  somehow run your test suite with no tests ...
% coverage run --parallel --context=tests ... run your test suite as usual ...
% coverage combine
% python select_contexts.py --exclude=background
% coverage html --data-file=output.data

@LucasBerger

@nedbat Your script helped me out really well. It would be nice to be able to ignore those lines altogether and mark them as "not executable". In my Django project I really want to ignore those unnecessary loading/declaration statements. Though I saw this data is not in the objects currently traversed by your script.
