Improving test coverage #2453

Open · 3 of 5 tasks
mejrs opened this issue Jun 14, 2022 · 8 comments
@mejrs (Member) commented Jun 14, 2022

This issue gathers some areas where we need to improve our test coverage.

You can view pyo3's test coverage online, or generate it locally by following the documented steps (do this while writing tests, so you can check them).

In particular (but not limited to), the areas in the task list above need improvement.

Please keep in mind to write meaningful tests: don't just write tests that take the happy path, but also tests that exercise error paths and corner cases.
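For example, a hedged sketch of what such a test might look like, pairing a happy path with an error path (names here are illustrative, not one of the tracked areas):

```rust
use pyo3::prelude::*;
use pyo3::types::PyList;

#[test]
fn extract_i32_happy_and_error_paths() {
    Python::with_gil(|py| {
        // Happy path: a Python int extracts back to i32.
        let obj = 42i32.into_py(py);
        assert_eq!(obj.extract::<i32>(py).unwrap(), 42);

        // Error path: extracting a list as i32 must return Err, not panic.
        let list = PyList::empty(py);
        assert!(list.extract::<i32>().is_err());
    });
}
```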

@davidhewitt (Member)
See also #2191

@aganders3 (Contributor)
I finally got coverage working locally and I'm going to chip away at some of these.

@davidhewitt (Member) commented Jul 3, 2022

What do you folks think about coverage for pyo3-ffi? On one hand it's probably useful to have coverage; on the other, the only code in there is copy-paste implementations of macros from CPython and #[derive]-d traits.

@adamreichold (Member)
Doesn't that coverage include whether an external function is called or not? While I am sympathetic to the idea that covering FFI definitions does not gain much (and this is definitely a sore spot for rust-numpy as well), I would say there is still value in having runtime tests for the declared foreign API, especially since that FFI is not just enabling PyO3 itself but is published as a separate crate.
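To make that concrete, a minimal sketch of such a runtime test, using only functions pyo3-ffi already declares (whether and where this belongs in the test suite is exactly the question above):

```rust
use pyo3_ffi::*;

#[test]
fn pylong_roundtrip_through_ffi() {
    unsafe {
        // Safe to call more than once; the calling thread then holds the GIL.
        Py_Initialize();

        // Round-trip a value through the declared C API: this checks that
        // the declaration links and behaves, not just that it compiles.
        let obj = PyLong_FromLong(42);
        assert!(!obj.is_null());
        assert_eq!(PyLong_AsLong(obj), 42);
        Py_DECREF(obj);
    }
}
```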

@davidhewitt (Member)
I think, from what I've seen on codecov, our coverage stats don't say which extern functions are called (but I could be wrong).

@davidhewitt (Member)
Another thought: in the future it could be worth adding #[no_coverage] to #[test] functions. IMO whether the test itself is covered is not that interesting; really we want to measure coverage of the library code. We could potentially already do this with https://github.com/taiki-e/coverage-helper, if folks here agree.
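For reference, the pattern that crate documents looks roughly like this (`my_crate::add` is a hypothetical library function, used only for illustration):

```rust
#[cfg(test)]
mod tests {
    // Shadow the built-in #[test] with coverage-helper's version, which
    // additionally marks the test function itself as excluded from
    // coverage instrumentation on nightly.
    use coverage_helper::test;

    #[test]
    fn exercises_library_code() {
        // Coverage is then attributed only to the library code called here.
        assert_eq!(my_crate::add(2, 2), 4); // hypothetical function
    }
}
```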

@adamreichold (Member)
That sounds reasonable to me. Especially since the always-covered test lines "dilute" the actual coverage: with N lines of production code at zero percent coverage but three times as much test code, I could still end up with (0 + 3N) / (N + 3N) = 3/4 = 75% coverage.

@aganders3 (Contributor)
My experience is more in Python, but most projects I've worked with exclude tests from coverage. I'm not sure whether this is just convention or best practice, but I'd agree with it for the reasons above.

As for FFI, I'm also +1 on having tests. I don't think they'd be too much effort (I'm willing to help), they shouldn't need much maintenance, and they would provide some value. That seems like the best option versus leaving them uncovered or marking them #[no_coverage].
