
The Push for 90% Test Coverage #11885

Closed · 3 of 14 tasks

kshyatt opened this issue Jun 26, 2015 · 68 comments
Labels: good first issue · test

@kshyatt (Contributor) commented Jun 26, 2015

The testing situation right now

The latest Codecov run shows a test coverage percentage of 83.24%. This is amazing work by all, but now let's push for that beautiful green badge, following the great work at #9493. @ViralBShah asked us to open new issues rather than keep accumulating comments over there. There are many files in Julia which are poorly covered, but some are more target-rich than others. In particular, the files / folders with the most lines missed are:

- [ ] base/multi.jl - 348 lines missed
- [ ] base/pkg/entry.jl - 284 lines missed
- [ ] base/linalg/lapack.jl - ~~293~~ 52 lines missed
- [ ] base/inference.jl - 229 lines missed
- [ ] base/printf.jl - 234 lines missed
- [ ] base/show.jl - 170 lines missed
- [ ] base/LineEdit.jl - 199 lines missed
- [x] base/abstractarray.jl - ~~196~~ 19 lines missed - claimed by @davidagold
- [ ] base/REPL.jl - 184 lines missed
- [ ] base/stream.jl - 169 lines missed
- [x] base/interactiveutil.jl - 163 lines missed - claimed by @Rory-Finnegan
- [x] base/string.jl - 139 lines missed - claimed by @ScottPJones
- [ ] base/int.jl - ~~127~~ 81 lines missed
- [ ] libgit2 folder - 400 lines missed

To get to 90% coverage, we need to cover about 3580 more lines (the 6.76-point gap between 83.24% and 90%, over roughly 53,000 tracked lines). If we focus on the above files, we can drastically improve their coverage and Julia's at once. I also like this set because there's a good spread across Julia functionality: linear algebra, strings, parallel processing, the REPL, etc.

Isn't That a Lot of Lines?

Yeah, it is. Luckily, we've already made great progress (we passed 80%!) thanks to a great group of new contributors. Spread out across a bunch of people, it's not so much work.

File specifics

lapack.jl is the file I know best. Many of the functions wrapped in there "can't" be tested right now because they require matrix storage formats that Julia doesn't support, like banded or RFP (rectangular full packed). But there are also functions performing complicated factorizations that are difficult to test without a lot of domain expertise. Linear algebra experts welcome!

multi.jl, the file in the worst state, has many methods for starting/stopping/talking to workers that have not been tested at all. Maybe this is an issue with the buildbots that @tkelman or @staticfloat can clarify more?

In general, the state of the REPL testing is quite poor. If someone would like to familiarize themselves with the REPL internals, writing these tests would be a wonderful way to do so.

How you can help

If there's an area in the above list you have some expertise in or want to learn more about, feel free to "claim" the file and start testing the heck out of it! That's how I got involved in the linear algebra testing, and it worked out pretty well 😉. The first set of tests was the hardest to write, but following the instructions in the OP of #9493 will get you most of the way there; you absolutely do not need to read the 150+ comments that follow it! Watching the Coveralls or Codecov pages for the repo is a good way to track our progress, and you will be able to see your numbers go up.

If someone claims a file as their testing baby, I will check it off in the list above. If there's no checkbox, that file is up for grabs and in need of some awesome Julians, newbies or experienced, to write some tests for it!

We now have a Gitter chat-room: https://gitter.im/kshyatt/julia-coverage

"Help! I'm a first time contributor and I have no idea how to use Coveralls!"

Here's a step-by-step guide to how I find things to test using Coveralls.

  1. Click on the latest build on the JuliaLang/julia repo on Coveralls.
  2. Coveralls by default doesn't show us a lot of files. Let's change that.
  3. Let's view 100 files per page. Since Julia base has about 220 files, we can capture all the ones in the worst state this way.
  4. Now let's look at all the files in Julia, rather than just the ones changed in the latest merge.
  5. lapack.jl here is in a bad state. Look at all those missed lines! Let's click on it and find something to test.
  6. Here's a line in red that's not tested. It's an error throw, a kind of line that's often missed. Writing a test for this functionality should be pretty simple (see the sketch below).
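
To make step 6 concrete, here's the usual shape of such a test. This is a stand-in sketch only: `checkside` below is a made-up function mimicking the kind of untested argument check found in lapack.jl, not a real API.

```julia
using Base.Test  # `using Test` on Julia 0.7 and later

# Stand-in for an untested argument check; the `throw` is exactly the
# kind of red (missed) line Coveralls highlights.
function checkside(side::Char)
    if !(side in ('L', 'R'))
        throw(ArgumentError("side must be 'L' or 'R', got '$side'"))
    end
    return side
end

@test checkside('L') == 'L'                # covers the normal path
@test_throws ArgumentError checkside('X')  # covers the error throw
```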
@kshyatt added the test and good first issue labels Jun 26, 2015
@tkelman (Contributor) commented Jun 26, 2015

There was a question elsewhere, which I don't think has been answered yet, about what happens when parallel workers try to write coverage data to the same .cov file; I'm not sure whether or not this is happening on the buildbots. Setting the environment variable JULIA_CPU_CORES=1 may help (or hurt the coverage?).

Also #11802 may make these numbers more honest (worse) when it gets merged.

@ScottPJones (Contributor)

I'd be happy to try to handle the lines in string.jl, if nobody else wants it. Would people mind if I combined that with splitting string.jl into a number of independent files, and then splitting test/strings.jl (my nemesis, since I keep having to rebase whenever somebody changes it) and test/unicode.jl into files matching the split-up string.jl (alongside the ones that are already separate, i.e. utf8.jl, utf16.jl, and utf32.jl)?

@ScottPJones (Contributor)

@tkelman That was me asking, and I have a very strong suspicion that it isn't being handled, and that the true coverage may actually be higher, because the .cov files for heavily used files like abstractarray.jl may be getting overwritten by the different processes.

@Ismael-VC (Contributor)

I'll keep helping the best I can! 👍 for 100%

@tkelman (Contributor) commented Jun 27, 2015

Maybe dumb possibility: write files to ".jl.$(getpid()).cov" then merge them in a post-processing step?
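
Something like this, perhaps; an untested sketch that assumes the `.cov` format of one left-padded hit count (or `-` for untrackable lines) per source line, and that all inputs came from the same source file:

```julia
# Sketch of the post-processing merge: sum per-PID hit counts line by line.
function merge_cov(target::AbstractString, covfiles)
    counts = Int[]     # per-line hit counts; -1 stands for "-" (untrackable)
    srcs   = String[]  # source text of each line, taken from the first file
    for f in covfiles
        for (i, ln) in enumerate(eachline(f))
            m = match(r"^\s*(-|\d+) ?(.*)$", ln)
            c = m.captures[1] == "-" ? -1 : parse(Int, m.captures[1])
            if i > length(counts)
                push!(counts, c)
                push!(srcs, m.captures[2])
            elseif c >= 0
                counts[i] = max(counts[i], 0) + c
            end
        end
    end
    open(target, "w") do io
        for (c, s) in zip(counts, srcs)
            println(io, lpad(c < 0 ? "-" : string(c), 9), " ", s)
        end
    end
end

# e.g. merge_cov("multi.jl.cov", ["multi.jl.1234.cov", "multi.jl.5678.cov"])
```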

@ViralBShah (Member)

That may be the simplest thing to do.

@ViralBShah (Member)

@ScottPJones It would be best, IMO, not to mix up the testing with other PRs. That way we can fast-track merging things that are focused.

@ScottPJones (Contributor)

Sounds like a good first step to me, as long as the same worker processes are used throughout the test run (just in case the OS reuses a PID when workers are forked during testing).

@timholy (Member) commented Jun 27, 2015

I haven't dug into how the buildbots are running things, but AFAIK we've always been measuring coverage of base with JULIA_CPU_CORES=1. Indeed, the current lack of info back from workers may be the major factor behind multi.jl's apparently-poor coverage.

@ScottPJones (Contributor)

@timholy That's good to know; I'd been trying to run coverage tests on my laptop and was getting things overwritten. It would still be good to solve this, though, so that we can speed up coverage testing.
@ViralBShah Whatever people feel is best. I was thinking of splitting things up into 3-4 commits in one PR, but I can also do separate PRs for splitting up string.jl, splitting up test/strings.jl and test/unicode.jl, and adding any tests that are missing.

@davidagold (Contributor)

Happy to jump on base/abstractarray.jl, will work around pending changes from #11895.

@kshyatt (Contributor, Author) commented Jun 28, 2015

Wonderful! Thanks so much, @davidagold. I have a weak commitment to lapack.jl since there are a few functions in there that need better error messages anyway.

@davidagold (Contributor)

My pleasure!

To make things interesting, I will bet anybody a beverage of their choice (and some semblance of bragging rights) that I submit a PR with full coverage of abstractarray.jl before they submit a PR with an equivalent coverage increase in their file.

@ScottPJones (Contributor)

Has @carnaval's change to use min() for the coverage of several statements on one line made it into master yet? If not, I think there will be a number of places that look covered but really aren't (except in @kshyatt's changes, since she rewrote things as if/end instead of && or ||).
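
For context, a tiny illustration of the issue: with plain per-line counters, both statements below share one counter, so the line can look covered even when the `throw` never executes.

```julia
# One line, two statements: the counter ticks when the comparison runs,
# regardless of whether the throw was ever taken.
f(x) = (x < 0 && throw(ArgumentError("x must be nonnegative")); 2x)

# The if/end rewrite puts the throw on its own line, so its coverage
# is reported honestly.
function g(x)
    if x < 0
        throw(ArgumentError("x must be nonnegative"))
    end
    return 2x
end
```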

@kshyatt (Contributor, Author) commented Jun 29, 2015

Coverage has not run since June 22, and someone told me that this is because the nightly builds are failing - @staticfloat, is that true? I am pretty sure @carnaval's change hadn't made it in by that date.

@staticfloat (Member)

Yes it's true. I pushed out a change that will hopefully fix the nightly breakage today. We'll see I suppose. :)

@kshyatt (Contributor, Author) commented Jun 29, 2015

Someday @staticfloat will become so tired of my tedious "Hello, yes, it's me again, so about the coverage runs..." emails he will simply force ME to curate the buildbot!

@ScottPJones (Contributor)

I've been running coverage tests manually, but haven't totally succeeded yet because lots of unit tests break when CPU_CORES == 1. (I'm preparing a simple fix to not call "spawn", but now it's dying on the "reflection" test for some reason.) This coverage testing would be a lot easier if:

  • Base.runtests() worked with CPU_CORES == 1
  • or the .cov files from different processes didn't overwrite each other
  • or we had a command line flag to ignore sys.dylib's object code and act as if you just had sys.ji

@staticfloat (Member)

> Someday @staticfloat will become so tired of my tedious "Hello, yes, it's me again, so about the coverage runs..." emails he will simply force ME to curate the buildbot!

And thus, the cycle continues. :)

> • or we had a command line flag to ignore sys.dylib's object code and act as if you just had sys.ji

This actually reminds me: the lack of this command line flag is ALSO holding up the coverage stats, since our song and dance so far has been to download the nightly generic Linux tarballs (which rightly contain only sys.dylib now, no sys.ji anymore), delete sys.dylib, and try to run coverage. Unfortunately, Julia then fails to start up, since there is no sys.ji to fall back to.

KristofferC pushed a commit to KristofferC/julia that referenced this issue Jun 30, 2015
@dpsanders (Contributor)

I would like to try number 11, int.jl. There doesn't seem to be an int.jl file in the test directory. Can I add this?

@kshyatt (Contributor, Author) commented Jun 30, 2015

I think many of the int.jl methods are "implicitly" tested when we run the linalg and math tests. But looking at int.jl, it appears much of the type conversion between integer types isn't tested at all. Personally, I don't have a problem with adding an int.jl test file. I will add you to the OP.

Keep in mind that to run the tests in your new file, you will need to add it to test/choosetests.jl.
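
For reference, that change is tiny; something like this (an illustrative subset, not the real list):

```julia
# test/choosetests.jl (sketch): add the new name to the testnames array
# so `Base.runtests("int")` finds test/int.jl.
testnames = [
    "core", "linalg", "strings", "numbers",
    "int",   # <- new entry
]
```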

@StefanKarpinski (Member)

Testing integer conversions makes sense and would be a great addition, especially various tricky corner cases – i.e. making sure that converting a value equal to the largest/smallest representable number in the target type works while converting values just one more extreme raises an InexactError.
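
Concretely, the kind of corner cases meant here, as a sketch (`Base.Test` became plain `Test` in Julia 0.7+):

```julia
using Base.Test

# Boundary values must convert exactly...
@test convert(Int8, 127) === Int8(127)    # typemax(Int8)
@test convert(Int8, -128) === Int8(-128)  # typemin(Int8)
@test convert(UInt8, 255) === 0xff        # typemax(UInt8)

# ...while one step past the boundary must raise InexactError.
@test_throws InexactError convert(Int8, 128)
@test_throws InexactError convert(Int8, -129)
@test_throws InexactError convert(UInt8, 256)
@test_throws InexactError convert(UInt8, -1)
```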

tkelman added a commit that referenced this issue Jun 30, 2015
RFC: Additional string tests for #11885
@kshyatt (Contributor, Author) commented Jul 3, 2015

My latest codecov results show us at 79.17% coverage. Keep up the great work, everyone! @dpsanders, I can't wait to see that number go up once your PR is merged.

@dpsanders (Contributor)

My PR is ready. I added some tests like those proposed by @StefanKarpinski. I may be off the radar for the next 10 days.

@dpsanders (Contributor)

One problem is that there is quite a lot of code in int.jl for 32-bit machines. Is there some way to test that?

@kshyatt (Contributor, Author) commented Jul 7, 2015

More tests are always useful and appreciated! You're added to the OP :)

@davidagold (Contributor)

Question: should I include a test for a deprecation warning?

@ScottPJones (Contributor)

Check out my PR #11976, where I added tests for deprecated functions. It was a bit tricky, because you need to handle the fact that when Base.runtests() runs tests on workers, they have depwarn == 2, but if you run Base.runtests("deprecated") from the REPL, depwarn is whatever you have set (normally 1).
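
A rough sketch of that pattern (hypothetical `oldfunc`/`newfunc`; the real PR is more thorough):

```julia
using Base.Test  # `using Test` on Julia 0.7 and later

newfunc(x) = 2x
@deprecate oldfunc(x) newfunc(x)   # defines a deprecated oldfunc

# Workers started by Base.runtests() run with depwarn == 2, which turns
# deprecation warnings into errors; an interactive session usually has
# depwarn == 1, where the call just warns and succeeds.
if Base.JLOptions().depwarn == 2
    @test_throws ErrorException oldfunc(3)
else
    @test oldfunc(3) == 6
end
```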

@davidagold (Contributor)

Thank you, Scott. I wonder, though, if I just X-Y problem'd y'all (or just asked an unfortunately imprecise question). My particular issue is dealing with this line: https://coveralls.io/builds/2976082/source?filename=base%2Fabstractarray.jl#L17. Maybe I'm missing it (I'm unfamiliar with the internals of deprecation), but this doesn't seem to hook into the same deprecation interface that you were treating -- right?


@ScottPJones (Contributor)

It does; I was also using depwarn().
Note: I've also got a PR improving the error messages for abstractarray.jl, #11797.

@davidagold (Contributor)

Ah, so the key is depwarn(). I'd better look that up... Thanks, Scott =)

@davidagold (Contributor)

Okay, I think my tests are ready to go. PR here: #12086.

Suggestions are of course welcome -- my working assumption is that my PR might need some iteration...

@kshyatt changed the title from "The Push for 80% Test Coverage" to "The Push for 90% Test Coverage" on Jul 15, 2015
@ScottPJones (Contributor)

@randyzwitch Now that my split-up of string.jl has been merged, please go ahead and tackle printf coverage! (Unless you are too busy making interesting tweets! 😀)

@ScottPJones (Contributor)

I've mostly finished with unicode/*.jl, so now I'm going to claim string/*.jl.

@hayd (Member) commented Oct 13, 2015

@kshyatt (Contributor, Author) commented Oct 13, 2015

The reason is shown here, a failing test introduced by #13491.

@hayd (Member) commented May 23, 2016

Seems to have dropped to 11% in the last day or so! https://codecov.io/github/JuliaLang/julia/commits

@tkelman (Contributor) commented May 23, 2016

See #16501. The most recent coverage run timed out (https://build.julialang.org/builders/coverage_ubuntu14.04-x64/builds/281/steps/Run%20non-inlined%20tests/logs/stdio); I'm not sure why there are ASCIIString/UTF8String deprecation warnings when running the base tests.

@omus (Member) commented Aug 15, 2016

I've been looking into #17990 and I was hoping to add some tests dealing with credentials and LibGit2. My current plan is to make a GitHub machine user associated with the JuliaLang org and utilize deploy keys (SSH access) and personal access tokens (HTTP access) which would have read-only access to a private JuliaLang repo.

The issues I see with this plan are that it would require access to a private GitHub repo (which costs money) and that the repo would be accessible by anyone with a copy of the Julia code base. Does this sound like a good plan, or are there better ideas for testing credentials and LibGit2?

@ViralBShah (Member)

I think we have free private repos in the JuliaCon org. I will verify that.

@JaredCrean2 (Contributor)

Not sure if this is the right place to report this, but code coverage dropped to 4% about 15 days ago. Is something strange going on?

@StefanKarpinski (Member)

There's a chronic build failure going on that's probably related. I do wish that failing builds would report that coverage metrics are unavailable instead of reporting nonsense metrics.

@tkelman (Contributor) commented Jan 4, 2017

@prasad-marne (Contributor)

I'd be happy to try to handle the lines in printf.jl, if nobody else wants it.

@waldyrious (Contributor)

Looks like this issue should be closed, or perhaps renamed to "The Push for 95% Test Coverage" :)

@vtjnash (Member) commented Feb 22, 2018

We also need to update the script to upload the stdlib/*/src reports.

@ViralBShah (Member)

I think we should close this one.

@KristofferC (Member)

Agreed.
