create coverage actions #3483
Conversation
It is also possible to link/trigger this workflow only after a successful run of e.g. the tests workflow. Generally speaking, we could chain workflows so that they run in a more particular order.
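Chaining like this can be sketched with GitHub Actions' `workflow_run` trigger (a rough sketch; the workflow and file names here are assumptions, not the repository's actual ones):

```yaml
# .github/workflows/coverage.yml (hypothetical name)
name: coverage
on:
  workflow_run:
    # must match the `name:` of the workflow to wait for (assumed here)
    workflows: ["tests"]
    types: [completed]

jobs:
  coverage:
    runs-on: ubuntu-22.04
    # only run if the upstream workflow succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - uses: actions/checkout@v4
```

One caveat of `workflow_run`: the triggered workflow runs in the context of the default branch rather than the PR branch, which can make posting PR comments from it awkward.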
I have no clue why all the other tests are failing, as this PR has no changes in the source files or the build system.
I pushed #3484 to see if the tests fail there, too.
They did, so those failures are unrelated. I'll see if I can sort them out today.
- I like your suggestion of making this workflow run only after all others have passed.
- It seems like this report is only going to show the current coverage but I think what we're really interested in seeing in the PR is the change in coverage. This is where I suspect using a third-party tool like Coveralls comes in handy because such a tool would store the historical data for this purpose. IMO, ideal would be to only make a comment if there is a significant decrease in coverage, but even just showing the change would be an improvement.
I tend to agree that this isn't especially helpful without a baseline. I think Coveralls can do that, though -- and even post a comment only if some conditions apply? It would be nice to get alerts if PRs dramatically reduce the coverage rate or introduce a lot of new uncovered code!
Yes, that's actually what the Coveralls action would do.
So I would say we set it up with Coveralls, as their action does both of the mentioned suggestions automatically. I already requested organization access from GothenburgBitFactory for adding the repository to Coveralls; I just don't know who will get the notification for that one.
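A minimal workflow step for the Coveralls action might look like this (a sketch; the path to the coverage file is an assumption and depends on how the build generates it):

```yaml
# Hypothetical upload step using the official Coveralls action
- name: Upload coverage to Coveralls
  uses: coverallsapp/github-action@v2
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # assumed location of the LCOV report produced by the build
    file: build/coverage.info
```

The action then handles storing historical coverage and (if enabled on the Coveralls side) commenting on PRs with the coverage delta.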
Force-pushed from 4a20ee4 to f4a90e8.
Okay, I checked out running one action after another a bit, and it did not directly work as expected. However, when we want to use the Coveralls action (which would make sense), it does need to run on pull requests to post the comment, etc. What we could still do is use this action (the coverage one) to test only on Ubuntu 22.04 first, and if that one passes, then test on all the other distros. In a bad case that would save us the 8x4 minutes. But that's up for discussion; since to my knowledge we currently do not hit any CI limits, it's not really required.
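Gating the full distro matrix on a single fast job can be expressed within one workflow via `needs:` (a sketch; job names, the distro list, and build commands are illustrative assumptions):

```yaml
jobs:
  quick-test:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      # assumed build/test commands for a CMake + Ninja project
      - run: |
          cmake -S . -B build -G Ninja
          cmake --build build
          ctest --test-dir build --output-on-failure
  full-matrix:
    needs: quick-test   # only starts if quick-test passed
    strategy:
      matrix:
        distro: [debian, fedora, archlinux]  # illustrative list
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run tests for ${{ matrix.distro }} here"
```

If `quick-test` fails, the matrix jobs never start, which is where the minutes are saved.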
I think that sounds fine. Most failures fail on every builder, so all but the first are a waste anyway.
This should be approved now.
Integration with Coveralls is now set up and can be seen here. I assume we would also need to set up the comment function by inviting the Coveralls bot; see the documentation. On another note, I think it's quite interesting to see that over the last commits different tests fail, e.g. due to a time-out.
Closes #3413.
Change to the proper `xml-out` option branch; use Ninja for the build.
Complexity is hidden, as somehow the reporting just shows zero.
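The coverage build described here might look roughly like the following (a sketch under assumptions; the actual compiler flags, report tool, and output paths in the PR's workflow may differ):

```yaml
- name: Build with coverage and generate XML report
  run: |
    # --coverage enables gcov instrumentation for GCC/Clang (assumed toolchain)
    cmake -S . -B build -G Ninja -DCMAKE_CXX_FLAGS="--coverage"
    cmake --build build
    ctest --test-dir build --output-on-failure
    # gcovr can emit a Cobertura-style XML report from the .gcda files
    gcovr --root . --xml-pretty -o coverage.xml
```

The resulting `coverage.xml` is what a reporting action or an external service would consume.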
Force-pushed from 3ec5156 to 2f985ae.
An example failure is …
It seems there is some inconsistency between the runs that is unrelated to the tests themselves. Maybe a second invocation of ctest on the failed runs would help clean that up, with the exit code of the first one "ignored".
As apparently the chaining of the workflows is not working, I am adding the coverage as a condition into the normal tests workflow.
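The "ignore the first exit code, rerun the failures" idea above can be written as a single step; ctest supports rerunning only the previously failed tests (a sketch; the build directory name is an assumption):

```yaml
- name: Run tests (retry flaky failures once)
  run: |
    cd build
    # if the first invocation fails, rerun only the failed tests;
    # the step's exit code then comes from the rerun
    ctest --output-on-failure || ctest --rerun-failed --output-on-failure
```

This masks genuinely flaky failures, so it is worth pairing with some visibility into how often the rerun was needed.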
Alright, seems like I found a way to properly chain them. With the current setup the …
Nice work - thanks!
This reverts commit c44229d.
Okay, this was not intended to be merged without review! Sorry for that!
Currently, the settings for Coveralls are set to 80% overall coverage (currently at 84%). I have not yet set it up so that Coveralls leaves a PR comment about the change in coverage; however, if wanted, this could also be set up, see the documentation.
Hm, I'm seeing the SIGABRT in https://github.com/GothenburgBitFactory/taskwarrior/actions/runs/9613757645/job/26517145647?pr=3494. I wonder if that's due to the …
You mean for the coverage invocation build? There it should not be needed to add a …

Now, reading over the documentation for the runners, I see they only have four cores -- so maybe we should only add a …

Ah, and also the documentation of …
I think I had already marked this as "Approved" - that's probably why it merged? Anyway, it's fine :) |
I didn't realize Ninja was already parallelizing. But removing the parallelism from ctest did help the tests not fail. I'm not sure the tests are designed to run in parallel -- they might be using some shared resource?
Hm, I assumed that since even the old test script before #3446 at least allowed a flag to run the test suite in parallel, this should not be a problem. What I think is confusing and doesn't really add up is that in all the distro tests we do run the tests in parallel (…)
Could it be different versions of ctest? I hate autocorrect |
One thought is that the measurement of the number of processors may be different in a container. I have forgotten most of what I ever knew about Docker, but perhaps the tasks weren't really running in parallel in the container?
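The processor-count question can be checked directly on a runner (a diagnostic sketch; in containers, tools like `nproc` may report the host's core count even when cgroup limits constrain the container to fewer):

```yaml
- name: Show effective parallelism
  run: |
    nproc                        # processors visible to this environment
    getconf _NPROCESSORS_ONLN    # same information via POSIX getconf
    # a cgroup v2 CPU quota, if set, appears here as "<quota> <period>"
    cat /sys/fs/cgroup/cpu.max 2>/dev/null || true
    # ctest -j "$(nproc)" would use the nproc count, which may overshoot
```

If `nproc` reports more cores than the container is actually allowed, `ctest -j $(nproc)` oversubscribes without truly running tests in parallel.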
Huh, well, not running tests in parallel didn't help -- that PR failed anyway. |
Closes #3413.
With this PR, a coverage comment is automatically created and updated via the corresponding actions file and attached to the PR.
It is also possible to link the generated coverage report to an online viewer such as Coveralls or SonarCloud to get even more insight, e.g. the evolution of coverage over time, deeper insight into which lines are covered, etc.
I do think this makes sense in general, but only if we would then also look at the metrics. Otherwise it is more dead code / a dead connection, and a simple comment is probably more helpful.