JUnit report sent to wrong build #40

Open
jawnsy opened this issue Feb 11, 2021 · 33 comments · Fixed by #421
@jawnsy

jawnsy commented Feb 11, 2021

Dear maintainer,

Thanks so much for creating this excellent plugin and sharing it for others to use!

I have multiple workflows defined, and it seems that the check is added to the first workflow, rather than the one that actually ran the test suite. Is there some way to associate the report with the correct workflow definition? Ideally I'd like the report to be below the test suite (e.g. job that runs tests, publishes report, should have the report appended as a check below it or nearby)

@mikepenz
Owner

Hi @jawnsy

This is sadly a problem on GitHub's side, which you can read more about here:
https://github.community/t/github-actions-status-checks-created-on-incorrect-check-suite-id/16685

Hoping they provide a fix for that soon.

@pbrisbin

pbrisbin commented Nov 18, 2021

Hi there - unfortunately, this issue makes this Action pretty unusable for us: the check run output is the only place our test suite failures are visible, and when they get added to some other "random" Workflow, they're impossible to find.

I waded through some of the related issues and there seems to be a potential workaround:

JUnit reporter github action should update existing check instead of creating new one.

Get check which triggered action:
https://docs.github.com/en/rest/reference/checks#get-a-check-run

Update this check with provided information about test results:
https://docs.github.com/en/rest/reference/checks#update-a-check-run

If I understand correctly, this would add all of your annotations to the actual Job where I'm using the step (our tests), instead of as a new check that is getting misplaced in some other Workflow.

Besides being a workaround for this issue, this is actually exactly where I was trying to find these annotations to begin with. I think it's a much more useful place to put them. Even if the check was showing up under the correct Workflow, I think users would still be looking for failures in the existing, failed "Test Job" check and not some separate "JUnit Reporter" check (I know we could configure the name to be more obvious, but still).

Am I missing anything? Would you consider implementing such functionality?

@mikepenz
Owner

@pbrisbin thank you very much for the added notes.

It sure sounds like an opportunity to use those APIs and make it so it only updates the current run. I'd need to look into it.
Do you happen to know an action already using this approach?

@pbrisbin

Hmm, I can't say that I do. A lot of Actions add annotations to the Job's check where they're run, but they do it by outputting messages with a problem-matcher attached, not by updating the Check. I suppose that's another alternative you could consider.
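
For reference, with the @actions/core toolkit that alternative would look roughly like this (a sketch; the message, title, file path, and line number are placeholders):

import * as core from '@actions/core'

// each call emits an ::error workflow command, which GitHub attaches as an
// annotation to the check run of the job that is currently executing
core.error('expected true, got false', {
  title: 'Test failure',         // annotation title
  file: 'spec/example_spec.rb',  // path relative to the repository root
  startLine: 12
})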

Looking at the docs a bit more closely, I think it would look something like,

const ref = head_sha
const check_name = github.context.job // assumes check names are job.<job-id>
const filter = 'latest'

const checks = await octokit.rest.checks.listForRef({
  ...github.context.repo,
  ref,
  check_name,
  filter
});

// assumes a check for our job exists if we're running, probably want some error-handling
const check_run_id = checks.data.check_runs[0].id

// as per docs: "Each time you update the check run, annotations are
// appended to the list of annotations that already exist for the check run."
await octokit.rest.checks.update({ owner, repo, check_run_id, ... })

Oh, and I noticed you're currently only sending your first 50 annotations. I suppose it's reasonable to expect a test suite to never have more than 50 failures in a given run, but you could send everything by calling this .update() multiple times, each with fewer than 50.

The Checks API limits the number of annotations to a maximum of 50 per API request. To create more than 50 annotations, you have to make multiple requests to the Update a check run endpoint. Each time you update the check run, annotations are appended to the list of annotations that already exist for the check run.
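
If it helps, a rough sketch of that batching, reusing the names from the snippet above (annotations and summary are assumed to be prepared already):

const BATCH_SIZE = 50

// the API appends annotations across updates, so sending successive
// 50-item slices accumulates all of them on the same check run
for (let i = 0; i < annotations.length; i += BATCH_SIZE) {
  await octokit.rest.checks.update({
    ...github.context.repo,
    check_run_id,
    output: {
      title: 'Test Results',
      summary,
      annotations: annotations.slice(i, i + BATCH_SIZE)
    }
  })
}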

@mikepenz
Owner

@pbrisbin there's a version coming up using the outlined APIs. Sadly those lead to a different result than anticipated: because the check is still running at the time the result is updated, it won't keep the failure state, for example.

Just for reference, there is still no API to create a check in a specific suite / workflow, sadly:

https://github.community/t/specify-check-suite-when-creating-a-checkrun/118380/9
https://github.community/t/associate-a-check-run-with-a-workflow/122540
https://github.community/t/associating-a-check-run-with-a-specific-action-run/126682

@mikepenz mikepenz reopened this Nov 21, 2021
@pbrisbin

Awesome!

because the check is still running at the time the result is updated, it won't keep the failure state, for example

Let me make sure I understand...

So, if I use this in an app-test job for example, it runs tests and then does the JUnit thing, and the Job will pass/fail based on the tests and not the JUnit processing? That seems... desired? I kind of think the only reason the JUnit action needs to control the failure at all is because it's doing that separate-check thing. When it's part of the test's check, the tests setting the check result just makes sense (to me).

Am I missing something?

@mikepenz
Owner

@pbrisbin you are actually right, that particular thing may not be an issue in this case.

One thing seems to be a problem though: the way the annotations are rendered if we use the update approach:
[Screenshot: annotations as rendered via the update approach]

vs.

[Screenshot: annotations as rendered via the create approach]

https://github.com/mikepenz/action-junit-report/runs/4279043601

@pbrisbin

Wow, well that's super annoying. I wonder why that is 🤔

@pbrisbin

I might be grasping at straws, but in your PR, it looks like the Update sets { output.conclusion } while the Create sets { conclusion, output.* }. Is it possible that matters?

@mikepenz
Owner

@pbrisbin sadly this wasn't it, but it makes the build fail while it's still ongoing, breaking a few things :D
https://github.com/mikepenz/action-junit-report/actions/runs/1492172809
After it's done, it will change its state to successful in this particular build.

@pbrisbin

Ha, not unexpected in hindsight. I would say that when doing the update path you don't want conclusion at all -- you aren't intending to conclude it, right?
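
As a sketch, the update would then only carry the output (assuming annotations is prepared and failed/total are counts you already have; omitting status and conclusion should leave the run in progress):

// update only the output; the test job itself concludes the check run
await octokit.rest.checks.update({
  ...github.context.repo,
  check_run_id,
  output: {
    title: 'JUnit Test Report',
    summary: `${failed} of ${total} tests failed`,
    annotations
  }
})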

@pbrisbin

I wonder if the test's conclusion/output is somehow taking over as the "important one". So your annotations remain, but they get subdued in the UI once the test completes?

Either way, totally annoying. And sorry if this ends up a dead-end.

@mikepenz
Owner

@pbrisbin I feel this is potentially an issue with the GitHub API. The documentation itself doesn't indicate any behavior difference, as far as I found in a brief look: https://docs.github.com/en/rest/reference/checks#update-a-check-run

Thank you for also looking into it. Really appreciated!

@pbrisbin

Yeah, I agree. Looking at the API docs, the POST bodies for Create and Update are identical, so you would expect the results to be the same too.

@pbrisbin

There seems to be more functionality that doesn't work for update vs. create:

I don't see the annotations appear in the diff. Unfortunately, I think not seeing the annotations in-line is a bridge too far for us to use this style.

I also notice here that the Process completed with exit code 1. annotation actually includes the Job/Check name (tts-test), but the others do not -- as if they're attached to the Workflow but not any one Job/Check. I wonder if that's somehow related.

@mikepenz
Owner

@pbrisbin that's unfortunate :/

I haven't had the chance to check whether there have been other reports of the update endpoint behaving differently, or whether anything has been mentioned on the forums.

@pbrisbin

pbrisbin commented Nov 29, 2021

Oh, that may actually be our fault. I need to investigate where file/line information is meant to live in a JUnit XML. Our formatter seems to just have it as part of the failure message, but that's probably not right.

Notice the "Line 1 in {spec description}", even though {file}:{line} is there as the first part of the message.

@mikepenz
Owner

@pbrisbin sadly it seems line numbers are not consistent between different output formats.
Funnily enough, there was a feature request today to add support for handling the line attribute in the report.xml.

There surely is some way to expand support for handling it. It should already handle retrieving line numbers from file paths, as you show in your screenshot:
https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L58-L60

Maybe if you have a sample output, we can use it to expand the test cases and see why it does not work for your case.

@pbrisbin

pbrisbin commented Nov 29, 2021

Oh, that's convenient!

Here is an example of what it is right now: golden.xml

But we author this formatter, so I can pretty easily add attributes.{file,line}, which it looks like would be picked up before any inferring is done.

@mikepenz
Owner

I didn't realize. In that case I'd propose adding the line attribute :) for direct support.

And thanks for providing the xml. I see, so we can't parse it from the description; if it were part of the file attribute, it would already be parsed.

@pbrisbin

So you are saying that if we added file="{file}:{line}", that would also work?

line numbers are not consistent between different output formats

And there isn't any one way to do it that is more common or standardized than another?

@mikepenz
Owner

I believe using the line attribute is the most stable form, as shown in this JUnit-format xunit export: https://github.com/mikepenz/action-junit-report/blob/main/test_results/xunit/report.xml

In cases where neither is available (the line via the file attribute, or via the line attribute), the action tries to do some more searching to find the line number in the stacktrace: https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L53-L55

I'll check again and see why it wouldn't match the description in your case, as it should be the stacktrace.


Did some searching, but the xsd for JUnit output, for example, does not specify line, so it seems this is generally an extension to the spec.
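
As a hypothetical sketch of that fallback order (illustrative only, not the action's actual code; resolveLine and the field names are made up):

// prefer an explicit line attribute, then a file attribute of the form
// "path:line", then the first "path:line" occurrence in the stacktrace
function resolveLine(tc: {line?: string; file?: string; stacktrace?: string}): number | undefined {
  if (tc.line) return parseInt(tc.line, 10)
  const fromFile = tc.file?.match(/:(\d+)$/)
  if (fromFile) return parseInt(fromFile[1], 10)
  const fromTrace = tc.stacktrace?.match(/[\w./\\-]+:(\d+)/)
  if (fromTrace) return parseInt(fromTrace[1], 10)
  return undefined
}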

@koyama-yoshihito

@mikepenz
Hi!
Thank you for developing this nice plugin.

I am looking for a way to solve this issue.
Is there any progress or workaround?

@mikepenz
Owner

@koyama-yoshihito as of this discussion here, the issue is still not fixed by GitHub: community/community#14891

@koyama-yoshihito

@mikepenz
We'll just keep watching this discussion.
Thanks for the info.

@laughedelic
Contributor

Could this feature help with the issue? https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries

@mikepenz
Owner

@laughedelic thank you for the link. The newest version of this action does publish a job summary.

While it seems to be a partial solution, I think it doesn't fully cover the whole problem.

Sample display:
https://github.com/mikepenz/action-junit-report/actions/runs/2676469277

Sadly this would lose the great feature of seeing the test results within the reported checks:

[Screenshot: test results displayed within the reported check]

We could add a flag to offer disabling the report being sent and only having a summary - for people who'd prefer that.
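
For context, a job summary is written via the @actions/core summary helper, roughly like this (a sketch; the heading and table contents are placeholders):

import * as core from '@actions/core'

// builds the Markdown shown on the workflow run page and flushes it to the
// file that GITHUB_STEP_SUMMARY points at
await core.summary
  .addHeading('Test Results')
  .addTable([
    [{data: 'Suite', header: true}, {data: 'Passed', header: true}, {data: 'Failed', header: true}],
    ['example-suite', '12', '1']  // placeholder row
  ])
  .write()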

@laughedelic
Contributor

Glad to know that it's already in use! 👏

I think the summary in the checks list is a very nice feature, but unfortunately, when a lot of tests are run in parallel, the number of check summaries is doubled. So it would be good to either be able to disable it, or to consolidate all summaries under some tag (as suggested in #195).

@mikepenz
Owner

mikepenz commented Jul 22, 2022

@laughedelic without offering a configuration where multiple imports can be done at once, I feel the current summary API exposed by GitHub's core library won't offer that possibility yet.

But given a sample summary like this one: https://github.com/mikepenz/action-junit-report/actions/runs/2717236074#summary-7464569272 it seems like it's not too bad.

What is also possible already today is to enable the annotate_only mode, which will not create a check run anymore, meaning only the summary will be created.

@mikepenz
Owner

Btw, regarding multiple imports as one: please see my comment here: #195 (comment) with the current (in-progress) PR.

garfieldnate added a commit to SoarGroup/Soar that referenced this issue Dec 12, 2022
Builds were failing because `junit` was not installed. Use a dedicated
action plugin to ensure it is available and also publish the test report
on PRs for convenient viewing. For now we don't create an actual PR
check, only annotations, because the check is currently sent to the
wrong build. See
mikepenz/action-junit-report#40.
@igiona

igiona commented Mar 6, 2023

As a workaround you could use the same approach as other actions (e.g. https://github.com/EnricoMi/publish-unit-test-result-action) and add a link to the check run in the summary.
It doesn't solve the issue, but at least one is able to find the "details" in one click...

@AkhremDmitry

Hi @mikepenz
Thanks a lot for this action!
It works fine when the workflow is triggered by a merge or a PR.
But I've run into a problem that looks related to this issue: I have a workflow triggered on a schedule, and in that case all JUnit reports are created in another workflow run.
Can I somehow customise the runId where the check should be created?

[Screenshots: the scheduled workflow run, with the JUnit report checks created under a different workflow]

@mikepenz
Owner

mikepenz commented Jun 26, 2023

hi @AkhremDmitry as far as I am aware, no new API was introduced to fix the root problem of this ticket.

However, one thing I use in different projects these days is to only have annotations and the job summary:

annotate_only: true
detailed_summary: true

and to have the main build that runs the tests fail, so it does not cause such a wrong association.
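
As a sketch, the full step could look roughly like this (the version tag and report path are placeholders to adapt):

- name: Publish test report
  uses: mikepenz/action-junit-report@v3
  if: always()  # publish the report even when the test step failed
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
    annotate_only: true
    detailed_summary: true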
