
Tracking issue for libtest JSON output #49359

Open
Gilnaa opened this Issue Mar 25, 2018 · 7 comments

@Gilnaa

Gilnaa commented Mar 25, 2018

Added in #46450
Available in nightly behind a -Z flag.
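For reference, on nightly at the time the JSON output is enabled by passing `-Z unstable-options --format json` to the test binary (e.g. `cargo test -- -Z unstable-options --format json`), which emits one JSON object per line, roughly like the following (the exact field set may differ between nightlies):

```json
{ "type": "suite", "event": "started", "test_count": 2 }
{ "type": "test", "event": "started", "name": "tests::passes" }
{ "type": "test", "name": "tests::passes", "event": "ok" }
{ "type": "test", "event": "started", "name": "tests::fails" }
{ "type": "test", "name": "tests::fails", "event": "failed", "stdout": "captured output here" }
{ "type": "suite", "event": "failed", "passed": 1, "failed": 1, "ignored": 0, "measured": 0, "filtered_out": 0 }
```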

@johnterickson


johnterickson commented Feb 21, 2019

@nrc You mentioned "The long-term solution is going to be deeply intertwined with pluggable test runners" in #46450. Does that mean that JSON won't be stabilized until there are pluggable test runners?

I'm trying to figure out if it makes sense for me to continue looking at #51924 right now, or if I should hold off for the time being. For example, I'm looking at adding test durations to the JSON (master...johnterickson:testjson) and converting the output to JUnit (https://github.com/johnterickson/cargo2junit/).

@djrenren


djrenren commented Feb 26, 2019

Yeah I think we should stabilize something like this independently of custom test frameworks. It makes sense that the included test framework would have a programmatically readable output. Libtest's conceptual model is pretty simple and most likely won't undergo any drastic changes so I'm all for stabilizing something. I'd prefer to have a JSON Schema that describes the structure before stabilization. We'd also need to audit to ensure it's resilient to any minor changes or additions over time.
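A minimal sketch of what such a JSON Schema might start from, purely illustrative (a real schema would need to enumerate every event shape and its required fields):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "libtest event (sketch)",
  "type": "object",
  "required": ["type", "event"],
  "properties": {
    "type":   { "enum": ["suite", "test", "bench"] },
    "event":  { "enum": ["started", "ok", "failed", "ignored"] },
    "name":   { "type": "string" },
    "stdout": { "type": "string" }
  }
}
```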

@alexcrichton thoughts?

@alexcrichton


alexcrichton commented Feb 26, 2019

Seems like a plausible addition to me! So long as it's designed carefully, I think it'd be good to go.

@epage


epage commented Feb 27, 2019

From the experience of creating naive serde structs/enums for libtest's output, here are my thoughts from translating it exactly as it is written in libtest (rather than inventing my own schema that happens to be compatible):

  • For suites/tests, the data is two nested enums. For some cases this works well; for others it is unnecessary nesting to deal with.
  • Probably the most annoying aspect is that the type field conflates that a suite/test finished with why it finished. This is annoying if you want to get information regardless of the completion status.
  • No bench start event is sent.
  • My gut tells me the bench event is too narrowly focused on the output of the current implementation rather than on what might be wanted.

My PR where you can see the data structures I created: https://github.com/crate-ci/escargot/pull/24/files
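A hypothetical sketch of the two-level nesting described above (the names are illustrative, not the actual definitions from the PR), showing why pulling a field out regardless of completion status is awkward:

```rust
// Illustrative model of the current format's shape: an outer enum for
// suite vs. test, and an inner enum per completion state.
#[derive(Debug, PartialEq)]
pub enum SuiteEvent {
    Started { test_count: usize },
    Ok { passed: usize, failed: usize },
    Failed { passed: usize, failed: usize },
}

#[derive(Debug, PartialEq)]
pub enum TestEvent {
    Started { name: String },
    Ok { name: String },
    Failed { name: String, stdout: Option<String> },
    Ignored { name: String },
}

#[derive(Debug, PartialEq)]
pub enum Event {
    Suite(SuiteEvent),
    Test(TestEvent),
}

/// Extracting the test name regardless of completion status forces a
/// match over every variant -- the nesting annoyance noted above.
pub fn test_name(event: &Event) -> Option<&str> {
    match event {
        Event::Test(TestEvent::Started { name })
        | Event::Test(TestEvent::Ok { name })
        | Event::Test(TestEvent::Ignored { name })
        | Event::Test(TestEvent::Failed { name, .. }) => Some(name),
        Event::Suite(_) => None,
    }
}
```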

Ideas to consider

  • Combine the event and type fields. Unsure of the value of this.
  • Split type field into type and status and define more fields as being available for all types. For example, it could be useful to programmatically report stdout for a successful test and leave it to the rendering engine to decide whether to ignore it or not. This doesn't mean all test implementations need to report all of the fields, but define it as possible.
  • Get someone from other bench implementations, like criterion, to review the bench schema.
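As a hypothetical illustration of the type/status split (the field layout here is invented, not a concrete proposal), a finished test could report its status and captured output in one flat record, leaving it to the renderer to decide what to show:

```json
{ "type": "test", "status": "ok",     "name": "tests::quiet",  "stdout": "" }
{ "type": "test", "status": "ok",     "name": "tests::chatty", "stdout": "debug log line\n" }
{ "type": "test", "status": "failed", "name": "tests::broken", "stdout": "assertion failed\n" }
```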
@andoriyu


andoriyu commented Mar 3, 2019

I think it's worth adding fields like the package and the type of test (unit test, doc test, or integration test).

The package would be something like: the module for a unit test, the resource path for a doc test (crate::foo::Bar), and the filename for an integration test.
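A hypothetical shape for events carrying those extra fields (the `kind` and `package` names here are invented for illustration):

```json
{ "type": "test", "event": "ok", "kind": "unit",        "package": "my_crate::foo",   "name": "foo::tests::it_works" }
{ "type": "test", "event": "ok", "kind": "doctest",     "package": "crate::foo::Bar", "name": "src/lib.rs - foo::Bar (line 10)" }
{ "type": "test", "event": "ok", "kind": "integration", "package": "tests/smoke.rs",  "name": "end_to_end" }
```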

@epage


epage commented Mar 4, 2019

Since libtest tracks test times (at minimum, reporting if a test is taking too long), it would be nice if that were included in the programmatic output so we can report it to the user, e.g. in JUnit. Otherwise we'd have to timestamp the events ourselves as we read them.
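For instance (the `exec_time` field name and its unit are hypothetical here), the finish event could carry the measured duration directly:

```json
{ "type": "test", "name": "tests::slow_case", "event": "ok", "exec_time": 61.2 }
```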

@johnterickson


johnterickson commented Mar 6, 2019

@epage / @andoriyu Is one of you planning to propose a better schema? I like what you're proposing.
