proposal: cmd/test2json: Allow Go Tests to Pass Metadata #43936
Comments
I looked into this a bit; unfortunately I don't think it can work quite the way you've proposed (at least, not with the current testing architecture). In particular, testing likes to emit everything in a text stream, and the JSON blobs are reconstituted with test2json. It SHOULD be possible, however, to have something which worked like:

func TestFoo(t *testing.T) {
    t.Log("Foo")
    // outputs: {"action": "output", "output": "foo_test.go:12 Foo\n"}
    t.Meta("requestID", "123")
    // outputs: {"action": "meta", "meta": {"requestID": "123"}}
    t.Log("Something Else")
    // outputs: {"action": "output", "output": "foo_test.go:16 Something Else\n"}
}

And it could work by emitting an output like:
Where ":" is a forbidden character in the key, and the value is trimmed for whitespace. I think that this functionality might be "good enough" when parsing the test output JSON; metadata would effectively accumulate for the duration of the test, since the test JSON is effectively scanned top-to-bottom anyway to extract information about a given test. I can write up a CL that we can poke at, if folks don't hate this :) |
Actually just went ahead and made a CL: https://go-review.googlesource.com/c/go/+/357914 |
Change https://golang.org/cl/357914 mentions this issue: |
+1 for this feature. Being able to set arbitrary metadata would be a great way of helping to get our go test results into a test management system without having to rely on a third-party testing library. I see the CL has stalled out a bit; I can offer time to help push this forward if anything is needed. |
Now that we have slog, I wonder if this proposal should be about adding some kind of |
cc @aclements @dmitshur as I believe y'all have been looking at structured output with cmd/dist. |
This proposal has been added to the active column of the proposals project |
Good discussion on #59928 to figure out how to hook up slog to testing. If we do that, I think that will take care of the need here. |
Actually, #59928 (comment) convinced me of the opposite. Slog output should be If we do keep these separate, then I suggest |
One thing I would very much like to have (but maybe cannot 😄 ) is that if For example:
could yield
IIUC one of the constraints here which makes this unpleasant is that |
The proposal in #43936 (comment) looks more practical to me than #43936 (comment). Specifically, different behavior in
could be a source of a lot of issues. How would it behave with input like |
Yeah I think you're right, given the constraints that "testing" cannot validate if a string is valid json or not. Small errors would lead to confusion in the output. |
Though I was thinking that, practically, test2json CAN validate, and the str/json division would show prominently when developing a producer/consumer pair. But really it's not a big deal for a consumer to decode the string as json, and pushing it out of test2json also means less overhead in test2json itself, too. I think a bigger error would be if you had some consumer which expected a string (in some other encoding), but it just SO HAPPENS to be valid json, and test2json decodes it... that would definitely be annoying. |
@riannucci How would LUCI make use of the proposed feature (something like t.WithMetadata)? It seems like it helps individual tests get structured output through to something like LUCI. Is that all you are going for? It would not let the overall execution of a test binary be annotated with JSON metadata. |
So the original context of my involvement in this proposal was maybe frivolous, but I think the proposal does generally have merit beyond that. Originally I was interested in this because I was involved with the

I was thinking about how to improve this metadata output situation though, and I think the general objective of "I want to be able to communicate data, correlated with individual tests, from within the test to a higher level tool sitting outside of

The direct consumer of such data in LUCI would be ResultDB's streaming test result system; it has the ability to associate test artifacts and other metadata directly with test cases, archiving them to e.g. BigQuery. It's possible to emulate this, of course, with specially crafted Log lines... but I would prefer if there was some out-of-band way to communicate (even if under the hood, currently, it's really 'testing' and 'test2json' trying their best to produce/parse stdout). I would rather have the 'communication channels' be something that

An alternative to this proposal which I thought of, but don't especially like, would be to produce a second, independent channel/file/pipe from the test binary which only has metadata. There are a number of downsides to this, though:
(Now that I think of it... https://pkg.go.dev/cmd/go#hdr-Test_packages doesn't mention
I understand this to mean "adding metadata to
|
(Oh, I forgot the other bit that goconvey did; for failing assertions it was able to write them out in a structured way, again so that the web UI had better ability to display them; this included things like outputting diffs between actual/expected values) |
I think there are several distinct proposals here, all of which are about getting structured information out of tests in some way, but all of which seem to differ significantly in intent:
I think we need concrete use cases to actually move this discussion forward. |
From my read of the original post, the proposal could arguably be for category 3. That data may also be in regular log output, but the goal is for some other program to read the data. The data doesn't need to be associated with any particular log line, just the test case. The proposal happened to include it with a log line, but the benefits section seems to highlight the "read it from another program" more than the association with a log line.

The use case I'm familiar with is integration with systems like TestRail. My understanding is that they may have their own identifier for a test case separate from the name, and this "metadata" would be a way to associate test cases with their identifier.

As far as I can tell, all of the use cases described in the original post and in the comments are in category 3. Some of the comments related to log lines were an attempt to propose a solution, but none of the use cases required association or ordering with existing log lines. |
Just chiming in to add another use case to the discussion: We use the Go test framework to run a fairly large set of integration tests across our telephony infrastructure. Hundreds of tests and subtests that take about 30 minutes to run in total. Most of these tests start by initiating one or more calls and then check whether our services handle the calls and user actions on those calls correctly.

Every once in a while, one of these tests will fail after we've made a change or added a new feature. The cause of the failure cannot always be found in the logging of the integration tests. Sometimes, something will have gone wrong somewhere in the SIP path and we have to look at logging in other places of our infrastructure.

Instead of having to first dig through the logging of the integration tests to find the associated Call-IDs to query on and such, it would be nice if the Go test framework had a way of exposing some metadata for each test so that we can nicely present it in our test reports (generated from test2json output).

I'm not sure if the Go test framework is intended to be used in this fashion, but figured I'd explain our use case anyway just in case. I believe the proposed |
I don't think we understand exactly what we need here yet. slog is attractive because it provides the kind of structured data we're talking about, but it's also probably wrong since we want data associated with the test, not a specific log line (see in particular this example from @dnephin). That suggests that we don't want
but instead we just want something more like:
That is, it seems like the metadata should not be attached to a specific error message, just to the test itself. slog is attractive as a way to write structured data, just not the "logging a message" part. I wonder if we should reuse slog's attribute syntax though. We could add
and define that the attribute list is exactly as defined by slog, including being allowed to pass slog.Attrs. These would be emitted in an
line in the output, and in test2json mode would also appear in a
line. Or we could go whole hog and say t.Attrs takes a ...any that it hands to slog to turn into a record and then marshals the record. In test2json mode slog's JSON would end up in the action as
I'm brainstorming here, not arguing for a specific thing. |
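A rough sketch of the kind of API being brainstormed here, purely as an illustration (the t.Attrs method, the "=== ATTR" text line, and the JSON field names are assumptions taken from this thread, not a settled design):

    package checkout_test

    import (
        "log/slog"
        "testing"
    )

    func TestCheckout(t *testing.T) {
        // Hypothetical API: attach attributes to the test itself rather than
        // to a log line, using slog's attribute conventions (alternating
        // key/value arguments, or slog.Attr values).
        t.Attrs("requestID", "123", slog.String("region", "us-east1"))

        // Hypothetical text output:
        //   === ATTR  TestCheckout requestID=123 region=us-east1
        // Hypothetical test2json event:
        //   {"Action":"attr","Test":"TestCheckout","Attr":{"requestID":"123","region":"us-east1"}}
    }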
@bcmills the original example you referenced is not a representative example of the discussion that followed. I believe we should ignore it, and I propose we remove it from the description because at this point it seems like it's a distraction. Let's review what has been discussed so far. First, the original proposal
The request is for data about a test, not about a particular operation the test performed. I believe we all agree that the second point from the proposal is already covered by #62728, as you've already mentioned. The proposal also says:
This is the first indication that mixing this with logging is not a good solution, but there were many more to follow. First let's look at all the use cases that support the idea of this being data about a test case (not about a log line). #43936 (comment) says
#43936 (comment) says
#43936 (comment) references
#43936 (comment) says
All of these are great examples of use cases where we need to associate data with a test. This suggests the

If we look for use cases for attaching data to logging, I think we'll notice there are none! There is not a single use case that would be better served by attaching the data to a specific log line, and there are many examples of why that would not work well.

#43936 (comment) looked into how easy it would be and found problems because of how

#43936 (comment) started a discussion about slog, but without presenting any use cases that required it. It was a thought that was explored, and was found to not be a good fit (1, 2).

#43936 (comment) says it well
All of this points at
That's fine, the test author can and should log separate messages for request started and request failed. If the test author wants to communicate that data using t.Attr("request.start.0", ...)
t.Attr("request.start.1", ...)
t.Attr("request.end.0", ...)
... |
Seems more appropriate given this encoding in the go text output format:
Including multiple key/value pairs in that line would make parsing it into JSON impractical because value (at least) could contain spaces. Each key/value will need to be on a separate line:
That makes

A couple of questions about the implementation:
If we want to support json values in I'm quite happy with the proposed t.Data(map[string]any{"key": "testID", "value": "abafaf"})
t.Data(testIDs) // where testIDs is a struct that can be JSON marshalled |
With the
where
It wouldn't even need to parse JSON to make that transformation. It just has to know how to skip over a JSON string, so it can find the whitespace between the key and the value.

So I agree @dnephin, encoding the value as JSON from the start makes the most sense. To answer your questions: spaces are allowed in keys, and the restrictions on the value are the same as for

If |
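A minimal sketch of that kind of splitting, assuming a line of the form === ATTR <key> <value> in which the key is itself a JSON-encoded string (this format is an assumption taken from the discussion, not an implemented one):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitAttr takes the text following a hypothetical "=== ATTR " prefix and
    // returns the JSON-encoded key and the rest of the line as the value. It
    // never parses the value; it only knows how to skip over one JSON string.
    func splitAttr(s string) (key, value string, ok bool) {
        if len(s) == 0 || s[0] != '"' {
            return "", "", false
        }
        for i := 1; i < len(s); i++ {
            switch s[i] {
            case '\\':
                i++ // skip the escaped character
            case '"':
                return s[:i+1], strings.TrimSpace(s[i+1:]), true
            }
        }
        return "", "", false
    }

    func main() {
        key, value, ok := splitAttr(`"test id" {"suite": "checkout", "id": 42}`)
        fmt.Println(ok, key, value)
        // prints: true "test id" {"suite": "checkout", "id": 42}
    }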
I still don't think we understand exactly what we need here yet. I looked at gotestyourself/gotestsum#311 (comment) but I don't understand that either. If the test ID would be coming from an environment variable then it seems like it would be the same for every test, which suggests not something that you do with a testing.T. The link to TestRail goes to a marketing page, not something that explains how TestRail uses IDs. Can someone explain how they would use this, with links to the systems they would use it with - not just advertising pages but the technical details of how those systems would consume the attributes? |
My use case is getting our go test results into Allure (https://github.com/allure-framework). Allure is a test report tool that allows you to filter test results on a variety of metadata as well as display metadata in the report such as links to ticket tracking software and attachments (like an output file for a specific test). A live demo is here. A diagram of the Allure processing model is here: https://allurereport.org/docs/how-it-works/

There is prior art for Go Allure adaptors:

However, both of these require writing Allure tests, not Go tests. I'd prefer to be able to just annotate my Go tests with the various metadata fields Allure tracks and then post-process my go test json into the Allure result json. There is prior art for this as well here: https://github.com/ilyubin/gotest2allure. This provides a version of this functionality by emitting specially constructed log messages and then using a custom parser to extract the metadata back out.

I'd like to make round-tripping this metadata more robust than relying on specific log message prefixes, and it would be nice if I didn't have to tie myself to a specific report framework in my implementation. Ideally data like a link to a ticket tracker should be able to be implemented generically within my test, and provided it is easily discoverable in the json output, any test reporting framework should then be able to pull the link back out with a suitable adaptor. |
This proposal is becoming more like a partial solution to #41878 |
@seankhliao #41878 seems to be about getting access to the data that is already provided by |
@rsc I think there are two use cases that have been shared so far. I'll try to summarize.

Use case 1 - integration with test management systems

Some test management systems track tests using their own IDs. Anyone using these systems needs a way to map the Go test name to the ID used by the test management system. Existing solutions are unreliable and require parsing log output. Examples: TestRail, Allure framework.

This example is in Python, but I think it makes it very clear: https://www.browserstack.com/docs/test-management/upload-reports-cli/frameworks/pytest. Each test uses
Use case 2 - links to external logs and reports

Any test that runs multiple goroutines (or multiple processes) that produce significant log output needs some place to store the logs. Trying to output all the logs from concurrent operations to a single stream often makes the test output unusable. Instead of a mess of interleaved logs, each process or goroutine writes the logs to its own log file. When a test fails the user may need to dig through those log files to find more information about the failure. There's no convenient way to expose those log files (or links to logs) from the

Examples: integration tests across telephony infrastructure. I've experienced this same problem when working with the https://github.com/hashicorp/consul test suite.

CI systems have some basic support for this today using test artifacts (see artifacts docs on GitHub Actions, CircleCI, GitLab CI), but all of those require local files; there's no way to create a link to external files.
|
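To make those two use cases concrete, a hypothetical test using the kind of attribute API discussed above might look like this (t.Attr and the key names are illustrative assumptions, not an existing API):

    func TestCallRouting(t *testing.T) {
        // Use case 1: map the Go test name to the ID used by an external test
        // management system (e.g. TestRail, Allure).
        t.Attr("testrail_id", "C12345")

        // Use case 2: point at logs and artifacts produced outside this
        // process, so a report generator can link them to this test case.
        const callID = "a84b4c76e66710"
        t.Attr("sip_call_id", callID)
        t.Attr("pbx_log_url", "https://logs.example.com/pbx?call="+callID)
    }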
#41878 is about giving the user the raw data to output in a format they need directly, rather than trying to parse go test's output. Additional metadata may be a part of that. Rather than trying to hack links together with attr references, #59928 would be a better solution to output from parallel tests. |
I would expect that #41878 still needs some way to become aware of this data.
I am looking forward to #59928, but I don't think it solves this problem either. In one of the examples I linked the tests exercise a system that spans multiple machines. The logs for those requests are not in the current process, so Even when all the logs are available on the machine, if any of the concurrent processes/goroutines have debug logging enabled, trying to output all of the logs to stdout makes the test difficult to debug. It's much easier when those verbose logs are sent to separate files and only the test writes to stdout. |
So @nine9ths, would the functions in Before:
After:
|
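A hypothetical sketch of the kind of before/after being asked about here (the log prefix and attribute key are illustrative; they are not gotest2allure's exact format):

    // Before: metadata smuggled through a log line with a magic prefix, which
    // a custom parser later has to extract from the raw test output.
    func TestLogin_before(t *testing.T) {
        t.Log("allure.issue:PROJ-123")
    }

    // After: the same information attached to the test as a structured
    // attribute, so test2json can emit it as its own event.
    func TestLogin_after(t *testing.T) {
        t.Attr("allure.issue", "PROJ-123")
    }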
@jba presumably, or better yet the entire |
It seems to me that there are three use cases here:
I don't really understand 1. The name of a test is already a unique and stable identifier. Why does a test management system need another ID? What purpose does it serve?

I can see the use of 2, and perhaps it doesn't matter that I don't understand 1 because 1 can be subsumed by 2. However, case 2 also has clear overlap with Go sub-benchmark naming, and I'm not sure how to tease them apart. I think we use this much more extensively in Go sub-benchmarks, where sub-benchmarks are conventionally named Name/key1=val/key2=val/ and we have a whole domain-specific language and set of tools built around filtering on names formed this way.

Use cases 2 and 3 seem very different to me. In 2, the labels are independent variables, whereas in 3 they're dependent variables. For example, it wouldn't make sense to filter on metadata that is links to artifacts. And presumably you want tools to be able to identify and copy artifacts around without getting confused by metadata that's just meant to be filtered on. |
I think a good example of use cases are those used for JUnit XML test properties today, e.g.:

@aclements, I would say the use cases definitely include use case 3 (links and references to artifacts). It also includes labels, which is roughly your use case 2, but usually for a purpose that is a bit distinct from subbenchmarks. I think the purpose of the metadata labels is less about filtering the results from an individual test run and more about enabling tooling to aggregate results by label across many test runs of potentially many different tests. The subbenchmark key-value pairs seem primarily about making "intra-test" distinctions, whereas the metadata labels are more "inter-test" ones.

I don't foresee any request for go-supplied tooling to filter by metadata labels in the way that is provided for subbenchmarks. Just the opposite: the ask is to populate these values in the json output, which enables external tooling to do what it wants with that information. Many test runners populate XUnit XML test properties for the same reason. |
paging @neild |
Is there anything that can't be done, less elegantly, with t.Log and some custom well-known syntax? That is, what's the difference between

Most of the justifications for this feature request are to improve the integration between test output and various test management systems. It seems like it should be possible to demonstrate that integration using t.Log and some custom syntax, which would give us a better understanding of what a test-metadata API needs. |
@neild It wouldn't only be less elegant, it would be more error-prone. You'd be parsing "ATTR x y" out of raw output. |
This external tooling wants to have the metadata attributes structured in json. The title of this issue is specific to support in test2json. |
Here is one concrete example that uses t.Log now but would use t.Attr: https://github.com/ilyubin/gotest2allure/blob/master/pkg/allure/allure.go |
Thanks for the link to the example attributes from JUnit.

It looks like sometimes these are extraneous variables. That is, a variable that may affect the outcome of a test, but that the test can't directly control. The JUnit examples are frustratingly abstract, but I think "commit" would be an example of this. You wouldn't put these in the subtest part of the name because they can vary and you don't want that to break historical lineage. A parallel to benchmarks is that we report the CPU running the benchmark--it's important for interpreting the results and it's not a measured output ("dependent variable"), but it's not something a benchmark can directly control either.

Some of these are outputs beyond pass/fail or the test log. "Attachments" are a clear example of this, as is

In the JUnit example, these still seem like they're being used for a lot of different things. That's not necessarily a bad thing, but it makes it a lot harder to understand what this is for, and harder to communicate in a doc comment to a potential user when they should use a

Request: It would help if the people who believe they understand the intent of this proposal could write an example prescriptive doc comment for
@greg-dennis (or anyone), could you point to concrete examples of test properties used by xUnit test runners? We have a handful from Allure (thanks @jba).
I don't see this as a strong argument. It would be easy to JSON encode more complex data and pass that to I see this as two basic questions: framing and standardization. You can emit these attributes in the test log and a tool will be able to pick them up. This proposal would raise the framing to a level that is harder (though not impossible) to break. That's certainly nice, but I haven't seen evidence that it's critical. The more critical ask here is for standardizing the concept of test metadata/attributes/properties and how they are communicated in Go test output. |
Interestingly, the JUnit XML format supports defining properties as part of the test output, to support systems (like Go) that don't provide a way to set a property:
JUnit properties are a string key and a string value. Are there examples of commonly used systems that use anything other than a string value for test properties? |
In general, we add new APIs to std when the API is some combination of very useful, expected to be widely used, and difficult to implement outside of std. (For example: We added errors.Join because there were multiple third-party packages implementing equivalent functionality, those packages were widely used, and implementing the feature well without modifying the internals of errors.Is/As is difficult.) Test properties can be emitted without any changes to std, by writing the properties to the test output. The JUnit XML format supports properties in test output (https://github.com/testmoapp/junitxml?tab=readme-ov-file#properties-output). For tests whose output is eventually converted to JUnit XML, using this output format should allow for conveying properties from the test to any eventual consumer without requiring any changes to the testing package or test2json. For simple cases, a simple helper function should suffice to write properties in this format to the test output:
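A minimal sketch of such a helper, assuming the [[PROPERTY|name=value]] line convention described in the linked junitxml document (check that document for the exact syntax):

    // testProperty writes a property into the test's output using the assumed
    // [[PROPERTY|name=value]] convention, so that a JUnit XML converter which
    // understands properties-in-output can attach it to this test case.
    // Note: t.Logf prefixes each line with file:line, which a converter may
    // need to tolerate; writing to os.Stdout directly is an alternative.
    func testProperty(t *testing.T, name, value string) {
        t.Helper()
        t.Logf("[[PROPERTY|%s=%s]]", name, value)
    }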
A more sophisticated implementation could handle any necessary escaping in the name and value. We do not yet have evidence that test properties will be widely used. For example, the "github.com/ilyubin/gotest2allure/pkg/allure" package is a good example of test properties being used in practice, but it has no importers as reported by pkgsite. So I think that we should:
|
Some added notes and comments.

The JUnit format itself does not authoritatively support properties in the test output. That referenced section discusses a non-standard encoding, supported by a subset of JUnit XML consumers, for adding properties in individual test cases (as opposed to test suites). It's a workaround to the fact that the JUnit XML schema specification only supports properties at the testsuite level. Some of the most common JUnit XML consumers, including Hudson/Jenkins, have no support for this workaround.

Even if one decides to use JUnit XML, they still need a way to convert the test output to JUnit XML. The most popular, if not only, project for converting Go test output to JUnit XML is go-junit-report, and it has no support for this syntax today. It currently only supports properties being passed in at the command line to the conversion, which makes it very difficult to encode the properties in the test itself. Maybe support for the workaround encoding could be added? cc: @jstemmer in case he has thoughts. Even if such support is added to

This is all to say that, as of this writing, there isn't a straightforward alternative path for getting test properties into any sort of structured output. The only project I know of that successfully worked around all these limitations is ondatra. To allow the test to populate the JUnit XML test properties in a convenient way, it hijacks

I don't work at Google anymore, but I know from my time there that test properties were used extensively internally, including in Go tests. If you're looking to gather more data, it might be relatively easy to gather data from those internal uses, since the tests used a dedicated function for adding properties that would be easy to search for. |
Adding support for properties-in-output to go-junit-report seems like an excellent path to generating data on the general usefulness of properties.
I think this is the first time in this discussion that the need for suite-level properties has been mentioned. What is a "suite" in Go terms? I'd think probably a single test package, but you say "suite/file" so perhaps it's a file instead? Or do we need both suite- and file-level properties? What does the API for this look like? This sort of thing is why I think we need working examples of properties in use in the real world before we can attempt to design an API for the testing package. We can't tell if we've got the API right if we can't see the actual use cases we're satisfying. And given that it is possible (if clumsy) to emit and consume properties without testing package support, this doesn't need to be a chicken-and-egg situation.
Thanks for that reference. I have not run into this package before. I poked around and for anyone with access, the place to start looking is AddProperty under google3/testing/gobase. It does appear to have extensive internal usage--without any direct support from the testing package. |
Yeah, I should have said suite/package, not suite/file, because of course packages can have many test files. Internally at Google, the go test output is transformed into Sponge XML, a superset of JUnit XML which allows properties at both the suite- and test case-level, where "suite" is at the level of a single blaze test target. I don't recall how the internal test runner is wired up to get the arguments to If you're asking me for a proposed API, I would say something like |
"Attr outputs the key and value in a form that allows them to be reliably extracted from the test output. It is useful when test output is consumed by tools outside the Go toolchain, and especially with the Test authors can use Attr to associate information with a test, such as:
Programs that process test output can find these attributes reliably when using |
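For illustration, a test using such an Attr method and the kind of event a JSON consumer might see (the method and the JSON field names here are assumptions, not a final design):

    func TestUpload(t *testing.T) {
        t.Attr("artifact", "s3://builds/logs/upload-1234.log")
        // Hypothetical `go test -json` event:
        //   {"Action":"attr","Package":"example.com/upload","Test":"TestUpload",
        //    "Key":"artifact","Value":"s3://builds/logs/upload-1234.log"}
    }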
The properties aren't generally used by xUnit test runners. They are output by the tests and consumed by tooling that is external to the test runner, usually written in-house by companies for test tracking and data analysis. The examples of allure and ondatra above may be the only two open-source examples to be found, and those aren't really examples of using the properties but rather provide syntactic sugar for outputting them.

As for examples of the types of properties, those have been given in the discussions above and in @jba's prior comment. They include: paths to test artifacts (e.g. logs and screenshots); identifiers that associate the test with an external test plan (e.g. a Jira issue ID); and labels that indicate the dimensions under which the test invocation was run (e.g. the platform or environment in which it was executed). |
test2json, more particularly go test -json, has been quite a pleasant discovery. It allows programs to analyze go tests and create their own formatted output. For example, using GitHub Actions' formatting capabilities, I was able to better format go tests to look more user-friendly when running in the UI:
Before / After: (screenshots of the test output in the GitHub Actions UI)
With that said, there are still some missing features that would allow programs to better understand the JSON output of a test.
Proposal
It would be great if Go tests could attach metadata to be included in the JSON output of a test2json run.
Something along these lines:
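(A purely hypothetical illustration of the shape being proposed; the names below are invented for this sketch and are not the original post's code:)

    func TestCheckout(t *testing.T) {
        // Attach key/value metadata that test2json would surface as a
        // structured field alongside the usual output events.
        t.LogWithMetadata("payment declined", map[string]string{
            "file": "checkout_test.go",
            "line": "42",
        })
        // Hypothetical JSON event:
        //   {"Action":"output","Test":"TestCheckout","Output":"payment declined\n",
        //    "Metadata":{"file":"checkout_test.go","line":"42"}}
    }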
Benefits:
This allows for a few highly beneficial use cases:
test2json cannot distinguish between when a user called t.Fatal(...) or t.Log(...), which makes sense as t.Fatal just calls t.Log -- but the user can include metadata so we know exactly where the error occurred and use CI capabilities such as Actions' error command to set the file and line number to be displayed in the UI.

Alternative solutions:
Include directives in the output string that the json-parsing program can analyze to see if there's metadata. But this solution is very fragile and prone to error.
Thanks!