proposal: testing: structured output for test attributes #43936

Open
marwan-at-work opened this issue Jan 27, 2021 · 62 comments

@marwan-at-work
Contributor

marwan-at-work commented Jan 27, 2021

Test2json, or more precisely go test -json, has been quite a pleasant discovery. It allows programs to analyze Go test runs and produce their own formatted output.

For example, using GitHub Actions' formatting capabilities, I was able to reformat Go test output to look more user friendly when running in the UI:

Before:

[screenshot: default go test output in the Actions UI]

After:

[screenshot: reformatted test output in the Actions UI]

With that said, there are still some missing features that would allow programs to better understand the JSON output of a test.

Proposal

It would be great if Go tests could attach metadata to be included in the JSON output of a test2json run.

Something along these lines:

func TestFoo(t *testing.T) {
  t.Log("Foo")
  // outputs: {"action": "output", "output": "foo_test.go:12 Foo\n"}
  t.WithMetadata(map[string]string{"requestID": "123"}).Errorf("Foo failed")
  // outputs: {"action": "output", "output": "Foo failed", "metadata": {"requestID": "123"}}
}

Benefits:

This allows for a few highly beneficial use cases:

  1. If a test fails, the program analyzing the failed test's JSON can receive metadata about why it failed, such as a requestID or userID, and then provide the user with helpful links to logs and queries. (A minimal consumer sketch follows this list.)
  2. A test can provide source-code information about where things failed. Right now test2json cannot distinguish a t.Fatal(...) call from a t.Log(...) call, which makes sense, as t.Fatal just calls t.Log -- but the user can include metadata so we know exactly where the error occurred and can use CI capabilities such as Actions' error command to set the file and line number displayed in the UI.
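A minimal sketch of such a consumer, reading go test -json events from stdin. The Metadata field is hypothetical (it exists only if this proposal is adopted); Action, Test, and Output are the fields test2json emits today:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors test2json's TestEvent, plus the proposed Metadata field.
type event struct {
	Action   string
	Test     string
	Output   string
	Metadata map[string]string // hypothetical; not emitted today
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Metadata != nil {
			// e.g. turn a requestID into a link to a log-search system.
			fmt.Printf("%s: %v\n", ev.Test, ev.Metadata)
		}
	}
}

It would be invoked as go test -json ./... | thisprogram.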

Alternative solutions:

Include directives in the output string that the JSON-parsing program can scan for metadata. But this solution is very fragile and error-prone.

Thanks!

@bcmills bcmills added this to the Proposal milestone Jan 27, 2021
@riannucci

I looked into this a bit; unfortunately, I don't think it can work quite the way you've proposed (at least, not with the current testing architecture). In particular, testing emits everything as a text stream, and the JSON blobs are reconstituted from that text stream by cmd/test2json; it would be really tricky to attach metadata to a particular logging statement.

As an additional wrinkle, encoding/json has a dependency on testing, meaning that testing cannot actually use Go's json package for any encoding :(.

It SHOULD be possible, however, to have something which worked like:

func TestFoo(t *testing.T) {
  t.Log("Foo")
  // outputs: {"action": "output", "output": "foo_test.go:12 Foo\n"}
  t.Meta("requestID", "123")
  // outputs: {"action": "meta", "meta": {"requestID": "123"}}
  t.Log("Something Else")
  // outputs: {"action": "output", "output": "foo_test.go:16 Foo\n"}
}

And it could work by emitting an output like:

=== RUN   TestFoo
    foo_test.go:12: Foo
--- META: TestFoo: requestID: 123
    foo_test.go:16: Something Else
--- PASS: TestFoo (N.NNs)

Where ":" is a forbidden character in the key, and the value is trimmed for whitespace.

I think this functionality might be "good enough" when parsing the test output JSON; metadata would accumulate for the duration of the test, and the test JSON is already scanned top-to-bottom to extract information about a given test.
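Roughly, the parsing side could look like this (a sketch, not actual test2json code; it assumes the test name and key contain no ":"):

package main

import "strings"

// parseMetaLine extracts (test, key, value) from a line like
// "--- META: TestFoo: requestID: 123".
func parseMetaLine(line string) (test, key, value string, ok bool) {
	rest, found := strings.CutPrefix(line, "--- META: ")
	if !found {
		return "", "", "", false
	}
	parts := strings.SplitN(rest, ":", 3)
	if len(parts) != 3 {
		return "", "", "", false
	}
	return parts[0], strings.TrimSpace(parts[1]), strings.TrimSpace(parts[2]), true
}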

I can write up a CL that we can poke at, if folks don't hate this :)

@riannucci

Actually just went ahead and made a CL: https://go-review.googlesource.com/c/go/+/357914

@gopherbot
Contributor

Change https://golang.org/cl/357914 mentions this issue: testing: allow structured metadata in test2json

@rsc rsc changed the title cmd/test2json: Allow Go Tests to Pass Metadata proposal: cmd/test2json: Allow Go Tests to Pass Metadata Jun 22, 2022
@rsc rsc moved this to Incoming in Proposals Aug 10, 2022
@rsc rsc added this to Proposals Aug 10, 2022
@nine9ths

+1 for this feature. Being able to set arbitrary metadata would be a great way of helping to get our go test results into a test management system without having to rely on a third party testing library.

I see the CL has kinda stalled out; I can offer time to help push this forward if anything is needed.

@prattmic
Member

Now that we have slog, I wonder if this proposal should be about adding some kind of slog API to testing.T, which is passed through when using test2json?

@prattmic
Member

cc @aclements @dmitshur as I believe y'all have been looking at structured output with cmd/dist.

@rsc rsc moved this from Incoming to Active in Proposals Jul 19, 2023
@rsc
Contributor

rsc commented Jul 19, 2023

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@rsc
Contributor

rsc commented Jul 26, 2023

Good discussion on #59928 to figure out how to hook up slog to testing. If we do that, I think that will take care of the need here.

@jba
Contributor

jba commented Jul 27, 2023

Actually, #59928 (comment) convinced me of the opposite. Slog output should be action="output" but this information should be action="somethingelse".

If we do keep these separate, then I suggest TB.Data(key string, value any), where value is formatted with %q to keep it on one line. "Metadata" is the wrong word here (as it often is elsewhere). This isn't data about data, it's data about tests. I was also thinking of Info, but that might cause confusion with slog, whose Info method takes a message before its keys and values.

@riannucci

One thing I would very much like to have (but maybe cannot 😄 ) is that if value is JSON, it would be decoded and incorporated into the test2json output without requiring an additional parsing step.

For example:

func TestSomething(t *testing.T) {
  t.Data("mykey", `{"hello": "world"}`)
  t.Data("myotherkey", `hello world`)
}

could yield

{"action": "data", "key": "mykey", "json": {"hello": "world"}}
{"action": "data", "key": "myotherkey", "str": "hello world"}

IIUC one of the constraints here which makes this unpleasant is that testing cannot depend on encoding/json, so we can't make t.Data(key, value) pass value through encoding/json... However I think test2json can depend on more things, so, maybe, possibly, it could see and decode these?
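On the test2json side, which (unlike testing) is free to import encoding/json, that could look roughly like this sketch (the event shape follows the example above):

package main

import (
	"encoding/json"
	"fmt"
)

// emitData writes a "data" event, inlining the value as JSON when it
// parses and quoting it as a string otherwise.
func emitData(key, value string) {
	k, _ := json.Marshal(key)
	if json.Valid([]byte(value)) {
		fmt.Printf(`{"action": "data", "key": %s, "json": %s}`+"\n", k, value)
		return
	}
	v, _ := json.Marshal(value)
	fmt.Printf(`{"action": "data", "key": %s, "str": %s}`+"\n", k, v)
}

Note that a plain string which happens to be valid JSON would be inlined anyway, which is exactly the hazard discussed below.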

@martin-sucha
Contributor

The proposal in #43936 (comment) looks more practical to me than #43936 (comment).

Specifically, different behavior in

func TestSomething(t *testing.T) {
  t.Data("mykey", `{"hello": "world"}`)
  t.Data("myotherkey", `hello world`)
}

could be a source of a lot of issues. How would it behave with input like t.Data("mykey", "{hello}")? We should only have one way to expose the information. It seems to me that the two keys (json and str) would complicate consumers of the data.

@riannucci

Yeah, I think you're right, given the constraint that testing cannot validate whether a string is valid JSON. Small errors would lead to confusion in the output.

@riannucci

Though I was thinking that, practically, test2json CAN validate, and the str/json division would show up prominently when developing a producer/consumer pair. But really it's not a big deal for a consumer to decode the string as JSON, and keeping this out of test2json also means less overhead in test2json itself.

I think a bigger error would be if you had some consumer which expected a string (in some other encoding), but it just SO HAPPENS to be valid json, and test2json decodes it... that would definitely be annoying.

@rsc
Contributor

rsc commented Aug 9, 2023

@riannucci How would LUCI make use of the proposed feature (something like t.WithMetadata)? It seems like it helps individual tests get structured output through to something like LUCI. Is that all you are going for? It would not let the overall execution of a test binary be annotated with JSON metadata.

@riannucci

riannucci commented Aug 10, 2023

So the original context of my involvement in this proposal was maybe frivolous, but I think the proposal does generally have merit beyond that.

Originally I was interested in this because I was involved with the goconvey project (a now very-outdated testing library which did a lot of interesting things, as well as a lot of not-great things). One of the things this library did was report metadata about every assertion which passed, and display it in a web UI. The WAY it passes this data out from the test is pretty bad, though: it dumps JSON objects to stdout at various points in the test execution with funny text markers and then tries to parse them out again, which basically doesn't work with subtests or with much test parallelism. A lot of the weirdness here is due to goconvey's age - it was originally written in the Go 1.2ish era.[1]

I was thinking about how to improve this metadata output situation, though, and I think the general objective of "I want to be able to communicate data, correlated with individual tests, from within the test to a higher-level tool sitting outside of go test" is a reasonable one. Reporting passing assertions is not very high value, but reporting metrics (or other statistics) or noting the location of testing artifacts (large log files, perhaps outputs from a multi-process testing scenario) would be valid use cases.

The direct consumer of such data in LUCI would be ResultDB's streaming test result system; it has the ability to associate test artifacts and other metadata directly with test cases, archiving them to e.g. BigQuery.

It's possible to emulate this, of course, with specially crafted Log lines... but I would prefer if there was some out-of-band way to communicate (even if under the hood, currently, it's really 'testing' and 'test2json' trying their best to produce/parse stdout). I would rather have the 'communication channels' be something that go test owns rather than some other mechanism.

An alternative to this proposal, which I thought of but don't especially like, would be to produce a second, independent channel/file/pipe from the test binary which carries only metadata. There are a number of downsides to this, though:

  • Unless go test implements a flag for this, any DIY solution would suffer from a discovery problem (i.e. "does this particular package support this custom flag?"). Goconvey 'solves' this by scanning the test imports which is both slow and weird. Additionally, a custom flag would render the test output uncacheable, which is unfortunate.
  • If go test DOES support a way to do this... it seems to me that supporting it via go test -json is probably better than adding an additional flag/output file.
  • It would be difficult to correlate the metadata with the other output (log lines/stdout) from the test, which could be useful, though not essential for the cases I have in mind.
  • Another alternative would be to set an environment variable for this side channel, but environment variables have their own issues - this could also behave oddly with the test cache, since the test would have different cached output depending on the environment

(Now that I think of it... https://pkg.go.dev/cmd/go#hdr-Test_packages doesn't mention -json as a cacheable flag - is it?)

It would not let the overall execution of a test binary be annotated with JSON metadata.

I understand this to mean "adding metadata to go test as proposed would only allow a test to add metadata scoped to a single named test, not the overall test binary output", which is fine for the cases I had in mind.

edit: formatting

Footnotes

  [1] Coincidentally... I'm rewriting our repos' use of goconvey these last couple weeks... the new version is substantially more normal/modern Go.

@riannucci

riannucci commented Aug 10, 2023

(Oh, I forgot the other bit that goconvey did; for failing assertions it was able to write them out in a structured way, again so that the web UI had better ability to display them; this included things like outputting diffs between actual/expected values)

@aclements
Member

I think there are several distinct proposals here, all of which are about getting structured information out of tests in some way, but all of which seem to differ significantly in intent:

  1. Attaching structured information to individual log lines, where the log lines themselves continue to be regular text. This is my read of the original post. I'm not clear on whether this information should or should not appear in plain text test output because it really is "metadata".
  2. A way to emit structured information interleaved with the test log, specifically ordered with respect to the rest of the log lines. This is my read of riannucci's comment. I'm not clear how this differs from structured logging in testing. This type of output seems integral to the test log, and thus should be presented in some way even when not in JSON mode. This also doesn't actually seem like "metadata" to me, since it's not data about data, it's just structured data.
  3. A way to attach metadata to a test, not in any way ordered with respect to the test log. I'm not sure anyone is asking for this, but this is the other way I could interpret "allowing Go tests to pass metadata".

I think we need concrete use cases to actually move this discussion forward.

@dnephin
Contributor

dnephin commented Aug 21, 2023

From my read of the original post, the proposal could arguably be for category 3. That data may also appear in regular log output, but the goal is for some other program to read it. The data doesn't need to be associated with any particular log line, just the test case. The proposal happened to attach it to a log line, but the benefits section emphasizes "read it from another program" more than the association with a log line.

The use case I'm familiar with is integration with systems like TestRail. My understanding is that they may have their own identifier for a test case separate from the name. And this "metadata" would be a way to associate test cases with their identifier.

As far as I can tell all the use cases described in the original post and in comments are all in category 3. Some of the comments related to log lines were an attempt to propose a solution, but none of the use cases required association or ordering with existing log lines.

@seankhliao
Member

#41878 is about giving the user the raw data to output in a format they need directly, rather than trying to parse go test's output. Additional metadata may be a part of that.


Rather than trying to hack links together with attr references, #59928 would be a better solution to output from parallel tests.

@dnephin
Contributor

dnephin commented Feb 7, 2024

I would expect that #41878 still needs some way to become aware of this data. t.Attr seems like that mechanism, so my impression is that #41878 could complement this proposal, but does not replace it.

Rather than trying to hack links together with attr references, #59928 would be a better solution to output from parallel tests.

I am looking forward to #59928, but I don't think it solves this problem either.

In one of the examples I linked the tests exercise a system that spans multiple machines. The logs for those requests are not in the current process, so t.Output() doesn't help. The test code can construct a url to find relevant logs from whatever system receives them, but trying to export all of the logs to the test process is often not feasible.

Even when all the logs are available on the machine, if any of the concurrent processes/goroutines have debug logging enabled, trying to output all of the logs to stdout makes the test difficult to debug. It's much easier when those verbose logs are sent to separate files and only the test writes to stdout.
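To make that concrete, a hypothetical test along these lines (t.Attr and the URL here are illustrative assumptions, not an existing API):

package dist_test

import (
	"fmt"
	"testing"
)

func TestAcrossMachines(t *testing.T) {
	// runID would come from whatever starts the distributed run.
	runID := "run-12345"
	// Hypothetical t.Attr: record where an external system keeps this
	// run's logs, instead of copying them all into the test output.
	t.Attr("logsURL", fmt.Sprintf("https://logs.example.com/search?runID=%s", runID))
}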

@jba
Contributor

jba commented Feb 7, 2024

So @nine9ths, would the functions in pkg/allure change like this?

Before:

func Feature(t *testing.T, feature string) {
	t.Logf("%s%s", prefix.Feature, feature)
}

After:

func Feature(t *testing.T, feature string) {
	t.Attr("feature", feature)
}

@nine9ths

@jba presumably, or better yet the entire pkg/allure goes away and I just call t.Attr("feature", feature) directly in my tests.

@aclements
Member

It seems to me that there are three use cases here:

  1. Attaching a test management system's test ID with a test.
  2. Attaching labels to a test that allow collections of test results to be filtered by those labels.
  3. Attaching artifacts to a test.

I don't really understand 1. The name of a test is already a unique and stable identifier. Why does a test management system need another ID? What purpose does it serve?

I can see the use of 2, and perhaps it doesn't matter that I don't understand 1 because 1 can be subsumed by 2. However, case 2 also has clear overlap with Go sub-benchmark naming, and I'm not sure how to tease them apart. I think we use this much more extensively in Go sub-benchmarks, where sub-benchmarks are conventionally named Name/key1=val/key2=val/ and we have a whole domain-specific language and set of tools built around filtering on names formed this way.

Use cases 2 and 3 seem very different to me. In 2, the labels are independent variables, whereas in 3 they're dependent variables. For example, it wouldn't make sense to filter on metadata that is links to artifacts. And presumably you want tools to be able to identify and copy artifacts around without getting confused by metadata that's just meant to be filtered on.

@greg-dennis

I think a good example of use cases are those used for JUnit XML test properties today, e.g.:
https://github.com/testmoapp/junitxml?tab=readme-ov-file#properties

@aclements, I would say the use cases definitely include use case 3 (links and references to artifacts). It also includes labels, which is roughly your use case 2, but usually for a purpose that is a bit distinct from subbenchmarks. I think the purpose of the metadata labels is less about filtering the results from an individual test run and more about enabling tooling to aggregate results by label across many test runs of potentially many different tests. The subbenchmark key-value pairs seem primarily about making "intra-test" distinctions, whereas the metadata labels are more "inter-test" ones.

I don't foresee any request for go-supplied tooling to filter by metadata labels in the way that is provided for subbenchmarks. Just the opposite: the ask is to populate these values in the json output, which enables external tooling to do what it wants with that information. Many test runners populate XUnit XML test properties for the same reason.

@adonovan
Member

paging @neild

@neild
Contributor

neild commented Oct 23, 2024

Is there anything that can't be done, less elegantly, with t.Log and some custom well-known syntax? That is, what's the difference between t.Attr(name, value) and t.Logf("ATTR: %v %v", name, value)?

Most of the justifications for this feature request are to improve the integration between test output and various test management systems. It seems like it should be possible to demonstrate that integration using t.Log and some custom syntax, which would give us a better understanding of what a test-metadata API needs.
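A sketch of that workaround, to make the comparison concrete (the prefix and names are made up for illustration):

package attr_test

import (
	"regexp"
	"testing"
)

// logAttr is the no-new-API approach: an attribute as a log line with a
// well-known prefix.
func logAttr(t *testing.T, name, value string) {
	t.Helper()
	t.Logf("ATTR: %s %s", name, value)
}

// A consumer then has to fish the pair back out of "output" events,
// accounting for the file:line prefix testing adds to log lines.
var attrRE = regexp.MustCompile(`^\s*[\w.]+\.go:\d+: ATTR: (\S+) (.*)$`)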

@jba
Contributor

jba commented Oct 23, 2024

@neild It wouldn't only be less elegant, it would be more error-prone. You'd be parsing "ATTR x y" out of raw output.

@greg-dennis

greg-dennis commented Oct 23, 2024

This external tooling wants to have the metadata attributes structured in json. The title of this issue is specific to support in test2json.

@jba
Contributor

jba commented Oct 23, 2024

It seems like it should be possible to demonstrate that integration using t.Log and some custom syntax, which would give us a better understanding of what a test-metadata API needs.

Here is one concrete example that uses t.Log now but would use t.Attr: https://github.com/ilyubin/gotest2allure/blob/master/pkg/allure/allure.go
This is the allure package mentioned in #43936 (comment).

@aclements
Member

Thanks for the link to the example attributes from JUnit.

It looks like sometimes these are extraneous variables. That is, a variable that may affect the outcome of a test, but that the test can't directly control. The JUnit examples are frustratingly abstract, but I think "commit" would be an example of this. You wouldn't put these in the subtest part of the name because they can vary and you don't want that to break historical lineage. A parallel to benchmarks is that we report the CPU running the benchmark--it's important to interpreting the results and it's not a measured output ("dependent variable"), but it's not something a benchmark can directly control either.

Some of these are outputs beyond pass/fail or the test log. "Attachments" are a clear example of this, as is allure.Issue.

In the JUnit example, these still seem like they're being used for a lot of different things. That's not necessarily a bad thing, but it makes it a lot harder to understand what this is for, and harder to communicate in a doc comment to a potential user when they should use a t.Attr API.

Request: It would help if the people who believe they understand the intent of this proposal could write an example prescriptive doc comment for (*T).Attr(key string, value any).

Many test runners populate XUnit XML test properties for the same reason.

@greg-dennis (or anyone), could you point to concrete examples of test properties used by xUnit test runners? We have a handful from Allure (thanks @jba).

This external tooling wants to have the metadata attributes structured in json. The title of this issue is specific to support in test2json.

I don't see this as a strong argument. It would be easy to JSON encode more complex data and pass that to t.Log. It doesn't seem to add much value for the testing package itself to do this. In fact, there's some downside because it adds a new dependency on encoding/json to every test binary.

I see this as two basic questions: framing and standardization. You can emit these attributes in the test log and a tool will be able to pick them up. This proposal would raise the framing to a level that is harder (though not impossible) to break. That's certainly nice, but I haven't seen evidence that it's critical.

The more critical ask here is for standardizing the concept of test metadata/attributes/properties and how they are communicated in Go test output.

@neild
Contributor

neild commented Oct 23, 2024

Interestingly, the JUnit XML supports defining properties as part of the test output to support systems (like Go) that don't provide a way to set a property:

<testcase name="name">
  <properties>
    <property name="name" value="value"/>
  </properties>
  <system-out>
    Alternate way of setting the same property:
[[PROPERTY|name=value]]
  </system-out>
</testcase>

JUnit properties are a string key and a string value. Are there examples of commonly used systems that use anything other than a string value for test properties?

@neild
Contributor

neild commented Oct 23, 2024

In general, we add new APIs to std when the API is some combination of very useful, expected to be widely used, and difficult to implement outside of std. (For example: We added errors.Join because there were multiple third-party packages implementing equivalent functionality, those packages were widely used, and implementing the feature well without modifying the internals of errors.Is/As is difficult.)

Test properties can be emitted without any changes to std, by writing the properties to the test output. The JUnit XML format supports properties in test output (https://github.com/testmoapp/junitxml?tab=readme-ov-file#properties-output). For tests whose output is eventually converted to JUnit XML, using this output format should allow for conveying properties from the test to any eventual consumer without requiring any changes to the testing package or test2json.

For simple cases, a simple helper function should suffice to write properties in this format to the test output:

func logProperty(t *testing.T, name string, value any) {
  t.Helper()
  // JUnit XML property syntax. The \n is unfortunate but necessary (I think).
  // Named logProperty rather than TestXxx so go test doesn't reject it as a
  // malformed test function.
  t.Logf("\n[[PROPERTY|%v=%v]]", name, value)
}

A more sophisticated implementation could handle any necessary escaping in the name and value.
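For instance (a sketch; the escaping scheme here is an assumption, since the properties-in-output convention doesn't define one):

package junit_test

import (
	"strings"
	"testing"
)

// logPropertyEscaped writes a [[PROPERTY|name=value]] line, escaping
// characters that would break the framing.
func logPropertyEscaped(t *testing.T, name, value string) {
	t.Helper()
	esc := strings.NewReplacer(`|`, `\|`, `[`, `\[`, `]`, `\]`, `=`, `\=`, "\n", `\n`)
	t.Logf("\n[[PROPERTY|%s=%s]]", esc.Replace(name), esc.Replace(value))
}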

We do not yet have evidence that test properties will be widely used. For example, the "github.com/ilyubin/gotest2allure/pkg/allure" package is a good example of test properties being used in practice, but it has no importers as reported by pkgsite.

So I think that we should:

  1. Add no new API for now.
  2. Recommend that users who want to emit test properties use the JUnit XML output format.
  3. Revisit this issue in the future if there is evidence for common use of test properties.

@greg-dennis

greg-dennis commented Oct 24, 2024

Some added notes and comments.

The JUnit format itself does not authoritatively support properties in the test output. That referenced section discusses a non-standard encoding, supported by a subset of JUnit XML consumers, for adding properties in individual test cases (as opposed to test suites). It's a workaround to the fact that the JUnit XML schema specification only supports properties at the testsuite level. Some of the most common JUnit XML consumers, including Hudson/Jenkins, have no support for this workaround.

Even if one decides to use JUnit XML, they still need a way to convert the test output to JUnit XML. The most popular, if not only, project for converting Go test output to JUnit XML is go-junit-report, and it has no support for this syntax today. It currently only supports properties being passed in at the command line of the conversion, which makes it very difficult to encode the properties in the test itself. Maybe support for the workaround encoding could be added? cc: @jstemmer in case he has thoughts.

Even if such support is added to go-junit-report, it's a solution only for properties at the individual test case level, whereas proper support for properties would probably allow you to add them at the suite/file level, too. Finally, this solution requires (unless you are very clever -- more on that in a moment) that you invoke the conversion separately, forgoing the convenience of just passing the -json flag.

This is all to say that, as of this writing, there isn't a straightforward alternative path for getting test properties into any sort of structured output. The only project I know of that successfully worked around all these limitations is ondatra. To allow the test to populate the JUnit XML test properties in a convenient way, it hijacks os.Stdout and pipes it to go-junit-report. It's probably not something we want emulated often.

I don't work at Google anymore, but I know from my time there that test properties were used extensively internally, including in Go tests. If you're looking to gather more data, it might be relatively easy to gather data from those internal uses, since the tests used a dedicated function for adding properties that would be easy to search for.

@neild
Contributor

neild commented Oct 24, 2024

Adding support for properties-in-output to go-junit-report seems like an excellent path to generating data on the general usefulness of properties.

Even if such support is added to go-junit-report, it's a solution only for properties at the individual test case level, whereas proper support for properties would probably allow you to add them at the suite/file level, too.

I think this is the first time in this discussion that the need for suite-level properties has been mentioned. What is a "suite" in Go terms? I'd think probably a single test package, but you say "suite/file" so perhaps it's a file instead? Or do we need both suite- and file-level properties? What does the API for this look like?

This sort of thing is why I think we need working examples of properties in use in the real world before we can attempt to design an API for the testing package. We can't tell if we've got the API right if we can't see the actual use cases we're satisfying. And given that it is possible (if clumsy) to emit and consume properties without testing package support, this doesn't need to be a chicken-and-egg situation.

I don't work at Google anymore, but I know from my time there that test properties were used extensively internally, including in Go tests

Thanks for that reference. I have not run into this package before. I poked around and for anyone with access, the place to start looking is AddProperty under google3/testing/gobase. It does appear to have extensive internal usage--without any direct support from the testing package.

@greg-dennis

Yeah, I should have said suite/package, not suite/file, because of course packages can have many test files. Internally at Google, the go test output is transformed into Sponge XML, a superset of JUnit XML which allows properties at both the suite- and test case-level, where "suite" is at the level of a single blaze test target. I don't recall how the internal test runner is wired up to get the arguments to AddProperty into the XML, but I don't think it's trivial for external Go users to mimic.

If you're asking me for a proposed API, I would say something like func (t *T) Attr(key, value string) for adding testcase-level attributes and func (m *M) Attr(key, value string) for adding suite-level attributes would suffice. And maybe for Benchmarks and Fuzz tests, too. Whether to choose the name "Attr," "Prop," "Meta," or something else can be debated, but that's the API I would start with as a working solution.

@aclements
Member

I'm still looking for answers to a couple questions:

  • Could someone write an example prescriptive doc comment for (*T).Attr(key string, value any)?
  • What are some concrete, compelling examples of test properties used by xUnit test runners? We have a handful from Allure (thanks @jba).

@jba
Contributor

jba commented Oct 30, 2024

  • Could someone write an example prescriptive doc comment for (*T).Attr(key string, value any)?

"Attr outputs the key and value in a form that allows them to be reliably extracted from the test output. It is useful when test output is consumed by tools outside the Go toolchain, and especially with the -json flag to go test.

Test authors can use Attr to associate information with a test, such as:

  • paths to artifacts associated with a test, like output files
  • test identifiers for some external test framework, such as a "story" (in the agile development framework) or "behavior" (behavior-driven development)

Programs that process test output can find these attributes reliably when using go test -json (or equivalently, the test2json command) because each call to Attr corresponds to a single TestEvent whose Action field is set to "attr". This is in contrast to using Log or Logf to output this type of information, requiring parsing of raw test output."
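A sketch of the consumer side of that contract (the Key/Value field names for the "attr" event are an assumption; the proposal doesn't pin them down):

package main

import (
	"encoding/json"
	"io"
)

// TestEvent is the subset of go test -json events a consumer needs,
// with hypothetical Key/Value fields for the "attr" action.
type TestEvent struct {
	Action string
	Test   string
	Key    string // hypothetical
	Value  string // hypothetical
}

// collectAttrs gathers attributes per test from a -json stream.
func collectAttrs(r io.Reader) (map[string]map[string]string, error) {
	attrs := make(map[string]map[string]string)
	dec := json.NewDecoder(r)
	for {
		var ev TestEvent
		if err := dec.Decode(&ev); err == io.EOF {
			return attrs, nil
		} else if err != nil {
			return nil, err
		}
		if ev.Action != "attr" {
			continue
		}
		if attrs[ev.Test] == nil {
			attrs[ev.Test] = make(map[string]string)
		}
		attrs[ev.Test][ev.Key] = ev.Value
	}
}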

@greg-dennis

What are some concrete, compelling examples of test properties used by xUnit test runners? We have a handful from Allure (thanks @jba).

The properties aren't generally used by xUnit test runners. They are output by the tests and consumed by tooling that is external to the test runner, usually written in-house by companies for test tracking and data analysis. The examples of allure and ondatra above may be the only two open-source examples to be found, and those aren't really examples of using the properties; they mostly provide syntactic sugar for outputting them. As for examples of the types of properties, those have been given in the discussions above and in @jba's prior comment. They include: paths to test artifacts (e.g. logs and screenshots); identifiers that associate the test with an external test plan (e.g. a Jira issue ID); and labels that indicate the dimensions under which the test invocation was run (e.g. the platform or environment in which it was executed).

@aclements
Member

If we're allowing the value to be any JSON-encodable type, then every test will have to depend on encoding/json, which is unfortunate.

I think it would be much simpler for value to be a string. That seems to be sufficient for all of the mentioned use cases. If a test does need a more complex attribute type, it's free to JSON-encode it, and the consumer just has to JSON-decode it.
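For example, a test needing structure can do its own encoding (a sketch assuming the t.Attr(key, value string) shape under discussion):

package timing_test

import (
	"encoding/json"
	"testing"
	"time"
)

// reportPhases attaches per-phase timings as a single JSON-encoded
// string attribute; the consumer JSON-decodes it.
func reportPhases(t *testing.T, phases map[string]time.Duration) {
	t.Helper()
	b, err := json.Marshal(phases) // durations marshal as nanoseconds
	if err != nil {
		t.Fatal(err)
	}
	t.Attr("phaseTimings", string(b)) // hypothetical t.Attr
}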

@aclements
Member

@greg-dennis , I see you're involved in ondatra. Do we have anyone here involved in a Go Allure adapter? @ilyubin for https://github.com/ilyubin/gotest2allure, perhaps? I'll be much more comfortable that we're not operating in a vacuum here if we have at least two projects that can commit to using testing.(*T).Attr.

@greg-dennis

@aclements, unfortunately I'm no longer involved in Ondatra since I left Google earlier this year, so I couldn't make that commitment. I acknowledge that one of the challenges here is that the uses are going to be primarily internal and proprietary to development efforts, as these attributes cater to the bespoke internal tracking and debugging needs of individual companies and software projects.

@aclements
Member

Okay, with a bunch of help from @mknyszek, @prattmic, and @neild, I've been fishing around for concrete uses of this.

@jba gave two abstract use cases for this above: 1. paths to file artifacts, and 2. test identifiers for some external test framework.

github.com/ilyubin/gotest2allure gives a few concrete examples, though that package has no public imports, so take this with a grain of salt:

  • Detailed description of a test
  • Feature and story for behavior-driven development
  • Issue link
  • Link to test case (I'm not sure what this means)

I reached out to the Ondatra team, but haven't heard anything back.

For the Go project itself, I believe we would use test attributes to link to Go issues, especially for testenv.SkipFlaky.

I could see us wanting to use them in tests that produce files like pprof profiles or execution traces, but that immediately runs into a tension. Normally these are temporary files that get deleted immediately after the test. If we put their paths into test attributes, presumably we don't want to delete them. We could control this behavior with a command line flag, but that just kicks the can down the road because now you have to decide whether to set this flag. If the test is being run by a person who could decide to set the flag, I don't see an attribute being any better than just a log line. If the test is being run in CI, it's probably just not going to set that flag. Maybe you only keep them around if the test fails, but this proposal is only one piece of making that work well. I could see this being compelling, but test attributes are just one piece of the puzzle.

The Go project's LUCI infrastructure attaches LUCI tags for GOOS and GOARCH, the ID of the bot the test ran on, and various infrastructure-level stuff for isolating test failures. This proposal would allow us to represent these in Go test JSON, rather than just at the ResultDB level, but 1. it's not clear that's a useful change, and 2. while test sharding can cause some of these to vary from test to test, there would be a huge amount of repetition.

The Chrome project, which also uses LUCI, creates a large number of attributes ("tags") in its test infrastructure: genTestResultTags. I can't quite tell for sure, but these all look like "infrastructure" things to me that aren't necessarily being reported by tests themselves.

Finally, we looked inside Google, which does have an equivalent of this API. It has a huge number of callers, though note that tests within Google are part of a much bigger infrastructure. A very strong majority of the call sites I sampled are reporting links to artifacts stored in other systems, such as log links, links to various reports, or links to charts. Many other uses were things that really could have been in the test log, but Google test logs are so famously spammy that test attributes are sometimes used just to lift up "important" things. Some of these are secondary test results that I could see being valuable to call out to automated systems, such as time and memory measurements. I'll note that these are all "high cardinality" uses: basically every test is going to have distinct values.

Other than these, there's a long tail of uses, including linking to various IDs in other systems like test IDs and things related to feature- and behavior-driven development; and infrastructure things like kernel info and commit info. These all appear to be "low cardinality", in that a lot of tests would share the same value of an attribute.

BTW, it's worth noting that literally all of the examples I found used a string key and a string value, so that strongly supports the idea of using just flat strings.

@mknyszek
Contributor

@jba's "path to concrete file" use-case reminded me of a LUCI integration that I forgot to mention. (Sorry about that!)

The ResultDB API allows attaching artifacts to tests and uploading them to storage. Plumbing this through is straightforward with the ResultSink API (hosted locally on the bot), which accepts local file paths for artifacts to pass along to ResultDB, so this would fit pretty well with that use-case.

If the test is being run in CI, it's probably just not going to set that flag.

If we had attributes for artifacts, and a convention for naming those attributes such that they could be interpreted by tooling, then we could use that to tell our tooling to upload them to ResultDB.

I think this would clean up a few workarounds we currently have in our tests. For example, execution trace and profile tests just dump the text format which tends to be hard to read and requires extra steps to plumb back into our regular tooling. We could also have some part of the system (go test?) automatically attach core files to tests that crash in fun and interesting ways.

@seankhliao seankhliao changed the title proposal: cmd/test2json: Allow Go Tests to Pass Metadata proposal: testing: structured output for test attributes Nov 20, 2024
@aclements
Member

If we had attributes for artifacts, and a convention for naming those attributes such that they could be interpreted by tooling, then we could use that to tell our tooling to upload them to ResultDB.

Now that we have more concrete uses, this brings up another question: should this API be more "typed"? Paths to artifacts seem like one of the major use cases for this, and with T.Attr(key, val string), the only way to distinguish a file path (or URL) is through some convention with higher level CI tooling. (Switching to a JSON value doesn't help here because these would still just be strings.)

I can see CI tooling using these attribute keys in either a "closed" or "open"-ended way. The examples from github.com/ilyubin/gotest2allure are "closed": there are a handful of designated keys with particular meanings. In that case, the CI tooling would simply define that some keys are file paths.

LUCI test attributes are, I believe, open ended. The ones internal to Google are definitely open ended. For these, the CI system needs some way to know if the value is a path to a file it should save. Inside Google, this is solved by having the test upload artifacts to a storage system and then record just a URL. There's some appeal to the simplicity of that in the testing package, but that requires a lot of CI infrastructure to make work, and isn't friendly to running tests locally. A CI system could have some convention on the key name or the value to make it clear when it's a file path, but a string is a stark data structure.

I propose that we add both T.Attr(key, val string) and T.AttrURL(key, url string). They will be distinguished in the JSON output. I propose that AttrURL treat its url argument as relative to the working directory (expressed as a file URL), and resolve it to an absolute URL before emitting it in the JSON. This way regular file paths can simply be passed as strings, though there is a footgun if the path contains :, ?, or #. Using URLs is a nicely general and standard way of referencing external resources, including files.
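A sketch of the resolution step AttrURL might perform (hypothetical, not a worked-out implementation):

package main

import (
	"net/url"
	"path/filepath"
)

// resolveAttrURL treats a schemeless value as a path relative to the
// working directory and returns an absolute file: URL; values that
// already carry a scheme pass through. Note the footgun above: a plain
// path containing ':', '?', or '#' will be misparsed as a URL.
func resolveAttrURL(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if u.Scheme != "" {
		return u.String(), nil // already absolute (http:, file:, ...)
	}
	abs, err := filepath.Abs(raw)
	if err != nil {
		return "", err
	}
	return (&url.URL{Scheme: "file", Path: filepath.ToSlash(abs)}).String(), nil
}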

@greg-dennis

@aclements, I might not understand the AttrURL suggestion. It would emit an absolute URL with the "file://" scheme? I think those tend to be a bit clunky to work with and clients would just prefer a simpler filepath string offered by Attr. And for http URLs, what benefit would AttrURL offer?

My default position is to offer T.Attr(key, val string) only, and deliberately put the semantics of the value entirely in the hands of the client.
