Add SpecContext to ReportAfterSuite callback body. #1345

Merged
36 changes: 29 additions & 7 deletions docs/index.md
@@ -3098,7 +3098,8 @@ SynchronizedBeforeSuite(func(ctx SpecContext) []byte {
```
are all valid interruptible signatures. Of course you can specify `context.Context` instead and can mix-and-match interruptibility between the two functions.

Currently the **Reporting** nodes (`ReportAfterEach`, `ReportAfterSuite`, and `ReportBeforeEach`) cannot be made interruptible and do not accept callbacks that receive a `SpecContext`. This may change in a future release of Ginkgo (in a backward compatible way).
**Reporting** nodes (`ReportAfterEach`, `ReportBeforeEach`, `ReportBeforeSuite`, and `ReportAfterSuite`) can be made interruptible.
To do this, provide the node with a function that accepts both a `SpecContext` and a `SpecReport` (for the `*Each` nodes) or a `Report` (for the `*Suite` nodes).

As for **Container** nodes, since these run during the Tree Construction Phase they cannot be made interruptible and so do not accept functions that expect a context. And since the `By` annotation is simply syntactic sugar enabling more detailed spec documentation, any callbacks passed to `By` cannot be independently marked as interruptible (you should, instead, use the `context` passed into the node that you're calling `By` from).

@@ -3498,22 +3499,30 @@ Ginkgo's reporting infrastructure provides an alternative solution for this use

#### Reporting Nodes - ReportAfterEach and ReportBeforeEach

Ginkgo provides three reporting-focused nodes `ReportAfterEach`, `ReportAfterSuite`, and `ReportBeforeEach`.
Ginkgo provides four reporting-focused nodes: `ReportAfterEach`, `ReportBeforeEach`, `ReportBeforeSuite`, and `ReportAfterSuite`.

`ReportAfterEach` behaves similarly to a standard `AfterEach` node and can be declared anywhere an `AfterEach` node can be declared. `ReportAfterEach` takes a closure that accepts a single [`SpecReport`](https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#SpecReport) argument. For example, we could implement a top-level ReportAfterEach that emits information about every spec to a remote server:
`ReportAfterEach` behaves similarly to a standard `AfterEach` node and can be declared anywhere an `AfterEach` node can be declared.
`ReportAfterEach` takes a closure that accepts either a single [`SpecReport`](https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#SpecReport) argument or both a `SpecContext` and a `SpecReport`.
For example, we could implement a top-level ReportAfterEach that emits information about every spec to a remote server:

```go
ReportAfterEach(func(report SpecReport) {
customFormat := fmt.Sprintf("%s | %s", report.State, report.FullText())
client.SendReport(customFormat)
})
// interruptible ReportAfterEach node
ReportAfterEach(func(ctx SpecContext, report SpecReport) {
customFormat := fmt.Sprintf("%s | %s", report.State, report.FullText())
client.SendReport(customFormat)
}, NodeTimeout(1 * time.Minute))
```

`ReportAfterEach` has several unique properties that distinguish it from `AfterEach`. Most importantly, `ReportAfterEach` closures are **always** called - even if the spec has failed, is marked pending, or is skipped. This ensures reports that rely on `ReportAfterEach` are complete.

In addition, `ReportAfterEach` closures are called after a spec completes, i.e. _after_ all `AfterEach` closures have run. This gives them access to the complete final state of the spec. Note that if a failure occurs in a `ReportAfterEach`, the spec will be marked as failed. Subsequent `ReportAfterEach` closures will see the failed state, but not the closure in which the failure occurred.

`ReportAfterEach` is useful if you need to stream or emit up-to-date information about the suite as it runs. Ginkgo also provides `ReportBeforeEach` which is called before the test runs and receives a preliminary `types.SpecReport` - the state of this report will indicate whether the test will be skipped or is marked pending.
`ReportAfterEach` is useful if you need to stream or emit up-to-date information about the suite as it runs. Ginkgo also provides `ReportBeforeEach`, which is called before the test runs and
receives a preliminary `types.SpecReport` (or both a `SpecContext` and a `types.SpecReport` for interruptible behavior) - the state of this report will indicate whether the test will be skipped or is marked pending.
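The preliminary-state check described above can be sketched in isolation. The stub types below stand in for ginkgo's `types.SpecState` and `types.SpecReport` (the real types carry many more fields and states); this is an illustrative model, not the library's implementation:

```go
package main

import "fmt"

// SpecState is a simplified stand-in for ginkgo's types.SpecState
// (the real type is a bitmask with more states than shown here).
type SpecState uint

const (
	SpecStatePassed SpecState = 1 << iota
	SpecStateSkipped
	SpecStatePending
)

// SpecReport is a stub mirroring only the two fields used below.
type SpecReport struct {
	State    SpecState
	FullText string
}

// willRun is the kind of check a ReportBeforeEach closure might perform:
// a preliminary report marked skipped or pending will not execute.
func willRun(r SpecReport) bool {
	return r.State&(SpecStateSkipped|SpecStatePending) == 0
}

func main() {
	fmt.Println(willRun(SpecReport{State: SpecStateSkipped, FullText: "skipped spec"})) // false
	fmt.Println(willRun(SpecReport{FullText: "normal spec"}))                           // true
}
```

In a real suite the same bitmask test would be written against `report.State` inside the `ReportBeforeEach` closure.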

You should be aware that when running in parallel, each parallel process will be running specs and their `ReportAfterEach`es. This means that multiple `ReportAfterEach` blocks can be running concurrently on independent processes. Given that, code like this won't work:

@@ -3532,23 +3541,32 @@ ReportAfterEach(func(report SpecReport) {
you'll end up with multiple processes writing to the same file and the output will be a mess. There is a better approach for this use case...

#### Reporting Nodes - ReportBeforeSuite and ReportAfterSuite
`ReportBeforeSuite` and `ReportAfterSuite` nodes behave similarly to `BeforeSuite` and `AfterSuite` and can be placed at the top-level of your suite (typically in the suite bootstrap file). `ReportBeforeSuite` and `ReportAfterSuite` nodes take a closure that accepts a single [`Report`]((https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#Report)) argument:
`ReportBeforeSuite` and `ReportAfterSuite` nodes behave similarly to `BeforeSuite` and `AfterSuite` and can be placed at the top-level of your suite (typically in the suite bootstrap file).
`ReportBeforeSuite` and `ReportAfterSuite` nodes take a closure that accepts either a single [`Report`](https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#Report) argument or both a `SpecContext` and a `Report`; the latter makes the node interruptible.

```go
var _ = ReportBeforeSuite(func(report Report) {
// process report
})

var _ = ReportBeforeSuite(func(ctx SpecContext, report Report) {
// process report
}, NodeTimeout(1 * time.Minute))

var _ = ReportAfterSuite("custom report", func(report Report) {
// process report
})

var _ = ReportAfterSuite("interruptible ReportAfterSuite", func(ctx SpecContext, report Report) {
// process report
}, NodeTimeout(1 * time.Minute))
```

`Report` contains all available information about the suite. For `ReportAfterSuite` this will include individual `SpecReport` entries for each spec that ran in the suite, and the overall status of the suite (whether it passed or failed). Since `ReportBeforeSuite` runs before the suite starts - it does not contain any spec reports, however the count of the number of specs that _will_ be run can be extracted from `report.PreRunStats.SpecsThatWillBeRun`.
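The pre-run counts mentioned above are easy to surface in a log line. The sketch below models that with a stub `PreRunStats`; `TotalSpecs` and `SpecsThatWillBeRun` are real fields of ginkgo's `types.PreRunStats`, but the stub omits the rest of the type:

```go
package main

import "fmt"

// PreRunStats mirrors two real fields of ginkgo's types.PreRunStats;
// the remaining fields are omitted from this stub.
type PreRunStats struct {
	TotalSpecs         int
	SpecsThatWillBeRun int
}

// preRunSummary renders the kind of line a ReportBeforeSuite closure
// might log before any specs execute.
func preRunSummary(s PreRunStats) string {
	return fmt.Sprintf("will run %d of %d specs", s.SpecsThatWillBeRun, s.TotalSpecs)
}

func main() {
	fmt.Println(preRunSummary(PreRunStats{TotalSpecs: 12, SpecsThatWillBeRun: 9}))
	// prints: will run 9 of 12 specs
}
```

In a real `ReportBeforeSuite` closure you would read the same numbers from `report.PreRunStats`.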

The closure passed to `ReportBeforeSuite` is called exactly once at the beginning of the suite, before any `BeforeSuite` nodes or specs have run. The closure passed to `ReportAfterSuite` is called exactly once at the end of the suite, after any `AfterSuite` nodes have run.

Finally, and most importantly, when running in parallel both `ReportBeforeSuite` and `ReportAfterSuite` **only run on process #1**. Gingko guarantess that no other processes will start running their specs until after `ReportBeforeSuite` on process #1 has completed. Similarly, Ginkgo will only run `ReportAfterSuite` on process #1 after all other processes have finished and exited. Ginkgo provides a sinle `Report` that aggregates the `SpecReports` from all processes. This allows you to perform any custom suite reporting in one place after all specs have run and not have to worry about aggregating information across multiple parallel processes.
Finally, and most importantly, when running in parallel both `ReportBeforeSuite` and `ReportAfterSuite` **only run on process #1**. Ginkgo guarantees that no other processes will start running their specs until after `ReportBeforeSuite` on process #1 has completed. Similarly, Ginkgo will only run `ReportAfterSuite` on process #1 after all other processes have finished and exited. Ginkgo provides a single `Report` that aggregates the `SpecReports` from all processes. This allows you to perform any custom suite reporting in one place after all specs have run and not have to worry about aggregating information across multiple parallel processes.

Given all this, we can rewrite our invalid `ReportAfterEach` example from above into a valid `ReportAfterSuite` example:
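One way such a `ReportAfterSuite` might assemble its output is sketched below. The stub `Report`/`SpecReport` types and the pipe-separated format are assumptions for illustration, not ginkgo's actual types:

```go
package main

import (
	"fmt"
	"strings"
)

// SpecReport and Report are minimal stand-ins for ginkgo's
// types.SpecReport and types.Report; only the fields used here are
// modeled, and State is a plain string rather than a SpecState bitmask.
type SpecReport struct {
	State    string
	FullText string
}

type Report struct {
	SpecReports []SpecReport
}

// formatSuiteReport renders the aggregated report into one string. Inside
// a real ReportAfterSuite this runs only on process #1, so writing the
// result to a single file is safe even when specs ran in parallel.
func formatSuiteReport(r Report) string {
	var b strings.Builder
	for _, sr := range r.SpecReports {
		fmt.Fprintf(&b, "%s | %s\n", sr.State, sr.FullText)
	}
	return b.String()
}

func main() {
	r := Report{SpecReports: []SpecReport{
		{State: "passed", FullText: "Books can be categorized"},
		{State: "failed", FullText: "Books can be read"},
	}}
	fmt.Print(formatSuiteReport(r))
}
```

Because the closure runs once, on one process, with the fully aggregated `Report`, the file-contention problem from the earlier `ReportAfterEach` example disappears.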

@@ -5244,8 +5262,12 @@ The `SuppressProgressOutput` decorator allows you to disable progress reporting

```go
ReportAfterEach(func(report SpecReport) {
//...
// ...
}, SuppressProgressReporting)

ReportAfterEach(func(ctx SpecContext, report SpecReport) {
// ...
}, NodeTimeout(1 * time.Minute), SuppressProgressReporting)
```

#### The PollProgressAfter and PollProgressInterval Decorators
22 changes: 17 additions & 5 deletions internal/internal_integration/report_each_test.go
@@ -98,6 +98,17 @@ var _ = Describe("Sending reports to ReportBeforeEach and ReportAfterEach nodes"
reports["interrupt"] = append(reports["interrupt"], report)
})
})
Context("when an after each reporter times out", func() {
It("passes", rt.T("passes"))
ReportAfterEach(func(ctx SpecContext, report types.SpecReport) {
select {
case <-ctx.Done():
rt.Run("timeout-reporter")
reports["timeout"] = append(reports["timeout"], report)
case <-time.After(100 * time.Millisecond):
}
}, NodeTimeout(10*time.Millisecond))
})
})
ReportBeforeEach(func(report types.SpecReport) {
rt.Run("outer-RBE")
@@ -114,15 +125,16 @@ var _ = Describe("Sending reports to ReportBeforeEach and ReportAfterEach nodes"
"outer-RBE", "inner-RBE", "passes", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "fails", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "panics", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "inner-RAE", "outer-RAE", //pending test
"outer-RBE", "inner-RBE", "inner-RAE", "outer-RAE", // pending test
"outer-RBE", "inner-RBE", "skipped", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "inner-RAE", "outer-RAE", //flag-skipped test
"outer-RBE", "inner-RBE", "inner-RAE", "outer-RAE", // flag-skipped test
"outer-RBE", "inner-RBE", "also-passes", "failing-RAE", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "failing-in-skip-RAE", "inner-RAE", "outer-RAE", //is also flag-skipped
"outer-RBE", "inner-RBE", "failing-in-skip-RAE", "inner-RAE", "outer-RAE", // is also flag-skipped
"outer-RBE", "inner-RBE", "writer", "writing-reporter", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "failing-RBE", "not-failing-RBE", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "passes-yet-again", "inner-RAE", "outer-RAE",
"outer-RBE", "inner-RBE", "interrupt-reporter", "inner-RAE", "outer-RAE", //skipped by interrupt
"outer-RBE", "inner-RBE", "interrupt-reporter", "inner-RAE", "outer-RAE", // skipped by interrupt
"outer-RBE", "inner-RBE", "timeout-reporter", "inner-RAE", "outer-RAE", // skipped by timeout
"after-suite",
))
})
@@ -191,7 +203,7 @@ var _ = Describe("Sending reports to ReportBeforeEach and ReportAfterEach nodes"
Ω(reports["inner-RAE"].Find("writes stuff").CapturedGinkgoWriterOutput).Should(Equal("GinkgoWriter from It\nGinkgoWriter from ReportAfterEach\n"))
Ω(reports["inner-RAE"].Find("writes stuff").CapturedStdOutErr).Should(Equal("Output from It\nOutput from ReportAfterEach\n"))

//but a report containing the additional output will be send to Ginkgo's reporter...
// but a report containing the additional output will be sent to Ginkgo's reporter...
Ω(reporter.Did.Find("writes stuff").CapturedGinkgoWriterOutput).Should((Equal("GinkgoWriter from It\nGinkgoWriter from ReportAfterEach\n")))
Ω(reporter.Did.Find("writes stuff").CapturedStdOutErr).Should((Equal("Output from It\nOutput from ReportAfterEach\n")))
})
80 changes: 70 additions & 10 deletions internal/internal_integration/report_suite_test.go
@@ -11,12 +11,14 @@ import (
)

var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite nodes", func() {
var failInReportBeforeSuiteA, failInReportAfterSuiteA, interruptSuiteB bool
var failInReportBeforeSuiteA, timeoutInReportBeforeSuiteB, failInReportAfterSuiteA, timeoutInReportAfterSuiteC, interruptSuiteB bool
var fixture func()

BeforeEach(func() {
failInReportBeforeSuiteA = false
timeoutInReportBeforeSuiteB = false
failInReportAfterSuiteA = false
timeoutInReportAfterSuiteC = false
interruptSuiteB = false
conf.RandomSeed = 17
fixture = func() {
@@ -31,11 +33,20 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
F("fail in report-before-suite-A")
}
})
ReportBeforeSuite(func(report Report) {
ReportBeforeSuite(func(ctx SpecContext, report Report) {
timeout := 200 * time.Millisecond
if timeoutInReportBeforeSuiteB {
timeout = timeout + 1*time.Second
}
rt.RunWithData("report-before-suite-B", "report", report)
writer.Print("gw-report-before-suite-B")
outputInterceptor.AppendInterceptedOutput("out-report-before-suite-B")
})
select {
case <-ctx.Done():
outputInterceptor.AppendInterceptedOutput("timeout-report-before-suite-B")
case <-time.After(timeout):
outputInterceptor.AppendInterceptedOutput("out-report-before-suite-B")
}
}, NodeTimeout(500*time.Millisecond))
Context("container", func() {
It("A", rt.T("A"))
It("B", rt.T("B", func() {
@@ -61,6 +72,19 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
writer.Print("gw-report-after-suite-B")
outputInterceptor.AppendInterceptedOutput("out-report-after-suite-B")
})
ReportAfterSuite("Report C", func(ctx SpecContext, report Report) {
timeout := 200 * time.Millisecond
if timeoutInReportAfterSuiteC {
timeout = timeout + 1*time.Second
}
rt.RunWithData("report-after-suite-C", "report", report)
writer.Print("gw-report-after-suite-C")
outputInterceptor.AppendInterceptedOutput("out-report-after-suite-C")
select {
case <-ctx.Done():
case <-time.After(timeout):
}
}, NodeTimeout(500*time.Millisecond))
AfterSuite(rt.T("after-suite", func() {
writer.Print("gw-after-suite")
F("fail in after-suite")
@@ -87,7 +111,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
"before-suite",
"A", "B", "C",
"after-suite",
"report-after-suite-A", "report-after-suite-B",
"report-after-suite-A", "report-after-suite-B", "report-after-suite-C",
))
})

@@ -159,7 +183,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
It("doesn't run any specs - just reporting functions", func() {
Ω(rt).Should(HaveTracked(
"report-before-suite-A", "report-before-suite-B",
"report-after-suite-A", "report-after-suite-B",
"report-after-suite-A", "report-after-suite-B", "report-after-suite-C",
))
})

@@ -176,6 +200,41 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
})
})

Context("when a ReportBeforeSuite times out", func() {
BeforeEach(func() {
timeoutInReportBeforeSuiteB = true
success, _ := RunFixture("report-before-suite-B-timed-out", fixture)
Ω(success).Should(BeFalse())
})

It("reports on the failure, to Ginkgo's reporter and any subsequent reporters", func() {
Ω(reporter.Did.WithLeafNodeType(types.NodeTypeReportBeforeSuite).WithState(types.SpecStateTimedout)).
Should(ContainElement(HaveTimedOut(
types.NodeTypeReportBeforeSuite,
"A node timeout occurred",
CapturedGinkgoWriterOutput("gw-report-before-suite-B"),
CapturedStdOutput("timeout-report-before-suite-B"),
)))
})
})

Context("when a ReportAfterSuite times out", func() {
BeforeEach(func() {
timeoutInReportAfterSuiteC = true
success, _ := RunFixture("report-after-suite-C-timed-out", fixture)
Ω(success).Should(BeFalse())
})

It("reports on the failure, to Ginkgo's reporter and any subsequent reporters", func() {
Ω(reporter.Did.Find("Report C")).Should(HaveTimedOut(
types.NodeTypeReportAfterSuite,
"A node timeout occurred",
CapturedGinkgoWriterOutput("gw-report-after-suite-C"),
CapturedStdOutput("out-report-after-suite-C"),
))
})
})

Context("when a ReportAfterSuite node fails", func() {
BeforeEach(func() {
failInReportAfterSuiteA = true
@@ -189,7 +248,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
"before-suite",
"A", "B", "C",
"after-suite",
"report-after-suite-A", "report-after-suite-B",
"report-after-suite-A", "report-after-suite-B", "report-after-suite-C",
))
})

@@ -220,6 +279,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
"A", "B", "C",
"after-suite",
"report-after-suite-A",
"report-after-suite-C",
))
})
})
@@ -259,7 +319,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
"before-suite",
"A", "B", "C",
"after-suite",
"report-after-suite-A", "report-after-suite-B",
"report-after-suite-A", "report-after-suite-B", "report-after-suite-C",
))
})

@@ -309,7 +369,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node

Context("when a non-primary proc disappears before it reports", func() {
BeforeEach(func() {
close(exitChannels[2]) //proc 2 disappears before reporting
close(exitChannels[2]) // proc 2 disappears before reporting
success, _ := RunFixture("disappearing-proc-2", fixture)
Ω(success).Should(BeFalse())
})
Expand Down Expand Up @@ -342,7 +402,7 @@ var _ = Describe("Sending reports to ReportBeforeSuite and ReportAfterSuite node
It("only runs the reporting nodes", func() {
Ω(rt).Should(HaveTracked(
"report-before-suite-A", "report-before-suite-B",
"report-after-suite-A", "report-after-suite-B",
"report-after-suite-A", "report-after-suite-B", "report-after-suite-C",
))
})
