Add debug test output #7765
… OOM or killed with ctrl-c
…k through a bunch of empty subproject/target folders before finding the one you care about.
… it's implemented.
…ut of TestRunnerJVM. Looks like I still have a file lock issue though.
Create and clean them up.
Another thing that could be done is to change the definition of a fatal error, or even make the fatal-error predicate (isFatalError) configurable for the test, so that you can do some logging on a fatal error (obviously you have to be careful, because in some cases you are running out of memory and JVM behavior is not well defined).
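The suggestion above could be sketched roughly as follows in plain Scala. Note that `FatalErrorHook`, the mutable `isFatalError` predicate, and `withLogging` are all hypothetical illustrations of the idea, not ZIO Test's actual API:

```scala
object FatalErrorHook {
  // Default predicate: treat VM-level errors (OOM, stack overflow, etc.) as fatal.
  @volatile var isFatalError: Throwable => Boolean = {
    case _: VirtualMachineError => true
    case _                      => false
  }

  // Wrap the current predicate so a caller can observe fatal errors before the
  // verdict is returned. Keep the callback minimal: under OOM, even allocating
  // a log message may fail.
  def withLogging(log: Throwable => Unit): Unit = {
    val previous = isFatalError
    isFatalError = { t =>
      val fatal = previous(t)
      if (fatal) log(t)
      fatal
    }
  }
}
```

The point of routing the decision through a single predicate is that a test (or the framework) can swap it out to record diagnostics at the moment a fatal error is classified, without changing how the error is ultimately handled.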
@@ -18,7 +20,8 @@ object TestOutput {
   ZLayer.fromZIO(
     for {
       executionEventPrinter <- ZIO.service[ExecutionEventPrinter]
-      outputLive <- TestOutputLive.make(executionEventPrinter)
+      // If you need to enable the debug output to diagnose flakiness, set this to true
+      outputLive <- TestOutputLive.make(executionEventPrinter, debug = false)
We already have a way to configure, for example, the size of generators. Perhaps we should add the flag there, generalizing that mechanism so you can configure test settings in a uniform way?
cc @adamgfraser ?
I think that gets added at a lower level, basically configuring aspects of how individual tests operate rather than configuring the test framework itself.
I also think we probably need to get a little more use out of this feature internally, and make sure it is helpful for us in diagnosing CI flakiness, before rolling it out more broadly.
The goal here is to assist with debugging catastrophic errors, e.g. an OOM, or GitHub interrupting our tests for any reason, such as a non-terminating test.
We can't report these failures through the normal test mechanisms, so as a workaround we create a file entry for each test as it starts and remove it when the test completes. If anything kills the JVM before we finish, we can browse these files to get a short list of the tests that were executing at the time.
For this initial version, there is a simple boolean flag that must be turned on in order to produce these files. We can decide on a more official mechanism in the future.
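The marker-file mechanism described above can be sketched in plain Scala, without the ZIO machinery. The names `debugRoot` and `recordTest` are illustrative only, not the PR's actual API:

```scala
import java.nio.file.{Files, Path, Paths}

object DebugTestOutput {
  // Directory holding one marker file per in-flight test (illustrative location).
  val debugRoot: Path = Paths.get("target", "test-debug")

  // Create a marker file when the test starts and delete it when the test
  // completes (pass or fail). If the JVM dies mid-test (OOM, SIGKILL, CI
  // timeout), the marker survives and identifies the test that was running.
  def recordTest[A](testName: String)(body: => A): A = {
    Files.createDirectories(debugRoot)
    val marker = debugRoot.resolve(testName + ".inflight")
    Files.createFile(marker)
    try body
    finally Files.deleteIfExists(marker)
  }
}
```

After a crash, listing the surviving `.inflight` files yields the short list of suspect tests; in a normal run the directory ends up empty, since even failing tests complete and clean up their marker.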