A powerful fluent API for testing Flogger log statements, and more.
<dependency>
<groupId>net.goui.flogger.testing</groupId>
<artifactId>junit4</artifactId> <!-- or "junit5" -->
<version>1.0.12</version>
<scope>test</scope>
</dependency>
And if you are using Log4J2:
<dependency>
<groupId>net.goui.flogger.testing</groupId>
<artifactId>log4j</artifactId>
<version>1.0.12</version>
<scope>test</scope>
</dependency>
Note: While this API can be used without Flogger installed, and will provide most of the functionality you need for good logs testing, it works much better when dealing with Flogger, especially with the use of logging contexts to permit forced logging and other advanced features.
As such it is recommended that you also install the following dependencies and start getting the best out of your debug logging.
<!-- https://mvnrepository.com/artifact/com.google.flogger/flogger-xxx-backend -->
<dependency>
<groupId>com.google.flogger</groupId>
<artifactId>flogger-xxx-backend</artifactId> <!-- xxx = "log4j2" or "system" -->
<version>0.8</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.flogger/flogger-grpc-context -->
<dependency>
<groupId>com.google.flogger</groupId>
<artifactId>flogger-grpc-context</artifactId>
<version>0.8</version>
</dependency>
JUnit 4
@RunWith(JUnit4.class)
public class MyJUnit4Test {
@Rule
public final FloggerTestRule logs = FloggerTestRule.forClassUnderTest(INFO);
// Tests ...
}
JUnit 5
public class MyJUnit5Test {
@RegisterExtension
public final FloggerTestExtension logs = FloggerTestExtension.forClassUnderTest(INFO);
// Tests ...
}
logs.assertLogs().withMessageContaining("Expected log message").matchCount().isAtLeast(1);
var debugStart = logs.assertLogs()
.withMessageContaining("Start Task", "[DEBUG]")
.withMetadata("debug_id", TEST_ID)
.getOnlyMatch();
assertLogs(after(debugStart).inSameThread()).always().haveMetadata("debug_id", TEST_ID);
The testing of debug log statements and logging behaviour has never really had a single, comprehensive testing API. In many cases users end up writing brittle assertions in unit tests (e.g. by assuming that log statements correspond to specific indices within some list). This usually results in users:
- Writing brittle tests for their logs, which creates an ongoing maintenance burden.
- Writing few or no tests for their logs, which risks severe logging issues going unnoticed.
- Writing logs tests with ad-hoc logger specific APIs, which complicates any work to update or migrate logging libraries.
We've all seen at least one (if not all) of the situations described above.
As Flogger's creator and API designer, I decided to do something about it for Flogger users, but I didn't want to stop there, so I made a library which can work with any logging API.
While at Google, I worked extensively with Java code to improve debug logging. I've seen many thousands of log statements, and the unit tests which attempt to make assertions about them.
Over this time I came to realize that there is no "one size fits all" approach to testing debug logs.
- What one team cared deeply about, another was happy to ignore.
- While one team wanted to test a lot of details about what was logged, another wanted to make assertions about what wasn't logged.
Good logs testing ends up being a combination of several "best practice" principles, and those principles need to be supported properly in a good logs testing API:
In order to test logs at all, you need to know which logs are in scope for testing. Typically, but not always, these are the logs created by the code-under-test.
Most of the time you should avoid testing logs generated by code outside the class (or at least package) you are testing. Creating implicit test dependencies on the logging behaviour of other classes is "spooky action at a distance", and can easily result in hard to diagnose test failures.
This principle is supported with the following features:
Convenience methods for the commonest cases:
public final FloggerTestExtension logs = FloggerTestExtension.forClassUnderTest(INFO);
public final FloggerTestExtension logs = FloggerTestExtension.forPackageUnderTest(INFO);
Flexible methods for unusual situations (see the LEVEL_MAP sketch after this list):
public final FloggerTestExtension logs = FloggerTestExtension.forLevelMap(LEVEL_MAP);
Matching on attributes of log entries:
logs.assertLogs().withLevelAtLeast(WARNING)...
Comparative filtering relative to other log entries:
logs.assertLogs(after(otherEntry))...
logs.assertLogs(isSameThreadAs(otherEntry))...
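As a purely hypothetical sketch of the forLevelMap case (only the forLevelMap(LEVEL_MAP) call itself appears above; the key format and map contents shown here are assumptions), a level map lets you capture different classes or packages at different levels in a single fixture:
// Hypothetical sketch only: the exact parameter accepted by forLevelMap(...) may differ.
// The idea is to capture extra detail for one class while keeping the rest at INFO.
@RegisterExtension
public final FloggerTestExtension logs =
    FloggerTestExtension.forLevelMap(
        Map.of(
            "com.example.tasks.TaskRunner", FINE,  // extra detail for one class
            "com.example.tasks", INFO));           // normal capture for the rest of the package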
When you set up a test fixture for logs capture, such as FloggerTestRule or FloggerTestExtension, the test library automatically "forces" log statements at the requested levels. Forced log statements bypass any rate-limiting checks and are guaranteed to appear in test logs.
// Log statements at INFO and above for the class-under-test will always be emitted, even if
// they are rate limited (e.g. via 'atMostEvery(...)' or 'every(...)'). All captured logs at
// or above INFO will have the additional "force=true" metadata key-value pair.
public final FloggerTestExtension logs = FloggerTestExtension.forClassUnderTest(INFO);
While it's tempting to test for the exact contents of a log statement (after all, you can just "cut & paste" the log message into the test from the code or logs), it's generally a bad idea.
Debug log messages are expected to be human-readable, and you need to be able to update them as needed during debugging. You don't want someone who makes a minor change to some log statement to break dozens of your tests. You need to test the important information in log statements, while avoiding testing precise grammar or punctuation.
You may choose to explicitly avoid testing the details of "fine" log statements in your code, or at least test them less. Fine logs exist to provide context for other logs, and they are more likely to be added, removed or otherwise modified during debugging. Don't allow changes like these to break existing logging tests.
However, it's also important that you execute "fine" log statements in some way during at least some tests. Otherwise, you risk only discovering that your log statements aren't working (e.g. because a toString() method is throwing an exception) at the time you need them most.
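As a hypothetical Flogger-style illustration (the logger field, method and TaskState type are invented for this example), the failure only surfaces when the fine log statement actually executes:
private static final FluentLogger logger = FluentLogger.forEnclosingClass();

void updateState(TaskState state) {
  // The argument is only formatted when FINE logging is enabled (or forced), so a
  // broken TaskState.toString() goes unnoticed unless some test captures FINE logs.
  logger.atFine().log("Updated state: %s", state);
}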
This principle is supported with the following features:
This API encourages testing simple attributes of log entries, rather than everything about them.
There is no method to test that a log message is exactly equal to a string, only that it contains some sequence of fragments, or matches a regular expression.
logs.assertLogs()
.withMessageContaining("Debug start", "user-ID", TEST_USER_ID)
.always()
.haveMetadata("debug_id", DEBUG_ID);
expresses its intent far more clearly, and is far less brittle than:
assertLogged(
    String.format(
        "[INFO] Debug start (timestamp=%s) user-ID='%s' [CONTEXT debug_id=%s ]",
        FAKE_TIMESTAMP,
        TEST_USER_ID,
        DEBUG_ID));
Other examples of well targeted testing APIs include:
- Simplifying log site information to avoid brittle testing of synthetic class/method names.
- No ability to test line numbers of log statements.
- Encouraging testing the relative order of log entries, rather than using precise indices (see the sketch after this list).
- Using only log levels common to all supported backends.
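For example, ordering can be asserted relative to a well-identified entry rather than by index. This sketch combines only calls shown elsewhere in this document; the message text and TASK_ID are placeholders:
// Find a uniquely identifiable "anchor" entry instead of assuming a fixed position.
LogEntry taskStart = logs.assertLogs()
    .withMessageContaining("Task started")
    .getOnlyMatch();

// Then assert about the logs which follow it in the same thread.
logs.assertLogs(after(taskStart).inSameThread())
    .always()
    .haveMetadata("task_id", TASK_ID);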
If you want to ensure that "fine" logs are being tested but don't want to have to deal with their output in every test, use the @SetLogLevel annotation on test methods to add additional logging for selected tests:
// Sets the default logging level for most tests.
@Rule public final FloggerTestRule logs = FloggerTestRule.forClassUnderTest(INFO);
@Test
public void testNormalLogging() {
// Only "normal" logging is captured in this test.
runCodeUnderTest(...);
}
@Test
@SetLogLevel(scope = CLASS_UNDER_TEST, level = FINE)
public void testDebugLogging() {
// Debug logging is captured in this test.
runCodeUnderTest(...);
}
Note: Even though different tests are producing different output, the use of Flogger's "scoped logging contexts" and a unique metadata ID per test means that it's perfectly safe to run these tests in parallel.
The other principle of logs testing (which is often ignored) is to ensure that unwanted logs, perhaps from other libraries, are not caused by your code.
This is quite a different use case and seems, at first, to violate the principle of testing only logs under your control. However, testing that a log does not occur is a far, far less brittle assertion than testing something about it.
Unwanted logs are typically only high-severity logs ("warning", "severe" etc.) which are likely to cause issues if they start appearing, so it's valuable to make sure your code isn't triggering something by mistake. And, in cases where your code is expected to cause a high-severity log from another library, that's an example of a time when it might be acceptable to test it directly.
This principle is supported with the following features:
If you want to test that certain logs are not created, it's simple to do:
logs.assertLogs().withLevel(SEVERE).doNotOccur();
But you can also test the non-existence of logs before or after a certain log event:
LogEntry problemStart = logs.assertLogs()
.withCause(IOException.class)
.withMessageContaining("Bad file access")
.getOnlyMatch();
logs.assertLogs(before(problemStart).inSameThread()).withLevelAtLeast(WARNING).doNotOccur();
Any assertion you can write in your test can be set as a verification for one or all tests:
// No JUnit5 tests will cause warning logs from the class we are testing...
@RegisterExtension
public final FloggerTestExtension logs =
FloggerTestExtension.forClassUnderTest(INFO)
.verify(logs -> logs.withLevelAtLeast(WARNING).doNotOccur());
@Test
public void testSomeTask() {
// Assert that everything logged in this test has the expected metadata.
logs.verify(logs -> logs.always().haveMetadata("task_id", TASK_ID));
runTestTask(TASK_ID);
logs.assertLogs(...);
...
}
This principle comes from a core Flogger design principle, which is that it's essentially an anti-pattern to inject loggers into code for any reason.
A common, but in my opinion undesirable, pattern is to inject a mock logger in unit tests for testing logs. This mimics the use of mocks for testing other aspects of code, and is often easy when you've already got a dependency injection system set up. However, it comes with several non-trivial downsides.
- Injecting logger implementations creates maintenance burden. Teams cannot migrate easily to different logging APIs, and in a large codebase you eventually end up injecting multiple logger implementations simultaneously to satisfy the needs of all your code.
- Injecting logger implementations encourages bad logging habits. An injected logger cannot be used in static methods without passing it in explicitly. Passing in a logger instance breaks encapsulation, especially for non-private methods, and not allowing debug logging in static methods is limiting.
- Testing the log statement (via a mock) is less useful than testing the log output. Log output is what's useful about debug log statements, so you should be testing that as closely as reasonably possible, including any effects of formatting the logged arguments. Using mocked loggers to test logging APIs creates brittle, hard to maintain tests.
- Injecting logger implementations is never necessary. All commonly used logging libraries provide a way to test their log output at some reasonable level. This ensures that what you test is "close" to what will appear in log files.
I think that the difficulty in setting up good logs testing in unit tests is one of the primary reasons people choose to inject loggers into their code at all. With a well-designed logs testing library, this problem no longer exists, and injecting loggers no longer has any real benefits.
This principle is supported by the core design of this library, which is agnostic to the underlying logger implementation, and carefully designed to minimize implicit dependencies on specific implementation behaviour.
If you use this library and wish to migrate your logging (e.g. from Log4J to Flogger, or JUnit4 to JUnit5), your tests will remain essentially unchanged.
The Flogger logs testing API supports the above principles with a clean, orthogonal, fluent API which can be used with the JDK's logging libraries or Log4J2, and in either JUnit4 or JUnit5.
Pick the dependencies which suit your needs, and start writing better logging tests today!