
Integrate std.log with std.testing #5738

Open
andrewrk opened this issue Jun 28, 2020 · 10 comments
Labels
accepted: This proposal is planned.
contributor friendly: This issue is limited in scope and/or knowledge of Zig internals.
proposal: This issue suggests modifications. If it also has the "accepted" label then it is planned.
standard library: This issue involves writing Zig code for the standard library.

Comments

@andrewrk (Member)

andrewrk commented Jun 28, 2020

Inspired by #5734 (comment)

The goal of the std.log API is to allow developers to leave log statements in finished code, without causing any performance issues or unwanted stderr/stdout for API consumers.

This harmonizes with testing! Here's an example:

const std = @import("std");
const testing = std.testing;
const log = std.log;

test "make sure an uncommon branch is hit" {
    testing.expect(myCoolCode(3, 4) == 7);
    testing.expect(myCoolCode(999, 100) == 100);
    testing.expectLog("a special case occurred: 999\n");
}

fn myCoolCode(a: i32, b: i32) i32 {
    if (a > 100) {
        log.info(.scope, "a special case occurred: {}\n", .{a});
        return b;
    }
    return a + b;
}

Much like the test runner clears the testing allocator and checks for leaks between tests, it could also install a custom log handler that buffers log statements during each test and provides an API to check whether particular log messages did or did not occur.

It could provide an API for exact matches, substring matches, or checking for the absence of a log message.

An open question would be how the "scope" parameter fits into this. It could be part of the API for checking, or maybe there would be a testing scope, which by default would never get printed and only would be activated for tests.
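
As a rough illustration of what that buffering handler and checking API might look like, here is a minimal user-space sketch; the names testLogHandler, expectLog, expectLogContains, and expectNoLog are hypothetical, and the real implementation would live in the test runner and hook into std.log rather than being called directly:

const std = @import("std");

// Hypothetical capture buffer; the real one would be owned by the test runner.
var captured_buf: [4096]u8 = undefined;
var captured_len: usize = 0;

// Sketch of the log handler the test runner could install between tests:
// append the formatted message to the buffer instead of writing to stderr.
fn testLogHandler(comptime format: []const u8, args: anytype) void {
    const written = std.fmt.bufPrint(captured_buf[captured_len..], format, args) catch return;
    captured_len += written.len;
}

// Sketches of the proposed checks: exact match, substring match, absence.
fn expectLog(expected: []const u8) !void {
    try std.testing.expectEqualStrings(expected, captured_buf[0..captured_len]);
}

fn expectLogContains(needle: []const u8) !void {
    try std.testing.expect(std.mem.indexOf(u8, captured_buf[0..captured_len], needle) != null);
}

fn expectNoLog(needle: []const u8) !void {
    try std.testing.expect(std.mem.indexOf(u8, captured_buf[0..captured_len], needle) == null);
}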

andrewrk added the standard library and proposal labels Jun 28, 2020
andrewrk added this to the 0.8.0 milestone Jun 28, 2020
@PavelVozenilek

This is called white box testing. I implemented it once in C (inspired by http://akkartik.name/post/tracing-tests ). Here are my recommendations:

  1. To reduce the volume of logs, a test should declare at the start which messages it is interested in (e.g. by specifying string prefixes); only those would be recorded (see the sketch after this list).
  2. The compiler should guard against typos as much as possible.
  3. To reduce the nuisance factor, the compiler should auto-generate logs for functions (e.g. if I am interested in whether function foo was invoked, the compiler would add an invisible log at the start of foo).
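
To illustrate point 1, here is a rough sketch of how prefix registration could sit on top of a capture buffer like the one sketched earlier; expectLogsWithPrefixes and shouldRecord are hypothetical names, not part of any existing API:

const std = @import("std");

// Hypothetical per-test registration of interesting message prefixes.
var interesting_prefixes: []const []const u8 = &.{};

fn expectLogsWithPrefixes(prefixes: []const []const u8) void {
    interesting_prefixes = prefixes;
}

// The capture handler would call this and drop anything that doesn't match.
fn shouldRecord(message: []const u8) bool {
    for (interesting_prefixes) |prefix| {
        if (std.mem.startsWith(u8, message, prefix)) return true;
    }
    return false;
}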

andrewrk added the accepted and contributor friendly labels Jul 8, 2020
andrewrk added a commit that referenced this issue Jul 9, 2020
by default the test runner will only print logs with "warning" or
higher. this can be configured via the std.testing API.

See #5738 for future plans
@nmichaels (Contributor)

I'm unclear on what you mean by point 2 on your list. Are you saying the string in the test and the string in the log needn't match exactly? That seems like a misfeature.

I think number 3 is probably better done by code coverage tools than by looking at logs in tests.

@heidezomp (Contributor)

An open question would be how the "scope" parameter fits into this. It could be part of the API for checking, or maybe there would be a testing scope, which by default would never get printed and only would be activated for tests.

I would propose not using a special testing scope, since the log statements to test for might also be useful outside of testing. I have several log.debug statements in my own projects that I could write tests for with this proposal, but they are also still useful for general debugging; with a special testing scope, I'd have to duplicate them.

E.g. in SpriteAtlas.init I have log.debug("Creating texture atlas with size {}x{}", .{ texture_size.w, texture_size.h }); The total texture size is never saved afterward, so I could use log testing to make sure it is correct for a given collection of sprites, but I'd still like to be able to see it in my general debug log too.
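
For example, with the substring-match variant from the proposal, a test could check such a debug message directly, without any special scope. A self-contained sketch under assumptions: atlasSizeFor is a made-up stand-in for the size computation inside SpriteAtlas.init, and expectLogContains is the hypothetical check sketched earlier in this thread:

const std = @import("std");
const atlas_log = std.log.scoped(.sprite_atlas);

fn atlasSizeFor(sprite_count: usize) usize {
    // Stand-in for the real size computation inside SpriteAtlas.init.
    const size: usize = if (sprite_count > 64) 1024 else 512;
    atlas_log.debug("Creating texture atlas with size {}x{}", .{ size, size });
    return size;
}

test "atlas texture size for a small sprite collection" {
    _ = atlasSizeFor(10);
    // Hypothetical check against the captured debug log:
    try expectLogContains("Creating texture atlas with size 512x512");
}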

@heidezomp (Contributor)

A subset of @PavelVozenilek's number 3 could perhaps be achieved by having the tracy integration perform some logging as well. It's not automatic, but if you already have the tracy statements added anyway, they could let you test whether those code paths are hit in the way you expect.

@LewisGaul (Contributor)

Under this proposal, will it still be possible to use std.log to write log messages in test code (i.e. use it for its regular logging purpose)? It sounds like the plan is for log messages to be captured when going through the test_runner.zig log() implementation, but it would be nice if it were possible to declare a test scope that doesn't get captured.

@leighmcculloch

Under this proposal, will it still be possible to use std.log to write log messages in test code (i.e. use it for its regular logging purpose)?

It might be valuable to introduce a scope specific to testing, or a different function to log via, that gets ignored by the expectations/assertions so that it does not interfere with them but is still output when the test runs. At least one other language has had success with this: in Go, test logs are collected and attached to the test, and it is trivial for a test author to add extra details that help with debugging or understanding what is happening in a test.

@matu3ba (Contributor)

matu3ba commented May 28, 2022

It could provide an API for exact matches, substring matches, or checking for the absence of a log message.

This is underspecified, and the memory cost can become nontrivial. Should there be a dedicated API for bounded and unbounded backing storage (in-memory buffers, memory-backed files, user-space or kernel buffers/pipes)?
And what are the semantics of checking whether a message exists (e.g. against an mmapped file) if the user configures a ring buffer, where messages are gone once read?

@matklad (Contributor)

matklad commented Mar 13, 2023

Another subfeature here would be:

  • Allow controlling "fail on log.err calls" behavior for tests

Failing the test when an error is logged is probably the right default, but one also sometimes wants to write tests for error conditions! After all, the error-handling paths contain most of the bugs. Today, I use the following workaround for that:

const std = @import("std");
const builtin = @import("builtin");

const log = if (builtin.is_test)
    // Downgrade `err` to `warn` for tests.
    // Zig fails any test that does `log.err`, but we want to test those code paths here.
    struct {
        const base = std.log.scoped(.clock);
        const err = warn;
        const warn = base.warn;
        const info = base.info;
        const debug = base.debug;
    }
else
    std.log.scoped(.clock);
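
For completeness, a small usage sketch of the shim above; recoverFromBadInput is a made-up stand-in for a recoverable error path that reports via log.err. Since log.err resolves to base.warn in test builds, the default test runner no longer treats the call as a test failure:

// Made-up example of a recoverable error path that reports via log.err.
fn recoverFromBadInput() void {
    log.err("bad input, falling back to defaults", .{});
}

test "error path can be exercised without failing the test" {
    // In test builds `log.err` is `base.warn`, so the default test runner
    // does not fail the test when this runs.
    recoverFromBadInput();
}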

@aherrmann (Contributor)

[ ] Allow controlling "fail on log.err calls" behavior for tests

I just ran into this issue myself. The use case is indeed failure testing, i.e. I want to test the unhappy path to be sure that it's handled correctly.

Thanks @matklad for the workaround, I may go that route for now. The alternatives seem to be: Just don't log (not great), or take some kind of ?*Diagnostic and fill that in (more complexity).
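
For reference, a minimal sketch of that ?*Diagnostic alternative (all names here are made up for illustration): instead of logging, the callee records details about the failure into an optional out-parameter that the test can inspect directly, so no log.err is emitted at all.

const std = @import("std");

const Diagnostic = struct {
    message: []const u8 = "",
};

fn parsePort(text: []const u8, diagnostic: ?*Diagnostic) !u16 {
    return std.fmt.parseInt(u16, text, 10) catch |err| {
        // Record a human-readable explanation for the caller or the test.
        if (diagnostic) |d| d.message = "port must be a decimal integer";
        return err;
    };
}

test "unhappy path reports through the diagnostic instead of log.err" {
    var diag: Diagnostic = .{};
    try std.testing.expectError(error.InvalidCharacter, parsePort("not-a-port", &diag));
    try std.testing.expectEqualStrings("port must be a decimal integer", diag.message);
}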


Side note, this behavior of the test runner seems contrary to the documented meaning of std.log.err.

Error: something has gone wrong. This might be recoverable or might be followed by the program exiting.

If it may be recoverable, then it should not imply automatic failure. The current behavior of the test runner also prevents any testing of code that performs such error recovery, because despite the recovery, the mere fact that something invoked std.log.err will trigger a test failure.

@matu3ba (Contributor)

matu3ba commented Jan 15, 2024

error recovery

Error recovery sounds very much like a component test (kill and restart a component) rather than a unit test, similar to what a panic test would do.

The current behavior of the test runner also prevents any testing of code that performs such error recovery.

The current test runner, including the server and client, is designed as a unit test runner. If you intend to change that, you also have to take into account how the unhappy path (crashing) should work inside test blocks, including how the expected failure message/logs would be parsed.
My PR so far is a POC, and it indicates that the current API/syntax is not a good fit for dynamically generated expected error messages; see #15991. So this should be solved separately.
