Small improvements to unit_tester_runner + run.d #9348

Open
wants to merge 5 commits into master

Conversation

wilzbach
Member

A few small follow-ups to #9333
See the individual commits.

CC @jacob-carlborg

@dlang-bot
Contributor

Thanks for your pull request, @wilzbach!

Bugzilla references

Your PR doesn't reference any Bugzilla issue.

If your PR contains non-trivial changes, please reference a Bugzilla issue or create a manual changelog.

Testing this PR locally

If you don't have a local development environment setup, you can use Digger to test this PR:

dub fetch digger
dub run digger -- build "master + dmd#9348"

@@ -336,5 +336,6 @@ int main(string[] args)
    buildStrtold();
    execute(dmdPath, "@" ~ cmdfilePath);

    stderr.writefln("[unit] Starting to run %s test files", nrOfFiles);
    return spawnProcess(outputPath).wait();
Member Author

We should probably also prepend a prefix to the output here, as it currently looks like this:

[unit] Starting to build all test files
 ... runnable/test9271.d            -fPIC ()
 ... runnable/printargs.d           -fPIC ()
...
 ... runnable/testfile.d            -fPIC ()
 ... runnable/b18034.d              -O -fPIC (-inline -release -g)
 ... runnable/test_dip1006.d        -check=in=off -check=out=off -check=invariant=off -fPIC ()
[unit] Starting to run all test files
1 tests, 0 failures
 ... runnable/mod1.d                -fPIC ()
 ... runnable/test15862.d           -O -release -fPIC ()

Contributor

Isn’t that what your change is doing?

Member Author

Nope. The log above is already from this change. We probably need to create our own Pipe and do something like:

import core.thread, core.time, std.process;

while (!pid.tryWait.terminated) {
  // if the pipe has new data, read it from the buffer, split it by line and prepend `[unit]`
  Thread.sleep(1.msecs);
}

Contributor

Sorry, I'm not following.

Member Author

spawnProcess directly forwards stdout, stderr etc., so we can't add a [unit] prefix to those lines. This change only added it to the log statements here, not to output like:

1 tests, 0 failures

Though for the user to know where a message came from, imho it would probably be better to at least do something like:

[unit] 1 tests, 0 failures

For this we would need to either add the prefix in the actual runner, or e.g. read the stdout pipe of the spawned process line-wise, as discussed above.
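
For illustration, a minimal sketch of the second option (not part of this PR; runWithPrefix is a made-up name), assuming the spawned runner writes its results to stdout:

import std.process : pipeProcess, Redirect, wait;
import std.stdio : stderr;

int runWithPrefix(string outputPath)
{
    // capture the runner's output instead of forwarding it directly to the terminal
    auto pipes = pipeProcess([outputPath], Redirect.stdout | Redirect.stderrToStdout);

    // prepend `[unit]` to every line the runner prints
    foreach (line; pipes.stdout.byLineCopy)
        stderr.writefln("[unit] %s", line);

    return wait(pipes.pid);
}

The last line of main would then become return runWithPrefix(outputPath); instead of return spawnProcess(outputPath).wait();.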

Contributor

Ah, ok, now I get it.

@wilzbach
Member Author

BTW, thinking more about the name unit_tests: might someone be confused that it doesn't run the unittests within DMD?

@@ -321,6 +321,9 @@ int main(string[] args)
    if (missingTestFiles(givenFiles))
        return 1;

    const nrOfFiles = givenFiles.length ? givenFiles.length.to!string : "all";
    stderr.writefln("[unit] Starting to build %s test files", nrOfFiles);
Contributor

Why stderr?

@jacob-carlborg
Contributor

BTW, thinking more about the name unit_tests: might someone be confused that it doesn't run the unittests within DMD?

What about “functional” or “integration”? Do we need to have it at all when we have -u? It was mostly to be compatible with make.

@@ -67,6 +67,7 @@ Examples:
./run.d runnable/template2962.d # runs a specific tests
./run.d runnable/template2962.d fail_compilation/fail282.d # runs multiple specific tests
./run.d fail_compilation # runs all tests in fail_compilation
./run.d unit_tests # runs all unit tests
Contributor

Perhaps we should not document it since the -u flag exists as well.

Member Author

Hmm, but it's documented in the README.md:

The unit tests will automatically run when all tests are run using ./run.d or
make. To only run the unit tests the ./run.d unit_tests command can be used.
For a more finer grain control over the unit tests the ./run.d -u command can
be used:

I guess as long as we're consistent, I'm fine either way. Other thoughts?

Btw, currently there's no easy way to run just the integration tests (i.e. compilable, runnable, fail_compilation). Maybe it's worth adding tests as a shortcut?
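
For illustration only, a hypothetical sketch of what such a shortcut could look like (expandShortcuts is a made-up helper, not how run.d is actually structured); it simply expands the tests argument into the three end-to-end suites before dispatching:

import std.algorithm : map;
import std.array : join;

// hypothetical helper: expand the `tests` shortcut into the end-to-end suites
string[] expandShortcuts(string[] args)
{
    return args.map!(a => a == "tests"
            ? ["compilable", "runnable", "fail_compilation"]
            : [a]).join;
}

// expandShortcuts(["tests"]) == ["compilable", "runnable", "fail_compilation"]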

Contributor

Hmm, but it's documented in the README.md

Yes, but we can remove that as well. It's a bit inconsistent to have unit_tests and the -u flag.

Btw, currently there's no easy way to run just the integration tests (i.e. compilable, runnable, fail_compilation). Maybe it's worth adding tests as a shortcut?

Perhaps.

@wilzbach
Member Author

What about “functional” or “integration”?

I like "functional" as all the other tests (e.g. runnable) are integration tests too.
Other opinions?

@jacob-carlborg
Contributor

I like "functional" as all the other tests (e.g. runnable) are integration tests too.

I would say that runnable, and the others, are end-to-end tests. But apparently this site [1] defines "functional" and "end-to-end" to be the same thing.

[1] https://codeutopia.net/blog/2015/04/11/what-are-unit-testing-integration-testing-and-functional-testing/

@kinke
Contributor

kinke commented Feb 26, 2019

I can live with 'functional' (can't think of something better); anything is better than the current name, which I found really confusing. ;)

As a side note, this runner will need some adaptation to be useful for LDC, due to the number of hardcoded paths.

@jacob-carlborg
Contributor

I can live with 'functional' (can't think of something better);

"integration" is the only other thing I can think of.

anything is better than the current name, which I found really confusing. ;)

It all depends on what you define as a unit 😃.

@Geod24
Member

Geod24 commented Mar 31, 2020

This seems to have been forgotten. What's the status?
