[ExUnit] Execute only one test with the line filter, per file with multiple test modules #11949
Conversation
It is now being called with the :infinity atom as width
In releases, the final step after stripping chunks out of BEAM files is to gzip-compress the file. This change lets users pass `compress: true` to opt in.

Somewhat unintuitively, turning compression off here enables better compression at a later step, because the archive compressor can process all BEAM files together rather than restarting compression for each individual BEAM file.

For some Nerves devices, reducing the total number of bytes sent to update software is much more important than reducing on-disk usage, since their network connections are metered. Given enough devices, even small size reductions can result in meaningful savings. Here are more details:

The default Nerves configuration puts all of the BEAM files in one SquashFS archive. While Nerves also uses gzip for the SquashFS archive, and SquashFS (mostly) compresses files individually as well, letting SquashFS perform the compression results in a 2.5% file size reduction. This is assumed to be due to a higher compression level being used when building the SquashFS archive.

As a side benefit, moving gzip compression to SquashFS removes the decompression step from BEAM loading, which resulted in a ~500ms boot time improvement on GRiSP 2 hardware for a small Nerves app. Presumably the Linux kernel's gzip decompression is faster than Erlang's, or removing the Erlang gunzip calls simply added up.

When it's possible to compress all BEAM files together (for example, when making tar files), not compressing BEAM files also helps, because the BEAM files contain many similar strings that are now visible to the archive compressor. It's also possible to use better compression methods than gzip.

A similar use case is the way delta firmware updates are handled with Nerves. Delta firmware updates send down only the difference between the firmware image on a device and the firmware image you want. These deltas are generated across the full image and should benefit from being able to work off uncompressed .beam files.
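The opt-in described above might look like this in a project's `mix.exs` release configuration (a sketch only; the exact placement of `compress:` under the `:strip_beams` release option is assumed from the description):

```elixir
# mix.exs — sketch. Passing compress: true opts into per-file gzip of
# stripped BEAM files; leaving it off lets an archive compressor
# (SquashFS, tar + zstd, ...) see raw BEAM bytes and compress them
# together, as discussed above.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    releases: [
      my_app: [
        strip_beams: [compress: true]
      ]
    ]
  ]
end
```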
Co-authored-by: antedeguemon <antedeguemon@users.noreply.github.com>
…und (elixir-lang#11937) Previously this would be silently ignored
Co-authored-by: José Valim <jose.valim@dashbit.co>
…ng#11948) first_in_iso_days and last_in_iso_days are both integers. This commit updates the Date.Range typespec to reflect that.
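Since the commit pins those fields to `integer`, a quick check of the struct (using Elixir's real `Date.range/2`) illustrates the shape:

```elixir
# Date.range/2 builds a %Date.Range{}; the fields named in the commit
# hold the range endpoints as integer ISO days.
range = Date.range(~D[2000-01-01], ~D[2000-01-10])

true = is_integer(range.first_in_iso_days)
true = is_integer(range.last_in_iso_days)

# The span in days falls out of the difference.
IO.inspect(range.last_in_iso_days - range.first_in_iso_days)
# => 9
```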
… correct match on line in file
```diff
@@ -103,7 +104,9 @@ defmodule ExUnit.Runner do
      # Slots are available, start with async modules
      modules = ExUnit.Server.take_async_modules(available) ->
        running = spawn_modules(config, modules, running)
        tests_per_file = tests_per_file(ExUnit.Server.get_all_modules())
```
Unfortunately this is problematic because modules are loaded dynamically. So there is a chance that I have two modules in the same file, the first one is loaded, the second one is not yet loaded, so `get_all_modules` won't return the second one and we still run more tests than we should. :(
Unfortunately I can't come up with any other ideas either. The best option I can think of is to look into the test manifest and transform `line: ...` into `line: ..., max_line: ...`. But even then it is a bit tricky.
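A rough sketch of the transformation being discussed, assuming filters are plain keyword lists; `add_max_line/2` and the idea of a manifest lookup supplying the last line are hypothetical, not ExUnit APIs:

```elixir
# Hypothetical helper: given a filter like [line: 14] and the last
# line of the matching test (as a manifest lookup might provide),
# widen it to [line: 14, max_line: 20] so only tests whose line
# range covers the filter line would run.
defmodule FilterSketch do
  def add_max_line(filters, last_line) do
    case Keyword.fetch(filters, :line) do
      {:ok, _line} -> filters ++ [max_line: last_line]
      :error -> filters
    end
  end
end

IO.inspect(FilterSketch.add_max_line([line: 14], 20))
# => [line: 14, max_line: 20]
```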
I get this; I just wasn't sure if maybe all test cases from one file are loaded at once. That would solve this, but it would still be pretty ugly because it would be very implicit.

Another thought I had (which might be what you're saying) is to do it independently from the current `line` tag (so keep the `line` tag behavior as it is), and to introduce a pair of `min_line` and `max_line` tags, and execute all the tests in the line range if both are passed.

So something similar to the first attempt at solving this, but independent from the current behavior. In mix, this could be passed like `mix test test_file.exs:10-30`.
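A quick sketch of how such a `file:10-30` argument might be parsed; the module, function name, and return shape are invented for illustration and are not how `mix test` actually parses its arguments:

```elixir
# Hypothetical parser for "path:10-30" style arguments: returns the
# path plus min_line/max_line filters when a range is given, and the
# bare path with no filters otherwise.
defmodule RangeArg do
  def parse(arg) do
    case Regex.run(~r/^(.*):(\d+)-(\d+)$/, arg) do
      [_, path, min, max] ->
        {path, [min_line: String.to_integer(min), max_line: String.to_integer(max)]}

      nil ->
        {arg, []}
    end
  end
end

IO.inspect(RangeArg.parse("test_file.exs:10-30"))
# => {"test_file.exs", [min_line: 10, max_line: 30]}
```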
> So something similar to the first attempt at solving this, but independent from the current behavior and in mix, this could be passed, like mix test test_file.exs:10-30

Oh, I like this approach! It may give additional features too.
Hi @studzien! I just read the blog post (awesome job!) and I am wondering if a better solution to this problem is for us to introduce:

(where …)

This means the original root issue is no longer that relevant to you, and that you no longer need one module per test but one per file, and that should compile much faster too. Would you like to work on such a PR?
Yup, sounds good! Sure!
The only thing I am not sure about is what to do with the …
@josevalim I've been looking into a possible implementation for this. I see two paths in the current …

Which one would be preferred? (Or an alternative approach?)
I like suggestion 2. Maybe we could push …
I made some progress on this, but feel a little blocked in the implementation currently. Code here: main...drtheuns:elixir:main

I've set it up as follows: When …

However, I've run into the following problem: the async_loop may finish processing the async modules faster than the module process can push the tests into ExUnit.Server. As a result, the async_loop will block until the async modules are finished (the …).

I've thought about introducing an …

Another issue I've thought about is with max_cases. If the …
What if we have been approaching this issue the wrong way? Maybe, instead of swapping it per module, we should swap the whole suite to run tests async via a flag? This way we don't need to consider the interplay of those two options, and it should simplify the implementation. In both cases, the concurrency is controlled with …

WDYT?
Do you mean something like?

```elixir
ExUnit.configure(async_mode: :per_test | :per_module)
```

which would then determine how the runner will treat … In the … For the …

Is this what you had in mind?
Yes, that's what I had in mind. :)
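For concreteness, the flag discussed above might be set in a test helper like this (a sketch only; `:async_mode` and its values are a proposal from this thread, not an existing ExUnit option):

```elixir
# test/test_helper.exs — hypothetical configuration; :async_mode does
# not exist in ExUnit today.
ExUnit.start()

# :per_module — current behavior: async modules run concurrently,
#   while tests within a module run serially.
# :per_test — proposed: individual tests across the suite run
#   concurrently, still bounded by :max_cases.
ExUnit.configure(async_mode: :per_test)
```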
Closing this one for now, due to lack of activity. Thank you.
Intro
This PR is the continuation of #11863.
This PR changes the behavior of ExUnit's filter.
It makes sure that only the test referenced in the filter is executed in case more than one test module is defined in one test file (see the example below).
Defining multiple test modules in one file is handy when parallelizing tests aggressively, such as defining each test as a separate async: true test module.
Example
Given the following test module:
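The snippet referenced here was not captured; a representative file with two async modules (module and test names invented for illustration, matching the shape the description implies) could be:

```elixir
# test/multiple_test_modules_in_file.exs — illustrative only: two
# async: true test modules in a single file, with the second test
# sitting around line 14, the line targeted by the filter below.
defmodule FirstTest do
  use ExUnit.Case, async: true

  test "first module's test" do
    assert 1 + 1 == 2
  end
end

defmodule SecondTest do
  use ExUnit.Case, async: true

  # With `mix test test/multiple_test_modules_in_file.exs:14`, only
  # this test should run once the filter checks the full line range.
  test "second module's test" do
    assert 2 + 2 == 4
  end
end
```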
Currently, in `main`, when we run the test with the line filter `mix test test/multiple_test_modules_in_file.exs:14`, both tests will be executed. The tests in a file are filtered on a module-by-module basis: the first test from every module that starts before line 14 is executed.

The changes proposed in this PR consider both the first and last line of every test and make sure that a test is executed only if the filter line falls within that range.
Considerations