REP-001: Various rez-tests enhancements #665
This all sounds good to me, and in fact I'll add some more rez-test-
related feature requests.
RE default target list. Yes I agree with this. However, it would also serve
a further purpose. Right now, there's no way to configure rez to run the
unit tests before performing a package release, but that's something that's
been asked for multiple times. This default target list would also be used
for the unit tests that get run automatically, pre-release. I think this
should be a separate package attribute `default_tests (list of str)`.
RE multiple tests on cli - agreed, however this feature would be disallowed
in combo with custom test args.
RE custom test args - agreed.
To that end I see four action items here:
* Add default_tests attrib
* Add pre-release hook to run default tests and abort release on any
failures
* Add ability to run multiple tests with one cli call
* Add ability to pass args to tests
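A minimal sketch of how the first item might sit in a package definition (hypothetical; `default_tests` is the attribute name suggested above, not an existing rez field):

```python
# package.py (sketch) -- names here follow the proposal above; this is
# not an existing rez API.
name = "mypkg"
version = "1.0.0"

tests = {
    "unit": {
        "command": "pytest {root}/tests/unit_tests",
        "requires": ["pytest"],
    },
    "integration": {
        # slow; should not run by default
        "command": "python {root}/tests/run_integration.py",
    },
}

# Tests to run when rez-test is given no test names, and pre-release.
default_tests = ["unit"]
```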
How does that sound?
A
On Mon, Jul 15, 2019 at 6:17 AM Jean-Christophe Morin wrote:
Hey everybody, here are some ideas regarding rez-test. At RodeoFX we are
heavily relying on it and we found that it's lacking some features. There
are multiple proposals in this issue because I didn't want to pollute the
issue tracker with too much noise. I basically want to get some input and
feedback on my proposals and see if there is any traction. We can split
into different issues later if necessary.
Default test/target
Right now, when no test name is given to rez-test, it will run all tests.
I propose to add a new built-in key in the tests dict inside a package
definition. The key could be called default or default_test and it could
contain a string (the name of a test, or all) or a list (a list of tests).
The reason behind this is that sometimes you might have super slow tests
(e.g. complex integration tests) that you don't want to run by default, but
you still want to run the other tests without having to run them one by one
or specify them in the command line.
I have to note that this is a little dangerous to add, since someone could
already be using the name we will choose. The name of the key would have to
be carefully chosen. Or we might even need to introduce a new package
attribute to make sure we don't clash with what users of rez already have.
Specify multiple tests to run in the command line
I basically feel the need to be able to run multiple tests in one shot
without running them all.
For example, we have a package that has 3 tests: test1, test2, test3. I
want to be able to run rez-test package test1 test2 or rez-test package
all
Pass custom arguments when running a test
My colleagues and I always have to modify the commands of the tests
while developing a package. There are multiple reasons why we do so, but
mainly we like to increase the verbosity of our tests only when developing
and keep the verbosity lower in our CI tests. The other common reason is to
choose a custom target inside a particular test so we can iterate faster
and avoid running unnecessary tests.
For example, let's say we have:
tests = {
    'unit': {
        'command': 'pytest {root}/tests/unit_tests',
        'requires': ['pytest'],
    },
}
where tests/unit_tests contains multiple test files and each file
contains tests with markers (pytest jargon. A marker is basically a way to
group the tests so that you can selectively choose which one to run in the
command line). It'd be great if we could do rez-test package unit -- -v -m
myMarker that would run pytest {root}/tests/unit_tests -v -m myMarker.
The immediate benefit is that it's faster and more intuitive. The second
is that it can avoid mistakenly committing the modifications to the tests
variable.
Doing this could be challenging with the two previous proposals. Because
each test uses a different tool, we can't easily know whether the
additional arguments will work or not. We could simply say that we don't
care, since the developer probably knows what they are doing. Or we just
raise if we are running multiple tests and we receive custom test arguments.
Sounds good to me. Do you have any preference on the order? I would personally go with:
- Add default_tests attrib
- Add ability to pass args to tests
- Add ability to run multiple tests with one cli call
- Add pre-release hook to run default tests and abort release on any failures
in this order. And in how many PRs would you like them? I think I would go with 2, one for all the rez-test improvements and another for the hook?
What do you think about trying this out as a REP workflow? (ie Rez
Enhancement Proposal).
We could:
* Create the associated issue, eg REP101;
* For the sub-tasks outlined above, create issues such as "REP101 -
default_tests"
* Once all tasks are completed, the REP ticket would be baked off into the
Wiki, and the ticket closed.
I think they should all be separate PRs - smaller PRs are easier to track
and faster to merge.
Thoughts?
A
Yep, that's ok for me. Should we maybe start at 001 instead of 101? And I guess I can simply rename this issue and update the description? We can now keep track of changes to the description in GitHub (and same for comments/replies). As for the PRs, if you prefer to have them separated, I'm ok with that.
001 is good.
Cheers
A
Possibly related feature request to add to the discussion: it would be good to add the ability to specify that a given rez-test always runs after rez-build. That doesn't do much good when rez-releasing, but it gives people a way to run tests against compiled code reliably and quickly, rather than needing to string together multiple commands. Perhaps for a rez-release scenario, it would be possible for a test failure to cause a release hook not to run, and perhaps attempt to scrub the built package version from disk (this can't be guaranteed to work in all cases, but probably covers all decent package possibilities where someone isn't being hacky with sudo).
Just updated the description and the title. I rephrased some sentences to make them more generic and removed mentions of RodeoFX. @maxnbk regarding running the tests at build time, how would you like to tell rez to do this?
I'm not too sure what you mean here. Are you asking whether it's going to deploy the code even if the hook makes the release fail, or something?
I mean that when I run rez-release: if the result of test_built_package (which runs the default tests as spec'd above by you) returns a success after release, then the release hook runs as normal; but in the case of a test failure (though I'm not entirely sure whether a single variant failure should result in a whole-package failure), the rez-released package path is erased. Basically, a way to help prevent bad releases from going out the door into the wide world. Let's say that could be controlled by a
I think the pre-release hook would install the package to a temp location first, no? rez-test would use that temporary install; thus if anything fails, nothing's been released yet. We should be able to avoid building twice: instead it'd be one build followed by two installs, one temp, one real (if tests pass).
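That one-build/two-install flow could be sketched roughly like this (illustrative only, not rez's actual implementation; the callables are stand-ins for the build and install steps):

```python
# Sketch: build once, install to a temp repo, run the default tests there,
# and only promote the artifacts to the real repo if the tests pass.
import shutil
import tempfile

def release_package(build, install, run_default_tests, repo="/packages"):
    tmp_repo = tempfile.mkdtemp(prefix="rez-prerelease-")
    try:
        artifacts = build()                     # one build...
        install(artifacts, tmp_repo)            # ...first installed to a temp repo
        if not run_default_tests(tmp_repo):     # test against the temp install
            raise RuntimeError("tests failed; aborting release")
        install(artifacts, repo)                # second install: the real release
    finally:
        shutil.rmtree(tmp_repo, ignore_errors=True)  # clean up the temp install
```

The repo argument just makes the real destination injectable; the point is the ordering: temp install, test, then real install.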
RE build-time tests: I think this is actually a more general case of post-build hooks. I don't recall the details just now, but I think there were some things to work out to make that happen. In theory though, I agree that hooks should also be able to be specified post-build, pre-release, and post-release.
Not always possible when dealing with compiled packages.
Yeah I guess it depends on the build as to whether you can cleanly install
the same build into different locations. Perhaps we'd need a new package
attrib to describe this - ie whether a full rebuild is necessary for any
given package install.
In any case, I can't see that we could actually do a release as part of a
pre-release hook(!) - even if it's cleaned up after, there's a window of
time where that package existed and may get picked up (unless we also
update how the `.building` marker files work, which is yet more complexity).
I don't think we can avoid having to do a temp install in order to test.
Let's look at breaking this into tickets and getting started post-SIGGRAPH.
@JeanChristopheMorinPerso colleague here. We just discussed that it'd be interesting to be able to rez-env into a test environment, so that during development running tests is faster, without the rez-env resolution overhead.
Yeah I've considered something similar. Basically it'd be two features:
1: "rez-test mypkg mytest --inplace". Ie, just run the test in the current
env, if the current env provides all the packages needed.
2: "rez-test mypkg mytest --interactive". Rez-env into the specified test
env.
I suppose (2) could be done with rez-env instead, but I'd worry that that
gets a bit confusing.
A
I'm updating the proposal shortly wrt slack chat. Specifically:
Note that a name change to something more generic (ie rez-command rather than rez-test) could make sense, but will be left as a separate exercise for another time.
Thanks for the additions and suggestions! Here are some of my thoughts:
I'm wondering if...
I'm wondering if we could make... So maybe a workflow like this:
or even
I'm thinking out loud here, of course. I've also added a little description in the shell prompt once inside the test env. I'm also implicitly implying that once you enter the env of a specific target, the... Last thing, with...
Note: I'll modify point 2 of the REP. We can already pass multiple tests into the...
I think this way of handling it (rather than via rez-env itself) is important, because each test can have a wildly different resolve due to the additional requires property a test can have, requiring someone to select a specific test to make the environment deterministic. That said, you might also consider some way of building a superset test env, which tries to add all the tests' package requirements together into a single resolve. It could fail, but it might be useful for a CI system to be able to run multiple tests at the same time if desired.
Not bad. My two cents: consider that rez-env's -i/--input flag can point to a context, and rez-context's -i/--interpret and a rez-test -i/--interactive would all have essentially common ground in terms of getting you a shell in the desired environment (though I guess --inplace and --interactive need to fight over -i for rez-test first). Makes for a nice synchronicity in my mind.
I just found issue #449, which listed some rez-test enhancements.
About the last point: I know a lot of developers who would be happy to see the addition of
Something I'm not seeing when running rez-test is any variables that make it into the test environ that I could use to parameterize the test from within, such as REZ_TEST_PACKAGE_ROOT (ideally, the path for the variant currently under test). Unless I've missed an obvious way of handling these as-is (I see how I could maybe handle two of the three variables from the test command, but not the third, and they make such a nice little bundle). Okay, sorry, I guess I'm an idiot. The source specifies that variable expansion happens in the command, so this works:
Still, I think this would be a good default to include without needing to engineer it.
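The expansion behaviour referred to above can be illustrated with plain Python string formatting (a sketch; the package path below is made up):

```python
# Rez-style test commands contain fields such as {root} that are expanded
# before the command runs; this mimics that substitution with str.format.
command = "pytest {root}/tests/unit_tests -v"
expanded = command.format(root="/packages/mypkg/1.0.0")
print(expanded)  # pytest /packages/mypkg/1.0.0/tests/unit_tests -v
```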
Hey all, I've been thinking about the "multiple build problem", as I need to tackle it shortly. "Multiple build problem" = how do we test a package (or a variant specifically), and abort the release if the test fails? The simplest way to do it is to install to a temp repo and test the package there; however, you then need a second install to the correct location (or a copy of the temp install, which is also problematic). I think I have a solution. During rez-release:
This actually solves two problems:
Thoughts?
I like the idea and I think this would work well. The only fear I have is that this can lead to orphaned files and folders from failed builds. Cleaning up needs to be a high priority.
I have this working, and failed builds are cleaned up. The PR requires #807, so I'm waiting on that before submitting. See https://github.com/nerdvegas/rez/tree/rep001-2-hooks
Point number 3 (passing custom arguments) has been added by @bhawkyard1 in #1523.
Proposal to improve rez-test.
* `--interactive` option
* `--inplace` option
Note: Some of these features clearly change rez-test into something a bit more general, so the name may have to change (to rez-task for eg). However, we will consider this as a separate issue.
1. Add "run_on" field for tests.
This would define when the test gets run. Possible values:
The param would be a list, so a test can run on multiple of these events.
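As a sketch, the field might read like this in a package definition (only the field itself is proposed above; the event names used here are illustrative guesses):

```python
# package.py (sketch) -- the proposed per-test run_on field, as a list so
# a test can run on multiple events. Event names are hypothetical.
name = "mypkg"
version = "1.0.0"

tests = {
    "unit": {
        "command": "pytest {root}/tests/unit_tests",
        "requires": ["pytest"],
        # run when no test is named, and also before a release
        "run_on": ["default", "pre_release"],
    },
    "integration": {
        "command": "python {root}/tests/integration.py",
        # only runs when explicitly named on the command line
        "run_on": ["explicit"],
    },
}
```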
For pre-install/release, what would happen is:
2. Run all tests defined by a package
I feel the need to be able to run all tests defined by a package, no matter what their `run_on` is set to.
That would introduce a "reserved" target name `all`, but I think we should still let anyone define their own `all` target, hence the quotes around the word "reserved" ;)
3. Pass custom arguments when running a test
It is common to have to modify a package's test commands while developing that package. There are multiple reasons to do so. For example, you might want to increase the verbosity of your tests only when developing, and keep the verbosity lower in your CI tests. The other common reason is to choose a custom target inside a particular test, so you can iterate faster and avoid running unnecessary tests.
For example, let's say we have:
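A sketch of the definition in question, mirroring the tests dict quoted earlier in the thread:

```python
# The same tests dict shown earlier: one "unit" target that runs pytest
# against the package's installed root.
tests = {
    'unit': {
        'command': 'pytest {root}/tests/unit_tests',
        'requires': ['pytest'],
    },
}
```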
where `tests/unit_tests` contains multiple test files, and each file contains tests with markers (pytest jargon: a marker is basically a way to group tests so that you can selectively choose which ones to run from the command line).
I propose to make `rez-test package unit -- -v -m myMarker` valid; it would run `pytest {root}/tests/unit_tests -v -m myMarker`.
The immediate benefit is that it's faster and more intuitive. The second is that it can avoid mistakenly committing modifications to the `tests` variable.
Note that passing custom arguments should only be possible if only one test is being run.
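The `--` convention above can be implemented by splitting argv at the first `--` marker; a minimal sketch (illustrative, not rez's actual code):

```python
# Separate a CLI's own arguments from pass-through test arguments at the
# first "--" marker.
def split_passthrough(argv):
    if "--" in argv:
        i = argv.index("--")
        return argv[:i], argv[i + 1:]
    return argv, []

own, extra = split_passthrough(["package", "unit", "--", "-v", "-m", "myMarker"])
print(own)    # ['package', 'unit']
print(extra)  # ['-v', '-m', 'myMarker']
```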
4. More execution options
Specifically:
* `rez-test --interactive` option. Puts you into an interactive shell for the nominated test. From here you could then use...
* `rez-test --inplace` option. Ie, run the tests in the current env, if it provides all the required packages. If not, give an error.
5. Variant iteration options
Many times, it makes sense to run a test on all variants of a package. Currently that doesn't happen (variants aren't iterated over at all), because there's currently no reliable way to resolve to an explicit variant in rez. However, until such time as we have the required feature to do this, we could get pretty close, and it's probably worth doing (the code needed to do this is minimal).
Some things to consider:
How to express this? Perhaps a new `variant` tests field. For eg, with `variant: True`, the test will be run on every variant. If False/not present, it will just run on one variant (matching current behaviour). If a dict containing "requires", it will run only on variants overlapping with the given request list; furthermore, if the requires list conflicts with a variant, that variant is skipped.
For example, given:
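A hypothetical package definition matching the description (the `variant` field and its values are the proposed ones, not an existing rez API):

```python
# package.py (sketch) -- illustrating the proposed per-test "variant" field.
name = "mypkg"
version = "1.0.0"

variants = [
    ["python-2.7"],
    ["python-3.7"],
]

tests = {
    "unit": {
        "command": "pytest {root}/tests",
        "variant": True,  # run this test on every variant
    },
    "unit_py3": {
        "command": "pytest {root}/tests",
        # run only on variants overlapping this request; a conflicting
        # variant is skipped
        "variant": {"requires": ["python-3"]},
    },
}
```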