Unit test "Divide to rule" ? #125
Comments
I agree with @gurneyalex that it is a pain to configure those INCLUDE and EXCLUDE lists, and as far as I know there is no way to do in simple YAML tests the kind of workaround we can do in Python tests, such as checking when creating objects whether a required field must be set because we know it is added by another module. I'm in favor of activating separated tests when we have multiple conflicts.
following up on @dreispt comment #124 (comment): here I disagree with the "incompatible with the rest of the repo" bit. These modules are not incompatible (as a matter of fact, we are deploying most of them together for a customer these days). It is just that their tests are designed to work when the module is installed by itself and not alongside its neighbors. This is not a unique case in the Odoo ecosystem. Try the following:
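The snippet showing the experiment did not survive in this copy of the thread; a minimal sketch of what it likely looked like, assuming a plain OpenERP 7.0 checkout (the database name is hypothetical; `-d`, `-i`, `--test-enable` and `--stop-after-init` are standard server flags):

```shell
# Install sale alone with tests enabled: the tests pass.
createdb test_sale
./openerp-server -d test_sale -i sale --test-enable --stop-after-init

# Now install sale_stock on top and run the tests again:
# the tests of sale fail, because sale_stock changes the sale order workflow.
./openerp-server -d test_sale -i sale_stock --test-enable --stop-after-init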
You'll get a failure, because the tests of sale cannot pass once sale_stock is installed, since sale_stock, among other things, changes the workflow of sale orders. Is this a reason not to have both modules? Maintaining a list of exclude/include is not worth it in my opinion:
You get little extra benefit at the cost of increased complexity, and sometimes tests fail once two PRs have been merged even though Travis was green on both. So my vote is not to stop people from using UNIT_TEST=1 in the Travis setup (or whatever we choose to call the option).
Apologies for the long message.
@pedrobaeza, your opinion is welcome.
Another difference I was not aware of: with UNIT_TEST, you get an early failure, so in the case of OCA/purchase-workflow#51 you get an early red Travis as soon as one module's tests fail (which is an issue because we have one module with failing tests in the 7.0 branch).
My comments:
I see: it's the tests that are not designed to run together, and it's not worth the effort to work around that. That's OK.
I don't disagree with the supporting arguments for this. But I don't see why we can't, alongside the single-module tests, keep doing "integrated" test runs for all the modules that can be tested jointly.
+100
Yes, this seemed the sensible thing to do, but given this use case it's something that could perfectly well be changed. Let's work on that. As an example, focusing specifically on OCA/purchase-workflow#51, with my comments the config for purchase-workflow could be:
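The concrete config proposed here was not captured in this copy of the thread; a hypothetical sketch of what such a `.travis.yml` matrix could look like (VERSION, ODOO_REPO, UNIT_TEST and EXCLUDE are real maintainer-quality-tools options, but the module names are made up):

```yaml
env:
  # joint run: everything except the known-conflicting modules
  - VERSION="7.0" ODOO_REPO="odoo/odoo" EXCLUDE="purchase_foo,purchase_bar"
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   EXCLUDE="purchase_foo,purchase_bar"
  # isolated run: every module installed and tested on its own
  - VERSION="7.0" ODOO_REPO="odoo/odoo" UNIT_TEST="1"
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   UNIT_TEST="1"
```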
In conclusion:
These are the lines to focus on when implementing this: https://github.com/OCA/maintainer-quality-tools/blob/master/travis/test_server#L193-L195
@gurneyalex I see your points. The ideal design would be not to make any previous test fail, but sometimes this is not possible because there is a big change in the behaviour. Still, as @dreispt said, it doesn't need to be one way or the other: we can test together, without the UNIT_TEST option, all the modules that can be tested together, and also perform UNIT_TEST runs for each possibility (odoo and OCB) for the rest. So the Travis lines would be:
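The actual lines are missing in this copy of the thread; a hypothetical sketch under maintainer-quality-tools conventions (EXCLUDE, INCLUDE, UNIT_TEST, VERSION and ODOO_REPO are real options there; the module names are invented):

```yaml
env:
  # joint run of all mutually compatible modules
  - VERSION="7.0" ODOO_REPO="odoo/odoo" EXCLUDE="conflicting_module_a,conflicting_module_b"
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   EXCLUDE="conflicting_module_a,conflicting_module_b"
  # separate per-module runs for the conflicting ones
  - VERSION="7.0" ODOO_REPO="odoo/odoo" UNIT_TEST="1" INCLUDE="conflicting_module_a,conflicting_module_b"
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   UNIT_TEST="1" INCLUDE="conflicting_module_a,conflicting_module_b"
```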
Do you agree on this?
@pedrobaeza I just don't see the point... Could you provide me with a scenario where the setup you propose would be legitimately red (i.e., point to a bug in a module) while UNIT_TEST=1 would be green?
Yeah, for example:
OK... but I'm not happy with maintaining a potentially huge list of environments with all valid combinations of excludes.
No, you only need 2 (well, 4 counting OCB+odoo):
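In other words (a hypothetical sketch; the module names are invented, the variables are standard maintainer-quality-tools options):

```yaml
env:
  - VERSION="7.0" ODOO_REPO="odoo/odoo" EXCLUDE="module_a,module_b"  # joint run
  - VERSION="7.0" ODOO_REPO="odoo/odoo" UNIT_TEST="1"                # one-by-one run
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   EXCLUDE="module_a,module_b"
  - VERSION="7.0" ODOO_REPO="OCA/OCB"   UNIT_TEST="1"
```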
This way, we can at least test together the rest of the modules not excluded.
@gurneyalex Pedro is right, you just need to keep a single EXCLUDE list. No complex combinations needed.
I may be missing something here. For me, conflicts are generated by pairs of modules. If I have 6 modules in my repository, then I have 6*5 = 30 possible ordered conflicts. If, out of the 6 modules of the hypothetical repo, I have the following install-time conflicts:
what do you suggest I exclude? |
Well, in the scenario you describe, you'd have to exclude the 6 modules, but the intention of this check is precisely to discover these incompatibilities, try to solve them, and only if that's not possible, put them on exclude. For example, the issue I commented on is solvable by using position="attributes" and making the field invisible instead of using "replace" (in my original comment, I made a mistake about this). That way, the second module is not going to fail. In summary, this will help ensure good maintenance practices when developing modules.
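A hedged illustration of the fix described above, as an Odoo inherited view (the record id, model and field names are hypothetical): instead of removing a node with position="replace", which breaks any other module that also extends the same node, keep the node and only change its attributes.

```xml
<!-- Illustrative inherited view: hide a field instead of replacing it,
     so other modules extending the same node keep working. -->
<record id="view_order_form_inherit" model="ir.ui.view">
    <field name="name">sale.order.form.inherit</field>
    <field name="model">sale.order</field>
    <field name="inherit_id" ref="sale.view_order_form"/>
    <field name="arch" type="xml">
        <xpath expr="//field[@name='origin']" position="attributes">
            <attribute name="invisible">1</attribute>
        </xpath>
    </field>
</record>
```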
That example is far from being the general case. For that extreme case, you would probably be better off having single tests only.
The UNIT_TEST=1 builds take a lot of time. We are reverting the merge in vertical-ngo, because we found this to be inconvenient in the long run. I've withdrawn the PR in the projects where it had not been merged already (and commented on the PR in the projects which did merge it).
So I'm closing this: we don't want to generalize UNIT_TEST=1. OTOH, I'd gladly see a script to generate parallel unit tests, but this is another story. |
@gurneyalex About "unit test" taking a lot of time: could that be an issue specific to the vertical-ngo tests? I checked project-service and the interval between module tests is below one minute.
I open this issue as the discussion is a bit split across several places.
The question is: do we want to enable UNIT_TEST=1 (the param might be renamed; that is another issue) in all OCA repos where tests are conflicting?
The discussion was started in OCA/purchase-workflow#51 (comment) by @pedrobaeza
And here it continues:
#124
It impacts the following PRs: