Unit test "Divide to rule" ? #125

Closed

yvaucher opened this issue Nov 5, 2014 · 20 comments

Comments

@yvaucher
Member

yvaucher commented Nov 5, 2014

I'm opening this issue because the discussion is a bit split across several places.

The question is: do we want to enable UNIT_TEST=1 (the parameter might be renamed; that is another issue) in all OCA repos where tests conflict?

The discussion was started by @pedrobaeza in OCA/purchase-workflow#51 (comment).

And it continues in #124.

It impacts the following PRs:

@yvaucher
Member Author

yvaucher commented Nov 5, 2014

I agree with @gurneyalex that configuring those INCLUDE and EXCLUDE lists is a pain, and as far as I know there is no way in simple YAML tests to apply the kind of workaround we can use in Python tests, e.g. checking at object creation whether a required field must be set because we know it is added by another module.

I'm in favor of activating separate per-module tests when we have multiple conflicts.

@gurneyalex
Member

Following up on @dreispt's comment #124 (comment) here:

I disagree with the "incompatible with the rest of the repo" bit. These modules are not incompatible (and as a matter of fact we are deploying most of them together for a customer these days). It's just that their tests are designed to work when the module is installed by itself, not alongside its neighbors. This is not a unique case in the Odoo ecosystem.

Try the following (sketched in the snippet below the list):

  • set up a plain Odoo instance
  • install sale_stock
  • run the tests for sale

You'll get a failure, because the tests of sale cannot pass once sale_stock is installed: among other things, sale_stock changes the workflow of sale orders.

Is this a reason not to have both sale and sale_stock in the same repository? Certainly not.
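A minimal sketch of that reproduction, assuming an OpenERP/Odoo 7.0 checkout with its openerp-server launcher on the current path; the database name is made up:

import subprocess

DB = "test_sale_stock"  # hypothetical throwaway database

# Install sale_stock (which pulls in sale as a dependency).
subprocess.check_call(["./openerp-server", "-d", DB,
                       "-i", "sale_stock", "--stop-after-init"])

# Re-run sale's tests on the same database. This is expected to fail:
# sale's tests assume the stock-less sale order workflow, which
# sale_stock has modified.
subprocess.call(["./openerp-server", "-d", DB, "-u", "sale",
                 "--test-enable", "--stop-after-init"])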

Maintaining a list of exclude/include is not worth it in my opinion:

  • you could well have tests which pass only if an unmentioned dependency is installed. E.g. module foo depends only on sale, but its tests expect the sale-order workflow installed by sale_stock, which happens to be a dependency of module bar in the same repo: Travis is green, but installing foo alone will break. Testing each module individually would give a red Travis.
  • if you start excluding some modules and including them separately, you will end up with one configuration which (hopefully) makes the tests green. Is it the only supported configuration? Likely not. So we add a burden on developers for no clear result (except maybe some speed gain when running the Travis tests).

You get little extra benefit at the cost of increased complexity, and sometimes tests fail once two PRs have been merged even though Travis was green on each of them.

So my vote is not to stop people from using UNIT_TEST=1 in the Travis setup (or whatever we choose to call the option).

@gurneyalex
Member

Apologies for the long message.

@gurneyalex
Member

@pedrobaeza your opinion is welcome

@gurneyalex
Member

Another difference I was not aware of: with UNIT_TEST you get an early failure, so in the case of OCA/purchase-workflow#51 you get a red Travis as soon as one module's tests fail (which is an issue, because we have one module with failing tests in the 7.0 branch).

@dreispt
Sponsor Member

dreispt commented Nov 5, 2014

My comments:

I disagree with the "incompatible with the rest of the repo" bit. These modules are not incompatible

I see: it's the tests that are not designed to run together, and it's not worth the effort to work around that. That's OK.

Maintaining a list of exclude/include is not worth it in my opinion

I don't disagree with the supporting arguments for this. But I don't see why we couldn't, alongside the single-module tests, keep doing "integrated" test runs for all the other modules that can be tested jointly.
That means keeping an "exclude" list, but that's it.

So my vote is to not stop people from using UNIT_TEST=1 in the travis setup

+100

Another difference I was not aware of is that with UNIT_TEST, you get an early failure,

Yes, this seemed the sensible thing to do, but given this use case it's something that could perfectly well be changed. Let's work on that.

As an example, focusing specifically on OCA/purchase-workflow#51, following my comments the config for purchase-workflow could be:

env:
  - VERSION="7.0" ODOO_REPO="odoo/odoo" EXCLUDE="framework_agreement,purchase_landed_costs"
  - VERSION="7.0" ODOO_REPO="OCA/OCB" EXCLUDE="framework_agreement,purchase_landed_costs"
  - VERSION="7.0" ODOO_REPO="odoo/odoo" UNIT_TEST="1"
  - VERSION="7.0" ODOO_REPO="OCA/OCB" UNIT_TEST="1"

In conclusion:

  • In general, we would do "integrated" tests excluding a few modules, and in parallel individual tests for all of them.
  • We also need to change UNIT_TEST asap so that it doesn't exit on the first test failure.

@dreispt
Sponsor Member

dreispt commented Nov 5, 2014

We also need to change UNIT_TEST asap so that it doesn't exit on the first test failure

These are the lines to focus on when implementing this: https://github.com/OCA/maintainer-quality-tools/blob/master/travis/test_server#L193-L195
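A minimal sketch of the change being discussed, assuming a hypothetical run_module_tests helper (the real logic lives in the test_server lines linked above): record failures and keep going, exiting non-zero only at the end.

import sys

def run_all(modules, run_module_tests):
    # run_module_tests(module) is a hypothetical callback returning the
    # exit code of that module's test run; the real script shells out
    # to the Odoo server.
    failed = []
    for module in modules:
        if run_module_tests(module) != 0:
            failed.append(module)  # remember the failure, keep going
    if failed:
        print("Tests failed for: %s" % ", ".join(failed))
    return 1 if failed else 0

# sys.exit(run_all(addons_to_test, run_module_tests))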

@pedrobaeza
Member

@gurneyalex I see your points. The ideal design would be for no module to make previously passing tests fail, but sometimes that is not possible because there is a big change in behaviour. As @dreispt said, though, there is no need to choose one way or the other: we can test together, without the UNIT_TEST option, all the modules that can be tested together, and also run UNIT_TEST against each base (odoo and OCB) for the rest. So, the Travis lines would be:

env:
  - VERSION="x.x" ODOO_REPO="odoo/odoo" EXCLUDE="xxx"
  - VERSION="x.x" ODOO_REPO="OCA/OCB" EXCLUDE="xxx"
  - VERSION="x.x" ODOO_REPO="odoo/odoo" UNIT_TEST=1
  - VERSION="x.x" ODOO_REPO="OCA/OCB" UNIT_TEST=1

Do you agree on this?

@gurneyalex
Member

@dreispt I'm handling this part (changing UNIT_TEST asap...); see #129

@gurneyalex gurneyalex changed the title Unit test "Divid to rule" ? Unit test "Divide to rule" ? Nov 6, 2014
@gurneyalex
Member

@pedrobaeza I just don't see the point... Could you provide me with a scenario where the setup you propose would be legitimately red (i.e. point to a bug in a module) while UNIT_TEST=1 would be green?

@pedrobaeza
Member

Yeah, for example:

  • You have one module that removes an element of an inherited view with position="replace".
  • You have another module that uses that element as the anchor for another inherited view.
    Tested separately, both will be green; tested together, the second module will fail (it depends on the resolution order, but the build can be red).
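A minimal, self-contained sketch of that mechanism (plain lxml, not actual Odoo code; the view and field names are made up): once the first extension removes the element, the second extension's anchor no longer matches.

from lxml import etree

# Hypothetical base view containing a "notes" field.
base = etree.fromstring("<form><field name='notes'/></form>")

# Module A's inherited view: position="replace" with no body removes the node.
node = base.find(".//field[@name='notes']")
node.getparent().remove(node)

# Module B's inherited view anchors on the same field: the anchor is gone,
# so Odoo's view inheritance would raise an error at install time.
anchor = base.find(".//field[@name='notes']")
assert anchor is None  # both modules installed together -> red build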

@gurneyalex
Member

OK... but I'm not happy about maintaining a potentially huge list of environments with all the valid combinations of excludes.

@pedrobaeza
Member

No, you only need 2 (well, 4 counting OCB + odoo):

  • One for all the modules that can be installed together, excluding the conflicting ones (be they 1 or a hundred).
  • Another with UNIT_TEST.

This way, we can at least test the non-excluded modules together.

@dreispt
Sponsor Member

dreispt commented Nov 6, 2014

@gurneyalex pedro is right, you just need to keep a single EXCLUDE list. No complex combinations needed.

@gurneyalex
Member

I may be missing something here. For me, conflicts are generated by pairs of modules, so with 6 modules in my repository I have 6*5/2 = 15 possible conflicting pairs.

If, out of the 6 modules of the hypothetical repo, I have the following install-time conflicts:

  • m1 and m2
  • m2 and m3
  • m4 and m5

what do you suggest I exclude?

@pedrobaeza
Member

Well, in the scenario you describe, you'd have to exclude all 6 modules, but the intention of this check is precisely to discover these incompatibilities, try to solve them, and only if that fails, put the modules on the exclude list. For example, the issue I commented on is solvable by using position="attributes" to make the element invisible instead of using "replace" (I made a mistake about this in my original comment); see the sketch below. That way, the second module is not going to fail.

In summary, this will help ensure good maintenance practices when developing modules.
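Continuing the earlier lxml sketch (same hypothetical view and field names, not actual Odoo code), hiding the element instead of removing it keeps later extensions working:

from lxml import etree

base = etree.fromstring("<form><field name='notes'/></form>")

# Module A, fixed: the position="attributes" equivalent sets invisible="1"
# instead of deleting the node.
node = base.find(".//field[@name='notes']")
node.set("invisible", "1")

# Module B's anchor still resolves, so both modules install cleanly.
assert base.find(".//field[@name='notes']") is not None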

@dreispt
Sponsor Member

dreispt commented Nov 6, 2014

That example is far from being the general case.
In many projects all the modules can be tested together, and in most projects that is doable for the majority of the modules in the repo.

For that extreme case, you would probably be better off having single-module tests only.
But that's an exceptional solution for an exceptional case.

@gurneyalex
Member

The UNIT_TEST=1 builds take a lot of time. We are reverting the merge in vertical-ngo, because we found this to be inconvenient in the long run. I've withdrawn the PR in the projects where it had not been merged already (and commented on the PR in the projects which did merge it).

@gurneyalex
Member

So I'm closing this: we don't want to generalize UNIT_TEST=1.

OTOH, I'd gladly see a script to generate parallel unit tests, but this is another story.

@dreispt
Sponsor Member

dreispt commented Nov 19, 2014

@gurneyalex About "unit test" taking lots of time: could that be an issue specific to the vertical-ngo tests?

I checked project-service and the interval between module tests is below one minute.
On vertical-ngo it is above 2 minutes.
And at the end of each run I can see the message Initiating shutdown (...) Hit CTRL-C again or send a second signal to force the shutdown. I get the impression that something in the tests is delaying the teardown, and this leads to long test runs.

yaniaular referenced this issue in vauxoo-dev/maintainer-quality-tools May 16, 2016