
[FIX] date_range: Remove useless post_install flag for tests #32

Merged
merged 1 commit into from
Oct 17, 2018

Conversation

lmignon
Contributor

@lmignon lmignon commented Oct 12, 2018

No description provided.

@lmignon lmignon force-pushed the 12.0-data_range_no_post-install_tests-lmi branch from 0f27495 to ddee77a Compare October 12, 2018 13:50
Member

@pedrobaeza pedrobaeza left a comment

They are not useless, as discussed in the other place

@lmignon
Contributor Author

lmignon commented Oct 12, 2018

@pedrobaeza This PR started as WIP! I'm trying to reproduce your problem; maybe I can find a better way to solve it. I don't want to keep this hack, which could lead to other problems.

@pedrobaeza
Member

But that's the question: this is not a hack! It's the official way to get rid of these problems, and it doesn't cause any side effects (we have been using this in all the modules we make, for 3 versions now). Have you ever faced a problem with it? You can search across OCA repositories and you'll find a lot of tests with these variables (or decorators).

@lmignon
Contributor Author

lmignon commented Oct 12, 2018

@pedrobaeza If the default behaviour should be to run the tests at post install, then this should be the default in Odoo. However, this is not the case. The only use cases I know of are report tests or tests using controllers.

The logic you are testing in a module 'a' could be modified by a child module 'b'. Therefore, if the tests of module 'a' are run at post_install, they will fail, since the expected behaviour has been modified by module 'b'.

https://odoo-development.readthedocs.io/en/latest/qa/python/at_install-post_install.html

You can search across OCA repositories and you'll find a lot of tests with these variables (or decorators).

I'm not saying you should never use these decorators, but their use must remain limited, and perhaps there are other places in OCA where it is not justified.
If I see code that doesn't seem good to me, I won't write the same code in the same way just because a lot of people write it that way.
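For readers who haven't seen them: the variables and decorators discussed here are, in Odoo up to v11, `at_install`/`post_install` markers on test classes. Below is a minimal self-contained sketch of their observable effect (it only mimics `odoo.tests.common`, it is not the actual Odoo source, and the test class name is made up):

```python
# Minimal sketch of Odoo's pre-v12 at_install/post_install markers.
# The real decorators live in odoo.tests.common; this illustration only
# mimics their observable effect: setting attributes the test runner reads.

def at_install(flag):
    """Mark whether a test class runs right after its module installs."""
    def decorator(cls):
        cls.at_install = flag
        return cls
    return decorator

def post_install(flag):
    """Mark whether a test class runs after ALL modules are installed."""
    def decorator(cls):
        cls.post_install = flag
        return cls
    return decorator

@at_install(False)
@post_install(True)
class TestDateRange:
    """Would run only once every requested module is installed."""

print(TestDateRange.at_install, TestDateRange.post_install)  # → False True
```

The same effect could also be obtained by setting `at_install = False` and `post_install = True` directly as class attributes, which is the "variables" form mentioned in this thread.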

@pedrobaeza
Member

The logic you are testing in a module 'a' could be modified by a child module 'b'. Therefore, if the tests of module 'a' are run at post_install, they will fail since the expected behavior has been modified by module 'b'

This is the case I told you about: module b is then the incorrect one, as you should control whether you want to test module A's behavior or module B's behavior. You have the same problem if you are modifying core behavior; post_install tests don't prevent this. This is how it should be done to test exactly what you want and be totally resilient to any behavior modification on top:

https://github.com/OCA/bank-payment/blob/3480a7da8da68952b68834e974edd9c47c0a25fb/account_payment_transfer_reconcile_batch/models/payment_order.py#L28-L29

And you can check that in this case, tests are not post_install:

https://github.com/OCA/bank-payment/blob/8.0/account_payment_transfer_reconcile_batch/tests/test_account_payment_transfer_reconcile_batch.py

But I got the same problem on the top module (account_banking_payment_transfer) without putting this.

@lmignon
Contributor Author

lmignon commented Oct 12, 2018

This is the case I told you about: module b is then the incorrect one, as you should control whether you want to test module A's behavior or module B's behavior.

But without 'post_install', module A will be tested BEFORE the installation of module B, and the tests will pass for module A and also for module B.
The case is:

Module A
Module B depends A

odoo -i A,B --test-enable

With post_install=False the process will be:
install A
test A -> OK
install B
test B -> OK

With post_install=True the process will be:
install A
install B
test A -> NOK
test B -> OK
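The two orderings above can be sketched as a toy simulation in plain Python (illustrative only, not Odoo code): module B overrides a behaviour that module A's test asserts on, so A's test only passes when it runs before B is installed.

```python
# Toy simulation of the install/test orderings described above.
# Module B (which depends on A) overrides behaviour that A's test asserts on.

registry = {"greet": lambda: "A"}           # state shared by all modules

def install_a():
    registry["greet"] = lambda: "A"

def install_b():                            # B overrides A's behaviour
    registry["greet"] = lambda: "B"

def test_a():
    return registry["greet"]() == "A"       # A's expectation

def test_b():
    return registry["greet"]() == "B"       # B's expectation

# at_install: test each module right after it is installed
install_a(); a_at = test_a()                # passes
install_b(); b_at = test_b()                # passes

# post_install: install everything first, then run all tests
install_a(); install_b()
a_post = test_a()                           # fails: B changed the behaviour
b_post = test_b()                           # passes
print(a_at, b_at, a_post, b_post)           # → True True False True
```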

@sebastienbeau
Member

Hi, I hit the same issue in the Shopinvader repo (failing to create a company because some fields were mandatory). I did a hack (dropping the constraint), but it's really ugly and not a correct solution.

I am searching for a more elegant solution, because we are all concerned by this issue.
I do not think the correct solution is to run the tests after the installation, because those tests will be impacted by other modules, so we may just trade one problem for new ones and make new hacks to solve them.

After thinking twice, the main issue is that we run tests with a bunch of modules preinstalled, and this is what breaks the tests. Trying to solve this with hacks will just generate new issues and add complexity.
The best would be to test every module separately, but this is not good for global warming.
So for me the best solution is to exclude the modules that raise such exceptions and run them in a separate test run when an issue occurs on OCA.

What do you think?

@pedrobaeza
Member

But again, why do you think this is a hack? I will look for the issue where Odoo states that this is the way to avoid the constraint problem. Isolating the tests only increases Travis time and ignores problems in the module itself.

You won't have new problems with post_install. You can check on the example I put that the problem is the same without putting post_install.

@lmignon
Contributor Author

lmignon commented Oct 12, 2018

@sebastienbeau I fully agree with you. That's what we do on our internal projects. We split our tests into separate runs when some are not compatible with others, due to dependencies or to the unpredictability of the order in which tests at the same dependency level are run.
ex
Module A,
Module B depends Module A
Module C depends Module A

odoo -i A --stop-after-init
odoo -i B,C --stop-after-init --test-enable

The order in which the tests are performed may vary from run to run. If C fails after B, or B fails after C, we split the tests into separate runs:
odoo -i A --stop-after-init
odoo -i B --stop-after-init --test-enable
odoo -i A --stop-after-init
odoo -i C --stop-after-init --test-enable

@lmignon
Contributor Author

lmignon commented Oct 12, 2018

@pedrobaeza

You won't have new problems with post_install. You can check on the example I put that the problem is the same without putting post_install.

Yes, we're going to have other problems, as explained:

Module A
Module B depends A

odoo -i A,B --test-enable

With post_install=False the process will be:
install A
test A -> OK
install B
test B -> OK

With post_install=True the process will be:
install A
install B
test A -> NOK
test B -> OK

@lmignon
Contributor Author

lmignon commented Oct 13, 2018

@pedrobaeza @yajo I've created a test addon 'date_range_child' to illustrate the problem with post_install. This addon depends on date_range, and in it I disable a constraint defined on the model date.range.
As you can see in #34, if the tests are run at post install, the tests for the module date_range fail. This is not the case when the tests run in the normal test flow, as illustrated by #33.
We must therefore remove the post_install attribute, since it introduces unexpected behaviour.

@lmignon
Contributor Author

lmignon commented Oct 13, 2018

cc @sebastienbeau

@rousseldenis
Contributor

I agree with @lmignon. Instinctively, tests shouldn't use such an option except in very particular cases. If they need it, there is probably a problem in the code/tests implementation (which has to be solved before thinking about using the post_install option). This is a general remark, as I'm not aware of this use case.

@lmignon lmignon changed the title [WIP][FIX] date_range: Remove useless post_install flag for tests [FIX] date_range: Remove useless post_install flag for tests Oct 15, 2018
@yajo
Member

yajo commented Oct 16, 2018

#33 and #34 are very interesting, thanks!

The failure point in all of this is a tricky behaviour of Odoo:

  • required=True creates a SQL constraint.
  • at_install tests are executed in a narrowed environment where only the module's dependency chain is loaded, but that's only for Python code; the underlying database remains untouched.
  • The same would happen with any other SQL constraint.

I've been analyzing all possible solutions for this problem. Sadly, as you can see below, all of them have pros and cons, there's no silver bullet here:

Option 1: Install and test addons in separate databases.
  Pros: Corner-case proof.
  Cons: Slooow.
  Workarounds for cons: Parallelize tests per addon. Hard to do, and still slower than one job to test all.

Option 2: Install dependencies of wanted addons in a separate step.
  Pros: Should work for most cases.
  Cons: Guessing such an addon list is hard. Can still fail if you want 2 addons which are unaware of each other and one adds SQL constraints.
  Workarounds for cons: *1

Option 3: Install and test separately addons that are known not to conflict.
  Pros: Should work.
  Cons: Requires manual intervention and human knowledge.
  Workarounds for cons: *1

Option 4: Add @post_install to tests that would fail in integrated environments.
  Pros: Easy, short, works for most cases.
  Cons: A child addon can break a parent addon's tests.
  Workarounds for cons: *1

Option 5: Avoid creating data in tests; put it in demo files.
  Pros: Test data can be reused for human testing.
  Cons: Unrelated addons can get installed in unknown order, so SQL constraints can still fail. Data modifications in tests can still trigger SQL constraints too, although that is less common.
  Workarounds for cons: Patch Odoo to use a deterministic addon install order? Then be able to modify it downstream in integration environments; would require manual intervention. To enforce the demo data state, you can also use the odoo.tools.convert_file() method, which loads a data file in the mode and namespace you want, and you can even force noupdate=False.

Option 6: Be able to skip tests if an addon is installed.
  Pros: Knowledge shared in the OCA common codebase.
  Cons: Less coverage when that addon is installed. Harder to patch, since you have to inspect, one by one, the whole non-dependency chain.

Option 7: Have a decorator/context manager to disable constraints temporarily.
  Pros: A common fix for all problems.
  Cons: What if you're actually testing a constraint, and while it's being tested there's another constraint that you don't want to test but that breaks the test?
  Workarounds for cons: If, after all, the transaction fails, the test shouldn't notice, right?

*1: Disable addons that fail in such way, downstream in integration environments. Requires manual intervention.


🤔 As you can see, the only bullet-proof way of testing is the slowest one: 1 db per addon, separately. It's unlikely we'll converge there, so let's move on to the next ones that make the most sense.

I think the best one would be odoo/odoo#24833. It marks clearly that, if a given addon is installed, a given test (or suite) is skipped. Skipping tests lowers coverage, but most likely it's not a problem on integration environments (where usually only private code coverage counts) or OCA (where usually these conflicts don't happen, or we can enable or disable addon combinations separately). Integration environments where the blacklisted addon doesn't exist remain unaffected, and code is explicit.
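A rough sketch of what such skipping could look like with the stdlib unittest machinery (the `installed()` helper and the addon names are hypothetical stand-ins; in a real Odoo test you would query ir.module.module for the addon's state instead):

```python
import unittest

# Hypothetical stand-in for "is this addon installed in the database?".
# In real Odoo code you would search ir.module.module for state='installed'.
INSTALLED_ADDONS = {"date_range", "date_range_child"}

def installed(addon):
    return addon in INSTALLED_ADDONS

class TestDateRangeOverlap(unittest.TestCase):
    @unittest.skipIf(installed("date_range_child"),
                     "date_range_child relaxes the constraint under test")
    def test_overlap_constraint(self):
        self.assertTrue(True)   # placeholder for the real assertion

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDateRangeOverlap)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, len(result.skipped))  # → 1 1 (the test was skipped)
```

The point is that the skip is explicit and self-documenting: coverage drops only in environments where the known-incompatible addon is actually present.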

@lmignon
Contributor Author

lmignon commented Oct 16, 2018

@yajo
The normal process in OCA for a repository is:

  1. Install the dependencies of the addons to test.
     https://travis-ci.org/OCA/account-analytic/jobs/442037231#L602

  2. Run the tests for the addons to test.
     https://travis-ci.org/OCA/account-analytic/jobs/442037231#L1382

The process is by default the one described in the second row of your table.

Do you agree that post_install must be removed?

@yajo
Member

yajo commented Oct 16, 2018

The process is by default the one described in the second row of your table.

Exactly. I was just pointing out that it is not a perfect solution, nor easy to implement outside of MQT.

Do you agree that post_install must be removed?

As long as we have an alternative... Would you agree on using odoo/odoo#24833?

@lmignon
Contributor Author

lmignon commented Oct 16, 2018

@yajo If I understand correctly, your problem is that in your integration tests it is not possible for you to preinstall the dependencies of the modules to be tested. Right?

It is not healthy to add code to tests to try to solve problems that are not related to the addon under test. Sooner or later, this will introduce other problems. If in your case this allows you to avoid errors, other integrators could come with the opposite need, because in their case it breaks their tests. That's why we must remove this kind of hack.

For a long time now, we have been testing all our projects without using MQT. We use the same principle as MQT: (1) install the dependencies, (2) run the tests. In this context we have developed a set of utility commands (a project for our internal needs). One of these gives us the list of dependencies for a given list of addons. (https://github.com/acsone/acsoo#acsoo-addons)

@yajo
Member

yajo commented Oct 16, 2018

How could odoo/odoo#24833 break anything? 🤔

@lmignon
Contributor Author

lmignon commented Oct 16, 2018

@yajo It's not the right way to fix this kind of issue in OCA. We can't predict which external addons could break the tests, so we have no way to properly maintain such a list. These tests are not designed to be run as integration tests outside of the current repository.
The problem you encounter is specific to your own context. OCA is not the place to fix problems related to a specific context when that context can differ for other contributors. You must fix the way you run your tests.

@sebastienbeau
Member

On my side, I really do not like the concept of adapting a test depending on other modules that are not dependencies.

If we have "testing" incompatibilities because of the way we run the tests (testing all modules together), we should not solve them by adapting the Python tests.
If we do that, whatever solution we use (decorator, post_install, ...), we do it wrong, because we adapt the module's tests for an external reason that can change again and again.

For me, if we have a testing incompatibility, we should test the module separately (we can do it manually if we do not have a lot of OCA cases), and if we have a lot of cases we should always test it separately.
It's slower, yes, but it avoids creating useless additional code with hellish dependency issues.

I prefer to wait 5 minutes drinking a tea (or writing some code) instead of losing hours solving incompatibility issues.

@sebastienbeau
Member

I think we should discuss this point with more people: @gurneyalex @jgrandguillaume @guewen @StefanRijnhart

@NL66278

NL66278 commented Oct 16, 2018

From my point of view, in principle tests should be run post_install. Otherwise continuous integration becomes impossible. The whole idea of continuous integration, for instance (but not exclusively) with buildout + runbot, is that all of the code you are going to deliver is able to pass all of the tests.

If a module b that is installed after module a breaks a test in module a, it means either module b is faulty, because it is incompatible with a module that is also needed, or the test in a is faulty, because it is too restrictive on the outcome of the test.

Let us take a theoretical example.
Suppose we have a module a that adds a firstname field to res.partner (actually, of course, we have such a module). There is a test that the full name is firstname + space + lastname. Such a test would immediately break on a module b that adds a title to the fullname. On the other hand, if module b contains a test that assumes the fullname is title + lastname, it will also break.

Should we now say we run the tests for a with only a installed, and the tests for b with only b installed? NO! Your tests will run, but the solution you deliver to your customer will not really be tested and will likely contain bugs.

The tests in both a and b in this case are too restrictive. Module a should test whether firstname is in the fullname and lastname is in the fullname, instead of doing an exact string comparison. Likewise, module b should test whether title and lastname are in the fullname.

So as far as I am concerned: post_install everywhere. Or better, it should be the default, but that is up to Odoo SA.

@NL66278

NL66278 commented Oct 16, 2018

To comment on the example of a module that depends on date_range but disables a constraint: either the constraint is really necessary for a consistent database, and then it should never be disabled, or the constraint is too restrictive. In both cases, the unit tests, if run post_install, will tell us there is a problem, an incompatibility, and we have to change our logic somewhere.

@sebastienbeau
Member

Post-install testing on all Odoo modules is just impossible.
Modules like

  • sale_exception
  • purchase_exception
  • purchase_order_approval_block
  • ....

change the behaviour of the validation flow, and so will always break post-install tests in the purchase and sale modules. Adapting the purchase and sale module tests to be compatible would be just the start of a huge nightmare.
If we want to run full integration tests for our customers, the only solution is to write our own custom tests that cover the combination of every flow. Running the existing tests at post install will just fail.

@pedrobaeza
Member

@sebastienbeau that's not true, as the sale and purchase modules will be tested at install time. We are talking about OCA at this time, although @NL66278 and I prefer post_install for everything (even Odoo core modules). Regarding what you say, I refer you to one of my first comments in #32 (comment), where I explain that the module should be adapted to work in that case. Having that, you have total control over your code and your tests, and over what to do in each case.

@yvaucher
Member

I join @sebastienbeau on

On my side, I really do not like the concept of adapting the test depending of other module that are not dependency.

IMO a module that breaks other modules tests without being in the dependency chain should be adapted or be isolated.

For instance, if a module changes a workflow, using test-enable odoo.config param and a context key it can test itself but needs to clear the path for other addons to test with the standard workflow.

Nevertheless, while I would like to drop post_install, I still use that decorator, because cases with demo data and required fields do occur, and I want to have fast tests. And to be fast, creating a single database and testing everything on it helps. Plus, I don't know any other way than isolation or post_install to solve it.

Something is really fishy in the way Odoo does this, especially with required values: the requirement is active, but the default value is not set.

At Camptocamp we do it in a slightly different way.
(Step 0): We install all dependencies. (This step is only relevant for caching.)
https://github.com/camptocamp/docker-odoo-project/blob/a1b2a49a0bdf81fdb20f08e0cff0b36c2726fa40/bin/runtests#L75

Step 1: We install the list of addons to test with -i.
Step 2: We update the addons with -u and --test-enable to test a defined list of modules.

https://github.com/camptocamp/docker-odoo-project/blob/a1b2a49a0bdf81fdb20f08e0cff0b36c2726fa40/bin/runtests#L83-L84

But we can still have issues within the module list we are testing, and @post_install solves most of them.

@yvaucher
Member

@sebastienbeau for modules like sale_exception, I think we could just create the blocking exceptions of the demo data as inactive, and reactivate them in the unit tests on exceptions. That does the trick to bypass them.

@sebastienbeau
Member

@pedrobaeza I was talking about the case of using "post install" everywhere; that is why I used this example.
On my side, I do not like adapting my tests to other modules: it just makes them so hard to maintain. Every new module can break your module. It will also make migration more complex, and we will start asking ourselves useless, complex questions like "should I remove or port the check that made my module compatible (only for the tests), given that the other module is not migrated yet and maybe never will be?"
I understand what you want (running some tests, or all tests, at the end), but in my point of view it's a terrible idea, because the number of adaptations is not "limited" or "defined": it depends on external variables (the modules you test with). But this is my point of view; let's see the point of view of others ;)

@pedrobaeza
Member

@sebastienbeau please see my approach in the comment I pointed to previously. It doesn't depend on others, only on itself. You can call it a "sanity check", and it's worth putting this in whenever you override a standard behavior. It acts like a behavior on/off switch.
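The "behavior switch" pattern described here can be sketched in plain Python (illustrative names only; the actual OCA module linked above inspects Odoo-specific configuration/context): the overriding module steps aside unless a flag explicitly enables its extra behaviour, so tests of other modules still see the standard flow.

```python
# Sketch of a "sanity check" behaviour switch: an overriding module only
# alters the standard flow when a context flag explicitly enables it.
# (Illustrative names; real Odoo code would check test config or context.)

def standard_confirm(order, context=None):
    order["state"] = "confirmed"
    return order

def overriding_confirm(order, context=None):
    context = context or {}
    if not context.get("enable_auto_reconcile"):
        # Step aside: behave exactly like the module we extend.
        return standard_confirm(order, context)
    order = standard_confirm(order, context)
    order["reconciled"] = True              # the module's extra behaviour
    return order

# Another module's test, unaware of ours, still sees the standard result:
print(overriding_confirm({"state": "draft"}))
# → {'state': 'confirmed'}

# Our own test opts in explicitly:
print(overriding_confirm({"state": "draft"}, {"enable_auto_reconcile": True}))
# → {'state': 'confirmed', 'reconciled': True}
```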

@sbidoul
Member

sbidoul commented Oct 16, 2018

I think adapting tests (and, worse, the code) to cope with the eventual presence of other modules from other repos is a terrible idea. It's risky and potentially endless...

When integrating OCA modules in a project, one must rely on the fact that they have been properly tested in their own repo. So installing dependencies (without running the tests) before running project tests is definitely the way to go, as explained by Yannick.

Tests we have in OCA are unit tests not integration tests. If one wants to run integration tests, these must be specific tests for the specific integration at hand. Trying to repurpose unit tests as integration tests can only lead to headaches.

So 👍 for this PR.

@NL66278

NL66278 commented Oct 17, 2018

I think that when a new module B is introduced that depends on module A, it is on the shoulders of the developers of module B to make their extensions compatible. If their extension breaks a test in A, it means it breaks functionality in A. The point of tests is not to run tests, but to ensure functionality is not broken.

Having said that, the original developers of module A can greatly increase the usability of their module by writing it with extendability and reuse in mind. And this should then be reflected in the tests they write for their module.

And I know that the tests in Odoo/OCA are called unit tests. But in reality most of them test functionality, user workflows, etc. So, whatever their name, they are more component tests than unit tests.

@sebastienbeau
Member

@pedrobaeza I have read your example. I think we had a language issue (sorry, my English is bad): when I say "depend", I mean for example that you have adapted the date_range module to fix the tests of another module (which does not mean that date_range depends on module X).

In the example you give, what I dislike is that the code of the module is impacted by whether or not you are running the tests.
This is conceptually bad practice (whatever the quality or elegance of the code): the code executed depends on whether or not you are running the tests, which means we do not test exactly what runs in production. But that is another debate; let's focus on post_install.

On my side, I totally agree with Stéphane on this point:

I think adapting tests (and worse the code) to cope for the eventual presence of other modules in other repos is a terrible idea. It's risky and potentially endless...

So I approve this PR; tests should be run at_install.

@yajo
Member

yajo commented Oct 17, 2018

There's the interesting point that we're integrating OCA addons with other OCA addons.

It's not like integrating OCA addons with private ones, which is totally uninteresting to all other OCA devs.

But what if several of us do integration tests of the same addons? Do we all have to separately go fixing the same stuff in clumsy ways? Are we doomed to have nowhere to share integration test efforts?

Integration testing is important. Why shouldn't we have a place to share efforts on that?

@rousseldenis
Contributor

@yajo Of course integration testing is important. But that's the integrator's job. You cannot express every environment combination in OCA repositories' testing processes.

Doing this, you introduce strong coupling between all OCA modules, which is not sustainable.

You lose the advantage of modularity, where each module just has to be aware of what it depends on (the 'depends' key).

IMHO, introducing this kind of coupling would lead to developer nightmares. And, personally, I want to sleep at night 😄

@sebastienbeau
Member

I agree with @rousseldenis that integration tests should be done by the integrator. This part always depends on the customer and their combination of modules, which is most of the time unique to them.

And I also agree with @yajo that we should find a way to share integration tests, or at least an efficient way to write such tests.

One interesting project is the Robot Framework integration: https://github.com/brain-tec/odoo-robot-framework.
You play with the ERP of your customer and save the scenario (this generates code that you can modify), and you have your test; as an additional feature, you can take screen captures to automatically generate the pictures for your documentation.

I really like this approach and I hope I will have enough time to help brain-tec on v12 (@BT-aestebanez @BT-cserra @BT-fgarbely @BT-kaberer @BT-ojossen). If you are interested, give their work a try; it's worth it.

@rousseldenis
Contributor

rousseldenis commented Oct 17, 2018

@sebastienbeau Ok. This should lead to a new conversation in another place. That's indeed a good point, to help developers make their integrations easier.

This content could live on the OCA website, or in maintainer tools, or ...

Could we move forward on this and get it merged?

@yvaucher
Member

@yajo integration tests, as said by @rousseldenis, are the responsibility of the integrator; you can have a dedicated module just for that, consolidating your dependencies in your private projects with additional tests for your complete workflow. The idea of sharing integration tests is a good one, but in practice it might be difficult, because they vary a lot from one project to another, being the opposite of the modularity of unit tests. We attempted to do so with oerpscenario using the Gherkin syntax, but in the end writing Python directly, without the Gherkin layer, is simpler.

Other than that, regarding explaining how we do things in private projects: I understand it's not interesting for OCA devs. I was trying to express that testing all the modules of an OCA repository is quite similar to how you test your specific private modules, with the same issues:

  • You are not launching the unit tests of the dependencies.
  • You can have conflicts inside the group of modules you are testing.

As @sbidoul stated, we consider that unit tests are run where the code is pulled from. Official addons are tested in odoo/odoo, OCA addons are tested in OCA repositories, and specific addons are tested in your local project.

Still, as I don't want to isolate each test (to save time by using a single database), I would find it useful to have a way to avoid partially loaded modules with at_install, which makes a mayhem of the tests.


That said, the current state of this PR LGTM.
Travis is green, so we can drop those @post_install

@yvaucher
Member

@pedrobaeza do you agree that the current state, without the post_install, is OK? It seems to be the consensus here.

If another module shows up and breaks those tests, the new module can be isolated.

Those flags were added here, but it would help to know why:
75f1b4a

If some module adds a required field on company, does anyone know which module?

@pedrobaeza
Member

pedrobaeza commented Oct 17, 2018

There's a funny thing right now: those variables are useless in v12. They have no effect, as things now go through tags. The way to make tests run post install is through the old decorators or the new tag decorator.
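For reference, here is a self-contained sketch of the v12 tag mechanism being described (it mimics the behaviour of the tag decorator, it is not the actual Odoo implementation): a test class carries a test_tags set that the runner filters on, and a "-" prefix removes a tag.

```python
# Minimal sketch of the v12 tag mechanism: @tagged('post_install', '-at_install')
# edits a test_tags set that the test runner later filters on.
# (Illustration only; the real decorator is odoo.tests.tagged.)

def tagged(*tags):
    def decorator(cls):
        # In Odoo 12, standard test classes start tagged 'standard', 'at_install'.
        current = set(getattr(cls, "test_tags", {"standard", "at_install"}))
        for tag in tags:
            if tag.startswith("-"):
                current.discard(tag[1:])    # '-' prefix removes a tag
            else:
                current.add(tag)
        cls.test_tags = current
        return cls
    return decorator

@tagged("post_install", "-at_install")
class TestDateRange:
    pass

print(sorted(TestDateRange.test_tags))      # → ['post_install', 'standard']
```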

So we can merge this as a "cleaning" of the module, since there's no effect, although the debate remains.

As more and more v12 modules come to OCA, sooner or later an incompatibility will arise through the known and already-mentioned SQL constraint due to required fields. You are all saying that it's the duty of the integrator, but sometimes you can't do things outside the scope of the module itself. We have the same problem with Odoo, complaining that they don't allow us to have hooks, or about the way they do certain things.

My colleague is trying to normalize how Odoo handles tests and the installed-modules context in odoo/odoo#27883, and that way we will have a regular behavior that everyone can predict and know. Meanwhile, what I beg of you is: if some integrator (like Therp or us) asks OCA to include post_install because they have a problem in their integration tests, that you allow it, because it's something that can't be resolved any other way; and if everything is green because that integrator has taken care to make everything work, then there's no problem inside OCA.

What I don't want to do is start isolating builds to resolve those problems, because:

  • This is something inside the OCA CI process. Other CI pipelines don't have that isolation, nor want it. I know this is again the same "integrator responsibility", but if we can avoid it, please let's do it and let good contributors coexist with OCA. If not, you will push them to go outside OCA.
  • It makes everything slower.
  • It just papers over the real problem.

@yajo yajo merged commit 7b22274 into OCA:12.0 Oct 17, 2018
@sebastienbeau
Member

Thanks @pedrobaeza for the update on 12.

One last little point: I see that the initial case that generated this incompatibility is that in the test we try to create a new company in order to test the multi-company case (see the commit here: 75f1b4a). I hit the same issue in the shopinvader repo.

After thinking twice, maybe a good alternative solution to this issue (the SQL constraint on company) would be to propose that Odoo add, in the demo data, a new company that is not a child of the existing company. As the demo company would be there, we could use it without issue. It's not perfect, but it would simplify testing in multi-company setups.

@lmignon lmignon deleted the 12.0-data_range_no_post-install_tests-lmi branch October 17, 2018 12:43
@pedrobaeza
Member

@sebastienbeau well, that would only be a solution in this specific case, because maybe you need to create 3 or 4 companies instead of 2, or the problem is creating a res.partner (like the case depicted in odoo/odoo#27883), so I would rather have that PR merged, and build everything around that consistent behavior (which will avoid using post_install in the end, as default values will be loaded).

@yvaucher
Member

@pedrobaeza I do think that post_install can be used when there is no other choice; this just has to come with reasons, not with propagating the idea that post_install is the only option and must be used everywhere.

Thanks for the insight on the decorators in v12.

@lmignon
Contributor Author

lmignon commented Oct 17, 2018

There's a funny thing right now: those variables are useless in v12. They have no effect, as things now go through tags. The way to make tests run post install is through the old decorators or the new tag decorator.

@pedrobaeza I know. As you also know, this discussion started on #28, where @astirpe proposed to fix this problem by introducing the decorators. I took care to solve this problem in my PR #34 to highlight the issues introduced by such an approach. 93af5d0

Thank you @yajo for proposing a fix for the SQL constraints. Nevertheless, am I right in saying that even with this fix, tests will continue to fail if you test an addon on a database where an incompatible one is already installed? That's why, in some cases, we will still have no other choice than to split the tests.

@pedrobaeza
Member

Thank you @yajo for proposing a fix for the SQL constraints. Nevertheless, am I right in saying that even with this fix, tests will continue to fail if you test an addon on a database where an incompatible one is already installed? That's why, in some cases, we will still have no other choice than to split the tests.

Yes, you will have the same problem depicted in that PR, but as said, I prefer the approach of https://github.com/OCA/bank-payment/blob/3480a7da8da68952b68834e974edd9c47c0a25fb/account_payment_transfer_reconcile_batch/models/payment_order.py#L28-L29 (touching the module and making it compatible with others) over splitting the tests here into Travis builds.

@yvaucher
Member

@pedrobaeza I also like this approach of being nice to other modules. And if you need to test that feature in a module that obviously inherits it, you can always reactivate it through the context.

@yajo
Member

yajo commented Oct 18, 2018

Nevertheless, am I right in saying that even with this fix, tests will continue to fail if you test an addon on a database where an incompatible one is already installed?

Indeed, there are still 3 corner cases where this can fail:

  1. 2 modules marked as "to update", where the 1st needs some defaults from the 2nd, but the 1st is tested before the 2nd is in its module graph. Workaround: test them independently, although it can be done sequentially in the same database.
  2. A module adds a required field without adding a default value for it. Workaround: either patch the tests in the other module, add a default value in the culprit module, or use demo data. This wouldn't be fixed by post_install either, anyway.
  3. 2 actually incompatible modules are installed together. This would fail because it should fail. Workaround: remove one of the 2, and say thank you to your integration tests.
