[FIX] date_range: Remove useless post_install flag for tests #32
Conversation
They are not useless, as discussed in the other place
@pedrobaeza This PR starts with WIP! I'm trying to reproduce your problem; maybe I can find a better way to solve it. I don't want to keep this hack, which could lead to other problems.
But that's the question: this is not a hack! It's the official way to get rid of these problems, and it doesn't cause any side effects (we have been using this in all the modules we make, for 3 versions now). Have you ever faced any problem? You can search across OCA repositories and you'll find a lot of tests with these variables (or decorators).
@pedrobaeza If the default behaviour should be to run the tests at post_install, then this should be the default in Odoo. However, this is not the case. The only use cases I know of are report tests or tests using controllers. The logic you are testing in a module 'a' could be modified by a child module 'b'. Therefore, if the tests of module 'a' are run at post_install, they will fail, since the expected behaviour has been modified by module 'b'. https://odoo-development.readthedocs.io/en/latest/qa/python/at_install-post_install.html
I'm not saying you should never use these decorators, but their use must remain limited, and perhaps there are other places in OCA where this is not justified.
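For readers less familiar with the flags under discussion, this is roughly what the decorator form looks like in 10.0/11.0 (a simplified sketch, not necessarily the exact code this PR removes):

```python
from odoo.tests.common import TransactionCase, at_install, post_install


# With these decorators the test class does NOT run right after the module's
# own installation; it runs once ALL selected modules are installed, so child
# modules may already have altered the behaviour being asserted.
@at_install(False)
@post_install(True)
class TestDateRangeExample(TransactionCase):

    def test_something(self):
        # placeholder assertion; the real tests act on date.range records
        self.assertTrue(True)
```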
This is the case I told you about: module B is then the incorrect one, as you should control when you want to test module A's behaviour and when module B's behaviour. You have the same problem if you are modifying core behaviour. And you can check that in this case the tests are not … But I got the same problem on the top module (account_banking_payment_transfer) without putting this.
But without 'post_install', module A will be tested BEFORE the installation of module B, so the tests will be OK for module A and also for module B. Given a module A and a child module B, with `odoo -i A,B --test-enable`: with post_install=False the process will be install A → run A's tests → install B → run B's tests; with post_install=True the process will be install A → install B → run A's tests → run B's tests, so A's tests run against behaviour already modified by B.
Hi, I hit the same issue in the Shopinvader repo (failure to create a company because some fields were mandatory). I did a hack (dropping the constraint), but it's fucking ugly and not a correct solution. I am searching for a more elegant solution, because we are all concerned by this issue. After thinking twice, the main issue is that we run the tests with a bunch of modules preinstalled, and this is what breaks the tests. Trying to solve this with hacks will just generate new issues and add complexity. What do you think?
But again, why do you think this is a hack? I will look for the issue where Odoo states that this is the way to avoid the constraint problem. Isolating the tests only adds Travis time and ignores problems in the module itself. You won't have new problems with post_install. You can check in the example I put that the problem is the same without putting them.
@sebastienbeau I fully agree with you. That's what we do on our internal projects. We split our tests into separate runs when some are not compatible with others, due to dependencies or the non-predictability of the order in which tests with the same level of dependencies are run: `odoo -i A --stop-after-init` … The order in which the tests are performed may vary from time to time. If C fails after B, or B fails after C, we split the tests into separate runs.
Yes, we're going to have other problems, as explained above: with `odoo -i A,B --test-enable` and post_install=False, A's tests run before B is installed; with post_install=True, A's tests only run after B is installed, against behaviour B may already have changed.
@pedrobaeza @yajo I've created a test addon 'date_range_child' to illustrate the problem with …
I agree with @lmignon. Instinctively, tests shouldn't use such an option except in very particular cases. If they need it, there is probably a problem in the code/tests implementation (which has to be solved before thinking about using the post_install option). This is a general remark, as I'm not aware of this use case.
#33 and #34 are very interesting, thanks! The failure point in all of this is a tricky thing that Odoo does: …
I've been analyzing all possible solutions for this problem. Sadly, as you can see below, all of them have pros and cons; there's no silver bullet here: …
*1: Disable addons that fail in such a way, downstream in integration environments. Requires manual intervention. 🤔 As you can see, the only bullet-proof way of testing is the slowest one: 1 db per addon, separately. It's unlikely that we converge there, so let's move on to the next ones that make the most sense. I think the best one would be odoo/odoo#24833. It marks clearly that, if a given addon is installed, a given test (or suite) is skipped. Skipping tests lowers coverage, but most likely that's not a problem in integration environments (where usually only private code coverage counts) or in OCA (where usually these conflicts don't happen, or we can enable or disable addon combinations separately). Integration environments where the blacklisted addon doesn't exist remain unaffected, and the code is explicit.
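The general idea behind odoo/odoo#24833, expressed as a plain-unittest sketch (this is not that PR's actual API, and the addon name is made up): skip a test when a known-incompatible addon is present in the database.

```python
from odoo.tests.common import TransactionCase


class TestDateRangeExample(TransactionCase):

    def setUp(self):
        super(TestDateRangeExample, self).setUp()
        # Skip the whole test case when a known-incompatible addon is
        # installed in this database (hypothetical addon name).
        conflicting = self.env["ir.module.module"].search(
            [("name", "=", "some_conflicting_addon"), ("state", "=", "installed")]
        )
        if conflicting:
            self.skipTest("some_conflicting_addon changes the behaviour under test")

    def test_something(self):
        self.assertTrue(True)  # real assertions on date.range records go here
```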
@yajo The process is by default the one described in the second row of your table. Do you agree that post_install must be removed?
Exactly. I was just pointing out that it is not a perfect solution, nor easy to implement outside of MQT.
As long as we have an alternative... Would you agree on using odoo/odoo#24833?
@yajo If I understand correctly, your problem is that in your integration tests it is not possible for you to preinstall the dependencies of the modules to be tested. Right? It is not healthy to add code to tests to try to solve problems that are not related to the addon being tested. Sooner or later, this will introduce other problems. If in your case this allows you to avoid errors, other integrators could come with the opposite need, because in their case it breaks their tests. That's why we must remove this kind of hack. For a long time now, we have been testing all our projects without using MQT. We use the same principle as in MQT: (1) installation of dependencies, (2) run of tests. In this context we have developed a set of utility commands (a project for our internal needs). One of these gives us the list of dependencies for a given list of addons (https://github.com/acsone/acsoo#acsoo-addons).
How could odoo/odoo#24833 break anything? 🤔
@yajo it's not the right way to fix this kind of issue in OCA. We can't predict which external addons could break the tests, therefore we have no way to properly maintain this list. Tests are not designed to be run as integration tests outside of the current repository.
On my side, I really do not like the concept of adapting tests depending on other modules that are not dependencies. If we have "testing" incompatibilities because of the way we run the tests (testing all modules together), we should not solve them by adapting the Python tests. For me, if we have a testing incompatibility, we should test the module separately (we can do it manually if we do not have a lot of OCA cases), and if we have a lot of cases we should always test it separately. I prefer to wait 5 minutes drinking a tea (or writing some code) instead of losing hours solving incompatibility issues.
I think we should discuss this point with more people: @gurneyalex @jgrandguillaume @guewen @StefanRijnhart
From my point of view, in principle tests should be run post_install. Otherwise continuous integration becomes impossible. The whole idea of continuous integration, for instance (but not exclusively) with buildout + runbot, is that all of the code you are going to deliver is able to pass all of the tests. If a module b that is installed after module a breaks a test in module a, it means either module b is faulty, because it is incompatible with a module that is also needed, or the test in a is faulty, because it is too restrictive on the outcome of the test.

Let us give a theoretical example: module a computes a partner's fullname from firstname and lastname, and its test compares the result with the exact string "firstname lastname"; module b, installed in the same database, adds the title in front of the fullname. Should we now say we run the tests for a with only a installed and the tests for b with only b installed? NO! Your tests will run, but the solution you deliver to your customer will not really be tested and will likely contain bugs. The tests in both a and b in this case are too restrictive. Module a should test whether firstname is in the fullname and lastname is in the fullname, instead of doing an exact string comparison. Likewise, module b should test whether title and lastname are in the fullname.

So as far as I am concerned: post_install everywhere. Or better, it should be the default, but that is up to Odoo SA.
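To make Stefan's theoretical example concrete (hypothetical code: the `firstname`/`lastname` fields and the modules "a" and "b" exist only in his example), a tolerant test in module a could look like this:

```python
from odoo.tests.common import TransactionCase


class TestFullname(TransactionCase):

    def test_fullname_contains_parts(self):
        partner = self.env["res.partner"].create(
            {"firstname": "Ada", "lastname": "Lovelace"}
        )
        # Too restrictive: fails at post_install as soon as module "b"
        # prepends the title to the fullname.
        #     self.assertEqual(partner.name, "Ada Lovelace")
        # Tolerant: keeps passing when other modules extend the fullname.
        self.assertIn("Ada", partner.name)
        self.assertIn("Lovelace", partner.name)
```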
To comment on the example of a module that depends on date_range but disables a constraint: either the constraint is really necessary for a consistent database, in which case it should never be disabled, or the constraint is too restrictive. In both cases the unit tests, if run post_install, will tell us there is a problem, an incompatibility, and we have to change our logic somewhere.
Post-install testing on all Odoo modules is just impossible.
@sebastienbeau that's not true, as …
I agree with @sebastienbeau on …
IMO a module that breaks other modules' tests without being in the dependency chain should be adapted or be isolated. For instance, if a module changes a workflow, using … Nevertheless, while I would like to drop … Something is really fishy in the way Odoo does it, especially with required values when you have the requirement active but the default value is not set. At Camptocamp we do it in a slightly different way. Step 1: we install the list of addons to test with -i. … But we can still have issues in the module list we are testing and …
@sebastienbeau for modules like …
@pedrobaeza I was talking about the case of using "post_install" everywhere, which is why I used this example.
@sebastienbeau please see my approach in the comment I pointed to previously. It doesn't depend on other modules, only on itself. You can call it a "sanity check", and it's worth putting in whenever you override a standard behaviour. It acts like a behaviour on/off switch.
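A minimal sketch of the kind of "sanity check" / behaviour switch being described (the method, helper and context key are made up for illustration; the bank-payment link later in this thread is a real instance of the pattern):

```python
from odoo import models


class DateRange(models.Model):
    _inherit = "date.range"

    def write(self, vals):
        res = super(DateRange, self).write(vals)
        # Behaviour switch: by default the extra behaviour this (hypothetical)
        # extension adds is active, but another module's tests can turn it off
        # through the context -- and turn it back on when they really want to
        # test both modules together.
        if not self.env.context.get("skip_extra_behaviour"):
            self._apply_extra_behaviour()  # hypothetical helper
        return res
```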
I think adapting tests (and, worse, the code) to cope with the possible presence of other modules from other repos is a terrible idea. It's risky and potentially endless... When integrating OCA modules in a project, one must rely on the fact that they have been properly tested in their own repo. So installing dependencies (without running the tests) before running project tests is definitely the way to go, as explained by Yannick. The tests we have in OCA are unit tests, not integration tests. If one wants to run integration tests, these must be specific tests for the specific integration at hand. Trying to repurpose unit tests as integration tests can only lead to headaches. So 👍 for this PR.
I think that when a new module B is introduced that depends on module A, it is on the shoulders of the developers of module B to make their extensions compatible. If their extension breaks a test in A, it means it breaks functionality in A. The point of tests is not to run tests, but to ensure functionality is not broken. Having said that, the original developers of module A can greatly increase the usability of their module by writing it with extendability and reuse in mind. And this should then be reflected in the tests they write for their module. And I know that the tests in Odoo/OCA are called unit tests, but in reality most of them test functionality, user workflows etc. So, whatever their name, they are more component tests than unit tests.
@pedrobaeza I have read your example. I think we had a language issue (sorry, my English is bad): when I say "depend", I mean for example that you have adapted the "date_range" module to fix the tests of another module (this does not mean that date_range depends on module X). In the example you give, what I dislike is that the code of the module is impacted by whether you are running the tests or not. On my side, I totally agree with Stephan on this point.
So I approve this PR; tests should be run at_install.
There's the interesting point that we're integrating OCA addons with other OCA addons. It's not like integrating OCA addons with private ones, which is totally uninteresting to all other OCA devs. But what if several of us do integration tests of the same addons? Do we all have to separately go fixing the same stuff in clumsy ways? Are we doomed to have nowhere to share integration test efforts? Integration testing is important. Why shouldn't we have a place to share efforts on that?
@yajo Of course integration testing is important. But that's the integrator's job. You cannot express every environment combination in OCA repositories' testing processes. Doing this would introduce strong coupling between all OCA modules, which is not sustainable. You'd lose the advantage of modularity, as each module only has to be aware of what it depends on (the 'depends' key). IMHO introducing this kind of coupling would lead to developer nightmares. And, personally, I want to sleep at night 😄
I agree with @rousseldenis that integration tests should be done by the integrator. This part always depends on the customer and their combination of modules, which is most of the time unique to them. And I also agree with @yajo that we should find a way to share integration tests, or at least an efficient way to write such tests. One interesting point is the robot framework: https://github.com/brain-tec/odoo-robot-framework. I really like this approach and I hope I will have enough time to help brain-tec on v12 (@BT-aestebanez @BT-cserra @BT-fgarbely @BT-kaberer @BT-ojossen). If you are interested, give their work a try, it's worth it.
@sebastienbeau Ok. This should lead to a new conversation in another place. That's a good point indeed, to help some developers make their integrations easier. This content could take place on the OCA website or in maintainer-tools or ... Could we move forward on this and get this merged?
@yajo integration tests, as said by @rousseldenis, are the responsibility of the integrator; you can have a dedicated module just for that, consolidating your dependencies in your private project with additional tests covering your complete workflow. The idea of sharing integration tests is a good one, but in practice it might be difficult because they will vary a lot from one project to another, as it is the opposite of the modularity of the unit tests. We attempted to do so with oerpscenario using gherkin syntax, but in the end writing python directly without the gherkin syntax layer is simpler. Other than that, about explaining how we do things in private projects, I understand it's not interesting for OCA devs; I was trying to express that testing all the modules of an OCA repository is quite similar to how you test your specific private modules, with the same issues.
As @sbidoul stated, we consider that unit tests are run where the code is pulled from: official addons are tested in odoo/odoo, OCA addons are tested in OCA repositories, and specific ones are tested in your local project. Still, as I don't want to isolate each test to gain time using a single database, I would find it useful to see a way to not have partially loaded modules with … That said, the current state of this PR LGTM.
@pedrobaeza do you agree that the current state, without the …, is acceptable? If another module shows up and breaks those tests, the new module can be isolated. Those were added here but it would help to know why: …
There's a funny thing right now: those variables are useless in v12. They have no effect, as now things go through tags. The way to make tests work post-install is through the old decorators or the new tag decorator. So we can merge this to "clean" the module, since there's no effect, although the debate is still there. When more and more v12 modules come to OCA, sooner or later an incompatibility will arise through the known and already mentioned SQL constraint due to required fields. You are all saying that it's the duty of the integrator, but sometimes you can't do things outside the scope of the module itself. We have the same problem with Odoo, complaining that they don't allow us to have hooks, or about the way they do certain things. My colleague is trying to normalize how Odoo handles tests and the installed-modules context in odoo/odoo#27883, and that way we will have a regular behaviour that everyone can predict and know. Meanwhile, what I beg of you is this: if some integrator (like Therp or us) asks OCA to include post_install because they have a problem in their integration tests, allow it, because it's something that can't be resolved any other way, and if everything is green because that integrator has taken care of making it all work, then there's no problem inside OCA. What I don't want to do is to start isolating builds to solve those problems, because: …
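For reference, in 12.0 test selection goes through tags, so the post-install behaviour discussed above is expressed like this (a generic sketch, not code from this module):

```python
from odoo.tests.common import TransactionCase, tagged


@tagged("post_install", "-at_install")
class TestDateRangePostInstall(TransactionCase):

    def test_something(self):
        # runs only once all selected modules are installed
        self.assertTrue(True)
```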
Thanks @pedrobaeza for the update on 12. One little last point: I see that the initial case that generated this incompatibility is that in the test we try to create a new company in order to test the multi-company case (see the commit here: 75f1b4a). I hit the same issue in the Shopinvader repo. After thinking twice, maybe a good alternative solution to this issue (the SQL constraint on company) would be to propose to Odoo to add in the demo data a new company that is not a child of the existing company. As the demo company would already be there, we could use it without issue. It's not perfect, but it would simplify testing in multi-company setups.
@sebastienbeau well, that would only be a solution in this specific case, because maybe you need to create 3 or 4 companies instead of 2, or the problem is creating a res.partner (like the case depicted in odoo/odoo#27883), so I would rather go for having that PR merged and build everything around that consistent behaviour (which will avoid the need for post_install in the end, as default values will be loaded).
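To make the recurring failure mode concrete: a multi-company test along these lines (hypothetical, only loosely in the spirit of commit 75f1b4a) passes when the addon is tested alone, but at post_install it can die with a NOT NULL integrity error if any other installed addon adds a required res.company field without a default value:

```python
from odoo.tests.common import TransactionCase


class TestMultiCompany(TransactionCase):

    def test_create_second_company(self):
        # This create() is where the integrity error is raised when an
        # unrelated installed addon adds a required company field with no
        # default -- before the actual multi-company assertions even start.
        company2 = self.env["res.company"].create({"name": "Second company"})
        self.assertTrue(company2)
        # ... real multi-company assertions on date.range records ...
```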
@pedrobaeza I do think that … Thanks for the insight on the decorators in v12.
@pedrobaeza I know. As you also know, this discussion started on #28, where @astirpe proposed to fix this problem by introducing the decorators. I had taken care to solve this problem in my PR #34 to highlight the issues introduced by such an approach. 93af5d0 Thank you @yajo for proposing a fix for the SQL constraints. Nevertheless, am I right in saying that even with this fix, tests will continue to fail if you test an addon on a database where an incompatible one is already installed? That's why in some cases we will still have no other choice than to split the tests.
Yes, you will have the same problem depicted in that PR, but as said, I prefer the approach of https://github.com/OCA/bank-payment/blob/3480a7da8da68952b68834e974edd9c47c0a25fb/account_payment_transfer_reconcile_batch/models/payment_order.py#L28-L29 (touching the module and making it compatible with others) rather than splitting tests here into separate Travis builds.
@pedrobaeza I also like this approach of being nice to other modules. And if you need to test that feature in a module that obviously inherits it, you can always reactivate it through the context.
Indeed, there are still 3 corner cases where this can fail: …