Prepare for release of Plone 5.2a1 with python 3 support #2592
Here we discuss and collect all tasks that need to be done before a 5.2a1 release.
Please add additional tasks that are a hard requirement for a release of Plone 5.2a1.
The failing tests in plone.restapi are all from
I skipped the tests in
@pbauer wow, ok. That's great news! I never thought that the work I started many years ago would actually come to an end at some point. :)
Though unfortunately, being able to run `bin/test --all` just means that we currently do not see test isolation issues in the specific order we run the tests. The isolation issues are still present.
If we now start to fix the problems of one package in another package, we risk tightly coupling their test fixtures, and we might no longer be able to run each package's tests successfully on its own. Since Jenkins only runs `bin/test --all`, we might not even detect that a package's tests no longer pass individually.
Therefore, keeping `alltests` might be a better option than writing tightly coupled test fixtures. I don't have a single simple answer to this complex problem. Maybe the way zope.testrunner optimizes the test fixture is just wrong in the first place...
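For context on that optimization, here is a minimal illustrative sketch (not from this thread) of the layer mechanism zope.testrunner optimizes around: tests are grouped by their `layer` attribute and reordered so each layer's setUp/tearDown runs only once, which is exactly what ties fixture state to test ordering.

```python
import unittest


class ExampleLayer(object):
    """A classic zope.testrunner layer (illustrative names only).

    setUp/tearDown run once for the whole group of tests that declare
    this layer, not once per test -- the runner reorders tests to make
    that possible, which is why ordering can hide isolation bugs.
    """

    @classmethod
    def setUp(cls):
        # Expensive shared fixture, built a single time.
        cls.fixture = {}

    @classmethod
    def tearDown(cls):
        # Torn down once, after the last test that uses this layer.
        del cls.fixture


class ExampleTest(unittest.TestCase):
    # zope.testrunner groups and orders tests by this attribute.
    layer = ExampleLayer
```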
I've looked into the
The failure in
This is the difference I'm seeing (all from inside a waitress request handler thread):
Passing test (just running the
```
(Pdb) sm = getSiteManager()
(Pdb) print list(sm.registeredHandlers())
[HandlerRegistration(<BaseGlobalComponents test-stack-3>, [IPubStart], u'', mark_as_api_request, File ".../plone/rest/configure.zcml", line 8.2-11.6)]
```
=> Site manager contains the event subscriber registration. Good.
Failing test (first the robot test from
```
(Pdb) sm = getSiteManager()
(Pdb) print list(sm.registeredHandlers())
[]
```
=> Registration is missing.
When I further inspect which exact stacked component registry instance we're working with, I see something strange:
```
(Pdb) sm
<BaseGlobalComponents test-stack-3>
(Pdb) from plone.testing import zca
(Pdb) zca._REGISTRIES
[<BaseGlobalComponents base>, <BaseGlobalComponents test-stack-1>, <BaseGlobalComponents test-stack-2>, <BaseGlobalComponents test-stack-3>]
(Pdb) zca._REGISTRIES[-1] is sm
True
(Pdb) id(zca._REGISTRIES[-1]), id(sm)
(4586995088, 4586995088)
```
=> The current site manager is the last one from `zca._REGISTRIES`.
```
(Pdb) sm
<BaseGlobalComponents test-stack-3>
(Pdb) from plone.testing import zca
(Pdb) zca._REGISTRIES
[<BaseGlobalComponents base>, <BaseGlobalComponents test-stack-1>, <BaseGlobalComponents test-stack-2>, <BaseGlobalComponents test-stack-3>]
(Pdb) zca._REGISTRIES[-1] is sm
False
(Pdb) id(zca._REGISTRIES[-1]), id(sm)
(4652502480, 4583929552)
```
=> The current site manager is a different one than the last one from `zca._REGISTRIES`.
If we actually poke the last component registry from zca's stack (during the failing test), that's where our subscriber registration would be:
```
(Pdb) print list(zca._REGISTRIES[-1].registeredHandlers())
[HandlerRegistration(<BaseGlobalComponents test-stack-3>, [IPubStart], u'', mark_as_api_request, File ".../plone/rest/configure.zcml", line 8.2-11.6)]
```
This leads me to believe that there's an isolation issue around the component registry with the
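For what it's worth, the invariant violated above can be checked at a breakpoint with a small helper along these lines (a sketch relying on plone.testing's private `zca._REGISTRIES`, so it is a debugging aid only):

```python
from zope.component import getSiteManager

from plone.testing import zca


def site_manager_is_stack_top():
    """Return True if the active global site manager is the top of
    plone.testing's stacked ZCA registries, as in the passing test
    above; in the failing test this returns False."""
    sm = getSiteManager()
    # _REGISTRIES is the internal registry stack inspected in the pdb
    # sessions above; it is private API and may change.
    return bool(zca._REGISTRIES) and zca._REGISTRIES[-1] is sm
```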
My hypothesis is this:
The reference to the current site manager is stored on the
That would then render
In an attempt to confirm this hypothesis, I changed
@tisto I'm trying to get to a starting point for adding daily and weekly test runs that shake the testing stack harder, so we can start finding those issues. Additionally, running each layer in parallel already passes, and that's a pretty decent starting point, as the layer setups happen in isolation with both the
In the short to medium term I'm planning on doing something like:
Don't worry, we'll get there.
Currently we're still dealing with basic stuff like hitting the ulimit, though, because the Robot test setups leak the socket's file descriptor.
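(As an aside, a crude way to spot that kind of leak between suite runs is to count the process's open file descriptors; a Linux-only sketch, not from this thread:)

```python
import os


def open_fd_count():
    """Count this process's open file descriptors via /proc/self/fd
    (Linux-only). A count that grows monotonically across Robot suite
    runs points at a leak such as the unclosed listening socket above."""
    # Note: listdir itself briefly opens one extra descriptor.
    return len(os.listdir('/proc/self/fd'))
```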
@gforcada can we do those as pipelines, which share the buildout filesystem? If so, that'd sound like the way to parallelise (and we could get rid of the weight plugin that way too). Don't know if the default should be to split the runs per package or per layer (or both).
But that's something for later, as we'd need to figure out the basics of running our tests reliably in any manner.
@tisto thank you for your neat and simple Robot tests, they clued me into
This does not trivially make the Plone tests quicker, though: most of our Robot suites are very short, and with how we currently run the tests and spawn the Selenium server on demand per suite, the browser is closed at the end of each suite at the latest.
This can be split into two more keywords for suite setup and suite teardown:
And then every suite definition would need to be amended to run those new keywords as its suite setup and suite teardown.
And if we want speed, we'll need to rearchitect, redesign, and reprogram all the Robot tests to have a few very long suites instead of many short ones.
Or we go with some technology other than Robot. There are only about 100 end-to-end browser tests in total, and the effort to make them sane to run is about the same that way too.
This ticket has become a repository dump of miscellaneous testing trivia at this point.