Add dummy as a known platform, and use it in tests #1534
Conversation
Signed-off-by: Bruno Rino <bruno.rino@gmail.com>
So - I get the intent here; but the use of the factory argument is at least partially intentional. Over the years, I've learned the hard way that global variables and singletons are always the hardest things to test, and are always the things that get in the way when you least expect it. Explicit arguments and passing of context is a 100% reliable way of avoiding those problems.
As a case in point - you haven't made any changes to tests, and the changes you have made look like they should be completely passive unless you're in the dummy module... and yet the Windows and GTK tests are failing. There's an import side effect that is causing those tests to start using the dummy module as the backend.
That said - I completely understand that:
- Remembering to pass in the factory every time is inconvenient.
- The current platform factory setup doesn't make it easy to explicitly choose a platform.
While I'm not supportive of the approach taken here to set the dummy platform as an import side effect, I would be in favour of making it easier to set the current factory as a side effect of TestCase setup (or a pytest-style fixture).
Global variables and singletons: can't live with them, can't live without them. Setting the current factory will always be a global operation: in the end, we're talking about overriding global state. Using TestCase's setUp and tearDown to set and reset it would still break the platform backend tests, if those happen to run in parallel with the dummy backend tests. Oh well.
I don't think running tests in parallel is something we need to support: this would break the many existing tests that override global state.
I'm not very familiar with pytest yet, but the cleanest way I can think of doing this is to use an "autouse" fixture. Some restructuring may be necessary to make sure the fixture covers the correct set of tests.
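A setUp/tearDown version of that idea might look like the following sketch. The module-level factory variable and helper names here are stand-ins, since the thread hasn't settled on where the current factory lives; in real code they would map onto toga.platform and toga_dummy.factory.

```python
import unittest

# Stand-in for a module-level "current factory" global (e.g. in
# toga.platform); the names here are assumptions for this sketch.
_current_factory = None


def set_platform_factory(factory):
    global _current_factory
    _current_factory = factory


def get_platform_factory():
    return _current_factory


class DummyBackendTestCase(unittest.TestCase):
    """Base class: every test in the subclass runs against the dummy factory."""

    def setUp(self):
        # Save whatever factory was active, then install the dummy one.
        self._saved_factory = get_platform_factory()
        set_platform_factory("dummy-factory")  # would be toga_dummy.factory

    def tearDown(self):
        # Restore the previous factory so other test cases are unaffected.
        set_platform_factory(self._saved_factory)

    def test_factory_is_dummy(self):
        self.assertEqual(get_platform_factory(), "dummy-factory")
```

As the comment above notes, this is still global state being swapped in and out, so it only works if the platform-backend tests don't run in parallel with the dummy-backend tests.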
…ctory. Signed-off-by: Bruno Rino <bruno.rino@gmail.com>
Here is a solution using setUp and tearDown. Sorry for all the commits; I'm failing to keep the CI happy. I can't figure out how to load toga_gtk.
…s unavailable Signed-off-by: Bruno Rino <bruno.rino@gmail.com>
I don't know how to make codecov happy.
Codecov is a coverage tool. When it runs the test suite, it's verifying every line of code that is actually executed - and if a line of code you've added in a patch isn't executed by the test suite, then it's untested code. In 2f6bca4, the blocks of code aren't ever run because you're literally telling them not to run - so, why are they in the test at all? Of course, they're in the test because that's the feature that you're adding, so they need to run.
As I noted previously, the toga_gtk backend isn't available because it isn't installed in the CI environment. You don't fix that by skipping the parts of the test that require toga_gtk - you fix it by installing toga_gtk in the CI environment. That's what the CI configuration files are for.
Of course, it's not quite that simple, because if the test cases require toga-gtk to be installed, then no developer on Windows or macOS will be able to run the test suite. So, you'll need to work out a version of the test case that verifies this behaviour in a platform independent way.
That's a pretty tall order, given that the tests in question are checking that the native platform still works after my changes. The only feasible approach I can see would be to figure out a tox+GitHub CI configuration that installs toga_gtk only for the CI run, not on developer machines. Does that sound OK to you?
... or I could use mocks. I'm mocking toga_gtk if it fails to import.
So - mocking is definitely one approach you could take; however, in this case it's incomplete. If you're on a Mac, and you run the test suite through tox, the tests fail - because the test environment will only install toga-core and toga-dummy, and the test is only working around the absence of toga-gtk.
Looking at the code a little closer - a simpler approach doesn't require mocking at all. The tests that are failing in CI (and in tox) are failing because get_platform_factory() is unable to resolve the "default" platform factory. For the purposes of the test, that's perfectly acceptable behavior; you just need to know that it hasn't picked up the dummy factory. On that basis, you can change the test_get_platform_factory() test to:
def test_get_platform_factory(self):
    try:
        factory = get_platform_factory()
        self.assertIsNotNone(factory)
        self.assertNotEqual(factory, toga_dummy.factory)
    except ModuleNotFoundError:
        # The default platform factory couldn't be imported;
        # it can't be the dummy, and the dummy isn't ever the default platform.
        pass
Either get_platform_factory() does return a factory, and that factory isn't the dummy; or it fails to return a factory. Either way, it isn't the dummy factory, which is the behavior you need to verify (although it might be nicer if the error were a Toga-specific "UnsupportedPlatformError" rather than a generic ModuleNotFoundError, so we can rule out other edge cases such as an import problem inside a backend).
In terms of this specific PR, the other change that is required is proof that this actually works in a test - you've added the plumbing to support the dummy platform being picked up without explicitly using a factory argument, but then you're not actually using it anywhere. I don't know if you were intending to attack this in a future PR, or in a future commit to this PR; but at the very least, having one example of the new behavior to prove it actually works as intended would be desirable.
That said - I'm starting to wonder if there's a bigger problem here. This entire problem exists because get_platform_factory() uses sys.platform to identify which backend should be loaded. This makes overriding the default behavior difficult (as testing demonstrates); however, there are other complications. For example, a third party can't write a toga-qt backend without us adding some sort of qt-detecting behavior to get_platform_factory().
I'm wondering if we can address this testing issue in a different way - by exploiting distutils extension points (e.g., how Briefcase registers its backends). If Toga backends registered their factories as a distutils entry point, Toga would be able to look for the backend that has been registered, and just use that. In toga-core's tests, toga-dummy is the only backend that is present; on most app installs, there will be a single installed backend. This would allow users to write their own backends if they wanted, or use a backend that isn't the "toga sys.platform default". I'm not sure how distutils responds to having multiple modules register the same entry point, but that should probably be an error case regardless. I can't think of an obvious reason why you'd need multiple backends in a single virtual environment; if there is a good reason, I'm sure we can work out some sort of override behavior.
Now that's interesting. I have no experience with "distutils extension points", but I'll look into it.
For discovering the entry_points at runtime, importlib.metadata is now recommended instead of pkg_resources. But both APIs use the same data, which ultimately comes from setup.cfg.
@mhsmith That's a good point; I've modified the implementation used by Briefcase in beeware/briefcase#840.
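A minimal sketch of the importlib.metadata-based discovery mentioned above might look like this. The "toga.backends" group name is a guess for illustration, not Toga's actual registration:

```python
from importlib import metadata


def discover_backends(group="toga.backends"):
    """Map entry point names to entry points for a (hypothetical) group."""
    try:
        # Python 3.10+ supports selecting by keyword...
        eps = metadata.entry_points(group=group)
    except TypeError:
        # ...while older versions return a dict keyed by group name.
        eps = metadata.entry_points().get(group, [])
    return {ep.name: ep for ep in eps}
```

With a single backend installed, the returned dict has one entry whose `.load()` yields the factory; with none, it's empty, which maps naturally onto the "unsupported platform" error case discussed above.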
In my dev environment, I actually have both toga_core and toga_dummy. That's because I installed my dev environment following these instructions: Where, at some point, it reads:
toga_dummy is needed in order to run tests and coverage according to the instructions. Using entrypoints (and having only one backend installed) will force the use of tox for running the tests. I ran tox for the first time today!
The entrypoints would be something like this (keys taken from briefcase's setup.cfg):
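The example itself didn't survive the page scrape; a hypothetical setup.cfg fragment in the style being discussed (the group name and keys are guesses modeled on Briefcase, not Toga's actual metadata) would be:

```ini
[options.entry_points]
toga.backends =
    gtk = toga_gtk
```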
Maybe FYI: there is no uniqueness in this metadata. If two packages register an entry point under the same name, you'll end up with both entries.
That's why I said there was likely a need to work out some sort of disambiguation behavior.
The current development instructions work with the current development code. The development instructions may need to be updated if we change how we register backends.
Ideally, it wouldn't force the use of tox, but if that turns out to be a requirement, I'm not going to lose a whole lot of sleep over it.
If you modify L20 of the tox.ini to read …
FWIW, there are some other tox-config-related fixes that are required; for example, we're not currently using tox for the tests in the individual platform backends, and we probably should be.
This is the "merged" list of entry points:
The keys were taken from briefcase, but:
I also removed most references to …
I guess I need to up my environment variables game. Here I was constantly using … then I found out that in bash/zsh I can set an environment variable just for the current command. I'm happy, as a developer, doing this:
And I think it's the easiest suggestion to add to the docs.
I'm concerned about the winforms backend: it keeps timing out, and I don't know if that's my fault.
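The per-command form being referred to looks like this; the TOGA_PLATFORM variable name matches this revision of the PR (it's later renamed to TOGA_BACKEND):

```shell
# Set TOGA_PLATFORM only for this one command (bash/zsh); the variable
# does not persist in the surrounding shell afterwards.
TOGA_PLATFORM=dummy sh -c 'printf "%s\n" "$TOGA_PLATFORM"'   # prints "dummy"
# The real invocation in this thread would be along the lines of:
#   TOGA_PLATFORM=dummy pytest
```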
The problem was platform selection - the tests were running on winforms: the test suite was trying to run the core test suite, so the core tests were running with the winforms backend. One of the tests called …
I've modified the CI configuration to be explicit about the platform selection, and turned up verbosity when running the tests so it's a little more obvious in the logs when a single test is hanging.
The implementation of TOGA_PLATFORM that you've provided addresses the immediate issue - being able to run the core test suite when you've got a platform-specific backend installed - but I'm not sure it's the right approach for the bigger problem.
What you've effectively provided is a way to override sys.platform so you can set the platform to 'dummy'. However, it won't let me resolve the case where I have 2 candidate backends for the same platform (e.g., a Qt backend and a GTK backend installed on Linux; or a Cocoa and a SwiftUI backend on macOS). I'd suggest a more direct TOGA_BACKEND=toga_dummy will be more useful in the long run.
The development documentation also needs to be updated; it still advises that you can install dummy and cocoa/gtk/winforms and run pytest.
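The TOGA_BACKEND approach being suggested could be sketched as follows. The defaults mapping and function name here are assumptions for illustration; the real logic lives in toga.platform:

```python
import sys

# Hypothetical platform-to-backend defaults; the real mapping is in toga.
_DEFAULT_BACKENDS = {
    "linux": "toga_gtk",
    "darwin": "toga_cocoa",
    "win32": "toga_winforms",
}


def choose_backend(environ, platform=None):
    """Return the backend module name, honouring a TOGA_BACKEND override."""
    override = environ.get("TOGA_BACKEND")
    if override:
        # An explicit choice wins, so two candidate backends for the same
        # platform (e.g. Qt and GTK on Linux) can coexist in one env.
        return override
    platform = platform if platform is not None else sys.platform
    try:
        return _DEFAULT_BACKENDS[platform]
    except KeyError:
        raise RuntimeError(f"Platform {platform!r} has no default Toga backend")
```

Tests would then run with `TOGA_BACKEND=toga_dummy` set, without touching sys.platform at all.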
…f platforms Signed-off-by: Bruno Rino <bruno.rino@gmail.com>
I've changed the implementation to use TOGA_BACKEND. There are a couple of changes I am not sure of:
# Conflicts: # .github/workflows/ci.yml # src/android/setup.cfg # src/cocoa/setup.cfg # src/core/setup.cfg # src/core/src/toga/platform.py # src/dummy/setup.cfg # src/gtk/setup.cfg # src/iOS/setup.cfg # src/web/toga_web/app.py
# Conflicts: # docs/how-to/contribute.rst # src/core/src/toga/fonts.py # src/core/src/toga/platform.py # src/core/tests/command/test_command.py # src/core/tests/command/test_commands_set.py # src/core/tests/widgets/test_selection.py
Codecov Report
I've made a couple of small tweaks (including addressing the points you've raised in your most recent comment); I think this is now ready to go. Thanks for the excellent work on this, and apologies for the delay in getting it merged!
Signed-off-by: Bruno Rino bruno.rino@gmail.com
When creating a widget in a TestCase, the factory always needs to be specified, as in:
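The elided example presumably showed the repetitive factory argument; since toga itself isn't importable here, a tiny stand-in widget illustrates the pattern (the real call would be something like toga.Label("Hello", factory=toga_dummy.factory)):

```python
# Stand-in widget illustrating the pattern; names are hypothetical.
class Label:
    def __init__(self, text, factory=None):
        if factory is None:
            # Mirrors the situation in tests: no usable default backend,
            # so every construction must pass the dummy factory explicitly.
            raise ValueError("no default factory: it must be passed explicitly")
        self.text = text
        self.factory = factory


dummy_factory = object()  # stands in for toga_dummy.factory
label = Label("Hello", factory=dummy_factory)
```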
My suggestion is to add dummy to the list of known platforms, and manually set the platform in the TestCase subclass. This would avoid the repetitive factory=toga_dummy.factory. I tested this against a single widget, and the test passed as intended.
PR Checklist: