
Failing test case #465

Closed
kbabioch opened this issue Jan 17, 2019 · 24 comments

@kbabioch
Contributor

Describe the bug
While trying to build the package in our build service (https://build.opensuse.org/package/show/devel:languages:python/python-pyfakefs), I've realized that the build fails for some architectures and distributions due to a failing unit test:

[   28s] ======================================================================
[   28s] FAIL: test_import_path_from_pathlib (pyfakefs.tests.fake_filesystem_unittest_test.TestPatchingImports)
[   28s] ----------------------------------------------------------------------
[   28s] Traceback (most recent call last):
[   28s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6/pyfakefs/tests/fake_filesystem_unittest_test.py", line 162, in test_import_path_from_pathlib
[   28s]     pyfakefs.tests.import_as_example.check_if_exists3(file_path))
[   28s] AssertionError: False is not true
[   28s] 
[   28s] ----------------------------------------------------------------------
[   28s] Ran 1794 tests in 8.708s

This only happens with the latest release, which, according to #462, introduced some changes regarding pathlib.

You can find the full build log here

Let me know what other information you need in order to get this debugged.
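For context, the failing test checks that pyfakefs patches a Path class that was imported in a separate module. Judging by the test names, the pattern under test is roughly the following (a minimal sketch, not the verbatim pyfakefs code):

    # import_as_example.py (sketch): Path is imported at module level, so
    # pyfakefs has to patch the reference held by this module as well.
    from pathlib import Path

    def check_if_exists3(file_path):
        return Path(file_path).exists()

The assertion fails when the Path reference inside that module is not replaced by the fake version, so Path.exists() consults the real file system.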

How To Reproduce
See above (OBS build service)

Your environment
See above (OBS build service)

@mrbean-bremen
Member

Hm, currently I can only see one failing test (openSUSE_Tumbleweed, x86_64), but I can see a few more failures in the build histories - they have probably been rebuilt after you removed the tests.
I don't know how to access the older failing builds, so currently I can only see the one failing build log, and I'm not even sure if the failure is in Python 2 or Python 3 (I can see pyfakefs is installed for both 2.7 and 3.6, but I don't see which one is used for running the tests).
So, while I have no real idea yet, it would be helpful to know at least which Python versions the test failed for (hopefully it is always the same).

@mrbean-bremen
Member

And thanks for the reports - as we only test on 3 specific systems (using Travis CI and AppVeyor), this is really helpful!

@kbabioch
Contributor Author

Hm, currently I can only see one failing test (openSUSE_Tumbleweed, x86_64), but I can see a few more failures in the build histories - they have probably been rebuilt after you removed the tests.

Yeah, there have been different causes for that, and overall the test suite seems to be somewhat flaky. In the original build log I attached to this issue, it was the x86_64 architecture that failed, but after a rebuild was triggered, it seems to work now.

However, now i586 is problematic, with the same test failing. Here is the build log for that, with the interesting lines being:

[   70s] FAIL: test_import_path_from_pathlib (pyfakefs.tests.fake_filesystem_unittest_test.TestPatchingImports)
[   70s] ----------------------------------------------------------------------
[   70s] Traceback (most recent call last):
[   70s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6/pyfakefs/tests/fake_filesystem_unittest_test.py", line 162, in test_import_path_from_pathlib
[   70s]     pyfakefs.tests.import_as_example.check_if_exists3(file_path))
[   70s] AssertionError: False is not true

and I'm not even sure if the failure is in Python 2 or Python 3 (I can see pyfakefs is installed for both 2.7 and 3.6, but I don't see which one is used for running the tests).

That's right, within this spec file the package is built for Python 2 as well as for Python 3. The unit test error happens during the testing of Python 3, which is invoked this way:

[   55s] + /usr/bin/python3 setup.py test

In the case of Python 2, the tests are skipped anyway:

[   54s] + /usr/bin/python2 setup.py test
[   55s] running test
[   55s] running egg_info
[   55s] writing pyfakefs.egg-info/PKG-INFO
[   55s] writing top-level names to pyfakefs.egg-info/top_level.txt
[   55s] writing dependency_links to pyfakefs.egg-info/dependency_links.txt
[   55s] writing entry points to pyfakefs.egg-info/entry_points.txt
[   55s] reading manifest file 'pyfakefs.egg-info/SOURCES.txt'
[   55s] reading manifest template 'MANIFEST.in'
[   55s] writing manifest file 'pyfakefs.egg-info/SOURCES.txt'
[   55s] running build_ext
[   55s] 
[   55s] ----------------------------------------------------------------------
[   55s] Ran 0 tests in 0.000s

Not sure why the tests are skipped; I need to investigate this further.

So, while I have no real idea yet, it would be helpful to know at least which Python versions the test failed for (hopefully it is always the same).

From the build log:

[   13s] [110/154] cumulate python-base-2.7.15-4.1

And

[   13s] [114/154] cumulate python3-base-3.6.5-3.3

Is there anything else I can check that would help you narrow this down?

@mrbean-bremen
Member

mrbean-bremen commented Jan 18, 2019

If the failure happens in Python 3, and pathlib2 is installed in that environment, I have an idea - maybe the import order is wrong in that test. I'm not sure how to test this, though, as I don't see the problem in our tests. I guess your build process works only with PyPI packages, so to test this we would have to make a new release - or are you able to test an unreleased version?
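The kind of fallback meant here looks roughly like this (a sketch of the common pathlib2-over-pathlib pattern, not the actual pyfakefs code):

    # If the code under test picks its pathlib implementation like this,
    # but the patcher replaces the other module (or patches in a different
    # order), the imported Path class and the patched one will not match.
    try:
        import pathlib2 as pathlib  # the backport, preferred when installed
    except ImportError:
        import pathlib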

@kbabioch
Contributor Author

I guess your build process works only with PyPI packages, so to test this we would have to make a new release - or are you able to test an unreleased version?

No, we are also able to test unreleased versions (from basically any source archive, etc.) and/or apply separate patches on top of the sources. So if you have a git commit / branch, just let me know - I will test it and get back to you.

However, I'm still not sure why the test suite is skipped for Python 2 altogether:

[   54s] + /usr/bin/python2 setup.py test
[   55s] running test
[   55s] running egg_info
[   55s] writing pyfakefs.egg-info/PKG-INFO
[   55s] writing top-level names to pyfakefs.egg-info/top_level.txt
[   55s] writing dependency_links to pyfakefs.egg-info/dependency_links.txt
[   55s] writing entry points to pyfakefs.egg-info/entry_points.txt
[   55s] reading manifest file 'pyfakefs.egg-info/SOURCES.txt'
[   55s] reading manifest template 'MANIFEST.in'
[   55s] writing manifest file 'pyfakefs.egg-info/SOURCES.txt'
[   55s] running build_ext
[   55s] 
[   55s] ----------------------------------------------------------------------
[   55s] Ran 0 tests in 0.000s

Any idea what is going on here? I don't think I'm doing anything special with regard to Python 2.

mrbean-bremen added a commit to mrbean-bremen/pyfakefs that referenced this issue Jan 18, 2019
@mrbean-bremen
Member

Ok, bad idea - this breaks this test consistently.

mrbean-bremen added a commit that referenced this issue Jan 18, 2019
- ensures running tests using setup.py under Python 2
- see #465
@mrbean-bremen
Member

mrbean-bremen commented Jan 18, 2019

I'm still not sure what the problem is. I hopefully fixed the problem with tests not running in Python 2: the test suite was not defined in setup.py, but Python 3 seems to do the test discovery correctly anyway. I actually hadn't thought about running tests using setup.py, so this had never been tested. We run tests via scripts in Travis CI / AppVeyor.
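For reference, declaring the suite explicitly would look something like this (a sketch; the exact dotted path is an assumption):

    # setup.py (sketch): with an explicit test_suite, "setup.py test" runs
    # the same tests under Python 2 and Python 3 instead of relying on
    # Python 3's automatic discovery.
    from setuptools import setup

    setup(
        name='pyfakefs',
        # ... other metadata unchanged ...
        test_suite='pyfakefs.tests',
    )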
As for the failing test, I have no solution yet. It seems to have to do with pathlib2, which is installed on the failing system. It would be interesting to see if it also fails with Python 2 on the same system (which also uses pathlib2).

If the problem persists in Python 2, then there is something wrong with patching pathlib2 on these systems, though I have no idea why. We could disable testing pathlib2 when running tests from setup.py, but that would also mean that patching pathlib2 will probably not work on these systems.
OTOH, if it fails only with Python >= 3.6, we could just disable testing it for these versions, as it is not needed there and will not be used in real software (in the Travis CI tests, we test both variants - running the tests using both pathlib and pathlib2).
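Version-gating those tests could look something like this (a minimal sketch; the class name is hypothetical):

    import sys
    import unittest

    # Under Python >= 3.6 the stdlib pathlib is patched and used, so the
    # pathlib2 tests add nothing there and can be skipped.
    @unittest.skipIf(sys.version_info >= (3, 6),
                     'pathlib2 is only relevant on older Python versions')
    class FakePathlib2Test(unittest.TestCase):  # hypothetical test class
        def test_exists(self):
            pass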

@mrbean-bremen
Member

@kbabioch - can you please check with master? I want to see what the result of the Python 2 tests is to decide where to go.

@kbabioch
Contributor Author

I've created a new project within our build service. I'm trying to build master (4e93a2b) there, which fails in the following way:

[   77s] test_value (pyfakefs.tests.fake_filesystem_unittest_test.TestTempFileReload) ... ok
[   77s] 
[   77s] ======================================================================
[   77s] FAIL: test_append_mode_tell_linux_windows (pyfakefs.tests.fake_os_test.RealOsModuleTest)
[   77s] ----------------------------------------------------------------------
[   77s] Traceback (most recent call last):
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/fake_os_test.py", line 1221, in test_append_mode_tell_linux_windows
[   77s]     self.check_append_mode_tell_after_truncate(tell_result)
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/fake_os_test.py", line 1213, in check_append_mode_tell_after_truncate
[   77s]     self.assertEqual(tell_result, f1.tell())
[   77s] AssertionError: 5 != 7
[   77s] 
[   77s] ======================================================================
[   77s] FAIL: test_fdatasync_pass (pyfakefs.tests.fake_os_test.RealOsModuleTest)
[   77s] ----------------------------------------------------------------------
[   77s] Traceback (most recent call last):
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/fake_os_test.py", line 1765, in test_fdatasync_pass
[   77s]     self.os.fdatasync, test_fd + 10)
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/test_utils.py", line 73, in assert_raises_os_error
[   77s]     self.assertEqual(subtype, exc.errno)
[   77s] AssertionError: 9 != 22
[   77s] 
[   77s] ======================================================================
[   77s] FAIL: test_fsync_pass_posix (pyfakefs.tests.fake_os_test.RealOsModuleTest)
[   77s] ----------------------------------------------------------------------
[   77s] Traceback (most recent call last):
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/fake_os_test.py", line 1737, in test_fsync_pass_posix
[   77s]     self.os.fsync, test_fd + 10)
[   77s]   File "/home/abuild/rpmbuild/BUILD/pyfakefs-3.5.6git.1547849785.4e93a2b/pyfakefs/tests/test_utils.py", line 73, in assert_raises_os_error
[   77s]     self.assertEqual(subtype, exc.errno)
[   77s] AssertionError: 9 != 22
[   77s] 
[   77s] ----------------------------------------------------------------------
[   77s] Ran 1793 tests in 10.772s
[   77s] 
[   77s] FAILED (failures=3, skipped=469, expected failures=2)
[   77s] Test failed: <unittest.runner.TextTestResult run=1793 errors=0 failures=3>
[   77s] error: Test failed: <unittest.runner.TextTestResult run=1793 errors=0 failures=3>
[   77s] error: Bad exit status from /var/tmp/rpm-tmp.J4znmQ (%check)

I'm attaching the full build log here. You can find it online here, but it might become unavailable in the future.

Is this output more helpful? Is there something else I can provide you with?

@mrbean-bremen
Member

Thanks for that! I downloaded the logs to have a closer look later. The failures seen in the log above are tests of the real file system - they are just there to check whether the real fs behaves as we expect (and emulate). On the one hand, we could just disable these tests when running from setup.py; on the other hand, they are interesting for checking different system behavior. These are small differences (a different subtype of OSError raised in two cases, slightly different behavior in file buffer handling in the other one) and can safely be ignored for the time being.
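To illustrate the errno difference: the two fsync/fdatasync failures come down to which OSError an invalid file descriptor produces - the tests expect EBADF (9), while the failing build got EINVAL (22). A minimal reproduction on the real OS:

    import errno
    import os

    # On most Linux systems this raises OSError with errno.EBADF (9); the
    # failing builds apparently raise errno.EINVAL (22) instead.
    try:
        os.fdatasync(99999)  # an (almost certainly) invalid file descriptor
    except OSError as exc:
        print(exc.errno, errno.EBADF, errno.EINVAL)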
There is at least one other problem with pathlib not being loaded in some other tests - I have yet to understand those.

I will not have much time before the weekend, so it may take a few days before I come back to this. I'm not sure what the best solution for the packaging problem is (the easiest would certainly be to disable all tests :), or what your time constraints are...

@kbabioch
Contributor Author

Thanks for looking into this and all of your support.

I will not have much time before the weekend, so it may take a few days before I come back to this. I'm not sure what the best solution for the packaging problem is (the easiest would certainly be to disable all tests :),

I would prefer not to disable the tests. They are the only way to have some confidence in our packages and not introduce regressions with version updates, etc. You should also be interested in having reliable tests that can be run on all platforms.

and what your time constraints are...

I really don't have any time constraints. I'm just packaging this for (open)SUSE and making it available for others to use. I appreciate all of your help, but I really don't have any time constraints here ...

@mrbean-bremen
Member

I would prefer not to disable the tests.

Same here - that wasn't meant as a real solution. Though I'm thinking about disabling the real OS tests in this case, as they don't test the framework, and I don't see us supporting small behavioral differences between systems (other than the ones we already test in CI), at least not until someone really needs it.
As for the pathlib problems - I haven't looked into them yet (weekend almost gone...), but I certainly will.

mrbean-bremen added a commit that referenced this issue Feb 5, 2019
- tests are enabled by an environment variable, set in CI tests
- see #465
@mrbean-bremen
Member

@kbabioch - can you please have another run with master? I'm not sure that the pathlib problem is fixed, but the last commit should change the behavior. Also, I switched off the real fs tests by default.
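Switching the real fs tests off by default, as the commit above describes, amounts to gating like this (a sketch; the environment variable name is an assumption, the class name is taken from the logs):

    import os
    import unittest

    # Real-filesystem tests only run when explicitly requested, e.g. by
    # exporting the variable in the CI configuration.
    @unittest.skipIf(not os.environ.get('TEST_REAL_FILESYSTEM'),  # hypothetical variable name
                     'real file system tests are disabled by default')
    class RealOsModuleTest(unittest.TestCase):
        def test_example(self):
            pass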

@kbabioch
Contributor Author

kbabioch commented Feb 8, 2019

Here are the latest results from my project within our build service. I'm trying to build master (332ff71) there, which now builds successfully. You'll find the build log here: https://build.opensuse.org/build/home:kbabioch:branches:devel:languages:python/openSUSE_Tumbleweed/i586/python-pyfakefs/_log

For archiving purposes I'm also uploading it: build.log

Let me know if there is something else you need.

@mrbean-bremen
Member

Thanks - this looks OK now! What about the other systems that had been failing? As far as I remember, there were failures on other architectures as well...
Anyway, as soon as you give the green light, I will make yet another release. That pathlib problem was a regression that needs to be fixed in a release.

@kbabioch
Contributor Author

kbabioch commented Feb 8, 2019

As far as I can tell it works on all architectures now (s390x, ppc, aarch64, etc.), so I would suggest tagging a new release. Thank you very much for taking care of this.

@mrbean-bremen
Member

Ah, thanks - I will do this then!

@mrbean-bremen
Member

Ok, done.

mrbean-bremen added a commit to mrbean-bremen/pyfakefs that referenced this issue Feb 16, 2019
- exclude pathlib tests if pathlib is not available
- moved pytest tests out of tests to not be discovered
- see pytest-dev#465
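Excluding those tests when no pathlib implementation is importable could look like this (a sketch; the class name is hypothetical):

    import unittest

    # Prefer the stdlib pathlib, fall back to the pathlib2 backport, and
    # skip the tests entirely if neither can be imported.
    try:
        import pathlib
    except ImportError:
        try:
            import pathlib2 as pathlib
        except ImportError:
            pathlib = None

    @unittest.skipIf(pathlib is None, 'pathlib is not available')
    class FakePathlibUsageTest(unittest.TestCase):  # hypothetical name
        def test_example(self):
            pass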
@mrbean-bremen
Member

I have seen some tests still failing - all have to do with pathlib under Python 2.
The errors in both Leap builds look as if pathlib2 was not detected, even though it was installed - this may be fixed with current master.
The other failing builds (ARM and PowerPC) I don't really understand - they seem to detect pathlib2, but then two tests are run (and fail) that should be excluded under pathlib2. Not sure if this is fixed in master...

@mrbean-bremen
Member

It would be nice if you could trigger yet another master build...

@mrbean-bremen
Member

@kbabioch - I'm trying to follow your builds, but I don't really understand them. Some of the builds sometimes fail and sometimes succeed (like the Leap builds) - are these the same builds, or are there any changes in the configuration?

@mrbean-bremen
Member

@kbabioch - looks like all builds are passing now. Is that correct, can we close the issue?

@kbabioch
Contributor Author

kbabioch commented Apr 2, 2019

@mrbean-bremen Yup, for now all is fine and dandy as far as I can tell. Thank you very much for your time and effort, I will be back once there are issues again ;-).

@kbabioch kbabioch closed this as completed Apr 2, 2019
@mrbean-bremen
Member

mrbean-bremen commented Apr 2, 2019

That's good, thanks :) I'm still unsure about the other issue (rpmlint warning), which still appears in one build - I have no idea how that is possible...
