Wheels for musl Linux #6245
Force-pushed from 7276af8 to 610bc62
Here's a run of the test suite on Alpine as it is: alpine.log
I take it this would be a reduced workflow (compared to the full-blown CI) that avoids installing everything from …
Definitely we would have to skip …
@rokm Can you remember what these mean? I'm sure I've seen them before but I can't remember where.
These are harmless. They are caused by … I've checked, and on my Fedora system the extensions are also not linked against the Python shared library - so perhaps the …
We might want to capture the …
Ahh, it's …
Force-pushed from 28feffe to fb677ac
This should make life …
Hmm, let's force the CI to …
Force-pushed from f968737 to 02b0ce8
Provided the rest of CI doesn't have any hidden surprises in store, this is ready for review. CC @Legorooj so that you're aware of what's going on. To summarise the important bits:
Looking good! However, I'm not comfortable autocompiling the bootloaders in the wheel-builder pipeline like that. No objections to building them in CI, just not like that.
Can we drop it for this PR, and I'll work up something different afterwards that works for building all the bootloaders?
My reason for building only those bootloaders here was that those are the ones we don't check into the repository, so building them here means we don't have to juggle those bootloaders between workflows. But sure, I'll drop it for now if you have other plans.
… cross compiler Dockerfile.
As of 7bca923, `-std=gnu90` (the default for older gcc versions) is sufficient.
Force-pushed from 02b0ce8 to 5f81e4f
Regarding the mystery of 40e81dc: does it help if you specify the logger we are supposed to get the message from? I.e., change the `caplog.at_level()` call at tests/unit/test_hookutils.py line 122 (as of 2e00f45) into `with caplog.at_level(logging.DEBUG, logger='PyInstaller.utils.hooks'):`
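The effect of scoping a level change to one logger can be sketched outside pytest with plain `logging` (the second logger name and the list-collecting handler here are illustrative, not PyInstaller code):

```python
import logging

# Sketch of what caplog.at_level(logging.DEBUG, logger='PyInstaller.utils.hooks')
# achieves: only the named logger has its level lowered, so debug records from
# unrelated loggers don't leak into the capture.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

target = logging.getLogger("PyInstaller.utils.hooks")
other = logging.getLogger("PyInstaller.building")  # hypothetical sibling logger

target.addHandler(ListHandler(level=logging.DEBUG))
target.setLevel(logging.DEBUG)    # lowered for this logger only
other.setLevel(logging.WARNING)   # everything else stays quiet

target.debug("hook debug message")       # captured
other.debug("unrelated debug message")   # filtered out by the WARNING level

print(records)  # → ['hook debug message']
```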
This will allow us to keep musl and glibc Linux bootloaders separate in the repository without them clobbering each other.
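As a sketch, the naming convention keeps the two libc variants side by side (the glibc directory name is assumed from the musl one mentioned in this PR, not verified against the repository):

```
PyInstaller/bootloader/
├── Linux-64bit-intel/         # glibc bootloaders
└── Linux-64bit-intel-musl/    # musl bootloaders
```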
Yes it does. And it wasn't specific to musl after all - I can reproduce the error on normal Linux. So mystery solved...
These wheels can be uploaded to PyPI and run on Alpine or OpenWRT.
Musl does not use an ldconfig cache at all. The ldconfig executable, strictly speaking, is not supposed to exist on a traditional musl system, but a dummy executable which complains if it receives any arguments may be put in its place for compatibility. Running this dummy should be avoided because it writes a visible error message to stderr. Update the corresponding test to reflect that LDCONFIG_CACHE will always be empty on musl.
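A minimal sketch of that behaviour (the function name mirrors the one discussed in this PR, but the `force_musl` flag is an illustration-only stand-in for real libc detection, and the parsing is simplified):

```python
import shutil
import subprocess

def load_ldconfig_cache(force_musl=False):
    """Hypothetical sketch: return {} on musl instead of running ldconfig.

    force_musl stands in for real libc detection (e.g. inspecting the
    dynamic loader's name); it is not PyInstaller's actual implementation.
    """
    if force_musl or shutil.which("ldconfig") is None:
        # On musl, ldconfig is either absent or a dummy that writes an
        # error to stderr when given arguments - so never invoke it.
        return {}
    out = subprocess.run(["ldconfig", "-p"], capture_output=True, text=True)
    cache = {}
    for line in out.stdout.splitlines()[1:]:
        # glibc lines look like: "\tlibc.so.6 (libc6,x86-64) => /lib/.../libc.so.6"
        if "=>" in line:
            name, _, path = line.partition("=>")
            cache[name.split()[0]] = path.strip()
    return cache

print(load_ldconfig_cache(force_musl=True))  # → {}
```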
Namely, findLibrary("libc") would pick any library matching "libc*", which may be libc.so or some other library entirely, such as libcrypto.so.
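The pitfall can be demonstrated with a simple glob match (the candidate library names below are illustrative):

```python
import fnmatch
import re

candidates = ["libcrypto.so.1.1", "libcurl.so.4", "libc.so.6"]

# A bare "libc*" prefix match catches unrelated libraries too:
print(fnmatch.filter(candidates, "libc*"))
# → ['libcrypto.so.1.1', 'libcurl.so.4', 'libc.so.6']

# Anchoring on the full soname pattern picks out only libc itself:
pattern = re.compile(r"^libc\.so(\.\d+)*$")
print([name for name in candidates if pattern.match(name)])
# → ['libc.so.6']
```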
... because said function relies on ldconfig, which on musl either doesn't exist or does nothing. Note that find_library() does fall back to using gcc to find libraries if gcc is installed, but gcc should not be a test-time dependency.
Tcl and musl don't appear to get on well. Both `python -m tkinter` and `git gui` (which uses Tcl) segfault. Both work fine in Ubuntu containers, and GTK applications work fine in Alpine containers, so the issue is specific to musl + Tcl.
The failure can be reproduced only by running at least one functional test which invokes PyInstaller's main() before the offending logger test: `pytest tests/functional/test_basic.py::test_absolute_python_path[onedir] tests/unit/test_hookutils.py`
I take it that we're all agreed that there's not much point testing musl on all Python versions rather than just 3.9? I think the chance of finding a quirk specific to both the libc implementation and the Python version that derails PyInstaller is too low to warrant clogging up CI with more jobs.
Yeah, I agree.
This is the setting up for building PyPI-friendly wheels for musl. The musl bootloaders go into the `PyInstaller/bootloader/Linux-64bit-intel-musl` (assuming x86_64) folder.

Unfortunately, the build system for producing wheels is getting more and more complicated. Because `dockcross` doesn't (yet) support the `musllinux` family of build images, we have to use the official PyPA non-cross variants, which require docker buildx + qemu to virtualise aarch64 inside x86_64. I'll be quite surprised if WSL2 can do this, so we may need to shunt this to CI or build locally on a real Linux machine. In any case, it adds the following two commands to be run before `python setup.py bdist_wheels`: `python setup.py wheel_linux_64bit_intel_musl` and `python setup.py wheel_linux_64bit_arm_musl`.

I'd also like everyone's thoughts on whether we should add the testing-on-Alpine dockerfile to CI. Without installing everything from `tests/requirements-libraries.txt`, it only takes about 20 minutes to run the entire test suite without parallelising, and about 10 with, so I'd vote yes. There are a few failures which all boil down to the same cause - namely that `ldconfig -p` doesn't work with musl's variant of `ldconfig`, so `ctypes.util.find_library()` yields nothing (although it has a fallback to using gcc to resolve libraries if that's installed) and `PyInstaller.depend.utils.load_ldconfig_cache()` also yields an empty mapping. So we'd have to mop those up before making musl testing routine.
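The `find_library()` dependence on ldconfig is easy to observe directly. Results vary by platform (a soname string on glibc Linux with ldconfig, None on musl without gcc), so no expected output is shown:

```python
import ctypes.util

# ctypes.util.find_library consults ldconfig on glibc Linux and can fall
# back to asking the compiler if one is installed; on musl without gcc it
# simply returns None. A name that matches no library returns None everywhere.
for name in ("m", "c", "definitely-not-a-real-library"):
    result = ctypes.util.find_library(name)
    print(f"find_library({name!r}) -> {result!r}")
```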