0.17.0: Segmentation fault after modifying RPATH #446
I ran into what I think is the same problem, also with a Python 3.10 executable. PR #447 seems to fix it for me.
Hello all, we are having the same issue with >=v0.17.0 (i.e. including the latest release, 0.17.2).

We have a similar use case: as part of the NEURON project (a simulator used in the computational neuroscience community), we distribute Python wheels. These wheels contain standalone binary files that are updated by patchelf (via auditwheel), for reasons similar to those mentioned above. While updating our wheel-building pipeline to the latest patchelf release, we saw:

[root@0998f73e5778]# ./modlunit
Segmentation fault (core dumped)
[root@0998f73e5778]# ldd ./modlunit
/usr/bin/ldd: line 116: 15156 Segmentation fault (core dumped) LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION=$verify_out LD_VERBOSE= "$@"

This issue doesn't appear if we are using older releases like 0.16.1; I also verified this with a locally built patchelf.

As a reproducer, you can run the following script (thank you, @bastimeyer!):

#!/usr/bin/env bash
IMAGES=(
# 2022-11-14 - patchelf 0.16.1 : using quay.io/pypa/manylinux2014_x86_64@sha256:005826a6fa94c97bd31fccf637a0f10621304da447ca2ab3963c13991dffa013
neuronsimulator/reprod_patchelf_0160
# 2022-11-19 - patchelf 0.17.0 using quay.io/pypa/manylinux2014_x86_64@sha256:383c6016156c94d7dbd10696c15f2444288b99a25927239b7b024e1cc6ca6a81
neuronsimulator/reprod_patchelf_0170
)
SCRIPT=$(cat <<'EOF'
patchelf --version
patchelf --print-rpath /tmp/modlunit
patchelf --remove-rpath /tmp/modlunit
# just add some additional rpaths
patchelf --force-rpath --set-rpath \$ORIGIN/123456:\$ORIGIN/11_22_33_44_555 /tmp/modlunit
patchelf --print-rpath /tmp/modlunit
ldd /tmp/modlunit
EOF
)
for image in "${IMAGES[@]}"; do
echo "Running ${image}"
docker run -i --rm "${image}" <<< "${SCRIPT}"
echo $'\n\n\n'
done

and it produces output like this:

./run.sh
Running neuronsimulator/reprod_patchelf_0160
patchelf 0.16.1
$ORIGIN/../lib:/opt/nvidia/hpc_sdk/Linux_x86_64/22.11/compilers/lib:/opt/rh/devtoolset-11/root/usr/lib/gcc/x86_64-redhat-linux/11/../../../../lib64
$ORIGIN/123456:$ORIGIN/11_22_33_44_555
linux-vdso.so.1 => (0x00007ffc0cad3000)
libnvhpcatm.so => not found
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fa75aa0a000)
libnvomp.so => not found
libdl.so.2 => /lib64/libdl.so.2 (0x00007fa75a806000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fa75a5ea000)
libnvcpumath-avx2.so => not found
libnvc.so => not found
libc.so.6 => /lib64/libc.so.6 (0x00007fa75a21c000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fa75a006000)
libm.so.6 => /lib64/libm.so.6 (0x00007fa759d04000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa75ad12000)
Running neuronsimulator/reprod_patchelf_0170
patchelf 0.17.0
$ORIGIN/../lib:/opt/nvidia/hpc_sdk/Linux_x86_64/22.11/compilers/lib:/opt/rh/devtoolset-11/root/usr/lib/gcc/x86_64-redhat-linux/11/../../../../lib64
$ORIGIN/123456:$ORIGIN/11_22_33_44_555
/usr/bin/ldd: line 116: 16 Segmentation fault (core dumped) LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION=$verify_out LD_VERBOSE= "$@"

The docker images are created from a simple Dockerfile such as:

# 0.16.1
#FROM quay.io/pypa/manylinux2014_x86_64@sha256:005826a6fa94c97bd31fccf637a0f10621304da447ca2ab3963c13991dffa013
# 0.17.0
FROM quay.io/pypa/manylinux2014_x86_64@sha256:383c6016156c94d7dbd10696c15f2444288b99a25927239b7b024e1cc6ca6a81
COPY modlunit /tmp/

(i.e. on top of the standard manylinux pypa image, I have included the modlunit binary.)

I hope this will help to find the root cause. If anything else is needed to debug the issue, I will be more than happy to help! Thank you!
Looking at the program header table of the crashing binary, there are two PT_LOAD entries whose start/end addresses do not respect alignment, which is a bit weird. In fact, there is an overlap between them with mixed access rights.

That gives the answer as to what the kernel decided to do: it chose the safest access rights for the clashing addresses. This also explains why commit 42394e8 introduced the issue.

The original working binary already had an unaligned segment entry, but when rounded up and down it didn't clash with any other segment. So apparently patchelf is doing its thing by reordering/inserting/moving segments, but in doing so it's creating a segment clash.
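The clash described above can be checked mechanically: round each PT_LOAD segment's start down and its end up to a page boundary (as mmap does), then look for intersecting spans. A minimal sketch, not part of patchelf; the page size and segment values below are illustrative assumptions:

```python
# Sketch: detect PT_LOAD segments that overlap once the kernel rounds
# their addresses to page boundaries. All values here are hypothetical.
PAGE = 0x1000  # assumed page size (4 KiB on x86-64)

def page_span(vaddr, memsz, page=PAGE):
    """Round a segment down/up to whole pages, as mmap will."""
    start = vaddr & ~(page - 1)                      # round start down
    end = (vaddr + memsz + page - 1) & ~(page - 1)   # round end up
    return start, end

def overlapping_loads(loads):
    """Return pairs of PT_LOAD entries whose page-rounded spans overlap."""
    spans = [(page_span(vaddr, memsz), flags) for vaddr, memsz, flags in loads]
    clashes = []
    for i in range(len(spans)):
        for j in range(i + 1, len(spans)):
            (s1, e1), f1 = spans[i]
            (s2, e2), f2 = spans[j]
            if s1 < e2 and s2 < e1:  # intervals intersect after rounding
                clashes.append((i, j, f1, f2))
    return clashes

# Two hypothetical loads: an executable one ending mid-page and a
# writable one starting in that same page. They only clash after rounding.
loads = [(0x400000, 0x0800, "r-x"), (0x400900, 0x0400, "rw-")]
print(overlapping_loads(loads))  # -> [(0, 1, 'r-x', 'rw-')]
```

When two such spans land in the same page with different flags, the kernel has to pick one protection for that page, which matches the "safest access rights" behaviour described above.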
With the PR mentioned above, the segfault no longer occurs.
I hit a similar issue and tried the latest version (0.18.0), but it didn't help. After I downgraded patchelf to 0.16.1, everything worked well. Platform: CentOS 6 / Ubuntu 22.04
Freetz downgraded to v15: Freetz-NG/freetz-ng@eb0f8b6, see https://github.com/Freetz-NG/freetz-ng/issues/740
:Release Notes:
mke2fs.real, mkfs.ext2.real, mkfs.ext3.real, mkfs.ext4.real are identical binaries with multiple hardlinks, and we end up calling patchelf-uninative 4 times even when the interpreter is already set correctly from the build.

:Detailed Notes:
To avoid corrupted binaries created on Ubuntu 18.04, avoid calling patchelf-uninative multiple times, and in this case don't call it at all. It might be related to NixOS/patchelf#492 or NixOS/patchelf#446, but the latter was already included in the patchelf-0.17.2 used in uninative-3.9. This was submitted upstream in https://lists.openembedded.org/g/openembedded-core/message/183314 but hasn't been merged yet (so it cannot be in a meta-webos-backports-* layer), and it might take a while until it's backported to kirkstone.

:Testing Performed:
Only build tested.

:QA Notes:
No change to image.

:Issues Addressed:
[WRP-19053] CCC: Various build fixes
[WRP-17893] mkfs.ext4 segfaults with uninative 3.10 and newer
[WRP-6209] Update jenkins slaves to use Ubuntu 20.04 or 22.04

Change-Id: Ied1e0965423c660bca375c1e8deac7500014cc03
Due to bugs causing executables to be corrupted; see pypa/manylinux#1421 and NixOS/patchelf#446. Fixes panda3d/panda3d#1504
There is a bug triggered by recent updates that causes `iree-tracy-capture` to be corrupted during auditwheel. Pinning to the old version of patchelf, which it depends on, clears the issue. Here is the patchelf issue where others have gotten stung: NixOS/patchelf#446
0.18 has issues with some patched binaries segfaulting: NixOS/patchelf#446
Describe the bug
Just encountered an issue with patchelf 0.17.0...
I build AppImages for my Python application, and in order to do that I'm using Python's official manylinux docker images, where I copy one of the pre-built Python environments and modify the RPATHs of all binaries and their dependencies using patchelf. The RPATHs get set to $ORIGIN (and other relative paths), so that when the AppImage's squashfs gets mounted on the user's system upon execution, Python can properly be run on unknown/arbitrary mount points. So far, this has all been working flawlessly.

The patchelf 0.17.0 upgrade, however, has introduced segmentation faults of modified executables after modifying their RPATH. Patchelf 0.16.1 was working fine.
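For context, the dynamic loader expands $ORIGIN in an RPATH to the directory containing the binary itself, which is what makes arbitrary mount points work. A small sketch of that resolution logic (the binary path and RPATH string below are illustrative, not taken from the AppImage in question):

```python
import os

def resolve_rpath(rpath, binary_path):
    """Expand $ORIGIN entries the way the dynamic loader does:
    $ORIGIN becomes the directory containing the executable."""
    origin = os.path.dirname(os.path.abspath(binary_path))
    return [entry.replace("$ORIGIN", origin) for entry in rpath.split(":")]

# Hypothetical layout: an RPATH such as one set via
#   patchelf --force-rpath --set-rpath '$ORIGIN:$ORIGIN/../lib' <binary>
# resolves relative to wherever the squashfs happens to be mounted.
print(resolve_rpath("$ORIGIN:$ORIGIN/../lib", "/mnt/appimage/usr/bin/python3"))
# -> ['/mnt/appimage/usr/bin', '/mnt/appimage/usr/bin/../lib']
```

Nothing in this expansion is version-specific; the segfaults reported here come from how newer patchelf rewrites the binary, not from the $ORIGIN mechanism itself.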
When looking at the recent git commit history, there was a big change in 2cb863f in regard to the ELF header file, with lots of changed constants. I have zero knowledge of the internals here and of how ELF files and dynamic linking work, but that's what stood out to me.
Steps To Reproduce
Here's a short Bash script for reproducing the issue in two manylinux docker containers: one with patchelf 0.16.1, and the next one from right after patchelf was upgraded to 0.17.0. There are other changes between those two image versions, but they are unrelated; the issue can also be reproduced by simply building patchelf 0.17.0 on the older image.
Log output