preadv2 not found during linking #87

Open
NobodyXu opened this issue Apr 20, 2024 · 18 comments · May be fixed by #90
Comments

@NobodyXu
Contributor

Ref: rust-lang/cc-rs#1039

On older glibc versions, the preadv2 symbol isn't available.

I think it might make sense to invoke the syscall directly to avoid this link-time issue.

@NobodyXu
Contributor Author

Switching to the raw syscall would also enable it to be used on musl.

@NobodyXu
Contributor Author

NobodyXu commented Apr 20, 2024

From preadv64v2.c, the implementation of preadv2 in glibc:

ssize_t result = SYSCALL_CANCEL(preadv2, fd, vector, count,
                                LO_HI_LONG(offset), flags);

@NobodyXu
Contributor Author

NobodyXu commented Apr 20, 2024

For LO_HI_LONG:

// x86_64
#define LO_HI_LONG(val) (val), 0

// x32
#define LO_HI_LONG(val) (val)

// Other
/* Provide a macro to pass the off{64}_t argument on p{readv,writev}{64}.  */
#define LO_HI_LONG(val) \
 (long) (val), \
 (long) (((uint64_t) (val)) >> 32)

I scraped these from https://elixir.bootlin.com/glibc/glibc-2.39/source/sysdeps/unix/sysv/linux/sysdep.h#L94, https://elixir.bootlin.com/glibc/glibc-2.39/source/sysdeps/unix/sysv/linux/x86_64/sysdep.h#L386, and https://elixir.bootlin.com/glibc/glibc-2.39/source/sysdeps/unix/sysv/linux/x86_64/x32/sysdep.h#L27
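
Putting the two snippets together, here is a rough, untested sketch of what calling the syscall directly could look like from Rust via libc::syscall; the helper name preadv2_raw and its signature are made up for illustration, not this crate's actual code:

use std::io;
use std::os::fd::RawFd;

// Hypothetical helper: invoke the preadv2 syscall directly instead of going
// through the glibc wrapper, which is only exported by newer glibc versions.
unsafe fn preadv2_raw(
    fd: RawFd,
    iov: *const libc::iovec,
    iovcnt: libc::c_int,
    offset: i64,
    flags: libc::c_int,
) -> io::Result<usize> {
    // Split the offset into two longs the way glibc's generic LO_HI_LONG does.
    // On 64-bit targets the kernel ignores the high half, so this also matches
    // the x86_64 definition `(val), 0` in effect; the x32 ABI (single long) is
    // not handled here.
    let lo = offset as libc::c_long;
    let hi = ((offset as u64) >> 32) as libc::c_long;
    let ret = libc::syscall(libc::SYS_preadv2, fd, iov, iovcnt, lo, hi, flags);
    if ret < 0 {
        Err(io::Error::last_os_error())
    } else {
        Ok(ret as usize)
    }
}

A real implementation would presumably also need an ENOSYS fallback for kernels older than 4.6, similar to the pipe2 handling discussed below.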

@weihanglo
Member

Thanks for the investigation.

Before implementing this, I would suggest we re-evaluate the value we gain from this Linux-specific optimization. If it is clearly beneficial, then we can consider adding it; otherwise it introduces a bit too much complexity and too many compatibility issues, and I'd rather remove preadv2.

@weihanglo
Member

By re-evaluation, I mean something like listing the situations in which preadv2 would help and how common they are, as well as looking into the performance side, e.g. benchmarking, especially real-world benchmarks.

@ofek

ofek commented Apr 20, 2024

Can we please start off by reverting so that builds everywhere start passing once again?

weihanglo added a commit to weihanglo/jobserver-rs that referenced this issue Apr 20, 2024
This optimization has introduced many compatibility issues,
so remove it for now until we find it worth the investment.

See also

* <rust-lang/cc-rs#1039>
* <rust-lang#87>
@NobodyXu
Contributor Author

By re-evaluation, I mean something like listing the situations in which preadv2 would help and how common they are,

preadv2 would help in situations where procfs isn't present and the jobserver is given a pipe instead of a fifo.

It's possible that some dev environments block access to procfs for safety/sandboxing reasons.

cc @the8472 since he might have more knowledge of this and how to handle it.

Note that preadv2 is currently disabled on musl because musl doesn't support it yet.

@weihanglo
Member

It's possible that some dev environments block access to procfs for safety/sandboxing reasons.

Is jobserver completely unusable in this situation without this optimization? That said, if a simple syscall can resolve the issue without breaking old kernels/glibc, I am pretty okay with it. We already have similar work, like:

jobserver-rs/src/unix.rs

Lines 70 to 72 in 7b96b41

static PIPE2_AVAILABLE: AtomicBool = AtomicBool::new(true);
if PIPE2_AVAILABLE.load(Ordering::SeqCst) {
    match libc::syscall(libc::SYS_pipe2, pipes.as_mut_ptr(), libc::O_CLOEXEC) {
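
For reference, here is a minimal sketch of how that same detect-and-fall-back pattern could carry over to preadv2; the PREADV2_AVAILABLE flag and the surrounding names are hypothetical, not existing code in this crate:

use std::io;
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical flag, analogous to PIPE2_AVAILABLE above: remembers whether
// the running kernel supports the preadv2 syscall (Linux 4.6+).
static PREADV2_AVAILABLE: AtomicBool = AtomicBool::new(true);

// Try preadv2 first; on ENOSYS, remember that it is unavailable so the
// caller can fall back to an ordinary read.
unsafe fn read_with_preadv2_fallback(
    fd: libc::c_int,
    iov: &mut libc::iovec,
    flags: libc::c_int,
) -> Option<io::Result<usize>> {
    if PREADV2_AVAILABLE.load(Ordering::SeqCst) {
        // Offset of -1 means "read at the current position"; split into
        // low/high halves as glibc's LO_HI_LONG does.
        let lo = -1i64 as libc::c_long;
        let hi = ((-1i64 as u64) >> 32) as libc::c_long;
        let ret = libc::syscall(libc::SYS_preadv2, fd, iov as *mut libc::iovec, 1, lo, hi, flags);
        if ret >= 0 {
            return Some(Ok(ret as usize));
        }
        let err = io::Error::last_os_error();
        if err.raw_os_error() != Some(libc::ENOSYS) {
            return Some(Err(err));
        }
        PREADV2_AVAILABLE.store(false, Ordering::SeqCst);
    }
    None // preadv2 not available; fall back to a plain read
}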

@the8472
Member

the8472 commented Apr 20, 2024

preadv2 also helps in the sense that the FD does not have to be set to O_NONBLOCK, which allows the acquire path to continue using blocking reads, which are more efficient on pipes than the poll + non-blocking read dance.

That said, if a simple syscall can resolve the issue without breaking old kernels/glibc, I am pretty okay with it. We already have similar work, like:

jobserver-rs/src/unix.rs

Lines 70 to 72 in 7b96b41

static PIPE2_AVAILABLE: AtomicBool = AtomicBool::new(true);
if PIPE2_AVAILABLE.load(Ordering::SeqCst) {
    match libc::syscall(libc::SYS_pipe2, pipes.as_mut_ptr(), libc::O_CLOEXEC) {

Yes, the other calls should have used the same pattern.
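
To illustrate the O_NONBLOCK point: the non-blocking probe could pass the RWF_NOWAIT flag to preadv2 while the fd itself stays in blocking mode, so the blocking acquire path remains a plain read. A sketch under those assumptions (the function names are made up, and this assumes the libc crate exposes RWF_NOWAIT and SYS_preadv2 for the target):

use std::fs::File;
use std::io::{self, Read};
use std::os::fd::AsRawFd;

// Sketch: try to grab one jobserver token without blocking by passing
// RWF_NOWAIT to preadv2, leaving the pipe fd in blocking mode.
fn try_acquire_token(pipe: &File) -> io::Result<Option<u8>> {
    let mut byte = 0u8;
    let mut iov = libc::iovec {
        iov_base: &mut byte as *mut u8 as *mut libc::c_void,
        iov_len: 1,
    };
    // Offset -1 = current position, split as glibc's LO_HI_LONG does.
    let lo = -1i64 as libc::c_long;
    let hi = ((-1i64 as u64) >> 32) as libc::c_long;
    let ret = unsafe {
        libc::syscall(
            libc::SYS_preadv2,
            pipe.as_raw_fd(),
            &mut iov as *mut libc::iovec,
            1,
            lo,
            hi,
            libc::RWF_NOWAIT,
        )
    };
    match ret {
        1 => Ok(Some(byte)), // got a token
        0 => Ok(None),       // nothing read (e.g. writer side closed)
        _ => {
            let err = io::Error::last_os_error();
            if err.kind() == io::ErrorKind::WouldBlock {
                Ok(None) // no token available right now
            } else {
                Err(err)
            }
        }
    }
}

// Sketch: the blocking acquire path stays an ordinary blocking read,
// since the fd was never switched to O_NONBLOCK.
fn acquire_token(mut pipe: &File) -> io::Result<u8> {
    let mut byte = [0u8; 1];
    pipe.read_exact(&mut byte)?;
    Ok(byte[0])
}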

petrochenkov pushed a commit that referenced this issue Apr 20, 2024
This optimization has introduced many compatibility issues,
so remove it for now until we find it worth the investment.

See also

* <rust-lang/cc-rs#1039>
* <#87>
joseluisq added a commit to static-web-server/static-web-server that referenced this issue Apr 21, 2024
due to build issues as reported in rust-lang/jobserver-rs#87
this workaround can be removed once rust-lang/jobserver-rs#88 is released
@ofek

ofek commented Apr 22, 2024

Is there anything I can do to help expedite the release? It looks like others are increasingly affected by this.

@groovybits

Is there anything I can do to help expedite the release? It looks like others are increasingly affected by this.

I wonder how many people don't even understand what the issue is when it happens. I luckily traced it back through the cc crate, but it was a real stretch to have to notice the update from 3 hours earlier and then find people saying it is this jobserver-rs. For me this hit an application that should build fine with cargo-c, a popular utility used in gstreamer-rs, which currently breaks without a wedge like the one I inserted: forcing a special version of cargo-c and temporarily downgrading cc, effectively bypassing this for now. Not a pretty thing to have to do :/ For many it would just look like something is broken with gstreamer or anything that builds via cargo-c. It is a "break" that is a huge show-stopper if you use cargo-c and don't know how to shim it to work. (Maybe I am being too dramatic, but I was up at 3am, nearly done with a huge RPM build that suddenly broke without explanation or logic until I tracked this down in a 2-3 hour side-track. I'm surprised I figured it out and kept going with my work. I didn't expect such a thing to happen; it feels like a flaw that the stable release can be broken through something like this, and the path for testing/confirming staged changes that can cause this sort of breakage feels pretty convoluted.)

@NobodyXu
Contributor Author

It's unfortunate; I didn't realise the preadv2 API is so new that older glibc versions still don't have it.

@NobodyXu
Contributor Author

Is there anything I can do to help expedite the release? It looks like others are increasingly affected by this.

I've already pinged petrochenkov on Zulip; hopefully a new release will be published soon.

@workingjubilee

It's unfortunate; I didn't realise the preadv2 API is so new that older glibc versions still don't have it.

The dilemma is that our platform support looks like this, so we support glibc 2.17: https://doc.rust-lang.org/rustc/platform-support.html

In fact, our baseline for Linux kernels definitely doesn't support this!

@weihanglo
Member

The preadv2 usage has been reverted in jobserver@0.1.31, so everything should be fine now. Thank you for your patience, people :)

@sunshowers

Thanks -- could jobserver 0.1.30 be yanked so downstream consumers get the right message?

sunshowers added a commit to nextest-rs/nextest that referenced this issue Apr 24, 2024
@ofek

ofek commented Apr 25, 2024

Everything works for me now, thanks so much!

@NobodyXu
Contributor Author

NobodyXu commented May 5, 2024

I think we should also run CI on musl; on Linux, musl and gnu have enough differences to be considered two different targets.
