
os: StartProcess ETXTBSY race on Unix systems #22315

Open
rsc opened this issue Oct 18, 2017 · 25 comments
Labels
NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.
@rsc
Contributor

rsc commented Oct 18, 2017

Modern Unix systems appear to have a fundamental design flaw in the interaction between multithreaded programs, fork+exec, and the prohibition on executing a program if that program is open for writing.

Below is a simple multithreaded C program. It creates 20 threads all doing the same thing: write an exit 0 shell script to /var/tmp/fork-exec-N (for different N), and then fork and exec that script. Repeat ad infinitum. Note that the shell script fds are opened O_CLOEXEC, so that an fd being written by one thread does not leak into the fork+exec's shell script of a different thread.

On my Linux workstation, this program produces a never-ending stream of ETXTBSY errors. The problem is that O_CLOEXEC is not enough. The fd being written by one thread can leak into the forked child of a second thread, and it stays there until that child calls exec. If the first thread closes the fd and calls exec before the second thread's child does exec, then the first thread's exec will get ETXTBSY, because somewhere in the system (specifically, in the child of the second thread), there is an fd still open for writing the first thread's shell script, and according to modern Unix rules, one must not exec a program if there exists any fd anywhere open for writing that program.

Five years ago this bit us because cmd/go installed cmd/cgo (that is, copied the binary from a temporary location to somewhere/bin/cgo) and then executed it. To fix this we put a sleep+retry loop around the fork+exec of cgo when it gets ETXTBSY. Now (as of last week or so) we don't ever install cmd/cgo and execute it in the same cmd/go process, so that specific race is gone, although as I write this cmd/go still has the sleep+retry loop, which I intend to remove.

Last week this bit us again because cmd/go updated a build stamp in the binary, closed it, and executed it. The resulting flaky ETXTBSY failures were reported as #22220. A pending CL fixes this by not updating the build stamp in temporary binaries, which are the main ones we execute. There's still one case where we write+execute a program, which is go test -cpuprofile x.prof pkg. The cpuprofile flag (and a few others) cause cmd/go to leave the pkg.test in the current directory for debugging purposes but also run the test. Luckily running the test is currently the final thing cmd/go does, and it waits for any other fork+exec'ed programs to finish before fork+exec'ing the test. So the race cannot happen in this case.

In general this race is going to happen every time anyone writes a program that both writes and executes a program. It's easy to imagine other build systems running into this, but also programs that do things like unzip a zip file and then run a program inside it - think a program supervisor or mini container runtime. As soon as there are multiple threads doing fork+exec at the same time, and one of them is doing fork+exec of a program that was previously open for write in the same process, you have a mysterious flaky problem.

It seems like maybe Go should take care of this, if possible. We've now hit it twice in cmd/go, five years apart, and at least this past time it took the better part of a day to figure out. (I don't remember how long it took five years ago, in part because I don't remember anything about discovering it five years ago. I also don't want to rediscover all this five years from now.)

There are a few hacks we could use:

  • In os.StartProcess, if we see ETXTBSY, sleep 100ms and try again, maybe a few times, up to say 1 second of sleeping. In general we don't know how long to sleep.
  • Arrange with a locking mechanism that close must never complete during a fork+exec sequence. The end of the fork+exec sequence needs to be the point where we know the close-on-exec fds have been closed. Unfortunately there is no portable way to identify that point.
    • If the exec fails and the child tells us and exits, we can wait for the exit. That's easy.
    • If the exec succeeds, we find out because the exec closes the child's end of the status pipe, and we get EOF.
      • If we know that an OS does close-on-exec work in increasing fd order, then we could also track the maximum fd we've opened and move the status pipe above that. Then seeing the status pipe close would mean all other fds are closed too.
      • If the OS had a "close all fds above x", we could use that. (I don't know of any that do, but it sure would help.)
    • It may not be OK to block all closes on a wedged fork+exec (in general an exec'ed program may be loaded from some slow network server).
  • Note that vfork(2) is not a solution. Vfork is defined so that the parent does not continue executing until the child is no longer using the parent's memory image. In the case of a successful exec, at least on Linux, vfork releases the memory image before doing any of the close-on-exec work, so the parent continues running before the child has closed the fds we care about.

None of these seem great. The ETXTBSY sleep, up to 1 second, might be the best option. It would certainly reduce the flake rate and in many cases would probably make it undetectable. It would not help exec of very slow-to-load programs, but that's not the common case.

I wondered how Java deals with this, and the answer seems to be that Java doesn't deal with this. https://bugs.openjdk.java.net/browse/JDK-8068370 was filed in 2014 and is still open.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <errno.h>
#include <stdint.h>

void* runner(void*);

int
main(void)
{
	int i;
	pthread_t pid[20];

	for(i=1; i<20; i++)
		pthread_create(&pid[i], 0, runner, (void*)(uintptr_t)i);
	runner(0);
	return 0;
}

char script[] = "#!/bin/sh\nexit 0\n";

void*
runner(void *v)
{
	int i, fd, pid, status;
	char buf[100], *argv[2];
	
	i = (int)(uintptr_t)v;
	snprintf(buf, sizeof buf, "/var/tmp/fork-exec-%d", i);
	argv[0] = buf;
	argv[1] = 0;
	for(;;) {
		fd = open(buf, O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0777);
		if(fd < 0) {
			perror("open");
			exit(2);
		}
		write(fd, script, strlen(script));
		close(fd);
		pid = fork();
		if(pid < 0) {
			perror("fork");
			exit(2);
		}
		if(pid == 0) {
			execve(buf, argv, 0);
			exit(errno);
		}
		if(waitpid(pid, &status, 0) < 0) {
			perror("waitpid");
			exit(2);
		}
		if(!WIFEXITED(status)) {
			perror("waitpid not exited");
			exit(2);
		}
		status = WEXITSTATUS(status);
		if(status != 0)
			fprintf(stderr, "exec: %d %s\n", status, strerror(status));
	}
	return 0;
}
@rsc rsc added this to the Go1.10 milestone Oct 18, 2017
@rsc
Contributor Author

rsc commented Oct 18, 2017

@gopherbot

Change https://golang.org/cl/71570 mentions this issue: cmd/go: skip updateBuildID on binaries we will run

@gopherbot

Change https://golang.org/cl/71571 mentions this issue: cmd/go: delete ETXTBSY hack that is no longer needed

@RalphCorderoy

Userspace workarounds seem flawed or less than ideal. This is a kernel problem, like the one O_CLOEXEC addressed. Perhaps lobby for an O_CLOFORK that's similar but closes on fork instead. The writer would open, write, close, fork, exec, so it wouldn't make use of the flag itself, but any other thread that forks wouldn't carry the FD with it, so the writer's close would succeed in nailing the sole, final reference to the "open file description", as POSIX calls it.

@ianlancetaylor
Contributor

O_CLOFORK is a good idea. Does anybody want to suggest that to the Linux kernel maintainers? I expect that if someone can get it into Linux it will flow through to the other kernels.

I'm going to repeat a hack I described elsewhere that I believe would work for pure Go programs.

  • record the highest file descriptor returned by syscall.Open, syscall.Socket, syscall.Dup, etc.
  • add a new RWMutex in syscall: forkMutex
  • during syscall.Close, acquire a read lock on forkMutex
  • in syscall.forkAndExecInChild acquire a write lock on forkMutex, and
  • open a pipe in the parent (as we already do if UidMappings is set), and
  • in the child, loop through the descriptors up to the highest one, closing each one that is marked close-on-exec, then close the pipe to the parent
  • in the parent, when the pipe is closed, release the forkMutex lock

The effect of this should be that when syscall.Close returns, we know for sure that there is no forked child that has an open copy of the descriptor.

The disadvantages are that all forks are serialized, and that all forks waste time closing descriptors that will shortly be closed anyhow. Also, of course, forks temporarily block closes, but that is unlikely to be significant.

@RalphCorderoy

O_CLOFORK is a good idea. Does anybody want to suggest that to the Linux kernel maintainers?

I'm happy to have a go, but I'm a nobody on that list. I was assuming folks here might have the ear of a Google kernel developer or two in that area who would vet the idea and suggest it to the list if worthy. :-)

during syscall.Close, acquire a read lock on forkMutex

And syscall.Dup2 and Dup3 as they may cause newfd to close.

Do syscall.Open et al also synchronise with forkMutex somehow? I'm wondering if they can be creating more FDs, either above or below the highwater mark, whilst forkAndExecInChild is looping, closing close-on-exec ones.

@ianlancetaylor
Contributor

Is there a place to file a feature request against the Linux kernel? I know nothing about the kernel development process. I hear it uses git.

Agree about Dup2 and Dup3.

As far as I can see it doesn't matter if syscall.Open and friends create a new FD while the child is looping, because the child won't see the new descriptor anyhow.

@rsc
Contributor Author

rsc commented Oct 18, 2017

@ianlancetaylor thanks, yes, the explicit closes would solve the problem with slow execs, which would be nice. That might make this actually palatable. You also don't even need the extra pipe if you use vfork in this approach.

I agree with @RalphCorderoy that there's a race between the "maintain the max" and "fork", in that Open might create a new fd, then fork runs in a different thread before Open can update the max. But since fds are created lowest-available, it should suffice for the child to assume that max is, say, 10 larger than it is.

Also note that this need not be an RWMutex (and for that matter the current syscall.ForkMutex need not be an RWMutex either). It just needs to be an "either-or" mutex. An RWMutex allows N readers or 1 writer. The mutex we need would allow N of type A or N of type B, just never a mix. If we built that (not difficult, I don't think), then programs that never fork would not serialize any of their closes, and programs that fork a lot but don't close things would not serialize any of their forks.

O_CLOFORK would require having fcntl F_SETFL/F_GETFL support for that bit too, and it would complicate fork a little more than it already is. An alternative that would be equally fine for us would be a "close all fd's above" or "tell me the maximum fd of my process" syscall. I don't know if a new bit or a new syscall is more likely.

@rsc
Contributor Author

rsc commented Oct 18, 2017

I should maybe also note that macOS fixes this problem by putting #if 0 around the ETXTBSY check in the kernel implementation of exec. That would be a third option for Linux although probably less likely than the other two.

@RalphCorderoy

I've emailed linux-kernel@vger.kernel.org. Will reference an archive once it appears.
If they're unpersuaded, then there's the POSIX folks at Open Group; they have a bug tracker.

@RalphCorderoy

linux-kernel mailing-list archive of post: https://marc.info/?l=linux-kernel&m=150834137201488

gopherbot pushed a commit that referenced this issue Oct 19, 2017
On modern Unix systems it is basically impossible for a multithreaded
program to open a binary for write, close it, and then fork+exec that
same binary. So don't write the binary if we're going to fork+exec it.

This fixes the ETXTBSY flakes.

Fixes #22220.
See also #22315.

Change-Id: I6be4802fa174726ef2a93d5b2f09f708da897cdb
Reviewed-on: https://go-review.googlesource.com/71570
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
gopherbot pushed a commit that referenced this issue Oct 20, 2017
This hack existed because cmd/go used to install (write) and then run
cmd/cgo in the same invocation, and writing and then running a program
is a no-no in modern multithreaded Unix programs (see #22315).

As of CL 68338, cmd/go no longer installs any programs that it then
tries to use. It never did this for any program other than cgo, and
CL 68338 removed that special case for cgo.

Now this special case, added for #3001 long ago, can be removed too.

Change-Id: I338f1f8665e9aca823e33ef7dda9d19f665e4281
Reviewed-on: https://go-review.googlesource.com/71571
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
@bradfitz
Contributor

What's the plan here for Go 1.10?

@RalphCorderoy, looks like you never got a reply, eh?

@ianlancetaylor ianlancetaylor modified the milestones: Go1.10, Go1.11 Dec 6, 2017
@ianlancetaylor ianlancetaylor added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Jun 27, 2018
@ianlancetaylor ianlancetaylor modified the milestones: Go1.11, Go1.12 Jun 27, 2018
@ianlancetaylor
Contributor

Looks like Solaris and macOS and OpenBSD have O_CLOFORK already. Hopefully it will catch on further.

tigrannajaryan pushed a commit to open-telemetry/opentelemetry-collector that referenced this issue Sep 29, 2020
The failures reported in #1848 seem to be caused by the issue described
in golang/go#22315 and the fact that we run
tests in parallel.
@amonakov

A colleague pointed me to this bug in the context of a wider discussion about O_CLOFORK. When each fork is expected to proceed to exec (as is the case here), it is possible to solve the problem via open file description locks in 4 extra syscalls, without requiring any cooperation between threads.

The high-level algorithm for writing a file for execution is as follows:

  1. open an fd with O_WRONLY | O_CLOEXEC
  2. write into fd
  3. place open file description lock on the fd
  4. close the fd
  5. open a new fd with O_RDONLY | O_CLOEXEC (same path as step 1)
  6. place open file description lock on it
  7. close the fd

If an fd opened in step 1 leaked to another process as a result of a concurrent thread issuing a fork(), we wait for it to be closed at step 6. An fd opened at step 5 may also leak, but won't cause ETXTBSY as it is open read-only.

The diff to the program shown in the opening comment would be just:

@@ -41,6 +44,20 @@ runner(void *v)
                        exit(2);
                }
                write(fd, script, strlen(script));
+               if (flock(fd, LOCK_EX) < 0) {
+                       perror("flock");
+                       exit(2);
+               }
+               close(fd);
+               fd = open(buf, O_RDONLY|O_CLOEXEC, 0777);
+               if(fd < 0) {
+                       perror("open (readonly)");
+                       exit(2);
+               }
+               if (flock(fd, LOCK_SH) < 0) {
+                       perror("flock (readonly)");
+                       exit(2);
+               }
                close(fd);
                pid = fork();
                if(pid < 0) {

@ianlancetaylor
Contributor

@amonakov Thanks for the comment. That is an interesting suggestion.

I guess that to make this work automatically in Go we would have to detect when an executable file is opened with write access. Unfortunately this would seem to require an extra fstat system call for every file opened for write access. That is not so great. Perhaps we could restrict it to only calls that use O_CREATE as that is likely the most common case that causes problems.

But then there seems to be a race condition. The fork can happen at any time. If the fork happens after we call open but before we call flock, then it seems that the same problem can occur. In the problematic case the fork doesn't know anything about the file that we are writing. The problem is that the file is held open by the child process. Using the flock technique makes this much less likely to be a problem, but I don't think it completely eliminates the problem.

@amonakov

... make this work automatically in Go ...

I don't think that would work: permission bits could be changed independently after close(). In any case, my solution has two assumptions: that the file was opened with O_CLOEXEC, and that long-lived forks do not appear. For that reason I'd say it's not appropriate to roll it into some standard function. It could live as a separate close-like function whose purpose and requirements could be clearly documented.

But then there seems to be a race condition. The fork can happen at any time. If the fork happens after we call open but before we call flock, then it seems that the same problem can occur.

No, the forked child shares the open file description with the parent, so a later flock in the parent still affects it.

@ianlancetaylor
Contributor

@amonakov Thanks.

For what it's worth, all files opened using the Go standard library have O_CLOEXEC set. And Go doesn't support long-lived forks, as fork doesn't work well with multi-threaded programs, and all Go programs are multi-threaded. So I don't think those are issues.

That said, personally I would not want to add new API to close an executable file. That seems awkward and hard to understand. I'd much rather persuade kernels to support O_CLOFORK. Of course any particular program can use your technique.

andrewrk added a commit to ziglang/zig that referenced this issue Jun 14, 2022
Instead of always using std.testing.allocator, the test harness now follows
the same logic as self-hosted for choosing an allocator - that is - it
uses C allocator when linking libc, std.testing.allocator otherwise, and
respects `-Dforce-gpa` to override the decision. I did this because
I found GeneralPurposeAllocator to be prohibitively slow when doing
multi-threading, even in the context of a debug build.

There is now a second thread pool which is used to spawn each
test case. The stage2 tests are passed the first thread pool. If it were
only multi-threading the stage1 tests then we could use the same thread
pool for everything. However, the problem with this strategy with stage2
is that stage2 wants to spawn tasks and then call wait() on the main
thread. If we use the same thread pool for everything, we get a deadlock
because all the threads end up all hanging at wait() and nothing is
getting done. So we use our second thread pool to simulate a "process pool"
of sorts.

I spent most of the time working on this commit scratching my head trying
to figure out why I was getting ETXTBSY when spawning the test cases.
Turns out it's a fundamental Unix design flaw, already a known, unsolved
issue by Go and Java maintainers:

golang/go#22315
https://bugs.openjdk.org/browse/JDK-8068370

With this change, the following command, executed on my laptop, went from
6m24s to 1m44s:

```
stage1/bin/zig build test-cases -fqemu -fwasmtime -Denable-llvm
```

closes #11818
@spoerri

spoerri commented Jun 17, 2022

Relatively recent (sad) thread re: linux O_CLOFORK: https://lore.kernel.org/lkml/20200525081626.GA16796@amd/T/

@RalphCorderoy

Thanks to reading the linux-kernel thread @spoerri mentions above, I see POSIX has added FD_CLOFORK and O_CLOFORK: https://www.austingroupbugs.net/view.php?id=1318

@ianlancetaylor
Contributor

I see that some of the Linux kernel developers are pretty skeptical about the need for this; is anybody reading this issue able to point them to the problem described here? It's not a common problem but it's not at all specific to Go. Thanks.

@RalphCorderoy

RalphCorderoy commented Jun 19, 2022

Hi Ian, Yes, I had a go yesterday by telling the linux-kernel list about this Go issue and the Java one to show it wasn't system(3) specific and has a wide long-standing impact. Matthew Wilcox, who implies he's a Googler, has replied so far:

The problem is that people advocating for O_CLOFORK understand its value, but not its cost. Other google employees have a system which has literally millions of file descriptors in a single process. Having to maintain this extra state per-fd is a cost they don't want to pay (and have been quite vocal about earlier in this thread).

Perhaps the first thing is to get agreement it's the kernel's issue to fix and then move on to an implementation with a cost they find acceptable. At the moment, the kernel seems to be faulty but fast.

Edited to add link: https://lore.kernel.org/lkml/20200525081626.GA16796@amd/T/#m5b8b20ea6e4ac1eb3bc5353c150ff97b8053b727

andrewrk added a commit to ziglang/zig that referenced this issue Jul 19, 2022
r00ster91 pushed a commit to r00ster91/zig that referenced this issue Jul 24, 2022
jonner added a commit to jonner/mdevctl that referenced this issue Aug 17, 2022
There is a race that causes callout scripts to occasionally fail to
execute during the test suite. This is caused by the fact that the test
suite is run in multiple threads and the fact that callout scripts are
copied into a temporary test environment directory. See
golang/go#22315 for a thorough description of
the root cause of a similar problem encountered by the 'go' compiler.

Fixes: mdevctl#64.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
@bcmills
Member

bcmills commented Dec 15, 2022

I observed what I believe is an equivalent race for pipes when investigating #36107.

The race can cause a Write call to a pipe to spuriously succeed (instead of returning EPIPE) after the read side of the pipe has been closed.

The mechanism is the same:

  • The parent creates the pipe and (with syscall.ForkLock held) sets it to be CLOEXEC.
  • The parent process calls os.StartProcess, which forks a child process.
    • (The child process inherits a copy of the file descriptors for the pipe.)
  • Before the child process has reached its exec, the parent process closes the read side of the pipe.
    • (However, the FD for that side remains open in the child process.)
  • The parent process calls Write on the write side, causing the kernel to buffer the write (since the pipe FD is still open in the child). The Write succeeds.
  • Finally, the child process reaches its exec call, closing its copy of the read FD and dropping the buffered bytes.

This race can be observed by running os_test.TestEPIPE concurrently with a call to os.StartProcess.

@gopherbot

Change https://go.dev/cl/458015 mentions this issue: os: clean up tests

@gopherbot

Change https://go.dev/cl/458016 mentions this issue: os/exec: retry ETXTBSY errors in TestFindExecutableVsNoexec

gopherbot pushed a commit that referenced this issue Dec 16, 2022
I made this test parallel in CL 439196, which exposed it to the
fork/exec race condition described in #22315. The ETXTBSY errors from
that race should resolve on their own, so we can simply retry the call
to get past them.

Fixes #56811.
Updates #22315.

Change-Id: I2c6aa405bf3a1769d69cf08bf661a9e7f86440b4
Reviewed-on: https://go-review.googlesource.com/c/go/+/458016
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
gopherbot pushed a commit that referenced this issue Jan 19, 2023
- Use testenv.Command instead of exec.Command to try to get more
  useful timeout behavior.

- Parallelize tests that appear not to require global state.
  (And add explanatory comments for a few that are not
  parallelizable for subtle reasons.)

- Consolidate some “Helper” tests with their parent tests.

- Use t.TempDir instead of os.MkdirTemp when appropriate.

- Factor out subtests for repeated test helpers.

For #36107.
Updates #22315.

Change-Id: Ic24b6957094dcd40908a59f48e44c8993729222b
Reviewed-on: https://go-review.googlesource.com/c/go/+/458015
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>