
os: RemoveAll fails to remove deeply nested directory tree, triggers OOM Killer #47390

sfowl opened this issue Jul 26, 2021 · 7 comments

sfowl commented Jul 26, 2021

What version of Go are you using (go version)?

$ go version
go version go1.16.5 linux/amd64

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

go env Output
    $ go env
    GO111MODULE=""
    GOARCH="amd64"
    GOBIN=""
    GOCACHE="/home/test/.cache/go-build"
    GOENV="/home/test/.config/go/env"
    GOEXE=""
    GOFLAGS=""
    GOHOSTARCH="amd64"
    GOHOSTOS="linux"
    GOINSECURE=""
    GOMODCACHE="/home/test/go/pkg/mod"
    GONOPROXY=""
    GONOSUMDB=""
    GOOS="linux"
    GOPATH="/home/test/go"
    GOPRIVATE=""
    GOPROXY="direct"
    GOROOT="/usr/lib/golang"
    GOSUMDB="off"
    GOTMPDIR=""
    GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
    GOVCS=""
    GOVERSION="go1.16.5"
    GCCGO="gccgo"
    AR="ar"
    CC="gcc"
    CXX="g++"
    CGO_ENABLED="1"
    GOMOD=""
    CGO_CFLAGS="-g -O2"
    CGO_CPPFLAGS=""
    CGO_CXXFLAGS="-g -O2"
    CGO_FFLAGS="-g -O2"
    CGO_LDFLAGS="-g -O2"
    PKG_CONFIG="pkg-config"
    GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build1485272220=/tmp/go-build -gno-record-gcc-switches"

What did you do?

create.go:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	path := os.Args[1]
	os.Mkdir(path, 0755)
	os.Chdir(path)

	// 2^30 is big enough that the process will run out of memory before completing on most machines
	// 2^10 (1024) is a good value to use for profiling memory usage
	// n := 10
	n := 30
	for i := 0; i < (1 << n); i++ {
		dirname := strings.Repeat("x", 255)

		if i%128 == 0 {
			fmt.Println(i)
		}

		err := os.Mkdir(dirname, 0755)
		if err != nil {
			log.Fatal(err)
		}
		os.Chdir(dirname)
	}
}

remove.go:

package main

import (
	"log"
	"os"
)

func main() {
	dir := os.Args[1]
	err := os.RemoveAll(dir)
	if err != nil {
		log.Fatal(err)
	}
}

Create a very deep directory tree (~2.5 million subdirectories)

$ go run create.go /tmp/deep
0
128
...
2473472
Killed

Attempt to remove them with os.RemoveAll()

$ go run remove.go /tmp/deep
Killed

Observe exhaustion of system memory, causing oom-killer to kill the process. Directory tree still remains:

$ ls -d /tmp/deep
/tmp/deep

What did you expect to see?

remove.go successfully removes the target deep directory tree, without exhausting system memory

What did you see instead?

remove.go causes a sharp increase in memory usage, and the OOM killer kills it before it can complete

sfowl commented Jul 26, 2021

Profile of os.RemoveAll() with this patch:

patch

diff --git a/src/os/removeall_at.go b/src/os/removeall_at.go
index c1a1b726af..31fb746d56 100644
--- a/src/os/removeall_at.go
+++ b/src/os/removeall_at.go
@@ -9,9 +9,30 @@ package os
 import (
	"internal/syscall/unix"
	"io"
+	"runtime"
+	"strconv"
	"syscall"
 )
 
+var iterations = 0
+
+func PrintMemUsage() {
+	var m runtime.MemStats
+	runtime.ReadMemStats(&m)
+
+	f, _ := OpenFile("/tmp/golang_stats", O_APPEND|O_CREATE|O_WRONLY, 0644)
+
+	f.WriteString("Iterations: " + strconv.Itoa(iterations) + "\n")
+	f.WriteString("StackInUse = " + strconv.FormatUint(m.StackInuse, 10))
+	f.WriteString("\tStackSys = " + strconv.FormatUint(m.StackSys, 10))
+	f.WriteString("\tAlloc = " + strconv.FormatUint(m.Alloc, 10))
+	f.WriteString("\tTotalAlloc = " + strconv.FormatUint(m.TotalAlloc, 10))
+	f.WriteString("\tSys = " + strconv.FormatUint(m.Sys, 10))
+	f.WriteString("\tNumGC = " + strconv.FormatUint(uint64(m.NumGC), 10) + "\n")
+
+	f.Close()
+}
+
 func removeAll(path string) error {
	if path == "" {
		 // fail silently to retain compatibility with previous behavior
@@ -45,6 +66,8 @@ func removeAll(path string) error {
	}
	defer parent.Close()
 
+	PrintMemUsage()
+	iterations = 0
	if err := removeAllFrom(parent, base); err != nil {
		 if pathErr, ok := err.(*PathError); ok {
			  pathErr.Path = parentDir + string(PathSeparator) + pathErr.Path
@@ -52,11 +75,13 @@ func removeAll(path string) error {
		 }
		 return err
	}
+	PrintMemUsage()
	return nil
 }
 
 func removeAllFrom(parent *File, base string) error {
	parentFd := int(parent.Fd())
+	iterations += 1
	// Simple case: if Unlink (aka remove) works, we're done.
	err := unix.Unlinkat(parentFd, base, 0)
	if err == nil || IsNotExist(err) {

$ cat /tmp/golang_stats
Iterations: 0
StackInUse = 294912 StackSys = 294912 Alloc = 908920 TotalAlloc = 908920 Sys = 71388176 NumGC = 0
Iterations: 1025
StackInUse = 884736 StackSys = 884736 Alloc = 9492424 TotalAlloc = 10795872 Sys = 74269704 NumGC = 2

After 1K iterations of removeAllFrom, i.e. 1000 directories deep, around 10MB had been allocated. This increased in proportion to the depth of the directory tree, so one million iterations used a thousand times more memory, several GB. On a system with 8GB of RAM, the OOM killer was therefore triggered well before removal of the directory tree could complete.

In comparison, coreutils’ rm uses significantly less memory and eventually succeeds in removing the deep directory (though a small amount of swap space was needed).

@sfowl sfowl changed the title os.RemoveAll() fails to remove deeply nested directory tree, trigger OOM Killer os.RemoveAll() fails to remove deeply nested directory tree, triggers OOM Killer Jul 26, 2021
@seankhliao seankhliao changed the title os.RemoveAll() fails to remove deeply nested directory tree, triggers OOM Killer os: RemoveAll fails to remove deeply nested directory tree, triggers OOM Killer Jul 26, 2021
rsc commented Jul 26, 2021

Here's a standalone test case (go test rm_test.go -cpuprofile=cpu.prof -memprofile=mem.prof):

package rm

import (
	"os"
	"strings"
	"testing"
)

func TestRm(t *testing.T) {
	path := "/dev/shm/deep"
	dirname := strings.Repeat("x", 255)
	os.Mkdir(path, 0777)
	os.Chdir(path)
	n := 1<<11
	for i := 0; i < n; i++ {
		err := os.Mkdir(dirname, 0777)
		if err != nil {
			t.Fatalf("mkdir #%d: %v", i, err)
		}
		os.Chdir(dirname)
	}
	os.Chdir("/")

	err := os.RemoveAll(path)
	if err != nil {
		t.Fatal(err)
	}
}

rsc commented Jul 26, 2021

Interestingly, that test case takes time O(n²) on my Mac, but all the time is in syscall (calling into Mac libc), which suggests that perhaps Mac libc implements openat/unlinkat by reconstructing the original paths instead of actually using the fds directly. Oops.

rsc commented Jul 26, 2021

On my Linux workstation, getconf PATH_MAX says 4096, but every file system I've tried rejects mkdir after 31 255-long directory elements, or just under 8kB. If I change 255 to 1, then I can make more directories: 1<<11 works fine. I'm not sure what sysctl etc needs to be frobbed to get access to million-element path names.

gopherbot commented Jul 26, 2021

Change https://golang.org/cl/337449 mentions this issue: os: hold fewer directory-reading buffers in memory

rsc commented Jul 26, 2021

As for the actual bug, the implementation of os.RemoveAll already uses openat(2) and unlinkat(2), so there's no concern with the long paths directly (at least on Linux).

The problem appears to be that the implementation keeps the parent directory open while it removes the children, and each open directory has a directory-entry-reading buffer in its dirInfo.buf, of size 8kB. If you are removing a tree a million entries deep with an 8kB buffer per depth level, that's going to be 8 GB.

Presumably the kernel also has some state related to each open directory. If that was 1kB then you're still looking at 1 GB of kernel state for a million open file descriptors.

An obvious answer is to close the parent while removing its children. But you need the parent's fd for the openat of each child. Closing the fd between children would (recursively) require rewalking the entire path to get the fd for the next child, which leads to exactly the quadratic behavior that openat(2)/unlinkat(2) was introduced to, well, remove.

We can't make the per-directory buffer appreciably smaller, because we get lost directory entries if it is too small - see #24015.

It does look like we can adjust directory reading to drop the buffers between reads most of the time, though. I've sent CL 337449 to do that. Even that CL is not perfect, though: each level of the walk is holding all the strings from that directory read, which could be up to 8 kB or so of data. So you could still provoke 8 GB of memory usage, though not in the test case. I don't see any way to avoid having those strings, since we need to read at least 5760 bytes per #24015, and we can't throw away all but the first entry that is returned.

Those strings are already in memory, so dropping the directory-reading buffers is at least a factor of two improvement in that worst case. Of course, then there is the problem of a two-million-deep directory tree.

I tried to look at what coreutils rm does, and the answer appears to be “use some library that is not in the coreutils repo,” so I stopped there.

Generally speaking, cleaning up an arbitrarily deep directory tree is going to be arbitrarily expensive. I'm not convinced this is a battle that the implementation of os.RemoveAll can win, although the memory factor of at least 2 (more in common cases) is probably worth claiming.

We could submit my change for Go 1.18, but I think it's a bit too risky, with hardly any real benefit, for Go 1.17.

ianlancetaylor commented Jul 26, 2021

The GNU rm program uses the fts library that is part of gnulib. The source code is at https://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=blob;f=lib/fts.c;h=e6603f40e76fe86707ddf600ab7f818e3f63ccc1;hb=HEAD. The code is pretty complicated. As far as I can tell, as invoked by the rm command, it keeps a small cache of parent directories open as it descends. If the cache fills up, it closes and later reopens the parent directory for use with fstatat. Separately, when it gets to a new directory, it reads up to 100000 entries from that directory. If the directory has more entries than that, it also keeps it open, apart from the cache. So in the typical case where there are fewer than 100000 entries, it just does a simple file walk with only a few parent directories open at any one time. When there are more entries in one directory, it keeps that directory open. So it will run out of resources with a deep tree if each level of the tree has many entries. All of this assumes I understand the code; it is quite possible that I don't.
