
runtime: shrink map as elements are deleted #20135

Open
genez opened this issue Apr 26, 2017 · 47 comments
Labels
compiler/runtime Issues related to the Go compiler and/or runtime. NeedsFix The path to resolution is known, but the work has not been done. Performance

@genez

genez commented Apr 26, 2017

What version of Go are you using (go version)?

go version go1.8 windows/amd64

What operating system and processor architecture are you using (go env)?

set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\dev\Go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0
set CXX=g++
set CGO_ENABLED=1
set PKG_CONFIG=pkg-config
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2

What did you do?

See example on playground: https://play.golang.org/p/odsk9F1UH1
(edit: forgot to remove sleeps and changed the number of elements)

What did you expect to see?

Removing elements from the m1 map should release memory.

What did you see instead?

Total allocated memory is always increasing.

In the example the issue is not so pronounced, but in my production scenario (several maps with more than 1 million elements each) I can easily hit an OOM error, and the process gets killed.
Also, I don't know whether memstats.Alloc is the right counter to look at here, but I can observe the issue with regular process-management tools on Linux (e.g. top or htop)

@bradfitz bradfitz changed the title maps do not shrink after elements removal (delete) runtime: maps do not shrink after elements removal (delete) Apr 26, 2017
@bradfitz
Contributor

/cc @randall77, @josharian

@bradfitz bradfitz added this to the Unplanned milestone Apr 26, 2017
@josharian
Contributor

I'm surprised there isn't a dup of this already in the issue tracker.

Yes, maps that shrink permanently are currently never cleaned up afterward. As usual, the implementation challenge is with iterators.

Maps that shrink and grow repeatedly used to also cause leaks. That was #16070, fixed by CL 25049. I remember hoping when I started on that CL that the same mechanism would be useful for shrinking maps as well, but deciding it wouldn't. Sadly, I no longer remember why. If anyone wants to investigate this issue, I'd start by looking at that CL and thinking about whether that approach could be extended to shrinking maps.

The only available workaround is to make a new map and copy in elements from the old.
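A minimal sketch of that workaround (the key/value types, sizes, and the `shrinkMap` helper are all illustrative, not part of any standard API):

```go
package main

import "fmt"

// shrinkMap copies the surviving entries into a fresh, right-sized map
// and returns it; dropping all references to the old map lets the GC
// reclaim its oversized bucket array.
func shrinkMap(old map[int]string) map[int]string {
	fresh := make(map[int]string, len(old))
	for k, v := range old {
		fresh[k] = v
	}
	return fresh
}

func main() {
	m := make(map[int]string)
	for i := 0; i < 1000000; i++ {
		m[i] = "x"
	}
	for i := 0; i < 1000000-10; i++ {
		delete(m, i)
	}
	m = shrinkMap(m) // the old, mostly-empty map is now collectible
	fmt.Println(len(m))
}
```

The copy is O(n) in the number of surviving entries, so it only pays off when the map has shrunk substantially.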

@tandr

tandr commented Sep 4, 2018

Just an observation: adding runtime.GC() after the last copy/delete loop brings memory down to about the same size (lower, actually) as at the "Alloc After M1" point

@hixichen

Any update on this issue?

We load 1 million entries into a map. Whether we delete the values or set the map to nil, the memory always keeps increasing until OOM.

@mvdan
Member

mvdan commented Sep 24, 2018

@hixichen: see @josharian's workaround above:

The only available workaround is to make a new map and copy in elements from the old.

That is, you have to let the entire map be garbage-collected. Then all its memory will eventually be made available again, and you can start using a new and smaller map. If this doesn't work, please provide a small Go program to reproduce the problem.

As for progress - if there were any, you'd see it in this thread.

@voltrue2

voltrue2 commented Jan 17, 2019

The only available workaround is to make a new map and copy in elements from the old.

Is this really an efficient way to handle this issue?
If you have a very large map, you'd have to loop over the entire map every time you delete an element or two, right?

@as
Contributor

as commented Jan 17, 2019

@hixichen what happens if you set the map to nil (or any cleanup action you mentioned previously) and then run debug.FreeOSMemory()?

This may help differentiate between a "GC-issue" and a "returning memory to the OS" issue.

Edit: It seems you're using Go itself to gauge memory allocation so this message can be ignored (perhaps it will be useful to someone else so I'll post it anyway).

@randall77
Contributor

Is this really an efficient way to handle this issue? If you have a very large map, you'd have to loop over the entire map every time you delete an element or two, right?

You can do it efficiently by delaying shrinking until you've done O(n) deletes.
That's what a built-in mechanism would do.
The map growing mechanism works similarly.

@hixichen

hixichen commented Jan 17, 2019

@as yes, I am using Go itself to gauge memory allocation, and, personally I think Go should handle it by itself.


@mvdan mvdan added the NeedsFix The path to resolution is known, but the work has not been done. label Jun 5, 2019
@4nte

4nte commented Dec 5, 2019

I'd expect Go to handle memory both ways here. This is unintuitive behavior and should be noted in the map docs until resolved. I just realized we have multiple eventual OOMs in our system.
cc @marco-hrlic data-handler affected

@hunterhug

hunterhug commented Apr 16, 2020

go version go1.13.1 darwin/amd64

I have a question:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	v := struct{}{}

	a := make(map[int]struct{})

	for i := 0; i < 10000; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 10000")

	for i := 0; i < 10000-1; i++ {
		delete(a, i)
	}

	runtime.GC()
	printMemStats("After Map Delete 9999")

	for i := 0; i < 10000-1; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 9999 again")

	a = nil
	runtime.GC()
	printMemStats("After Map Set nil")
}

func printMemStats(msg string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%v:memory = %vKB, GC Times = %v\n", msg, m.Alloc/1024, m.NumGC)
}

output:

After Map Add 10000:memory = 241KB, GC Times = 1
After Map Delete 9999:memory = 242KB, GC Times = 2
After Map Add 9999 again:memory = 65KB, GC Times = 3
After Map Set nil:memory = 65KB, GC Times = 4

Why does the local map a's memory not change after deleting 9999 entries, yet it drops after adding 9999 entries again?

@randall77
Contributor

	for i := 0; i < 10000-1; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 9999 again")

The map will be garbage collected at this runtime.GC. The compiler knows that the map will not be used again. Your later a = nil does nothing - the compiler is way ahead of you.

Try adding fmt.Printf("%d\n", len(a)) at various places above to introduce another use of a. If you put it after the runtime.GC, you will see the behavior you are expecting.


@mangatmodi

Isn't the issue that the GC is not triggered at the right point? The GC is able to recognize that the map has some deleted keys - that's why runtime.GC() helps. I guess it needs some tuning.

Also, I believe this is a pretty serious issue. Allocating a new map should be documented as a best practice when using Go.

tjvc added a commit to tjvc/gauche that referenced this issue Jun 5, 2022
WIP implementation of a memory limit. This will likely be superseded
by Go's incoming soft memory limit feature (coming August?), but it's
interesting to explore nonetheless.

Each time we receive a PUT request, check the used memory. To calculate
used memory, we use runtime.ReadMemStats. I was concerned that it would
have a large performance cost, because it stops the world on every
invocation, but it turns out that it has previously been optimised.
Return a 500 if this value has exceeded the current max memory. We
use TotalAlloc to determine used memory, because this seemed to be
closest to the container memory usage reported by Docker. This is broken
regardless, because the value does not decrease as we delete keys
(possibly because the store map does not shrink).

If we can work out a constant overhead for the map data structure, we
might be able to compute memory usage based on the size of keys and
values. I think it will be difficult to do this reliably, though. Given
that a new language feature will likely remove the need for this work,
a simple interim solution might be to implement a max number of objects
limit, which provides some value in situations where the user can
predict the size of keys and values.

TODO:

* Make the memory limit configurable by way of an environment variable
* Push the limit checking code down to the put handler

golang/go#48409
golang/go@4a7cf96
patrickmn/go-cache#5
https://github.com/vitessio/vitess/blob/main/go/cache/lru_cache.go
golang/go#20135
https://redis.io/docs/getting-started/faq/#what-happens-if-redis-runs-out-of-memory
https://redis.io/docs/manual/eviction/
@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label Jul 7, 2022
@egorse

egorse commented Aug 15, 2022

@randall77 is this case still a thing?

@randall77
Contributor

@egorse Yes, if you're referring to the title of this issue. (There are some other tangentially related issues discussed here, which I think were all confusions of some sort and are not - and probably never were - "things".)

fholzer added a commit to fholzer/parseq that referenced this issue Oct 1, 2022
maps will cause issues for long running programs, see
golang/go#20135
@opennota

opennota commented Oct 17, 2022

It's not always desirable to shrink the map. Sometimes you want it to remain fixed-size, at the same size you allocated it:

m := make(map[type1]type2, BIG_NUMBER)

You might want to delete all the elements and then fill it again, using it as a sort of cache, without unnecessary (de-)allocations.
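For that reuse pattern, it may be worth noting that since Go 1.11 the compiler recognizes a delete-all range loop and compiles it to an internal map clear that empties the map while retaining its allocated storage (and Go 1.21 added the clear builtin with the same effect). A sketch, with illustrative types and sizes:

```go
package main

import "fmt"

func main() {
	// Pre-size the map once; BIG_NUMBER here is illustrative.
	const BIG_NUMBER = 1 << 16
	cache := make(map[string]int, BIG_NUMBER)

	cache["a"] = 1
	cache["b"] = 2

	// This idiomatic loop empties the map without freeing its buckets,
	// so refilling it does not re-grow the table.
	for k := range cache {
		delete(cache, k)
	}
	fmt.Println(len(cache))

	cache["c"] = 3 // reuse the retained capacity
	fmt.Println(len(cache))
}
```

This prints 0 and then 1; the map's capacity is reused across the clear, which is exactly the cache behavior described above and why unconditional shrinking would be unwelcome here.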
