
sync: reduce pointer overhead in Map #21031

Open
bcmills opened this issue Jul 16, 2017 · 6 comments
Labels: compiler/runtime (Issues related to the Go compiler and/or runtime), Performance

@bcmills
Contributor

bcmills commented Jul 16, 2017

The Go 1.9 implementation of sync.Map is very pointer-heavy. That wastes a bit of CPU (following and caching those pointers and their address translations) and also consumes more steady-state memory than it potentially needs to (by at least one pointer per entry).

Now that sync.Map is in the standard library (instead of just x/sync), we can potentially make use of knowledge about the runtime implementations of map, interface{} and/or atomic.Value to flatten out some of those pointers.

In particular we can probably change the readOnly.m field to contain a runtime.hmap directly instead of a map variable that points to it.
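For reference, here is a simplified sketch of the current layout (mirroring src/sync/map.go circa Go 1.9; comments are mine): the read field, the map header behind readOnly.m, each *entry, and each boxed value are all separate heap allocations.

```go
package sync // simplified excerpt in the style of src/sync/map.go (Go 1.9 era)

import (
	"sync/atomic"
	"unsafe"
)

// Every lookup chases Map.read -> readOnly -> m (map header) -> *entry ->
// boxed interface value. Flattening readOnly.m would remove the map-header hop.
type Map struct {
	mu     Mutex
	read   atomic.Value // holds a readOnly
	dirty  map[interface{}]*entry
	misses int
}

type readOnly struct {
	m       map[interface{}]*entry // the map value is itself a pointer to a runtime.hmap
	amended bool                   // true if dirty contains keys not in m
}

type entry struct {
	p unsafe.Pointer // *interface{}; nil or expunged mark deletion
}
```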

We may also be able to change the elements of that map from *entry to entry, since their addresses will be stable (unless we address #19094). It's not obvious to me whether that would add too much complexity to the process of promoting the read-write map to read-only (which is currently just an atomic store): in particular, we would need some way to ensure that no thread is trying to update the entries of the previous read-only map, which might require that we only promote read-write to read-only during a GC safe-point.
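For context, the promotion mentioned above is a single atomic store; a simplified version of missLocked from src/sync/map.go:

```go
// Simplified from missLocked in src/sync/map.go. Once enough loads have
// missed the read-only map, the whole dirty map is published in one atomic
// store. This is cheap precisely because both maps share *entry pointers;
// with inline entry values we would also have to prove that no goroutine
// can still write through entries reachable from the previous readOnly.
func (m *Map) missLocked() {
	m.misses++
	if m.misses < len(m.dirty) {
		return
	}
	m.read.Store(readOnly{m: m.dirty}) // promote read-write map to read-only
	m.dirty = nil
	m.misses = 0
}
```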

Finally, we may be able to store interface{} values directly in the entry struct rather than unsafe-pointers-to-interfaces, although (as atomic.Value demonstrates) the synchronization for updates may be very subtle.
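The indirection in question, simplified from src/sync/map.go: every stored value is boxed behind an unsafe.Pointer, and reads go through an atomic pointer load plus a dereference.

```go
// Simplified from src/sync/map.go. expunged is a sentinel distinct from nil:
// it marks an entry that is deleted and absent from the dirty map.
var expunged = unsafe.Pointer(new(interface{}))

// load performs the extra pointer hop that storing interface{} values
// directly in entry would eliminate.
func (e *entry) load() (value interface{}, ok bool) {
	p := atomic.LoadPointer(&e.p)
	if p == nil || p == expunged {
		return nil, false
	}
	return *(*interface{})(p), true
}
```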

Further pointer optimizations may be possible; those are just the ones I can think of.


I don't intend to do this optimization work myself any time soon, but I wanted to document this idea in case someone else in the community has a use-case that motivates them to address it.

@changkun

This comment was marked as outdated.

@ericlagergren

This comment was marked as outdated.

@bcmills
Contributor Author

bcmills commented Aug 14, 2023

I think that addressing this issue would require runtime-specific hooks, or at least unsafe code that makes assumptions about the behavior of the runtime that are not guaranteed by the language spec.

@qiulaidongfeng
Member

I tried using atomic.Value to reduce some of these pointers today, but found that atomic.Value cannot store nil. See https://github.com/golang/go/blob/master/src/sync/map.go#L431C8-L431C8.

An atomic.Value variant that did support storing nil would still run into the problem of how to represent expunged.
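For context, the nil restriction is documented behavior of sync/atomic itself, not something specific to sync.Map:

```go
package main

import "sync/atomic"

func main() {
	var v atomic.Value
	v.Store(1)   // fine
	v.Store(nil) // panics: "sync/atomic: store of nil value into Value"
}
```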

@gopherbot
Contributor

Change https://go.dev/cl/617695 mentions this issue: sync: reduce pointer overhead in Map

@qiulaidongfeng
Member

> We may also be able to change the elements of that map from *entry to entry, since their addresses will be stable (unless we address #19094). It's not obvious to me whether that would add too much complexity to the process of promoting the read-write map to read-only (which is currently just an atomic store): in particular, we would need some way to ensure that no thread is trying to update the entries of the previous read-only map, which might require that we only promote read-write to read-only during a GC safe-point.

@bcmills I tried the above optimization today, but it does not seem feasible; see https://github.com/golang/go/blob/master/src/sync/map.go#L369C1-L371C20. Because the read-only map and the read-write map share *entry pointers, changing the map's element type to entry breaks that sharing. Since you listed this optimization, do you know of another way to achieve it?
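The sharing in question, in a simplified form of dirtyLocked from src/sync/map.go (the loop behind the lines linked above):

```go
// Simplified from dirtyLocked in src/sync/map.go. Rebuilding the dirty map
// copies the *entry pointers out of the read-only map, so a store through
// either map is observed through both. If the element type were entry
// rather than *entry, this copy would snapshot each entry and the two maps
// could silently diverge.
func (m *Map) dirtyLocked() {
	if m.dirty != nil {
		return
	}
	read, _ := m.read.Load().(readOnly)
	m.dirty = make(map[interface{}]*entry, len(read.m))
	for k, e := range read.m {
		if !e.tryExpungeLocked() {
			m.dirty[k] = e // the same *entry is now reachable from both maps
		}
	}
}
```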
