
runtime: profile and tune map code #3885

gopherbot opened this issue Jul 29, 2012 · 21 comments


by yury.shulaev:

I did a trivial map benchmark using Go (gc), JavaScript (V8) and Python (Python3, PyPy)
and here are the results:

$ export GOMAXPROCS=""; go build map.go; time ./map

real    0m0.656s
user    0m0.652s
sys     0m0.000s

$ time node map.js

real    0m0.055s
user    0m0.040s
sys     0m0.012s

$ time python3

real    0m0.613s
user    0m0.592s
sys     0m0.016s

$ time pypy

real    0m0.143s
user    0m0.124s
sys     0m0.016s

Looks like Go's map is 12 times slower than V8's and 4.5 times slower than PyPy's.
Well, JavaScript objects can only use strings as keys, but Python's dict does not
have this limitation and it still manages to run in 0.143s.

Is it possible to adapt V8's map implementation to Go? By the way, it looks simpler.

Code used in the benchmark:


package main

func main() {
    a := make(map[int]int)

    for i := 0; i < 10000000; i++ {
        a[123] = 456
    }
}


function main() {
    var a = {};

    for (var i = 0; i < 10000000; i++) {
        a[123] = 456;
    }
}

main();


#!/usr/bin/env python3

def main():
    a = {}

    for i in range(10000000):
        a[123] = 456

if __name__ == '__main__':
    main()

Profiler output for 10 times more iterations:

Total: 772 samples
     385  49.9%  49.9%      504  65.3% hash_lookup
     106  13.7%  63.6%      106  13.7% runtime.memhash
      86  11.1%  74.7%       86  11.1% runtime.rnd
      60   7.8%  82.5%      618  80.1% runtime.mapaccess
      37   4.8%  87.3%       37   4.8% runtime.memcopy32
      33   4.3%  91.6%      765  99.1% main.main
      32   4.1%  95.7%      732  94.8% runtime.mapaccess1
      20   2.6%  98.3%       20   2.6% hash_indirect
      13   1.7% 100.0%       13   1.7% runtime.memequal32
       0   0.0% 100.0%      765  99.1% runtime.main

OS and hardware:

Linux 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64

go version go1 (which is 'apt-get install golang')

Intel Core i5, 4 GB RAM

Comment 1:

V8 hashmaps look pretty similar to Python's maps, except that Python's probing is
linear congruential. Gccgo uses an array with linked lists for collisions; it's quite
a bit faster than the standard Go library too.
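The array-with-linked-lists layout mentioned above can be illustrated with a minimal sketch. This is not gccgo's actual code; the type and method names (`chainedMap`, `put`, `get`) are my own, and a power-of-two bucket count is assumed so the hash can be a simple mask:

```go
// Minimal sketch of a chained hash map: an array of buckets, each
// holding a singly linked list of entries that collided on that bucket.
package main

import "fmt"

type entry struct {
	key, val int
	next     *entry
}

type chainedMap struct {
	buckets []*entry
}

// newChainedMap allocates n buckets; n is assumed to be a power of two.
func newChainedMap(n int) *chainedMap {
	return &chainedMap{buckets: make([]*entry, n)}
}

func (m *chainedMap) hash(k int) int { return k & (len(m.buckets) - 1) }

// put inserts or updates a key by walking the bucket's chain.
func (m *chainedMap) put(k, v int) {
	i := m.hash(k)
	for e := m.buckets[i]; e != nil; e = e.next {
		if e.key == k {
			e.val = v
			return
		}
	}
	m.buckets[i] = &entry{key: k, val: v, next: m.buckets[i]}
}

// get walks the chain for the key's bucket.
func (m *chainedMap) get(k int) (int, bool) {
	for e := m.buckets[m.hash(k)]; e != nil; e = e.next {
		if e.key == k {
			return e.val, true
		}
	}
	return 0, false
}

func main() {
	m := newChainedMap(16)
	m.put(123, 456)
	v, ok := m.get(123)
	fmt.Println(v, ok) // 456 true
}
```

Lookups stay fast as long as chains stay short, which is why such tables typically also grow the bucket array as they fill.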


Comment 2 by yury.shulaev:

Yes, I tried gccgo but it was only 2 times faster than gc.



Comment 4 by yury.shulaev:

Well, my benchmark turned out not to be entirely fair. V8's execution time was almost
equal to the empty-loop time; maybe it somehow eliminated the repeated assignments. I
changed "a[123] = 456" to "a[i] = i", so it can no longer be eliminated, and the
results were a little different:
V8 ~0.7s
PyPy ~1.1s
Go ~3.8s (x5.4 V8, x3.4 PyPy)
A little better for Go now, but it still doesn't look optimal. And gccgo performed
worse this time.
I'm really not an expert here, but I didn't find any existing benchmarks of Go's map
type. It would be great if someone could write less synthetic benchmarks doing random
reading and writing and using various types as keys.
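A benchmark of the kind asked for above could be sketched with the standard testing package. The function names, key range, and read/write mix here are my own assumptions, not anything from the issue's attachments:

```go
// Sketch of a less synthetic map benchmark: random keys, interleaved
// reads and writes, for both int and string key types.
// Run the functions via testing.Benchmark (or adapt them into a
// _test.go file and use `go test -bench=.`).
package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"testing"
)

// benchIntMapMixed alternates random inserts and lookups on a map[int]int.
func benchIntMapMixed(b *testing.B) {
	r := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	m := make(map[int]int)
	sum := 0
	for i := 0; i < b.N; i++ {
		k := r.Intn(1 << 20)
		if i%2 == 0 {
			m[k] = i // write
		} else {
			sum += m[k] // read (zero value on miss)
		}
	}
	_ = sum
}

// benchStringMapMixed does the same with string keys.
func benchStringMapMixed(b *testing.B) {
	r := rand.New(rand.NewSource(1))
	m := make(map[string]int)
	sum := 0
	for i := 0; i < b.N; i++ {
		k := strconv.Itoa(r.Intn(1 << 20))
		if i%2 == 0 {
			m[k] = i
		} else {
			sum += m[k]
		}
	}
	_ = sum
}

func main() {
	fmt.Println("map[int]int mixed:   ", testing.Benchmark(benchIntMapMixed))
	fmt.Println("map[string]int mixed:", testing.Benchmark(benchStringMapMixed))
}
```

Capping the key range bounds the map's final size, so the numbers reflect steady-state access cost rather than just growth and rehashing.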


Comment 5 by yury.shulaev:

Found dict benchmark code for Python (it uses pybench).


rsc commented Sep 12, 2012

Comment 6:

The map algorithm is chosen to work well even with very, very large maps. It would not
surprise me if Chrome and Python do not do as well in those cases.
It is possible that there is some low-hanging fruit, and it is also possible that small
maps should use a different layout entirely.

Labels changed: added priority-later, removed priority-triage.

Status changed to Accepted.


rsc commented Sep 12, 2012

Comment 7:

Labels changed: added go1.1maybe.


Comment 8 by yury.shulaev:

But in comment #4 I used the statement "a[i] = i" in the loop, so the final map
contained 10 million entries. Isn't that large enough?


rsc commented Sep 13, 2012

Comment 9:

That's only ~100 MB.


rsc commented Dec 10, 2012

Comment 11:

Labels changed: added size-l.


Comment 12:

This issue was updated by revision a5d4024.

Helps some by making hashing faster.  Go time drops from
0.65s to 0.51s.
R=rsc, r, bradfitz, remyoudompheng, khr, dsymonds, elias.naur


rsc commented Mar 12, 2013

Comment 13:

[The time for maybe has passed.]

Labels changed: removed go1.1maybe.


ugorji commented Mar 14, 2013

Comment 14:

I was seeing some bad results in my codebase in code that looked up information
in a map. I had to look up a mapping for a reflect.Type on every turn of a loop,
and the map accesses increased my benchmark times significantly.
I did some analysis comparing to Node.JS maps, for both string and integer
key/values, checking the time spent on inserts, deletes and gets for different
sizes of the map.
I've attached 4 files with my results:
- comparelangs.go
- comparelangs.js
- (for running with appropriate arguments)
- output.txt (output from a sample run).
Simple analysis results:
   Node.JS is significantly better than Go maps for insert and retrieval.
     Go map performance degrades on retrieval as the map size increases.
   Node.JS is terrible/unusable at delete once the map grows.
   Summary: for typical map usage, small maps doing mostly retrievals,
     Go maps do not perform as well.


  1. comparelangs.go (1516 bytes)
  2. comparelangs.js (1532 bytes)
  3. (1072 bytes)
  4. output.txt (1960 bytes)


ugorji commented Mar 14, 2013

Comment 15:

I did some more tests with some more telling data.
I'm not sure how to interpret it; hopefully it makes sense to you guys and you can
use it as a data point.
In the test, I try to see which would perform better between linear search and map
lookup.
Clearly, map retrieval is much better for integer keys (and I expect for most of the
numeric types in Go). It stayed constant whereas linear search increased (as expected).
However, for string keys, it seems map retrieval was always worse than linearly
searching a slice (in this test), even as we increased the size of the map or slice.
Map access increased as n increased, even "averaging" more than the linear search in
the slice.
I've attached 3 files:
- map_vs_slice_test.go (benchmark tests)
- (run the benchmark with increasing sizes of n from 2, 4, 8, ... 256)
- output.txt (sample run)
Just guessing, but could this be because the hash function is called each time on the
string key?
Is there a chance that we could do what was done for reflect.Type and cache a hash
in the string? That way, we don't compute the hash every time if the same string is
looked up repeatedly.
As a data point, Java caches the hash code in the String object too, to save on hash
computation time. For an immutable value, I think it makes sense.


  1. map_vs_slice_test.go (3663 bytes)
  2. (163 bytes)
  3. output.txt (1250 bytes)
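The two lookup strategies being compared can be sketched as follows. This is not the attached map_vs_slice_test.go (whose contents aren't shown in the thread); `findSlice` and the key set are my own illustration:

```go
// Sketch of linear slice search vs. map lookup for string keys.
package main

import (
	"fmt"
	"strconv"
)

// findSlice scans a slice for a key. Go's string comparison checks
// lengths first, so non-matching entries are rejected very cheaply.
func findSlice(keys []string, k string) (int, bool) {
	for i, s := range keys {
		if s == k {
			return i, true
		}
	}
	return 0, false
}

func main() {
	n := 256
	keys := make([]string, n)
	m := make(map[string]int, n)
	for i := 0; i < n; i++ {
		k := strconv.Itoa(i)
		keys[i] = k
		m[k] = i
	}

	// Linear scan: O(n) comparisons on average.
	if i, ok := findSlice(keys, "200"); ok {
		fmt.Println("slice scan found index", i)
	}

	// Map lookup: one hash of the key plus one (or few) comparisons.
	fmt.Println("map lookup found value", m["200"])
}
```

The map pays a fixed hashing cost per lookup regardless of n, while the scan's cost grows with n; which wins at small n depends on how expensive hashing the key is relative to the rejected comparisons.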


rsc commented Mar 15, 2013

Comment 16:

Re #15: the benchmark is not as telling as it might seem. The n strings being
considered in the search look like:
As you increase the size of the set, you are also increasing the average length of the
strings, which also increases the average length of the strings being searched for. The
map lookup must of course hash the key in order to decide where in the table to look,
and that grows with this average length. So this format is pessimal for the map. It is
also optimal for the slice scan, because string comparison doesn't even look at the
string data unless the string lengths match. There is only one entry in the slice with
the right length, so the whole scan does on average n/2 length comparisons (very cheap)
and one length-n string comparison. In contrast the map lookup does on average one
length-n string hash and one length-n string comparison. The length-n string hash and
the n/2 length comparisons are about equal in complexity, so the slice scan and the map
lookup take roughly the same amount of time.
Real data does not look like this.
If you change sstr to return strconv.Itoa, so that there are some strings with the same
length in the set of keys, the map quickly wins:
   n     scan      map
   2     22.2     59.0
   4     34.2     60.3
   8     50.4     63.2
  16     60.3     67.0
  32      100     67.8
  64      184     66.4
 128      277     64.7
 256      504     68.4
 512     1225     64.6
1024     2721     57.9
2048     3930     65.4
4096     9631     66.4
It's not worth doing more experiments on the current map implementation.  Keith is
working on a new one that may be ready for Go 1.1 or may not be, but either way the
current one's days are numbered.
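The contrast between the two key shapes can be sketched concretely. `sstrPadded` below is a hypothetical stand-in for the attachment's `sstr` (whose exact code isn't shown), built to match the description: every key has a distinct length:

```go
// Sketch of the two key shapes being contrasted: keys where every
// string has a distinct length (pessimal for hashing, optimal for a
// length-checking linear scan) vs. strconv.Itoa keys, where many
// strings share a length.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// sstrPadded returns a key of length i, so key lengths are all distinct
// and the average key length grows with the size of the set.
func sstrPadded(i int) string {
	return strings.Repeat(strconv.Itoa(i%10), i)
}

func main() {
	for i := 1; i <= 4; i++ {
		fmt.Printf("padded: %-8q itoa: %q\n", sstrPadded(i), strconv.Itoa(i))
	}
}
```

With the padded keys, a slice scan rejects almost every entry on the length check alone, while the map must hash the full (long) key; with Itoa-style keys, many lengths collide and the scan has to compare actual bytes, which is where the map pulls ahead.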

Labels changed: added go1.1maybe.

Owner changed to @randall77.

Status changed to Started.


ugorji commented Mar 15, 2013

Comment 17:

Thanks Russ. 
Re #15: I didn't take into consideration the fact that the string comparison would
skip checking the string data if the lengths didn't match. My oversight.
Thanks for the follow up.


Comment 18:

Attaching a benchmark file.
$ hg id
~/mapbench$ go test -bench=. 
BenchmarkBigStr_0      10000        251329 ns/op
BenchmarkBigStr_4      10000        255585 ns/op
BenchmarkBigStr_8      10000        253203 ns/op
BenchmarkBigStr_16     10000        256934 ns/op
BenchmarkBigStr_32     10000        260725 ns/op
BenchmarkBigStr_64     10000        251684 ns/op
BenchmarkBigStr_512    10000        252046 ns/op
BenchmarkBigStr2_0     10000        251376 ns/op
BenchmarkBigStr2_4     10000        273891 ns/op
BenchmarkBigStr2_8     10000        252239 ns/op
BenchmarkBigStr2_16    10000        250466 ns/op
BenchmarkBigStr2_32    10000        250904 ns/op
BenchmarkBigStr2_64    10000        251077 ns/op
BenchmarkBigStr2_512       10000        252529 ns/op
BenchmarkSmallStr_0 50000000            37.5 ns/op
BenchmarkSmallStr_4 50000000            36.5 ns/op
BenchmarkSmallStr_8 50000000            37.2 ns/op
BenchmarkSmallStr_16    50000000            44.8 ns/op
BenchmarkSmallStr_32    50000000            36.3 ns/op
BenchmarkSmallStr_64    50000000            43.6 ns/op
BenchmarkSmallStr_512   50000000            41.7 ns/op
BenchmarkSmallStr_1024  50000000            42.5 ns/op
BenchmarkSmallStr_1M    50000000            52.5 ns/op
BenchmarkSmallStr2_0    50000000            37.4 ns/op
BenchmarkSmallStr2_4    50000000            37.2 ns/op
BenchmarkSmallStr2_8    50000000            42.7 ns/op
BenchmarkSmallStr2_16   50000000            45.8 ns/op
BenchmarkSmallStr2_32   50000000            40.7 ns/op
BenchmarkSmallStr2_64   50000000            40.5 ns/op
BenchmarkSmallStr2_512  50000000            37.3 ns/op
BenchmarkSmallStr2_1024 50000000            38.0 ns/op
BenchmarkSmallStr2_1M   50000000            46.9 ns/op
BenchmarkInt_0  50000000            32.9 ns/op
BenchmarkInt_4  50000000            33.4 ns/op
BenchmarkInt_8  50000000            37.2 ns/op
BenchmarkInt_16 50000000            33.0 ns/op
BenchmarkInt_32 50000000            39.7 ns/op
BenchmarkInt_64 50000000            48.3 ns/op
BenchmarkInt_512    50000000            36.7 ns/op
BenchmarkInt_1024   50000000            36.0 ns/op
BenchmarkInt_1M 50000000            41.9 ns/op
BenchmarkInt2_0 50000000            34.1 ns/op
BenchmarkInt2_4 50000000            35.9 ns/op
BenchmarkInt2_8 50000000            42.6 ns/op
BenchmarkInt2_16    50000000            34.0 ns/op
BenchmarkInt2_32    50000000            37.3 ns/op
BenchmarkInt2_64    50000000            39.9 ns/op
BenchmarkInt2_512   50000000            34.2 ns/op
BenchmarkInt2_1024  50000000            37.1 ns/op
BenchmarkInt2_1M    50000000            46.5 ns/op
BenchmarkPtr_32 50000000            41.7 ns/op
BenchmarkPtr2_32    50000000            33.3 ns/op


  1. map_test.go (5039 bytes)


Comment 19:

This issue was updated by revision 00224a3.

Go time drops from 0.51s to 0.34s.
R=r, rsc, m3b, dave, bradfitz, khr, ugorji, remyoudompheng


robpike commented May 18, 2013

Comment 20:

Labels changed: added go1.2maybe, removed go1.1maybe.


rsc commented Jul 30, 2013

Comment 21:

Is there anything more worth doing? I'm inclined to mark this Fixed.


robpike commented Aug 7, 2013

Comment 22:

Marking as fixed.

Status changed to Fixed.

@rsc rsc added this to the Go1.2 milestone Apr 14, 2015
@rsc rsc removed the go1.2maybe label Apr 14, 2015
@golang golang locked and limited conversation to collaborators Jun 24, 2016
This issue was closed.