go/format: add benchmarks #26528
I maintain a (fork of a) tool called go-bindata. Currently there are no benchmarks for go/format.
A starting point may be the benchmark here: https://github.com/kevinburke/go-bindata/blob/master/benchmark_test.go. Committing an extremely large file to source might not be the best way to do this, though.
added a commit on Jul 22, 2018
I suspect go/format performance hotspots vary a lot based on input, with a long tail of pathological inputs. Instead of a generic “add benchmarks” issue, which is hard to act on, let’s instead file performance issues for specific inputs, like go-bindata-style output.
We don’t want to add a large file to the repo, but I suspect you could create a suitable input (for reproduction purposes) on the fly, in memory.
Also, have you tried with 1.11 betas yet? I made some improvements to text/tabwriter this cycle that might help.
I have, thanks! I'm not sure what to make of the benchmarking output: when we use format.Source on the whole file, it looks slower.
I just merged a change that only calls format.Source on a small part of the file. With that change it looks like there's an improvement between 1.9.7 and tip.
So what are my options here for a benchmark? I am guessing these are not possible:
What about downloading a large file from the public internet, and skipping the benchmark if the download fails? This one, for example, would be suitable: https://github.com/kevinburke/go-bindata/blob/master/testdata/assets/bindata.go
Otherwise, I suppose I could check in a medium-sized photo (or use an existing one in the git repository), and manually reconstruct the process of building the bindata file.
Skimming the beginning of that file, I'm guessing you could get similar performance behavior with just something like:
package p

var x1 = []byte("some very very very long string created with bytes.Repeat")
var x2 = []byte("another long string")
var x3 = []byte("as many of these as you need to recreate your performance characteristics")
That should be easy to create in memory with a plain old for loop. In any case, I'd start there and use pprof profiles to convince yourself that the cpu/memory behavior is similar. If it isn't, tweak the simple generated file until it is. If somehow (which I would find very surprising) the contents of the strings involved matter, just generate random bytes using math/rand.
I used to use