When compiled and run on a Darwin system (using the binary distribution), it has more or less the expected performance -- roughly twice as slow as curl.
$ /usr/bin/time ./hello
22.214.171.124 0.25 real 0.05 user 0.01 sys
$ /usr/bin/time curl https://ifconfig.me
126.96.36.199 0.12 real 0.01 user 0.01 sys
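For reference, hello is essentially a one-shot HTTPS client. I haven't pasted the real source here, so take this as an assumed reconstruction along these lines:

// hello.go -- assumed reconstruction of the test program: one HTTPS
// GET to ifconfig.me, body printed to stdout. The actual source is
// not shown in this issue.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("https://ifconfig.me")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}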
But when I compile and run it on my Linux box (Linux void-live 5.13.13_1 #1 SMP Fri Aug 27 13:28:13 UTC 2021 x86_64 GNU/Linux),
also using the binary distribution, I get the following results:
$ time ./hello
real 0m 1.03s
user 0m 1.30s
sys 0m 0.14s
$ time curl https://ifconfig.me
real 0m 0.16s
user 0m 0.06s
sys 0m 0.00s
The difference is much bigger. The extra time seems to come from some one-time initialization -- a second request made in the same program is fast. Here is an annotated strace. It looks like it is making many "rt_sigreturn" calls, but this is just my guess.
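The one-time cost is easy to see by timing two identical requests in the same process. A minimal sketch of how I measured that, assuming the same net/http-style request as above (note that HTTP keep-alive connection reuse also helps the second request, so this somewhat overstates the initialization cost):

// two identical requests in one process; under the one-time-initialization
// theory the second should be much faster.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetch() {
	resp, err := http.Get("https://ifconfig.me")
	if err != nil {
		panic(err)
	}
	io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	resp.Body.Close()
}

func main() {
	for i := 1; i <= 2; i++ {
		start := time.Now()
		fetch()
		fmt.Printf("request %d: %v\n", i, time.Since(start))
	}
}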
I have access to another Linux box with an unusual kernel (lacking madvise(2), for example), which has more of the same problem: the first request takes around 4 seconds, and any subsequent requests work as expected.
That seems to imply that the latency is going into something CPU-bound, and the fact that the reported user time exceeds the reported real time corroborates that at least part of the program is spending non-trivial CPU on something.
If you add a pprof.StartCPUProfile at the beginning of func main and pprof.StopCPUProfile at the end, does it show anything interesting?
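For example, something like this minimal sketch (the cpu.pprof output name is arbitrary):

package main

import (
	"io"
	"net/http"
	"os"
	"runtime/pprof"
)

func main() {
	// Write the CPU profile to cpu.pprof.
	f, err := os.Create("cpu.pprof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	resp, err := http.Get("https://ifconfig.me")
	if err != nil {
		panic(err)
	}
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}

You can then inspect the result with: go tool pprof cpu.pprof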