Reduce UDP service per-packet allocation size #6263
Conversation
This change makes the UDP service only create a buffer that is the exact size of the incoming packet. This will reduce memory pressure and the number of GC cycles. My results sending 100,000 UDP points were:
- udp-payload-size=0: 242 GC cycles
- udp-payload-size=1500: 142 GC cycles
- udp-payload-size=0 (with change): 114 GC cycles
- udp-payload-size=1500 (with change): 112 GC cycles
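For illustration, here is a minimal sketch of the pattern described above: one reusable read buffer per listener, with an exact-size copy allocated per packet and handed to the parser. The listener setup, the port, and the channel handling are assumptions made for the sketch, not the service's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// serve reads UDP packets into a single reusable buffer and sends an
// exact-size copy of each packet to parserChan, so the per-packet
// allocation is only n bytes rather than a full-size buffer.
func serve(conn *net.UDPConn, parserChan chan<- []byte) {
	buf := make([]byte, 64*1024) // reused for every ReadFromUDP call

	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			close(parserChan)
			return
		}

		// Allocate exactly n bytes for this packet and copy it out of the
		// shared read buffer before handing it off.
		bufCopy := make([]byte, n)
		copy(bufCopy, buf[:n])
		parserChan <- bufCopy
	}
}

func main() {
	addr, err := net.ResolveUDPAddr("udp", ":8089")
	if err != nil {
		panic(err)
	}
	conn, err := net.ListenUDP("udp", addr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	parserChan := make(chan []byte, 1000)
	go serve(conn, parserChan)

	for pkt := range parserChan {
		fmt.Printf("parsed packet of %d bytes\n", len(pkt))
	}
}
```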
cc @joelegasse: since you've been working on the influxdb relay, this might be relevant to the UDP service there.
I actually did something similar in the relay that entirely eliminates the need for the config option, and instead just has a per-listener 64K buffer. :-) https://github.com/influxdata/influxdb-relay/blob/master/relay/udp.go#L113
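As a rough sketch of that idea (not the relay's actual code; the constant and helper names here are made up, see the linked udp.go for the real version): since a UDP datagram can never exceed 64K, one fixed buffer of that size per listener always holds a complete packet, which is what removes the need for a payload-size option.

```go
// A UDP datagram's length field is 16 bits, so a datagram can never exceed
// 65,535 bytes and a fixed 64K read buffer per listener is always enough.
const udpMaxPayloadSize = 64 * 1024

// Each listener goroutine owns one buffer for its whole lifetime; only the
// exact-size per-packet copies are allocated inside the read loop.
func newListenerBuffer() []byte {
	return make([]byte, udpMaxPayloadSize)
}
```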
This essentially does that too. I just don't really understand the influxdb config well enough to know how to deprecate a config option, though.
Erp, never mind, I've figured it out.
cc @sebito91, I believe you originally made udp-payload-size configurable in the first place. This change goes a bit further and only allocates a buffer on each loop iteration that is the exact size of the incoming packet.
I'm away from the machine at the moment, but I'll post the findings on why we put in the option when I'm back at the terminal.
And re-reading the thread, I think this change would actually be awesome. 👍 from me!
bufCopy := make([]byte, n)
copy(bufCopy, buf[:n])
s.parserChan <- bufCopy
IIRC we have to zero out buf before each subsequent ReadFromUDP, otherwise we fill up the buf?
Each call to ReadFromUDP writes to buf from the beginning, so it won't ever fill up.
This is correct the way it's written. ReadFromUDP can't modify our slice, so buf will always refer to the full 64K slice, and will get data copied into it starting at index zero, and return n, the number of bytes that were copied for a given read.
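A small self-contained example of that behavior, assuming a loopback listener and made-up payloads: the second, shorter read returns its own n, and buf[:n] never includes leftover bytes from the earlier, longer packet even though buf is not zeroed between reads.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Listen on a loopback address with an ephemeral port.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send a long packet followed by a short one to the same listener.
	client, err := net.DialUDP("udp", nil, conn.LocalAddr().(*net.UDPAddr))
	if err != nil {
		panic(err)
	}
	defer client.Close()
	client.Write([]byte("a much longer first packet"))
	client.Write([]byte("short"))

	buf := make([]byte, 64*1024)
	for i := 0; i < 2; i++ {
		// ReadFromUDP always writes from buf[0] and returns n, the number of
		// bytes in this datagram; buf[:n] never includes leftovers from the
		// previous, longer read even though buf itself was not zeroed.
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			panic(err)
		}
		fmt.Printf("read %d bytes: %q\n", n, buf[:n])
	}
}
```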
I'm not sure why I thought (or remembered) differently, but as long as this checks out and passes tests then I'm fine with it.
LGTM 👍
This will reduce memory pressure and the number of GC cycles. My results sending 100,000 UDP points were:
- udp-payload-size=0: 242 GC cycles
- udp-payload-size=1500: 142 GC cycles
- udp-payload-size=0 (with change): 114 GC cycles
- udp-payload-size=1500 (with change): 112 GC cycles