Stop using tchannel.WriteArgs in requestSender.MakeCall #217
Conversation
As discussed offline, there are a few missing ops:
- Close writers after the write
- Read the response

Basically, do everything raw.WriteArgs does, except using ioutil.ReadAll.
Oh I see... yeah, what I'm doing here initially is quite flawed; will sort it out and update.
Force-pushed 27b53c6 to c6db774
Okay, so I'm pretty sure that this is As Good As It Gets ™ as far as local changes to the request sender go. Since the stability ship has sailed on the ringpop interface wanting to return a response byte buffer from the forwarding path, the only other option would be to start adding a new, faster forwarding path that doesn't have to read the response back. For context, in production we're seeing that growing buffer space for ringpop forwarding dominates lifetime allocations (under the prior abstracted …)
Derp, I'd missed that.
```go
	return nil
}

func writeArg(writable func() (tchannel.ArgWriter, error), bs []byte) error {
```
you might want to look at tchannel.NewArgWriter, which is basically this. Suggest comparing perf to see if it hurts (it uses a closure).
forward/request_sender.go
Outdated
```go
}

func (bp *_bufPool) Get() *bytes.Buffer {
	const allocLimit = 100 * 1024 * 1024 // cap initial allocations at 100MiB
```
this buffer pool is complicated by the limiting -- is that really needed? sync.Pool is frequently cleared (on every GC), so I'd drop the extra logic unless we have a good reason to need it.
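The no-limit version the comment suggests would look something like this standalone sketch (helper names are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out *bytes.Buffer values; sync.Pool drops its contents on
// every GC, so oversized buffers are reclaimed without explicit limiting.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func getBuf() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

func putBuf(b *bytes.Buffer) {
	b.Reset() // keep capacity, drop contents
	bufPool.Put(b)
}

func main() {
	b := getBuf()
	b.WriteString("payload")
	fmt.Println(b.Len())
	putBuf(b)
}
```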
Well, see... let's try turning it around:
- if we ever happen to forward a 500MiB message... do we always and forever want to be pre-allocating half-GiB buffers for every future message? I, for one, say no ;-)
- in fact, I think 100MiB may be too generous in that scenario
I would make the buffer pool very simple: just have a fixed size (e.g., 1-4MB) and always return buffers of that size.
If we see performance issues with that, then we can make it smarter.
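A standalone sketch of that fixed-size pool, which also addresses the half-GiB concern above by dropping oversized buffers instead of pooling them (the 1MiB constant is an assumed value from the suggested 1-4MB range):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufSize is illustrative; the review suggests something in the 1-4MB range.
const bufSize = 1 << 20

var fixedPool = sync.Pool{
	New: func() interface{} {
		b := new(bytes.Buffer)
		b.Grow(bufSize) // pre-size so typical messages never reallocate
		return b
	},
}

func getFixed() *bytes.Buffer { return fixedPool.Get().(*bytes.Buffer) }

// putFixed drops buffers that grew past bufSize, so one huge forwarded
// message doesn't leave half-GiB buffers pinned in the pool.
func putFixed(b *bytes.Buffer) {
	if b.Cap() > bufSize {
		return
	}
	b.Reset()
	fixedPool.Put(b)
}

func main() {
	b := getFixed()
	fmt.Println(b.Cap() >= bufSize, b.Len())
	putFixed(b)
}
```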
forward/request_sender.go
Outdated
```go
if res == nil {
	*res = []byte{}
}
*res = append(*res, buf.Bytes()...)
```
we don't need the bytes if res is nil, can do:

```go
if res != nil {
	*res = buf.Bytes()
}
return nil
```
Derp, I meant to make that a *res != nil... pointers to slices are a bit of a change-up. res itself will never be nil (this isn't an exported interface; the only caller is here in the ringpop forwarding package, and it always passes a pointer to a nil slice)... which actually means append will always work... what even am I saying...
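For the record, the "append will always work" part checks out: appending to a nil slice is well-defined and allocates a backing array as needed. A tiny standalone check:

```go
package main

import "fmt"

func main() {
	var res []byte               // nil slice
	res = append(res, "data"...) // append allocates fresh storage for a nil base
	fmt.Println(res == nil, len(res), string(res))
}
```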
forward/request_sender.go
Outdated
```go
if err != nil {
	return err
}
for {
```
can we use ioutil.Discard.ReadFrom(..) here?
TIL; will do
…orward (modulo sanity)
Change looks good and functionally correct, but it can probably be simplified further.
forward/request_sender.go
Outdated
```go
	return err
}

*res = append(*res, buf.Bytes()...)
```
the append to *res is a little confusing; i think you want *res = append([]byte(nil), buf.Bytes()...) -- i'd add a comment that you want a clone of buf (since it's pooled)
forward/request_sender.go
Outdated
```go
if err != nil {
	return err
}
_, err = io.Copy(w, ar)
```
this is going to allocate an internal buffer, is that ok?
> If src implements the WriterTo interface, the copy is implemented by calling src.WriteTo(dst). Otherwise, if dst implements the ReaderFrom interface, the copy is implemented by calling dst.ReadFrom(src).

And both of our writers implement io.ReaderFrom; you taught me this when you asked for the ioutil.Discard.ReadFrom point ;-)
As a curiosity:
- I believe the order of preference expressed in the io.Copy docs above...
- ...means that you should prefer to have a src that implements io.WriterTo when possible
- ...so I guess we could consider doing so in tchannel-go.fragmentingReader and/or the arg reader helper

This PR has just been a font of learning for me at least...
Ah yes, since the call sites only pass in writers with ReadFrom, this will be efficient. To make sure things stay efficient, consider accepting an interface with ReadFrom (you'll need to cast ioutil.Discard, which should succeed; can use a global to verify that on startup: var _ io.ReaderFrom = ioutil.Discard.(io.ReaderFrom)).