
Stop using tchannel.WriteArgs in requestSender.MakeCall #217

Closed · wants to merge 9 commits

Conversation

@jcorbin (Contributor) commented Jun 11, 2018

No description provided.

@prashantv (Contributor) left a comment

As discussed offline, there are a few missing ops:

  • Close writers after the write
  • Read response

Basically, do everything raw.WriteArgs does, except for the ioutil.ReadAll.
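
A minimal sketch of that full sequence, assuming tchannel-go's documented OutboundCall API; the sendArgs/writeTo/readInto names and the caller-supplied buffer are illustrative, not from this PR:

    func sendArgs(call *tchannel.OutboundCall, arg2, arg3 []byte, res *bytes.Buffer) error {
        // Write both args, closing each writer to flush its final fragment.
        if err := writeTo(call.Arg2Writer, arg2); err != nil {
            return err
        }
        if err := writeTo(call.Arg3Writer, arg3); err != nil {
            return err
        }
        // Read the response args back into the caller's buffer rather than
        // letting ioutil.ReadAll allocate fresh ones.
        if err := readInto(call.Response().Arg2Reader, res); err != nil {
            return err
        }
        return readInto(call.Response().Arg3Reader, res)
    }

    func writeTo(get func() (tchannel.ArgWriter, error), b []byte) error {
        w, err := get()
        if err != nil {
            return err
        }
        if _, err := w.Write(b); err != nil {
            return err
        }
        return w.Close()
    }

    func readInto(get func() (tchannel.ArgReader, error), buf *bytes.Buffer) error {
        r, err := get()
        if err != nil {
            return err
        }
        if _, err := buf.ReadFrom(r); err != nil {
            return err
        }
        return r.Close()
    }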

@jcorbin (Contributor, Author) commented Jun 12, 2018

Oh I see... yeah, what I'm doing here initially is quite flawed; I'll sort it out and update.

@jcorbin jcorbin force-pushed the faster_forward branch 2 times, most recently from 27b53c6 to c6db774 on June 12, 2018
@jcorbin (Contributor, Author) commented Jun 12, 2018

Okay, so I'm pretty sure this is As Good As It Gets™ as far as local changes to the request sender go.

Since the stability ship has sailed on the ringpop interface wanting to return a response byte buffer from the forwarding path, the only other option would be to add a new, faster forwarding path that doesn't have to read the response back.

For context: in production we're seeing that growing buffer space for ringpop forwarding dominates lifetime allocations (via the ioutil.ReadAll hidden inside the tchannel helper code).

@jcorbin (Contributor, Author) commented Jun 12, 2018

Derp, I'd missed that io.Copy hides a read-buffer allocation; updated to avoid even playing that game. (Sure, we could've used io.CopyBuffer, but it seemed better to just use bytes.Buffer.ReadFrom instead.)
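
To make the distinction concrete (buf and reader are stand-in names): io.Copy only allocates its internal 32 KiB scratch buffer when neither src implements io.WriterTo nor dst implements io.ReaderFrom, whereas calling bytes.Buffer.ReadFrom directly guarantees the copy lands straight in the buffer's backing array:

    var buf bytes.Buffer
    // bytes.Buffer implements io.ReaderFrom, so this grows buf as needed and
    // reads into it directly, with no intermediate scratch buffer:
    if _, err := buf.ReadFrom(reader); err != nil {
        return err
    }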

return nil
}

func writeArg(writable func() (tchannel.ArgWriter, error), bs []byte) error {
Contributor:

You might want to look at tchannel.NewArgWriter, which is basically this. Suggest comparing perf to see if it hurts (it uses a closure).
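
For comparison, usage of the built-in helper looks roughly like this (sketched against tchannel-go's chaining API; call and arg2 are stand-ins):

    // NewArgWriter wraps the (ArgWriter, error) pair from Arg2Writer; its
    // Write method checks the error, writes the bytes, and closes the writer.
    if err := tchannel.NewArgWriter(call.Arg2Writer()).Write(arg2); err != nil {
        return err
    }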

}

func (bp *_bufPool) Get() *bytes.Buffer {
const allocLimit = 100 * 1024 * 1024 // cap initial allocations at 100MiB
Contributor:

This buffer pool is complicated by the limiting -- is that really needed? sync.Pool is frequently cleared (on every GC), so I'd drop the extra logic unless we have a good reason to need it.

Contributor (Author):


Well, let's try turning that around:

  • If we ever happen to forward a 500MiB message, do we always and forever want to be pre-allocating half-GiB buffers for every future message? I for one say no ;-)
  • In fact, I think 100MiB may be too generous in such a scenario.

Contributor:


I would make the buffer pool very simple: just pick a fixed size (e.g., 1-4 MiB) and always return buffers of that size.

If we see performance issues with that, then we can make it smarter.
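
A minimal sketch of that simpler pool built on sync.Pool; the 4 MiB capacity is an illustrative pick from the suggested range, not a value settled on in this PR:

    type bufPool struct{ p sync.Pool }

    func newBufPool() *bufPool {
        return &bufPool{p: sync.Pool{
            New: func() interface{} {
                // Fixed initial capacity; since the GC clears the pool
                // regularly, oversized buffers don't live forever anyway.
                return bytes.NewBuffer(make([]byte, 0, 4*1024*1024))
            },
        }}
    }

    func (bp *bufPool) Get() *bytes.Buffer  { return bp.p.Get().(*bytes.Buffer) }
    func (bp *bufPool) Put(b *bytes.Buffer) { b.Reset(); bp.p.Put(b) }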

if res == nil {
*res = []byte{}
}
*res = append(*res, buf.Bytes()...)
Contributor:


We don't need the bytes if res is nil; we can do:

if res != nil {
  *res = buf.Bytes()
}
return nil

Contributor (Author):


Derp, I meant to make that *res != nil... pointers to slices are a bit of a change-up.

res itself will never be nil: this isn't an exported interface, the only caller is here in the ringpop forwarding package, and it always passes a pointer to a nil slice... on which, actually, append always works anyway... what even am I saying...
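
For the record, that trailing realization is correct: append on a nil slice is well defined, which is what makes the nil check redundant in the first place. For example:

    var b []byte          // a nil slice: len 0, cap 0
    b = append(b, 'x')    // append allocates as needed; b == []byte{'x'}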

if err != nil {
return err
}
for {
Contributor:

Can we use ioutil.Discard.ReadFrom(...) here?

Contributor (Author):


TIL; will do
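
For reference, that drain would look like this (argReader stands in for the response arg being discarded); ioutil.Discard's static type is io.Writer, so the call needs a type assertion:

    // ioutil.Discard's concrete type implements io.ReaderFrom using an
    // internal pooled scratch buffer, so draining allocates nothing per call.
    if _, err := ioutil.Discard.(io.ReaderFrom).ReadFrom(argReader); err != nil {
        return err
    }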

@prashantv (Contributor) left a comment

Change looks good and functionally correct, but it can probably be simplified further.


return err
}

*res = append(*res, buf.Bytes()...)
Contributor:


The append to *res is a little confusing; I think you want *res = append([]byte(nil), buf.Bytes()...).

I'd add a comment noting that you want a clone of buf (since it's pooled).
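
I.e., something along these lines:

    // Clone the pooled buffer's contents: buf.Bytes() aliases backing memory
    // that the pool will hand to the next caller after Put.
    *res = append([]byte(nil), buf.Bytes()...)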

if err != nil {
return err
}
_, err = io.Copy(w, ar)
Contributor:


This is going to allocate an internal buffer; is that OK?

Contributor (Author):


    If src implements the WriterTo interface, the copy is implemented by
    calling src.WriteTo(dst). Otherwise, if dst implements the ReaderFrom
    interface, the copy is implemented by calling dst.ReadFrom(src).

And both of our writers implement io.ReaderFrom; you taught me this when you asked for the ioutil.Discard.ReadFrom change above ;-)

As a curiosity:

  • I believe the order of preference expressed in the io.Copy docs above...
  • ...means that you should prefer to have a src that implements io.WriterTo when possible...
  • ...so I guess we could consider doing so in tchannel-go's fragmentingReader and/or the arg reader helper.

This PR has just been a font of learning for me at least...
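
For reference, a simplified sketch of the dispatch order io.Copy's implementation actually follows (not the real stdlib code):

    func copySketch(dst io.Writer, src io.Reader) (int64, error) {
        if wt, ok := src.(io.WriterTo); ok {
            return wt.WriteTo(dst) // first preference: src drives the copy
        }
        if rf, ok := dst.(io.ReaderFrom); ok {
            return rf.ReadFrom(src) // second preference: dst drives the copy
        }
        buf := make([]byte, 32*1024) // fallback: explicit scratch buffer
        return io.CopyBuffer(dst, src, buf)
    }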

Contributor:


Ah yes, since the call sites only pass in writers with ReadFrom, this will be efficient. To make sure things stay efficient, consider accepting an interface with ReadFrom (you'll need to cast ioutil.Discard, which should succeed; a package-level var can verify that at startup: var _ io.ReaderFrom = ioutil.Discard.(io.ReaderFrom)).
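
A sketch of that suggestion; readArgInto is a hypothetical helper name, and the package-level assertion panics at init time if ioutil.Discard ever stops satisfying io.ReaderFrom:

    // Fails fast at program init if the cast ever breaks.
    var _ io.ReaderFrom = ioutil.Discard.(io.ReaderFrom)

    // Accepting io.ReaderFrom keeps every caller on the no-scratch-buffer
    // path by construction.
    func readArgInto(dst io.ReaderFrom, r tchannel.ArgReader) error {
        if _, err := dst.ReadFrom(r); err != nil {
            return err
        }
        return r.Close()
    }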

@CLAassistant commented May 29, 2020

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

Joshua T Corbin does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Already signed the CLA but the status is still pending? Let us recheck it.

@jcorbin jcorbin closed this May 29, 2020