Add support for fd passing #75

Closed

Conversation

cpuguy83
Member

This adds a new message type for passing file descriptors.
How this works is:

  1. The client sends a message with a header for messageTypeFileDescriptor
     along with the list of descriptors to be sent.
  2. The client sends a 2nd message to actually pass along the descriptors
     (needed for unix sockets); see the sketch after this list.
  3. The server sees the message type and waits to receive the fds.
  4. Once the fds arrive, the server responds with the real fd numbers
     that were assigned, which an application can use in future calls.
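
A rough sketch of steps 1-2 from the client side, assuming a hypothetical
`announce` payload standing in for the messageTypeFileDescriptor message
(the real header encoding lives in the PR):

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

// sendFds performs steps 1-2: write a regular message announcing the
// transfer, then pass the descriptors as SCM_RIGHTS ancillary data.
func sendFds(conn *net.UnixConn, announce []byte, fds []int) error {
	if _, err := conn.Write(announce); err != nil { // step 1
		return err
	}
	rights := unix.Rights(fds) // encode fds as an SCM_RIGHTS control message
	// Step 2: on a stream socket the ancillary data rides along with at
	// least one payload byte, hence the single dummy byte.
	_, _, err := conn.WriteMsgUnix([]byte{0}, rights, nil)
	return err
}
```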

To accomplish this reliably (on unix sockets) I had to drop the usage of
the bufio.Reader because we need to ensure exact message boundaries.

Within ttrpc this only supports unix sockets and `net.Conn` implementations
that implement `SendFds`/`ReceiveFds` (this interface is totally
invented here).
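
One plausible shape for that invented interface (the signatures here are
guesses; the PR defines the real ones):

```go
import "net"

// fdConn is a hypothetical sketch of the invented interface; the exact
// signatures in the PR may differ.
type fdConn interface {
	net.Conn
	SendFds(fds []int) error           // transmit descriptors to the peer
	ReceiveFds(max int) ([]int, error) // receive up to max descriptors
}
```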

Something to consider: I have not attempted to do fd passing on Windows,
which will need other mechanisms entirely (and the conns provided by
winio are not sufficient for fd passing).
I'm not sure if this new messaging will actually work on a Windows
implementation.

Perhaps the message type should be specific to unix sockets? I'm
not sure how this would be enforced at the moment except by checking if
the `net.Conn` is a `*net.UnixConn`.
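
That check would be a plain type assertion, e.g.:

```go
import (
	"errors"
	"net"
)

// requireUnixConn rejects transports that cannot carry SCM_RIGHTS (sketch).
func requireUnixConn(conn net.Conn) (*net.UnixConn, error) {
	uc, ok := conn.(*net.UnixConn)
	if !ok {
		return nil, errors.New("fd passing requires a unix socket")
	}
	return uc, nil
}
```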

Closes #74

@cpuguy83
Member Author

I'm interested in comments on the viability of this.
Maybe the unbuffered reader should be opt-in... as in support for FD passing would be opt-in rather than implicit for performance reasons.

@cpuguy83
Member Author

cpuguy83 commented May 7, 2021

Updated the test to use an actual out-of-process ttrpc server.

@codecov-commenter

codecov-commenter commented May 7, 2021

Codecov Report

Merging #75 (a19e82d) into master (8ecd516) will decrease coverage by 4.68%.
The diff coverage is 43.84%.


@@            Coverage Diff             @@
##           master      #75      +/-   ##
==========================================
- Coverage   69.77%   65.09%   -4.69%     
==========================================
  Files          11       11              
  Lines         612      719     +107     
==========================================
+ Hits          427      468      +41     
- Misses        149      212      +63     
- Partials       36       39       +3     
Impacted Files Coverage Δ
types.go 15.78% <16.66%> (+0.40%) ⬆️
channel.go 51.21% <17.50%> (-30.60%) ⬇️
server.go 69.11% <40.90%> (-4.55%) ⬇️
client.go 78.01% <77.50%> (-0.98%) ⬇️

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@cpuguy83 marked this pull request as ready for review May 7, 2021 19:44

ls, err := unix.ParseSocketControlMessage(oob[:oobn])
if err != nil {
	return fmt.Errorf("error parsing socket controll message: %w", err)
Member

controll -> control

	return err
}
case unixReader:
	oob := ch.getmbuf(unix.CmsgSpace(len(files.List) * 4))
Member

Why 4?

Member Author

This was the magic number for fd messages, as I recall.

Member

The size of the message?
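
For context: each descriptor in an SCM_RIGHTS payload is encoded as a
4-byte C int, so receiving n descriptors needs unix.CmsgSpace(n * 4)
bytes of control-message buffer. A minimal illustration:

```go
// oobSpace returns the control-message buffer size needed to receive
// numFds descriptors: each fd in SCM_RIGHTS is a 4-byte C int.
func oobSpace(numFds int) int {
	return unix.CmsgSpace(numFds * 4) // unix = golang.org/x/sys/unix
}
```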

Comment on lines +233 to +239
type unixReader interface {
ReadMsgUnix(p, oob []byte) (n, oobn, flags int, addr *net.UnixAddr, err error)
}

type unixWriter interface {
WriteMsgUnix(b, oob []byte, addr *net.UnixAddr) (n, oobn int, err error)
}
Member

Some doc comments? It is hard to understand the roles of p, b, oob.
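
For reference, these mirror the signatures of *net.UnixConn's own
ReadMsgUnix/WriteMsgUnix methods: the first buffer (p or b) carries the
regular payload, while oob carries the ancillary ("out-of-band") data,
which is where socket control messages such as SCM_RIGHTS travel. A
receive-side sketch with hypothetical names:

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

// receiveWithFds reads one message: p receives the regular payload,
// while oob receives the ancillary data carrying any SCM_RIGHTS fds.
func receiveWithFds(conn *net.UnixConn, p []byte, maxFds int) ([]byte, []int, error) {
	oob := make([]byte, unix.CmsgSpace(maxFds*4))
	n, oobn, _, _, err := conn.ReadMsgUnix(p, oob)
	if err != nil {
		return nil, nil, err
	}
	msgs, err := unix.ParseSocketControlMessage(oob[:oobn])
	if err != nil {
		return nil, nil, err
	}
	var fds []int
	for i := range msgs {
		if got, rerr := unix.ParseUnixRights(&msgs[i]); rerr == nil {
			fds = append(fds, got...) // the received descriptor numbers
		}
	}
	return p[:n], fds, nil
}
```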

@crosbymichael
Member

What would this allow you to do? What do you have in mind?

@cpuguy83
Member Author

@crosbymichael Apologies, those links I posted started 404ing when I force pushed over them.
I have an example here: moby/moby#42019

I've been working on getting dockerd out of the I/O path of containers.
In the case of the above mentioned PR everything related to container stdio is moved out of process: log drivers, exec I/O, even attaches. This allows dockerd to restart and keep HTTP clients attached to a container, as an example.

I would also like to be able to do similar things for containerd, where a client is able to pass the actual stdio pipes it is using directly to the executed container rather than the multiple levels of buffering in place today.

I'm also looking at being able to have a shim that can drain its ttrpc connections and pass them to a cooperating process.

All this can, of course, be done without ttrpc, but then we'd need to come up with a separate protocol/service for managing this.

We should be able to fix the buffered read mentioned in the original comment by wrapping the net.Conn to replace read with readmsg and setting a fixed max number of fds to be passed in a single rpc.
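
A minimal sketch of that wrapping idea, assuming a hypothetical msgConn
type and an arbitrary cap of 16 fds per read:

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

const maxFdsPerRead = 16 // illustrative fixed cap

// msgConn wraps a *net.UnixConn so plain Read calls go through
// ReadMsgUnix with a fixed-size OOB buffer: buffered readers keep
// working, and any descriptors that arrive are collected on the side.
type msgConn struct {
	*net.UnixConn
	fds []int // descriptors received via control messages
}

func (c *msgConn) Read(p []byte) (int, error) {
	oob := make([]byte, unix.CmsgSpace(maxFdsPerRead*4))
	n, oobn, _, _, err := c.ReadMsgUnix(p, oob)
	if oobn > 0 {
		if msgs, perr := unix.ParseSocketControlMessage(oob[:oobn]); perr == nil {
			for i := range msgs {
				if fds, ferr := unix.ParseUnixRights(&msgs[i]); ferr == nil {
					c.fds = append(c.fds, fds...)
				}
			}
		}
	}
	return n, err
}
```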

@cpuguy83
Member Author

And another thing we may be able to do is support another message type like FileStream that the client can use when the transport does not support FD passing.

@AkihiroSuda
Member

Needs rebase

@dmcgowan
Member

I think this could just be implemented at the service level. You could implement a service specifically for passing fds and only register it if supported by the underlying connection.

@dmcgowan
Member

Going to close this one for now, would love to see an example of this done as a service though

@dmcgowan closed this Nov 30, 2022
@cpuguy83
Member Author

@dmcgowan I'm not sure I understand what you are saying.

What I think you are saying is this would be better implemented as a service that explicitly handles FD passing via a side-channel.

Given that, I've definitely given this some thought and was initially like "ok yeah we could do this and it would be fine".
However, with more time to think on this, I think it would be very beneficial to have it directly in the protocol so we don't need to have a side-channel.

I believe we should be able to work out the issues with buffering here by always reading the OOB data with some fixed-size buffer... so the caller would only ever be able to pass some number N of fds at once; perhaps the buffer for this could be configurable when creating the server.

Of course I may have totally misunderstood what you were trying to say here.

@dmcgowan
Member

You have probably spent more time thinking about it than me. What are some of the downsides of having it as a side-channel service?

I might be looking at this differently, less about the most efficient way to do it and more about compatibility. I see the ttrpc client interface as mainly to be used for generated proto services so that services remain compatible between ttrpc and grpc. The tagline for this project is "GRPC for low-memory environments.", so I think it's fair to look at new features from the perspective of whether they are ttrpc-only and what diverging from grpc (or more generally, usage via proto service definitions) would mean for how ttrpc is used.

@cpuguy83
Member Author

I think the main thing is you still need to come up with some kind of protocol for exchanging that information.

Successfully merging this pull request may close these issues:

Support socket control messages for rpc calls (#74)