Prototype message patterns: RecvSend, SendRecv and Send #5

Draft
wants to merge 15 commits into master

Conversation

thomaseizinger
Contributor

@thomaseizinger commented Feb 5, 2023

Whilst thinking more about libp2p/rust-libp2p#3130 and the PoC I made in https://github.com/thomaseizinger/rust-async-response-stream, I realized that this could actually be part of asynchronous-codec.

It could be implemented on top but it is also fairly intertwined with the idea of Framed. A good data point here is that we don't need any additional dependencies. All we are doing is defining a Stream utility on top of Framed that makes it easier to implement a request-response message pattern.

The design has slightly evolved from https://github.com/thomaseizinger/rust-async-response-stream. Instead of providing two Futures, we now just implement Stream. For the easy use case where we close the stream after sending the response, we only ever output one Item: the incoming request and the placeholder for the response. Because it is a Stream, a SelectAll will continue to poll it until it returns None. This allows us to completely hide the message exchange and makes using this really easy:

  1. Create an instance and push it into a SelectAll.
  2. Process messages coming in from polling SelectAll.

I also removed the timeout; I think this should be implemented on top.
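
To make the intended call-site concrete, here is a rough usage sketch. It is not code from this PR; since the exact item type of RecvSend is not pinned down here, the sketch is written against a generic stream of request/response-slot pairs using a futures oneshot channel.

use futures::channel::oneshot;
use futures::stream::{SelectAll, Stream, StreamExt};

// Sketch only: a ConnectionHandler-like component would hold a SelectAll of
// in-flight inbound streams and push new ones as they are accepted, roughly:
// inbound.push(RecvSend::new(io, LinesCodec).close_after_send());
async fn drive<S>(mut inbound: SelectAll<S>)
where
    S: Stream<Item = (String, oneshot::Sender<String>)> + Unpin,
{
    // Processing messages is then just polling the SelectAll. Each stream
    // yields its single (request, response-slot) item and then ends, at which
    // point SelectAll drops it automatically.
    while let Some((request, responder)) = inbound.next().await {
        // The response can be produced anywhere and sent exactly once.
        let _ = responder.send(format!("echo: {request}"));
    }
}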


@nazar-pc left a comment


The way I think about the pattern and what this seems to essentially provide is this (pseudo-code):

fn read_write_pair<R, W, C>(read_half: R, write_half: W, codec: C)
    -> impl Stream<Item = (Request, Sender<Response>)>
where
    R: Stream,
    W: Sink,
    C: Encode + Decode,
{
    read_half.map(|request| {
        let (tx, rx) = oneshot();
        
        tokio::spawn(async {
            let response = rx.recv().await;
            write_half.send(codec.encode(response)).await;
        });
        
        (codec.decode(request), tx)
    })
}

I might be completely wrong, but there doesn't seem to be much documentation explaining what all this is and what problem it tries to solve. It would also be great to see an explanation of the design choices and why one might want to use this library. This comment isn't just about the pull request but about asynchronous-codec as a whole: I look at it and I just don't get why and when I would use it in my own project if I had one where it is applicable.

tests/reqres.rs Outdated
let mut buffer = Vec::new();
buffer.extend_from_slice(b"hello\n");

let mut stream = RecvSend::new(Cursor::new(&mut buffer), LinesCodec).close_after_send();


This is confusing to me. It seems to use buffer both for reading messages from and for writing messages to. In practice it'll be a more complex data structure where the data flow for reads and writes is separate, right?

If so, why not have two explicit arguments: a source and a sink?

UPD: I see it comes from Framed, whose usefulness and reason for combining both a source/stream and a sink I don't get either.

Contributor Author


The buffer is meant to simulate a network stream.


Yes, I just found the example counter-intuitive because a TCP stream (as an example) is duplex, but reads and writes accumulate on different sides rather than being concatenated into the same vector. Composing a duplex stream from separate read and write buffers would be more intuitive, but it would also be much more code.

I guess what I anticipated was some comment about why we read and write to the same slice of memory.

Contributor Author


I see what you are saying. Unfortunately, I have yet to find a good test utility that plugs neatly into AsyncRead + AsyncWrite. Might need to write one ...
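
For reference, such a utility could be as small as the sketch below (hypothetical, not part of this PR): it exposes separate read and write buffers through a single object implementing AsyncRead + AsyncWrite, so a test can assert on what was written without mixing it into the bytes being read.

use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::io::{AsyncRead, AsyncWrite};

// Hypothetical test helper: glues an independent reader and writer together
// so the write side of a test can be inspected separately from the read side.
struct Duplex<R, W> {
    reader: R,
    writer: W,
}

impl<R: AsyncRead + Unpin, W: Unpin> AsyncRead for Duplex<R, W> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>> {
        Pin::new(&mut self.reader).poll_read(cx, buf)
    }
}

impl<R: Unpin, W: AsyncWrite + Unpin> AsyncWrite for Duplex<R, W> {
    fn poll_write(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>> {
        Pin::new(&mut self.writer).poll_write(cx, buf)
    }

    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.writer).poll_flush(cx)
    }

    fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.writer).poll_close(cx)
    }
}

// The test above could then read roughly:
// let io = Duplex {
//     reader: futures::io::Cursor::new(b"hello\n".to_vec()),
//     writer: futures::io::Cursor::new(Vec::new()),
// };
// let mut stream = RecvSend::new(io, LinesCodec).close_after_send();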

tests/reqres.rs Outdated

assert_eq!(req, "hello\n");

let (rx, tx) = oneshot::channel();


tx usually means transmitting, meaning sending. Names are flipped here.

@thomaseizinger
Contributor Author

Sorry for the lack of documentation!

The asynchronous-codec crate is useful to define encoding and decoding modules that compose well. Framed can then be used to turn an Async{Read,Write} into a Stream and Sink of messages. We use it extensively inside rust-libp2p to decode network messages from a byte stream.
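
For anyone new to the crate, a minimal (rough, not from this PR) example of that looks like this:

use asynchronous_codec::{Framed, LinesCodec};
use futures::io::{AsyncRead, AsyncWrite};
use futures::{SinkExt, StreamExt};

// Frame a raw byte stream into lines and echo the first line back.
async fn echo_one_line<T>(io: T) -> Result<(), Box<dyn std::error::Error>>
where
    T: AsyncRead + AsyncWrite + Unpin,
{
    let mut framed = Framed::new(io, LinesCodec);

    if let Some(line) = framed.next().await {
        let line = line?; // decoding happens inside Framed
        framed.send(line).await?; // so does encoding
    }

    Ok(())
}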

This PR adds "message patterns" on top of a framed stream. So far only one is added, but it is easy to envision more. I've started with RecvSend because it is the most interesting one.

Your pseudo-code is on point. My vision is that by implementing Stream, we can drive a RecvSend through a SelectAll as part of ConnectionHandler::poll.

The item returned from RecvSend is the incoming request plus a "channel" to send the response back. This allows the response to be provided out-of-band from the stream. There are various patterns employed in rust-libp2p as to how the response makes it to the stream. I think a channel-based design lives at the intersection of convenience (the response can be sent from anywhere), minimal boilerplate (no message passing down the layers) and (type) safety (the response can only be sent once).

The design goals for this were:

  • Minimal boilerplate upon usage. It is intended to be repeated in every ConnectionHandler to replace the current InboundUpgrade and OutboundUpgrade traits.
  • Good composition. For most use cases, we will want to add a timeout to the entire recv and send operation. By implementing Stream, we should be able to compose it with a generic wrapper that times out if the inner stream doesn't finish in time (see the sketch after this list).
  • Flexible for different messages. Because we depend only on Encoder and Decoder, this can be used with probably any protocol.
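
To illustrate the composition point, this is one way such a generic wrapper could look. It is a sketch, not part of this PR; it assumes futures-timer as the timer, and a real version would probably surface the timeout as an error instead of just ending the stream.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;

use futures::Stream;
use futures_timer::Delay;

// Ends the inner stream early by yielding None once the deadline has passed.
struct Timed<S> {
    inner: S,
    deadline: Delay,
}

impl<S> Timed<S> {
    fn new(inner: S, timeout: Duration) -> Self {
        Self {
            inner,
            deadline: Delay::new(timeout),
        }
    }
}

impl<S: Stream + Unpin> Stream for Timed<S> {
    type Item = S::Item;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<S::Item>> {
        if Pin::new(&mut self.deadline).poll(cx).is_ready() {
            // Deadline fired before the message exchange completed.
            return Poll::Ready(None);
        }

        Pin::new(&mut self.inner).poll_next(cx)
    }
}

// Composed with a SelectAll, e.g. SelectAll<Timed<..>>, every stream is raced
// against its own deadline, which is roughly the TimedStreams idea mentioned
// further down in this thread.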

@nazar-pc

Okay, that is really helpful, I think I have a much better understanding of it now, thanks!

@nazar-pc

Sent some tweaks for this branch in thomaseizinger#1

@thomaseizinger
Contributor Author

@mxinden I keep coming back to wanting this almost every time I touch any protocol code. Are you in favor of this?

@thomaseizinger
Contributor Author

> @mxinden I keep coming back to wanting this almost every time I touch any protocol code. Are you in favor of this?

We would use it in combination with a TimedStreams abstraction that wraps a stream::SelectAll but races each stream against a timeout. Using it together with the RecvSend in here (and other "protocols" like SendRecv or just Recv or Send) gives us the same functionality as InboundUpgrade and OutboundUpgrade, just without having to implement any traits; we compose existing code instead.

@mxinden
Owner

mxinden commented Mar 13, 2023

I am hesitant: I don't think the usage of Stream is intuitive here. I consider a Stream a consumer from a source, not both a consumer and a source sending data to a destination.

All that said, the abstraction might be worth the confusion that comes with it. It seems to me that you feel strongly about this new abstraction. Which of our rust-libp2p protocols would benefit the most from this pattern? Do you want to integrate this into one of them as a proof-of-concept?

@thomaseizinger
Contributor Author

> I am hesitant: I don't think the usage of Stream is intuitive here. I consider a Stream a consumer from a source, not both a consumer and a source sending data to a destination.

I think of it as a source of events of a protocol.

> All that said, the abstraction might be worth the confusion that comes with it. It seems to me that you feel strongly about this new abstraction. Which of our rust-libp2p protocols would benefit the most from this pattern? Do you want to integrate this into one of them as a proof-of-concept?

Every one that sends messages. The idea has the potential to replace every state machine for sending messages in rust-libp2p with a use of the appropriate "message pattern". Currently we only have RecvSend implemented here but it is easy enough to envision additional ones, SendRecv for example.

@thomaseizinger changed the title from "Initial RecvSend implementation" to "Prototype message patterns: RecvSend, SendRecv and Send" on Mar 13, 2023
@thomaseizinger
Contributor Author

> Do you want to integrate this into one of them as a proof-of-concept?

Here you go: libp2p/rust-libp2p#3610

@thomaseizinger
Contributor Author

Code duplication in here is currently horrendous but that can be fixed later.

@thomaseizinger
Contributor Author

thomaseizinger commented Mar 14, 2023

> I am hesitant: I don't think the usage of Stream is intuitive here. I consider a Stream a consumer from a source, not both a consumer and a source sending data to a destination.

> All that said, the abstraction might be worth the confusion that comes with it. It seems to me that you feel strongly about this new abstraction. Which of our rust-libp2p protocols would benefit the most from this pattern? Do you want to integrate this into one of them as a proof-of-concept?

I've written about this - to some extent - here: libp2p/rust-libp2p#3603

The usage of Stream is somewhat accidental actually. It happens to be the closest thing to a generator interface that we have on stable and has widespread use.

If we had a generator interface, I'd use that!

@thomaseizinger
Contributor Author

I might have a go at re-writing these using async/await and see whether they turn out to be cleaner. Most likely they will at the cost of an allocation but I'd be surprised if we can even measure that in the context of network IO.

@thomaseizinger
Contributor Author

> I might have a go at re-writing these using async/await and see whether they turn out to be cleaner. Most likely they will at the cost of an allocation but I'd be surprised if we can even measure that in the context of network IO.

It doesn't really work very well because Rust doesn't have native generators yet.
