
proposal: add CallOption that sets callback on stream close event #1717

Closed
minaevmike opened this issue Dec 7, 2017 · 12 comments
Labels
fixit, P3, Type: Feature (New features or improvements in behavior)

Comments

@minaevmike

It would be very useful if I could set a callback that is called when a gRPC stream is closed.
My use case: I have an interceptor for tracing. For unary RPCs it looks something like this:

func UnaryClientOpentracingInterceptor(tracer opentracing.Tracer) grpc.UnaryClientInterceptor {
    return func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
        startSpan(...)
        err := invoker(...)
        finishSpan(...)
        return err
    }
}

but for a stream interceptor I can't do this, because the interceptor returns a ClientStream while the stream is still in progress.

So my proposal:
add a CallOption that sets a callback on the stream close event:

func OnStreamClose(f func(ctx context.Context)) CallOption {
    // ...
}

I would like to hear your thoughts about it.

@dfawley added the P2 and Type: Feature (New features or improvements in behavior) labels Dec 7, 2017
@dfawley (Member) commented Dec 8, 2017

For the client side of streaming RPCs, unfortunately things like this are not straightforward.

  • There is no Stream.Close() method that the user must call to signify the end of the stream.
  • Instead, the final call to Stream.RecvMsg(), which returns the status, is essentially the end.
  • Some users don't call RecvMsg to retrieve the final status. So if we tie a callback to that, it may never be called.
  • We could tie it to when the server sends the trailers and ends the stream.
  • However, this could be confusing, because RecvMsg will typically be called after this happens. This means if OnStreamClose() is used, for example, to free resources used by the interceptor, that would be incorrect.

@minaevmike (Author)

OK, I agree with you. Maybe the logic could be similar to stats.Handler?

@dfawley added P3 and removed P2 labels Dec 14, 2017
@dfawley (Member) commented Dec 14, 2017

stats.Handler is notified of stream closure, so I think for now the best way to implement your tracing is to use the stats.Handler instead if you need end-of-stream notifications.

EDIT: see below
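
For reference, a minimal client-side stats.Handler along these lines might look like the sketch below. The handler type and its onEnd callback are illustrative names, not grpc-go API, and (per the later comments in this thread) the *stats.End event carries the same caveat as the final Recv() for applications that never drain the stream.

package tracing

import (
    "context"

    "google.golang.org/grpc/stats"
)

// endOfStreamHandler is a hypothetical stats.Handler that invokes a callback
// when gRPC reports the end of an RPC via the *stats.End event.
type endOfStreamHandler struct {
    onEnd func(ctx context.Context, err error)
}

func (h *endOfStreamHandler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
    return ctx
}

func (h *endOfStreamHandler) HandleRPC(ctx context.Context, s stats.RPCStats) {
    if end, ok := s.(*stats.End); ok {
        h.onEnd(ctx, end.Error)
    }
}

func (h *endOfStreamHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
    return ctx
}

func (h *endOfStreamHandler) HandleConn(context.Context, stats.ConnStats) {}

It would be installed on the connection with grpc.WithStatsHandler when dialing.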

@jhump (Member) commented Jan 17, 2018

FWIW, I have been implementing this sort of clean-up by wrapping the client-stream and then calling the hook from either of two places:

  1. From clientStream.RecvMsg, when the underlying stream returns non-nil error.
  2. From a finalizer (e.g. runtime.SetFinalizer) on the underlying client-stream. (Should be safe as long as the grpc library itself does not use a finalizer on the stream it returns. This stream never "escapes" since it is wrapped, so calling code cannot set a finalizer.)

Some sort of synchronizer (sync.Once, an atomic int flag, or a mutex) can be used to ensure the hook is only called from one of these two places.

The second bullet handles the case where the client never exhausts the response stream. Yes, I know it is not pretty. But I haven't found any other way to know with certainty that a client stream is over.
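
A rough sketch of that approach, with illustrative names (wrappedStream, streamCloser, and wrapClientStream are not grpc-go API): the hook state lives outside the finalizer's closure so the finalizer does not keep the stream it is attached to reachable, and sync.Once guarantees the hook runs exactly once, whichever path fires first.

package tracing

import (
    "runtime"
    "sync"

    "google.golang.org/grpc"
)

// streamCloser holds the hook and the call-it-once state, kept separate from
// the stream so the finalizer closure below does not reference the object it
// is attached to (which would keep it reachable and the finalizer would
// never run).
type streamCloser struct {
    once sync.Once
    hook func()
}

func (c *streamCloser) close() { c.once.Do(c.hook) }

// wrappedStream delegates to the real ClientStream and fires the hook when
// RecvMsg reports that the stream is over (io.EOF or a status error).
type wrappedStream struct {
    grpc.ClientStream
    closer *streamCloser
}

func (w *wrappedStream) RecvMsg(m interface{}) error {
    err := w.ClientStream.RecvMsg(m)
    if err != nil {
        w.closer.close()
    }
    return err
}

// wrapClientStream is what a StreamClientInterceptor would return instead of
// the raw stream. The finalizer on the underlying stream is the safety net
// for callers that never drain the stream: once the wrapper (and with it the
// underlying stream) becomes unreachable, the hook still fires.
func wrapClientStream(s grpc.ClientStream, hook func()) grpc.ClientStream {
    c := &streamCloser{hook: hook}
    runtime.SetFinalizer(s, func(grpc.ClientStream) { c.close() })
    return &wrappedStream{ClientStream: s, closer: c}
}

As noted above, this assumes grpc-go itself never sets a finalizer on the streams it returns.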

@dfawley (Member) commented Jan 19, 2018

  1. From a finalizer (e.g. runtime.SetFinalizer) on the underlying client-stream. (Should be safe as long as the grpc library itself does not use a finalizer on the stream it returns. This stream never "escapes" since it is wrapped, so calling code cannot set a finalizer.)
    ...
    The second bullet handles the case where the client never exhausts the response stream. Yes, I know it is not pretty. But I haven't found any other way to know with certainty that a client stream is over.

We had this same thought a few months ago when the issue of stats.Handler not firing for users that don't do the final Recv() first came up. Since there's no stream.Close() that we tell users must be called, unfortunately, I think this is the only way. If/when this becomes an important thing to fix, we probably would install a finalizer (vs. calling via the transport, before the final Recv), so I hope that doesn't break your wrapper.

... And now I believe my comment on Dec 14 is bad:

stats.Handler is notified of stream closure, so I think for now the best way to implement your tracing is to use the stats.Handler instead if you need end-of-stream notifications.

This has the same drawbacks as just wrapping the Stream, so it won't help. Your best bet if you can't be sure the application will do the final Recv() is to add a finalizer to your wrapped Stream like @jhump suggests.

@dfawley (Member) commented May 3, 2021

Closing due to lack of activity and priority.

@dfawley closed this as completed May 3, 2021
@easwars (Contributor) commented May 4, 2021

Isn't this similar to the internal issue for which @greg-dennis filed #4379?

If we don't plan on adding support for this in the near future, how about adding an example that uses one (or both) of the two methods described in this thread?

@greg-dennis

My issue #4379 is a bit more narrowly tailored to just the case of wanting to cancel a context when streaming is done, not an arbitrary callback. Such a CallOption would provide the same level of guarantee (or lack thereof) as any other call option that has an "after" effect. If this is not acceptable, we could perhaps tailor it even further to my particular use case of wanting to inject a default timeout by creating a new child context.

@easwars added the fixit label May 4, 2021
@easwars reopened this May 4, 2021
@easwars (Contributor) commented May 4, 2021

If we can have a CallOption which allows users to perform any action of their choice on stream termination, we would prefer that over a specific CallOption for timeout specific purposes.

@dfawley (Member) commented May 4, 2021

I still don't think it's likely we'll do anything here, and it doesn't come up that often.

An interceptor can do this; it's just tricky. There are documented rules that applications are expected to follow, which interceptors can rely upon:

To ensure resources are not leaked due to the stream returned, one of the following actions must be performed:

  1. Call Close on the ClientConn.
  2. Cancel the context provided.
  3. Call RecvMsg until a non-nil error is returned. A protobuf-generated
    client-streaming RPC, for instance, might use the helper function
    CloseAndRecv (note that CloseSend does not Recv, therefore is not
    guaranteed to release all resources).
  4. Receive a non-nil, non-io.EOF error from Header or SendMsg.

The trickiest one here is (1), which would require intercepting (wrapping) grpc.Dial. Realistically, though, why is your application not canceling the context or reading to the end of the stream? That's unusual.

(2) requires a goroutine for the stream, which is unfortunate, but manageable. And, in the case of #4379, wouldn't be necessary due to cancelation propagation.

At this point I don't see this feature ever becoming a high enough priority.

@greg-dennis

My takeaway seems to be that if a StreamClientInterceptor creates its own child context with a new deadline, maybe I should just not bother to try and cancel it. It doesn't make me feel comfortable to do that in code, but the application in this case does read to the end of the stream and cancel the parent context. All the other alternatives seem to me to similarly rely on the client of the stream using it "correctly." And if the client were to misbehave / use it incorrectly -- which it doesn't -- at worst there is a resource leak up to the newly imposed (short) deadline.

The other alternatives seem baroque and not fundamentally different in terms of their guarantees. Let me know if you believe that conclusion to be incorrect.

@dfawley (Member) commented May 5, 2021

My takeaway seems to be that if a StreamClientInterceptor creates its own child context with a new deadline, maybe I should just not bother to try and cancel it.

As a general philosophy, this doesn't sound like a good idea. The application may be using a context that persists for the duration of the process, and it may be performing millions of RPCs during that time. This kind of scenario would result in leaked contexts.

And if the client were to misbehave / use it incorrectly -- which it doesn't, at worst there is a resource leak up to the newly imposed (short) deadline.

If the client doesn't use the stream properly, there is already a goroutine / memory leak. So you can rely on the application following these rules, or if it doesn't, you won't be making anything substantially worse. (In fact, it would be better as your shorter deadlines would actually clean up the leaks.)

Let me know if you believe that conclusion to be incorrect.

If you are in full control of the application that is using the interceptors, then the decision is ultimately up to you.

If you are writing an interceptor to be used by applications outside your control, then my recommendation is to cancel the contexts you create when RecvMsg returns a non-nil error or Header/SendMsg return a non-nil, non-io.EOF error. These are very trivial to wrap if you aren't already intercepting them. Of the others, (2) provides cancelation automatically, and (1) is not worth worrying about IMO. An application should still follow rules 2-4 except in very unusual cases.
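
A sketch of that recommendation, applied to the default-timeout use case from #4379 discussed above (cancelOnDoneStream and DefaultTimeoutStreamInterceptor are illustrative names, not existing API):

package interceptors

import (
    "context"
    "io"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// cancelOnDoneStream cancels the context created by the interceptor as soon
// as the stream is observably finished, per the rules quoted above.
type cancelOnDoneStream struct {
    grpc.ClientStream
    cancel context.CancelFunc
}

func (s *cancelOnDoneStream) RecvMsg(m interface{}) error {
    err := s.ClientStream.RecvMsg(m)
    if err != nil { // io.EOF or a status error: the RPC is over
        s.cancel()
    }
    return err
}

func (s *cancelOnDoneStream) Header() (metadata.MD, error) {
    md, err := s.ClientStream.Header()
    if err != nil && err != io.EOF {
        s.cancel()
    }
    return md, err
}

func (s *cancelOnDoneStream) SendMsg(m interface{}) error {
    err := s.ClientStream.SendMsg(m)
    if err != nil && err != io.EOF { // io.EOF here means "call RecvMsg for the real error"
        s.cancel()
    }
    return err
}

// DefaultTimeoutStreamInterceptor injects a default deadline via a child
// context and cancels that context once the stream is done, so it is not
// leaked until the deadline expires.
func DefaultTimeoutStreamInterceptor(timeout time.Duration) grpc.StreamClientInterceptor {
    return func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) {
        ctx, cancel := context.WithTimeout(ctx, timeout)
        cs, err := streamer(ctx, desc, cc, method, opts...)
        if err != nil {
            cancel()
            return nil, err
        }
        return &cancelOnDoneStream{ClientStream: cs, cancel: cancel}, nil
    }
}

If the application cancels its own (parent) context instead of draining the stream, the derived child context is canceled along with it, so rule 2 is covered automatically.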

@github-actions bot locked as resolved and limited conversation to collaborators Nov 2, 2021