
Avoid closing the stream if server still open #47

Merged
merged 5 commits into main from fix/dont-close-server-stream on Sep 11, 2023

Conversation

@olegbespalov (Collaborator) commented Sep 4, 2023

What?

In #29, it was reported that the server-side stream is prematurely closed when the client calls stream.end(), even though the server could still be sending data.

So now we close the stream only when it actually closes, i.e. when any error arrives from the read side, including io.EOF.

To test the fix, we just need to remove the sleep: 5963044
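To make the intended behavior concrete, here is a minimal, hypothetical sketch of the read-loop shape this fix implies; the names, callbacks, and helper signatures are illustrative, not the actual grpc/stream.go code. The stream is only treated as closed when the read side returns an error, io.EOF included:

package grpcstream

import (
	"errors"
	"io"

	"google.golang.org/grpc"
)

// receiveLoop keeps reading from the server even after the client has
// finished sending. Only a read error closes the stream; io.EOF means the
// server closed its side cleanly, and only then is the 'end' event fired.
// onData and onEnd are hypothetical callbacks standing in for the event queue.
func receiveLoop(stream grpc.ClientStream, newMsg func() interface{}, onData func(msg interface{}), onEnd func(err error)) {
	for {
		msg := newMsg()
		if err := stream.RecvMsg(msg); err != nil {
			if errors.Is(err, io.EOF) {
				err = nil // a clean close by the server
			}
			onEnd(err)
			return
		}
		onData(msg)
	}
}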

Why?

Closes: #29

@olegbespalov (Collaborator, Author):

Hey @thiagodpf,

This PR tries to address the issue you reported in #29. I'm wondering if you have the capacity to build and check k6 with the fix ☺️ Thanks!

@olegbespalov marked this pull request as ready for review September 4, 2023 14:27
@olegbespalov requested a review from a team as a code owner September 4, 2023 14:27
@olegbespalov requested reviews from mstoykov and codebien and removed the request for a team September 4, 2023 14:27
@thiagodpf (Contributor):

Hi @olegbespalov!

I compiled and ran k6 with your changes and the timeout control worked perfectly! 🎉🎉🎉

Thank you very much for addressing this fix! 🙇

Review thread on grpc/stream.go (outdated, resolved)
Review thread on grpc/stream.go (resolved)
return
}

s.logger.WithError(err).Debug("stream is closing")

s.state = closed
close(s.done)
s.tq.Queue(func() error {
	return s.callEventListeners(eventEnd)
})
Contributor:

Should this be a separate event for the stream "ending" in one direction but not the other?

Collaborator Author:

Sorry, I'm not sure I'm getting what the concern is here 🤔

Contributor:

Currently this event means the whole stream has ended, and in practice it will be called once. But now it will mean either "the writing direction is closed/ended" or "either the writing or reading direction has ended".

Given that it seems beneficial to be able to close one direction and keep getting data/other events, it also seems beneficial to be able to recognise when one direction or the other has ended.

Collaborator Author:

Currently this event means the whole stream has ended, and in practice it will be called once. But now it will mean either "the writing direction is closed/ended" or "either the writing or reading direction has ended".

But it's not. The behavior remains the same: stream.on('end', () => {}) will be triggered only once, when the stream is closed (for both writing and reading).

And I see no need for an "ending" event, since that moment happens exactly after calling stream.end().

Contributor:

I see what you mean, but I think the bigger problem is that I personally was mistaken in my understanding of how this is supposed to work.

Having taken some time to look at some Node.js examples, it seems to me that the .end() method only closes the writing part, and the end event only signals the closing of the reading part.

Contributor:

I have been trying to figure out what happens if you try to write to a stream that was read-only ... and ... I am not certain what happens, as I feel the current examples are not complicated enough :(

And I have been trying not to write completely new examples just to get them wrong :(

Collaborator Author:

But it isn't a good idea to look at Node.js's docs/examples to get an understanding of how our implementation works 🤷 .

Yes, we have a kind of similar API, but it's not the same, and in our docs we're pretty explicit about how it is supposed to work: stream.end signals that the client has finished sending, and stream.on('end', () => {}) fires when the stream closes.

And I believe our current behavior of the end event is better in k6's context, since it could help our users put some checks there: we declare that this is the moment when the stream has finished and everything that should have happened has happened (or not) 😅

I have been trying to figure out what happens if you try to write to a stream that was read-only ... and ... I am not certain what happens, as I feel the current examples are not complicated enough :(

I think you might mean two different cases here:

The first case is the stream being always read-only (a.k.a. a server-side stream). In that case, I believe, we could detect it from the method definitions and just trigger an error (or write a log saying that there was an attempt to write to a read-only stream).

The second case is more interesting 🤔 , but I'm not sure it is realistic. It's the case where the server finishes the stream while the client still wants to send data. Even then, I believe the last word is on the server, which reports that it's all done (io.EOF in the read-from-server part).
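For what it's worth, the underlying grpc-go API already encodes this split: CloseSend (the analogue of the script-level stream.end()) only half-closes the client's writing direction, and the server keeps the last word by eventually returning io.EOF on the read side. A minimal, hypothetical sketch, with the message factory and processing as placeholders:

package grpcstream

import (
	"errors"
	"io"

	"google.golang.org/grpc"
)

// halfCloseAndDrain half-closes the sending direction and then drains the
// remaining server responses until the server itself reports io.EOF.
func halfCloseAndDrain(stream grpc.ClientStream, newMsg func() interface{}) error {
	if err := stream.CloseSend(); err != nil { // analogous to stream.end()
		return err
	}
	for {
		msg := newMsg()
		err := stream.RecvMsg(msg)
		if errors.Is(err, io.EOF) {
			return nil // the server confirmed it is done: the real 'end' moment
		}
		if err != nil {
			return err
		}
		// process msg here (e.g., dispatch a 'data' event)
	}
}

This mirrors the point above: even if the client stops writing first, the stream only ends once the server's io.EOF arrives.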

@mstoykov (Contributor) left a comment:

It looks like an improvement to me.

I feel like this currently closes both directions and the whole stream in cases where it is possible that a user can still write to the server. But let's wait for a user to report such a case, I guess.

I would still prefer if we had a specific unit test instead of just an example which happens to test it; I am pretty sure that, without that sleep, it still races with the connection actually finishing.

This adds a dedicated unit test checking that all responses from the server are received even if the client has declared that sending is finished (client.end called).
@olegbespalov (Collaborator, Author):

I feel like this currently closes both directions and the whole stream in cases where it is possible that a user can still write to the server. But let's wait for a user to report such a case, I guess.

Fully agree 👍 Let's wait for more feedback, and based on that I can iterate.

I would still prefer if we had a specific unit test

I've tried to address this in afff315
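For readers following along, here is a hypothetical sketch of the shape such a unit test could take; it is not the actual test in afff315. It drives the receiveLoop sketch from the PR description above with a fake grpc.ClientStream that still has responses queued after the client has half-closed:

package grpcstream

import (
	"context"
	"io"
	"testing"

	"google.golang.org/grpc/metadata"
)

// fakeStream pretends to be a server that still has three responses to
// deliver after the client half-closes, then reports io.EOF.
type fakeStream struct {
	remaining int
}

func (f *fakeStream) Header() (metadata.MD, error) { return nil, nil }
func (f *fakeStream) Trailer() metadata.MD         { return nil }
func (f *fakeStream) CloseSend() error             { return nil }
func (f *fakeStream) Context() context.Context     { return context.Background() }
func (f *fakeStream) SendMsg(m interface{}) error  { return nil }
func (f *fakeStream) RecvMsg(m interface{}) error {
	if f.remaining == 0 {
		return io.EOF
	}
	f.remaining--
	return nil
}

func TestResponsesReceivedAfterClientEnd(t *testing.T) {
	stream := &fakeStream{remaining: 3}
	_ = stream.CloseSend() // the Go analogue of the script-level client.end()

	var got int
	receiveLoop(stream,
		func() interface{} { return &struct{}{} },
		func(interface{}) { got++ },
		func(err error) {
			if err != nil {
				t.Fatalf("stream closed with error: %v", err)
			}
		},
	)

	if got != 3 {
		t.Fatalf("expected 3 responses after client end, got %d", got)
	}
}

The idea mirrors the commit message above: the client finishing its sending side must not stop the loop from draining the remaining responses.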

@codebien (Contributor) left a comment:

LGTM 👍

@mstoykov (Contributor) left a comment:

LGTM, some nitpicks in the naming and the running of the test.

@olegbespalov merged commit c1e8f28 into main on Sep 11, 2023
10 checks passed
@olegbespalov deleted the fix/dont-close-server-stream branch September 11, 2023 08:33
Successfully merging this pull request may close these issues:

prematurely "context canceled" on gRPC stream when timeout is set (#29)