review: HttpClient implementation refactor #8
Conversation
Looks good, just a couple of comments.
final var request = toStandardHttpRequest(originalRequest);
final var future = new JettyAsyncResponseListener(request) {

    final StringBuilder builder = new StringBuilder();
I'm not sure about the threading model of jetty - will it always be the same websocket thread making the onContent calls?
I'm not sure. But the synchronization and calling order should be guaranteed by the callback.succeed() call.
i.e. are you mentioning this because the variable should be volatile?
It wouldn't need to be volatile, it would need to be a StringBuffer or synchronized to account for the possibility of reordered writes. As you say that's probably highly unlikely given the higher level serialization control.
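As a side note on the StringBuffer suggestion, here is a minimal standalone sketch (class and method names are hypothetical, not part of this PR) of why StringBuffer tolerates concurrent appends where StringBuilder would need an external happens-before edge such as the callback.succeeded() ordering:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BufferSketch {

    // StringBuffer's append is synchronized, so appends from multiple threads
    // are safe without any external ordering guarantee. With a StringBuilder,
    // the same loop could lose or reorder writes unless something else (e.g.
    // the callback serialization discussed above) provides a happens-before edge.
    public static String appendConcurrently(int threads, int perThread) {
        StringBuffer buffer = new StringBuffer();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    buffer.append('x');
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return buffer.toString();
    }
}
```

If onContent invocations are already serialized by Jetty's callback mechanism, a plain StringBuilder is sufficient, which is the conclusion reached below.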
Also JettyHttpClientTest can remove the testConsumeLines override.
Ok, so yes, I considered StringBuffer, but I feel that the callback.succeed call should guard against any possible reordered writes regardless of which thread calls the original onContent method.
Leaving it as it is then.
Also JettyHttpClientTest can remove the testConsumeLines override.
Yes, I forgot those and the one in the JettyInterceptorTest
@Override
protected void onContent(ByteBuffer content) throws Exception {
    for (char c : StandardCharsets.UTF_8.decode(content).array()) {
We can consider elevating something like this so that we have the same thing across all implementations. Honestly I'm not sure what is currently guaranteed by the API server wrt HTTP watch messages - I was just trying to mimic the okhttp behavior. Looking back I can see that in 5.10 and older we were making a strict call https://github.com/fabric8io/kubernetes-client/blob/23db4301478e74dac7154af70907bf98f2936ff5/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/dsl/internal/WatchHTTPManager.java#L98 - I must have changed that to non-strict in 5.12+ to be less blocking. However that did introduce the possibility of truncated messages.
This implementation is strict - it will not produce truncated messages, and if the content does not end with a newline, then the final line is not emitted.
The JDK logic also handles the case of splitting multi-byte values across messages - again I'm not even sure if the api server would do that.
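The strict behavior described above (emit only newline-terminated lines, hold back a trailing partial line) can be sketched roughly as follows. This is a standalone illustration, not the actual JettyAsyncResponseListener code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class StrictLineSplitter {

    // Accumulates any partial line left over from previous chunks.
    private final StringBuilder pending = new StringBuilder();

    // Returns only complete, '\n'-terminated lines; a trailing partial line
    // stays buffered and is never emitted in truncated form. If the stream
    // ends without a final '\n', that last line is simply dropped - matching
    // the "ignores the last line if it's not terminated" behavior below.
    public List<String> onChunk(String chunk) {
        List<String> lines = new ArrayList<>();
        pending.append(chunk);
        int nl;
        while ((nl = pending.indexOf("\n")) >= 0) {
            lines.add(pending.substring(0, nl));
            pending.delete(0, nl + 1);
        }
        return lines;
    }
}
```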
I wanted to add more tests regarding this, but I don't want to delay the merge of the 4430 PR.
The current implementation:
- supports only \n as line terminator/break
- ignores the last line if it's not terminated
- won't emit truncated messages (which I thought was part of the deal, TBH I didn't check anything and implemented from what I recalled from our conversations 😅)
So how do you want me to proceed in the scope of this PR?
supports only \n as line terminators/breaks
Which is fine, that's what the other implementations are doing.
ignores the last line if it's not terminated/breaks
The JDK implementation does this, okhttp currently does not - but did in 5.10.
won't emit truncated messages (which I thought was part of the deal, TBH I didn't check anything and implemented from what I recalled from our conversations 😅)
That is the desired behavior - it's just that it adds even more blocking to the current okhttp implementation.
So how do you want me to proceed in the scope of this PR?
As long as we're comfortable with ignoring the possibility that multi-byte characters are split across messages (I'm willing to assume that), I would elevate this implementation and remove the okhttp and jdk ones.
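For reference, the multi-byte-split case mentioned above can be handled with a stateful CharsetDecoder that carries a partial UTF-8 sequence across chunk boundaries. This is a hypothetical standalone sketch (not the JDK client's actual code); a naive per-chunk `new String(bytes)` would corrupt a character split across two messages:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

public class Utf8ChunkDecoder {

    private final CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
    // Unconsumed trailing bytes from the previous chunk (a partial sequence).
    private ByteBuffer leftover = ByteBuffer.allocate(0);

    public String decode(byte[] chunk, boolean endOfInput) {
        // Prepend the leftover bytes to the new chunk before decoding.
        ByteBuffer in = ByteBuffer.allocate(leftover.remaining() + chunk.length);
        in.put(leftover).put(chunk).flip();
        // UTF-8 never produces more chars than input bytes.
        CharBuffer out = CharBuffer.allocate(in.remaining() + 1);
        // With endOfInput=false the decoder leaves an incomplete multi-byte
        // sequence unconsumed in the input buffer instead of mangling it.
        decoder.decode(in, out, endOfInput);
        leftover = ByteBuffer.allocate(in.remaining());
        leftover.put(in).flip();
        out.flip();
        return out.toString();
    }
}
```

With this, the two-byte encoding of 'é' (0xC3 0xA9) split across two messages decodes to a single correct character rather than two replacement characters.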
As long as we're comfortable with ignoring the possibility that multi-byte characters
Depending on how the consumeLines is used downstream, which I haven't checked 🤦 and don't recall precisely.
I would elevate this implementation and remove the okhttp and jdk ones.
So my proposal is that I tackle this in a follow-up PR and we move on with this one to not delay the Vert.x implementation
Depending on how the consumeLines is used downstream, which I haven't checked 🤦 and don't recall precisely.
It's only used for http watches, which are only used if web sockets do not seem supported.
So my proposal is that I tackle this in a follow-up PR and we move on with this one to not delay the Vert.x implementation
Either way - that next PR will still need to go in before the Vert.x implementation - as it would remove the need to implement a consumeLines method.
It's only used for http watches, which are only used if web sockets do not seem supported.
Ok, so it's not a critical thing, and probably the API server is serving complete chunks. I wouldn't complicate it more, then.
as it would remove the need to implement a consumeLines method.
Seems like a valid point
Maybe we can discuss in the call
 * @return the HTTP request.
 */
HttpRequest request();

Optional<HttpResponse<?>> previousResponse();

enum SupportedResponses {
Could this be consolidated with a renamed SendAsyncUtils - it could remain package private that way.
I did want to expose the SupportedResponses, because besides Javadoc, there's nothing tangible that suggests users what the preferred response should be.
However, a sendAsync method signature accepting the enum would be less friendly since we'd lose the T parameter.
Would making the SupportedResponses.asString and SupportedResponses.sendAsync methods private be enough for you?
I did want to expose the SupportedResponses, because besides Javadoc, there's nothing tangible that suggests users what the preferred response should be.
Do you envision refactoring the calling code from:
client.sendAsync()
To:
SupportedResponses.TEXT.sendAsync(...)
And eliminating HttpClient.sendAsync altogether? I'm fine with that too. There will eventually be a couple of other cases - ReadableByteChannel - which could either be blocking (essentially what is underpinning the InputStream now) or non-blocking.
well, I was thinking more something like this:
default <T> CompletableFuture<HttpResponse<T>> sendAsync(HttpRequest request, HttpResponse.SupportedResponses type) {
  return type.sendAsync(request, this);
}
I added the method to this PR so you can check
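For illustration, here is a self-contained sketch of the enum-dispatch idea. All names here are hypothetical stand-ins, not the actual fabric8 HttpClient API: each enum constant routes the request to its preferred body handler, so a default sendAsync method on the client can simply delegate to the enum:

```java
import java.util.concurrent.CompletableFuture;

public class EnumDispatchSketch {

    // Stand-in for the real HttpClient: one method per supported body type.
    interface Client {
        CompletableFuture<String> sendText(String request);
        CompletableFuture<byte[]> sendBytes(String request);
    }

    // Each constant knows how to dispatch to its preferred handler, so the
    // enum itself documents which response body types are supported.
    enum SupportedResponses {
        TEXT {
            @Override
            CompletableFuture<?> sendAsync(String request, Client client) {
                return client.sendText(request);
            }
        },
        BYTES {
            @Override
            CompletableFuture<?> sendAsync(String request, Client client) {
                return client.sendBytes(request);
            }
        };

        abstract CompletableFuture<?> sendAsync(String request, Client client);
    }
}
```

The trade-off discussed in this thread applies: dispatching through the enum loses the compile-time T of the response body, so the caller gets a CompletableFuture<?> (or an unchecked cast) instead of a typed future.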
Oh, I thought you were saying you didn't want to lose the type parameter from the enum. Yes, that is fine. But are you thinking of moving forward with this change in this PR, or a follow-on? Some tests won't compile with this change in.
Force-pushed 2f328e0 to bf8e414
- implementation of JettyHttpClient consumeLines
- additional tests
- centralization of HttpResponse body/async consumption logic
- uncalled-for nitpicks 😅😇

Signed-off-by: Marc Nuri <marc@marcnuri.com>
Force-pushed bf8e414 to 3356764
Hi @shawkins,
I've been reviewing 4430 and added a few changes to your branch. Please check whether you find them suitable.
I want to add more tests, but will proceed with this once we merge fabric8io#4430