grpc-common/cpp/helloworld segfaults/hits assertion #946
Do you happen to have a client that reproduces this available? (It'd save time.) My guess is that we're hitting an already-deleted call, which would explain the crash.
I've sent you a reproduction of the bug via email.

I'll be looking at it now.
Ok, so I've seen that your custom HTTP/2 client seems to wrap the HTTP/2 stream ID after 24 bits. This is non-compliant with the spec (the stream ID is supposed to keep increasing), but we need to be checking for it in the gRPC library as well. I've created a separate issue #957 for us to deal with that problem. I believe that recent changes in HEAD plus the changes that will be needed for #957 should solve the current problem, but I will keep checking it out. Thanks for sending your code.
I have also run into this issue using the iOS binding. The bug is reproducible on the latest master after attempting multiple connections to the RPC server within 30 seconds.
I believe that the initially reported bug is now fixed. Issue #957 was addressed in the code, but further tests are necessary and have been pushed to GA. I recommend closing this issue and opening a new one for the problem reported by @mikepb, as I don't think they are the same codepath at all. Is everyone ok with that?
Sounds great. I found that using the gRPC client to connect to a
Using grpc at 1acbf43 and running the C++ grpc-common/cpp/helloworld example segfaults when hitting it very hard at roughly 15k requests/second.

I've added a small `printf()` statement to get the return value of `pthread_mutex_lock`, and it turns out to be `22`, which means `EINVAL`.