Datastore: infrequent operations always fail first time, requires retry #899
Comments
@murgatroid99 does the GRPC client have a keep alive feature for non-streaming requests? @timanovsky what is the operation that is affected in V1? |
@blowmage It is just query (with ancestor) |
@timanovsky Is this still an issue? If not, can you close? |
@timanovsky Is this still happening with the latest grpc release? |
Hi, I haven't updated for a while; I can give it a try. Does it have the enhancement? I couldn't find anything related in the release notes.
Best regards,
Alexey Timanovsky.
|
The grpc 1.1.0 release made improvements to networking. If you install the latest gem, you will get that version. Curious whether it improves your situation. |
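For what it's worth, a quick way to confirm which grpc version actually ends up loaded after updating (a sketch, assuming the app uses Bundler):

```ruby
# After running `bundle update grpc`, check the loaded version:
require "grpc"
puts GRPC::VERSION  # expect "1.1.0" or newer
```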
@murgatroid99 can you or someone else comment on keep alive in the GRPC lib? |
I've been running it for half a day, and I would say no change; connections still die after some inactivity. The error text has changed, though: now it is GRPC::Internal / 13 / "Transport closed".
Best regards,
Alexey Timanovsky.
|
Thanks @timanovsky! |
I'm experiencing the same issue in a Rails app with the Vision library. I set up the client in my config/initializers folder and then call it from a controller. This works, but if I wait 4 minutes between requests, the next one fails; subsequent requests within that time frame still work. |
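A rough sketch of the kind of setup being described (not the reporter's actual code; the file names, project/keyfile values, `VISION` constant, and `image_path` parameter are made up, and the client construction assumes the google-cloud-vision gem's older image/labels API):

```ruby
# config/initializers/vision.rb — build one long-lived Vision client at boot
require "google/cloud/vision"

VISION = Google::Cloud::Vision.new project: "my-project",
                                   keyfile: "/path/to/keyfile.json"

# app/controllers/labels_controller.rb — reuse the long-lived client per request
class LabelsController < ApplicationController
  def show
    # If the underlying gRPC channel has sat idle past the load balancer's
    # timeout, this first call is the one that fails and needs a retry.
    labels = VISION.image(params[:image_path]).labels
    render json: labels.map(&:description)
  end
end
```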
grpc/grpc#9986 should be able to help. |
grpc-1.2.1.pre1 pre-release gem was just pushed, which should fix this (it includes grpc/grpc#9986). Can you please use this pre-release gem to further test and verify? |
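To pull in a pre-release gem like this it generally has to be pinned explicitly, since Bundler won't select pre-release versions on its own. A sketch:

```ruby
# Gemfile — pin the pre-release grpc gem explicitly
gem "grpc", "1.2.1.pre1"

# then run: bundle update grpc
```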
@Rob117 @timanovsky Can you please give it a try? |
@Rob117 @timanovsky This issue is blocking our release. If there are no updates from you, we will close this issue. You may reopen it if you still run into the issue later. |
FWIW, I have not been able to reproduce the behavior described in this issue. I've left a process idle for hours and it connects again without error. |
Closing this now, since there are no updates from the original reporters. The fix was added in the grpc 1.2.1.pre1 gem.
Once we moved to the v1 API we saw a significant slowdown of one particular operation. Investigation suggested that only a particular type of operation is affected: infrequent (once per tens of minutes) reads, and unfortunately the servers performing these ops are not doing any other kind of Datastore operations. I believe it is due to the reason mentioned in grpc/grpc-java#1648: the Google load balancer shuts down inactive TCP connections after 10 minutes. So if the previous operation was further back than that, the new operation fails with an EOF code (I mentioned that in another issue). Effectively, what we see is that when an operation comes shortly enough after the previous one it takes 120-180 ms, but when a retry is involved it takes about 1200 ms.
I think some kind of keep-alive / ping should be configured on the connection to prevent this. I'm not sure grpc provides such a configuration option, though.
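grpc does expose HTTP/2 keep-alive pings as standard channel arguments at the grpc-ruby level, although whether google-cloud-datastore lets you pass them through is a separate question. A generic sketch against the generated Datastore stub (the require path, interval values, and bare channel credentials are illustrative assumptions, not a drop-in configuration):

```ruby
require "grpc"
require "google/datastore/v1/datastore_services_pb"

# Keep-alive is configured via gRPC core channel args; the names below are
# standard gRPC options, the specific values are just guesses.
keepalive_args = {
  "grpc.keepalive_time_ms"              => 120_000, # ping after 2 minutes of inactivity
  "grpc.keepalive_timeout_ms"           => 20_000,  # wait 20s for the ping ack
  "grpc.keepalive_permit_without_calls" => 1        # ping even with no active RPCs
}

stub = Google::Datastore::V1::Datastore::Stub.new(
  "datastore.googleapis.com",
  GRPC::Core::ChannelCredentials.new,  # plus call credentials in a real setup
  channel_args: keepalive_args
)
```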
In the worst case, should I implement this keep-alive myself in a background thread? What would be a good Datastore endpoint to hit so that it does not depend on data presence?
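One answer to that last question: a lookup of a key that doesn't need to exist is about as cheap as Datastore reads get and does not depend on any data being present. A minimal sketch of such a background keep-alive, assuming the google-cloud-datastore gem's `key`/`find` API (the kind name, interval, and `$datastore` global are made-up placeholders):

```ruby
require "google/cloud/datastore"

$datastore = Google::Cloud::Datastore.new project: "my-project"

# Background keep-alive: issue a trivial lookup every few minutes so the
# underlying connection never sits idle long enough for the load balancer
# to drop it. Looking up a key that doesn't exist simply returns nil.
Thread.new do
  loop do
    begin
      $datastore.find $datastore.key("__KeepAlive__", "ping")
    rescue StandardError => e
      # A failed ping is harmless; the next real request will reconnect anyway.
      warn "datastore keep-alive failed: #{e.class}: #{e.message}"
    end
    sleep 300  # 5 minutes, comfortably under the ~10 minute idle timeout
  end
end
```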