We have been using TensorFlow Serving to load multiple models for online and offline services. For online services we care about the latency of each gRPC request, and it is good enough. For offline services, however, we want to improve throughput by batching data into a single gRPC request.
That is where we hit the gRPC max message size limitation, which was already discussed in #284. The default maximum of 4 MB is much smaller than real-world requests.
I'm not sure whether 100 MB is the right value, but it would work better for most users. I can send a PR to raise the max message size if others also think it's reasonable. Otherwise, most developers have to compile from source to change this limit.
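For reference, here is a minimal client-side sketch of the offline batching pattern described above. The model name `my_model`, the input key `inputs`, the host `localhost:8500`, and the batch shape are assumptions for illustration; the gRPC channel options `grpc.max_send_message_length` and `grpc.max_receive_message_length` only lift the client-side limits, so the server still has to accept messages of that size.

```python
# Sketch: batch many rows into one PredictRequest and raise the client-side
# gRPC message size limits so a large request/response is not rejected by the
# default 4 MB cap. Model name, input key, host, and shapes are hypothetical.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

MAX_MESSAGE_LENGTH = 100 * 1024 * 1024  # 100 MB; adjust to your payload size

channel = grpc.insecure_channel(
    "localhost:8500",
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_LENGTH),
        ("grpc.max_receive_message_length", MAX_MESSAGE_LENGTH),
    ],
)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# One request carrying 10,000 feature rows instead of 10,000 small requests.
batch = np.random.rand(10000, 128).astype(np.float32)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto(batch))

response = stub.Predict(request, timeout=60.0)
```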
Discussed on the PR, but our current limit as of this commit (about a month ago) is 2 GB. Perhaps you're running an old binary that still uses the default 4 MB? We tested the model_server after that change, and it should already handle 100 MB requests.