Dear Clipper admins,

I'm attempting to use Clipper to deploy a single model with approximately 1.3M parameters.
The model's forward pass runs quickly locally (~0.1 s), but the same model deployed in Clipper typically takes 5–20 seconds (usually ~15 s) per forward pass.
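For reference, this is roughly how I measure the local forward-pass time. The real model is closed source, so `forward` below is a stand-in stub of my own; only the timing harness matters:

```python
import time

def forward(x):
    # Stand-in for the real ~1.3M-parameter model's forward pass
    # (the actual model is closed source).
    return [v * 2.0 for v in x]

def time_forward(x, runs=10):
    # Mean wall-clock time over several runs, after one warm-up call,
    # so one-off startup cost doesn't skew the measurement.
    forward(x)
    start = time.perf_counter()
    for _ in range(runs):
        forward(x)
    return (time.perf_counter() - start) / runs

if __name__ == "__main__":
    x = [0.0] * 1024
    print(f"mean forward time: {time_forward(x):.4f}s")
```

Measured this way, the bare forward pass is around 0.1 s; the 5–20 s figures are end-to-end through Clipper.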
I'm stumped as to what the issue might be, and while I would love to share a detailed example here, the code I am using is closed source.
Could you please point me in the right direction for debugging this?
Many thanks in advance!
For comparison, I implemented the same model behind Flask and it serves predictions roughly 15× faster, so my suspicion is that the issue is not (at least not entirely) related to the containerisation.
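The Flask baseline is along these lines. This is a minimal sketch with the same stub `forward` in place of the closed-source model; the endpoint name and JSON payload shape are my own choices, not anything Clipper-specific:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def forward(x):
    # Stand-in for the real model's forward pass (closed source).
    return [v * 2.0 for v in x]

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"input": [0.1, 0.2, ...]} and returns the
    # model output as {"output": [...]}.
    data = request.get_json(force=True)
    return jsonify({"output": forward(data["input"])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

With the real model loaded once at startup in place of the stub, a POST to `/predict` completes in well under a second, which is what makes the Clipper latency stand out.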