
Issue with speed of clipper with custom PyTorch model #760

Open
alexvicegrab opened this issue Dec 2, 2019 · 1 comment

@alexvicegrab

Dear Clipper admins,

I'm attempting to use Clipper to deploy a single model with approximately 1.3M parameters.

The model's forward pass runs quickly locally (~0.1 s), while the same model served through Clipper generally takes 5 to 20 seconds (usually ~15 s) for the same forward pass.
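For reference, this is roughly how I time the local forward pass; the model below is a hypothetical stand-in of comparable size (~1.2M parameters), not my real architecture:

```python
import time

import torch

# Hypothetical stand-in for the closed-source model: roughly the same
# parameter count, not the real architecture.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 1200),
    torch.nn.ReLU(),
    torch.nn.Linear(1200, 512),
)
model.eval()

x = torch.randn(1, 512)

with torch.no_grad():
    model(x)  # warm-up pass
    start = time.perf_counter()
    model(x)
    elapsed = time.perf_counter() - start

print(f"local forward pass: {elapsed:.4f}s")
```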

I'm quite stumped as to what the issue might be. The code I am using is closed source, so I can't share a detailed example here, but the sketch below shows the general deployment pattern I'm following.
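The deployment follows the standard `deploy_pytorch_model` pattern from `clipper_admin`; this is a sanitized sketch with placeholder app/model names and the stand-in model from above, not my actual code:

```python
import torch
from clipper_admin import ClipperConnection, DockerContainerManager
from clipper_admin.deployers.pytorch import deploy_pytorch_model

# Hypothetical stand-in for the closed-source model (as in the timing sketch).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 1200),
    torch.nn.ReLU(),
    torch.nn.Linear(1200, 512),
)
model.eval()

clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper()

# Placeholder application name and SLO.
clipper_conn.register_application(
    name="my-app",
    input_type="doubles",
    default_output="-1.0",
    slo_micros=100000,
)

def predict(model, inputs):
    # Clipper calls this with the deployed model object and a batch of inputs.
    import torch
    with torch.no_grad():
        return [
            str(model(torch.tensor(x, dtype=torch.float32)).tolist())
            for x in inputs
        ]

deploy_pytorch_model(
    clipper_conn,
    name="my-model",
    version=1,
    input_type="doubles",
    func=predict,
    pytorch_model=model,
)

clipper_conn.link_model_to_app(app_name="my-app", model_name="my-model")
```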

Could you please point me in the right direction for solving this issue?

Many thanks in advance!

@alexvicegrab
Author

For comparison, I implemented the same model behind a Flask endpoint and got predictions roughly 15 times faster, so my suspicion is that the issue is not (at least not entirely) related to containerisation.
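The Flask baseline looks roughly like this (again with the stand-in model and a placeholder route, not my actual code):

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-in for the closed-source model.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 1200),
    torch.nn.ReLU(),
    torch.nn.Linear(1200, 512),
)
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"input": [... 512 floats ...]}.
    x = torch.tensor(request.get_json()["input"], dtype=torch.float32)
    with torch.no_grad():
        y = model(x)
    return jsonify({"output": y.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The same model sits behind both services, so the performance gap should come from the serving layer rather than the model itself.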
