
The inference speed of V2 API is much slower than the V1 version #2080

Closed
lcy-seso opened this issue May 10, 2017 · 0 comments · Fixed by #2178
lcy-seso (Contributor) commented:

A user reported that inference with the V2 API is slower than with V1. As far as I know, there are some design issues here. Is it possible to optimize this?

Besides, we do not provide official documentation on how to do batch inference. I would like to know whether there are rules to follow or things I should pay attention to.
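As general background for the batch-inference question above (this is not PaddlePaddle-specific — the model and function names below are hypothetical), the usual reason batching matters is that a per-sample loop pays the framework's per-call overhead once for every sample, while a batched call amortizes it over the whole batch. A minimal numpy sketch of the idea:

```python
import numpy as np

# Hypothetical stand-in for a real network: a single dense layer.
rng = np.random.default_rng(0)
W = rng.random((64, 10))

def infer_one(x):
    """Inference on a single sample of shape (64,)."""
    return x @ W

def infer_batch(batch):
    """Inference on a whole batch of shape (n, 64) in one call."""
    return batch @ W

samples = rng.random((256, 64))

# Per-sample loop: 256 separate calls, 256x the call overhead.
loop_out = np.stack([infer_one(s) for s in samples])

# Batched: one matrix multiply produces identical results.
batch_out = infer_batch(samples)

assert np.allclose(loop_out, batch_out)
```

Whatever the V2 API's batch-inference entry point turns out to be, documentation for it should steer users toward passing many samples per call rather than calling the predictor once per sample.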

Defects board activity: lcy-seso added this issue to the Defects board (Top priorities) on May 10, 2017, moved it between "Not in schedule", "Next Week", and "Current Week ToDo" during May 10–17, 2017, and moved it to "Done" on May 22, 2017.