
Improve the Base model #1

Closed
vishwarajanand opened this issue Apr 18, 2022 · 3 comments

Comments

@vishwarajanand

Hi, I forked this repo and renamed the full-demo folder to docs so that I could host it on GitHub Pages.

Here is the URL I tried: https://vishwarajanand.github.io/pytorch-to-javascript-with-onnx-js/

The base model is more basic than it needs to be: it fails to recognize simple hand-drawn digits, frequently confusing them with other digits. I'm sharing a few examples below.

Why did I open this issue?

Because there's no easy way for me to tell whether these mispredictions are caused by the base model itself or by ONNX.js.

| Expected Number | Predicted Number | Snap |
| --- | --- | --- |
| 8 | 0 | Screenshot 2022-04-18 at 10 47 24 |
| 7 | 1 | Screenshot 2022-04-18 at 10 52 04 |
| 6 | 5 | Screenshot 2022-04-18 at 10 52 39 |
| 8 | 0 | Screenshot 2022-04-18 at 10 53 23 |
| 8 | 5 | Screenshot 2022-04-18 at 10 53 34 |
| 9 | 5 | Screenshot 2022-04-18 at 10 53 52 |
| 6 | 5 | Screenshot 2022-04-18 at 10 54 04 |
| 4 | 0 | Screenshot 2022-04-18 at 10 54 14 |
| 4 | 0 | Screenshot 2022-04-18 at 10 54 22 |
| 2 | 3 | Screenshot 2022-04-18 at 10 55 09 |
@elliotwaite
Owner

Thanks for opening this issue. The poor accuracy is due to the model itself being very basic; running it with ONNX.js rather than PyTorch should not affect its predictions. The demo is only meant as a proof of concept, so I kept the model simple by reusing the model from PyTorch's MNIST example. But since others may also wonder about the poor performance, I've added a note about this to the README that references this issue.

Also, if you wanted to try using a better model in your forked version, I could add a link in the README to your GitHub Pages demo so that others could also try it out.
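For anyone trying a stronger model in a fork, even a small convolutional network typically handles hand-drawn digits noticeably better than a very basic classifier. A hypothetical sketch, assuming PyTorch; the architecture and the `SmallCNN` name are illustrative and not taken from this repo:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A small two-conv-layer MNIST classifier (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.25),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, 10),  # 10 digit classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN().eval()
logits = model(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```

After training on MNIST, the model could be exported with `torch.onnx.export` the same way as the original demo model.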

Let me know if this resolves the issue for you, or if you think there's a better solution.

@vishwarajanand
Author

Thanks @elliotwaite. I was just evaluating the webpage's response times with the provided ONNX model. Since you've confirmed that the poor predictions come from the base model itself rather than from ONNX.js, we can close out this issue.
Creating a new model myself is a bit of a stretch goal for me.

@elliotwaite
Owner

Okay, sounds good.
