
[Feature request] A more light-weighted model #49

Closed
fkcptlst opened this issue Mar 31, 2023 · 3 comments · Fixed by #50

Comments

@fkcptlst

Maybe add some optional lightweight models to choose from? Inception is a bit too heavy for small servers; its memory consumption is too high.

@arnidan
Owner

arnidan commented Apr 1, 2023

Hello @fkcptlst, thank you for the great idea!

Could you try the `ghcr.io/arnidan/nsfw-api:50_merge-min` image (`docker pull ghcr.io/arnidan/nsfw-api:50_merge-min`) in your environment?
It's bundled with a quantized model.
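A minimal way to compare the two images' RAM footprints is to run each one and take a `docker stats` snapshot. The `:latest` tag for the default image, the container name, and the `3000:3000` port mapping below are assumptions, not taken from this thread; check the project README for the actual values.

```shell
# Pull both variants (the min tag is the one mentioned in this thread;
# the default tag is assumed to be :latest)
docker pull ghcr.io/arnidan/nsfw-api:latest
docker pull ghcr.io/arnidan/nsfw-api:50_merge-min

# Start the quantized-model image (port mapping is an assumption)
docker run -d --name nsfw-min -p 3000:3000 ghcr.io/arnidan/nsfw-api:50_merge-min

# Take a one-shot memory reading for the container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}" nsfw-min

# Clean up
docker rm -f nsfw-min
```

Repeating the same `run`/`stats` steps for the default image gives a like-for-like comparison; memory should be sampled after sending a few requests, since model weights may be loaded lazily.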

@arnidan
Owner

arnidan commented Apr 1, 2023

I ran some tests.
It seems there is no big difference in RAM usage between the models — only around 50 MB: 600 MB (default) vs 650 MB (min).

@fkcptlst
Author

fkcptlst commented Apr 2, 2023

Yes, my test results are the same. It turns out that Inception sometimes even consumes less RAM than MobileNet, which is a bit peculiar.

Thanks for the effort!
