
Ncnn load net refactor #1023

Merged

Conversation

theflyingzamboni
Collaborator

As with the ONNX sessions, we were loading the NCNN model weights from the cache into the Net object every time the upscale node ran. This refactor reuses the ONNX pattern: cache the Net the first time it is loaded and reuse it on subsequent runs. I've been seeing iterator run times of 60-75% of the pre-fix times for smaller images. It also removes the CPU spikes caused by loading weights into the Net, though it does not change the high CPU usage seen with very fast iterator processing of small image/small model combinations.
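The caching pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not chaiNNer's actual code: the names `_net_cache`, `get_cached_net`, and the loader callable are assumptions, and a real implementation would construct an `ncnn.Net` from the model's param/bin data.

```python
from typing import Any, Callable, Dict

# Module-level cache, keyed by model identity (hypothetical sketch).
_net_cache: Dict[int, Any] = {}

def get_cached_net(weights: bytes, load_net_from_weights: Callable[[bytes], Any]) -> Any:
    """Load the Net once per unique weights blob and reuse it afterwards.

    `load_net_from_weights` stands in for whatever builds the ncnn Net;
    `hash(weights)` stands in for a stable model identifier.
    """
    key = hash(weights)
    if key not in _net_cache:
        # Expensive step: only runs the first time this model is seen.
        _net_cache[key] = load_net_from_weights(weights)
    return _net_cache[key]
```

Every upscale after the first then skips the weight-loading step entirely, which is where the reported run-time and CPU-spike improvements come from.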

Also fixed a bug where ncnn_auto_split_process did not pass the input and output names along when recursing.
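The shape of that recursion fix can be sketched like this. The function signature, the `upscale` callable, and the split-on-`MemoryError` logic here are illustrative assumptions, not chaiNNer's exact API; the point is only that the blob names must be forwarded on the recursive call rather than silently falling back to defaults.

```python
from typing import Any, Callable, List

def auto_split_process(
    img: List[Any],
    upscale: Callable[[List[Any], str, str], Any],
    input_name: str,
    output_name: str,
    max_depth: int = 3,
) -> Any:
    """Try to upscale whole; on out-of-memory, split and recurse (sketch)."""
    try:
        return upscale(img, input_name, output_name)
    except MemoryError:
        if max_depth == 0:
            raise
        halves = [img[: len(img) // 2], img[len(img) // 2 :]]
        # The fix: forward input_name/output_name into the recursive calls.
        return [
            auto_split_process(h, upscale, input_name, output_name, max_depth - 1)
            for h in halves
        ]
```

Before the fix, the recursive calls would run against default blob names instead of the model's actual input/output names, breaking split processing for models that use non-default names.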

@joeyballentine joeyballentine merged commit 0935ddc into chaiNNer-org:main Sep 22, 2022
@theflyingzamboni theflyingzamboni deleted the ncnn-load-net-refactor branch December 11, 2022 21:54