
v0.12.0 #824

Merged
33 commits, merged Sep 4, 2022

Conversation

@joeyballentine (Member) commented Aug 24, 2022

Closes #235
Closes #883
May also close #783 and #826, but this needs verification.

@joeyballentine joeyballentine marked this pull request as draft August 24, 2022 21:11
@joeyballentine joeyballentine marked this pull request as ready for review September 2, 2022 13:05
@joeyballentine joeyballentine marked this pull request as draft September 2, 2022 13:05
@joeyballentine joeyballentine marked this pull request as ready for review September 2, 2022 14:25
@joeyballentine joeyballentine changed the title from "v0.12.0 (WIP)" to "v0.12.0" on Sep 2, 2022
joeyballentine and others added 24 commits September 4, 2022 10:41
update react-flow
* add swinir, initial

* kinda working

* Fix "classical_sr" swinir models not working with fp16

* Backend linting fixes

* Add links to supported arches in README (#817)

* update docs
* Paranoid incomplete initial commit

* Another fuse function

* More fuse functions

* THE MONSTER

* Final Fusion

* Some more main and bug fixes

* Creating layers

* Finished adding code needed for ESRGAN

* Param now printing correctly for ESRGAN test

* Bin now writing correctly for ESRGAN

* Convert -> Save NCNN working in chaiNNer

* Added fp16 to converter

* FpMode type

* Model structure changes and convert additions; untested

* Fixed bugs caused by previous commit

* More ops + some refactoring

* More ops

* More ops and bugfixes

* MemoryData and bugfixes

* More ops and bugfixes

* Added rest of ops except hard ones (I think)

* Minor bugfix

* Added an incomplete op

* Added interpolation

* fp16/fp32 can now interpolate

* Load Model and Upscale Image now working

* Renamed a couple files

* More tweaks and fixes, eliminated major slowdowns

* Fixed 1-channel bug

* opt/non-opt now interpolable

* Removed some unnecessary code

* Cleaned up code and fixed bugs

* Removed __main__

* Minor bugfix

* linting bugfixes

* more linting bugfixes

* Removed unnecessary parse_ncnn_param.py

* Fixed category error and linting issues

* Fixed some pyright complaints

* Fixed a fix

* PR changes

* Removed unused ncnn_parsers.py

* Updated save test

* Minor fix for commit before last

* Changed NCNN model loading again

* Removed garbage auto-import

* PR comment

* Implemented static factory method

* Removed references to convertmodel.com

* Added missing weightOrders to param schema

* Fixed bug and added load prelu weight

* Added error messages for param key errors

* Added FP Mode TextOutput

* Optimize onnx pre-conversion for torch 1.12 compatibility

* Compare weight shape instead of just size

* Add onnxoptimizer to onnx requirements

* Added onnxoptimizer size estimate

* Added onnxoptimizer to requirements

* Trigger CI

* Fixed raise statements

* Typing attempt

* Add links to supported arches in README (#817)

Co-authored-by: Joey Ballentine <34788790+joeyballentine@users.noreply.github.com>
* Added ONNX interpolate node

* Added save test

* Fixed linting errors

* Updates NCNN upscale channel handling

* Transparency hack for 1-channel models

* Unsqueeze 1D arrays to work with pytorch
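
The "Added interpolation", "fp16/fp32 can now interpolate", and "opt/non-opt now interpolable" entries above all come down to blending two compatible models weight by weight. A minimal sketch of that general technique on PyTorch state dicts; the function name is illustrative, and the actual NCNN code operates on parsed .param/.bin weights rather than state dicts.

```python
import torch

def interpolate_state_dicts(
    a: dict[str, torch.Tensor],
    b: dict[str, torch.Tensor],
    alpha: float,
) -> dict[str, torch.Tensor]:
    # Weighted blend of two checkpoints with identical keys and shapes.
    # alpha = 0 returns `a` unchanged, alpha = 1 returns `b`; casting to
    # float32 lets fp16 and fp32 checkpoints be mixed.
    return {key: torch.lerp(a[key].float(), b[key].float(), alpha) for key in a}
```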

Fixed convenient upscale bugs
* Replace tile size target with new dropdown

* Change to "number of tiles"

* Update descriptions

* make tile numbers correct

* add migration

* remove debug log
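
Exposing "number of tiles" instead of a pixel target means the tile size has to be derived from the requested count. A rough sketch of one such derivation, assuming a square-ish grid and ceiling division; the helper name is illustrative, not chaiNNer's actual API.

```python
import math

def tile_shape(height: int, width: int, num_tiles: int) -> tuple[int, int]:
    # Lay out an n-by-n grid for roughly `num_tiles` tiles, then size each
    # tile with ceiling division so the grid always covers the whole image.
    per_side = max(1, math.isqrt(num_tiles))
    return math.ceil(height / per_side), math.ceil(width / per_side)
```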
* Compile cache options before running

* wip broken caching

* working cache reduction

* logging/linting fixes
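
"Compile cache options before running" and "working cache reduction" suggest the executor counts, up front, how many downstream nodes read each output, so a cached result can be evicted once its last consumer has run. A small sketch of that bookkeeping over a hypothetical edge list:

```python
from collections import Counter

Edge = tuple[str, str]  # (source node id, target node id)

def output_use_counts(edges: list[Edge]) -> Counter:
    # Number of consumers for each node's output, computed before execution.
    return Counter(src for src, _dst in edges)

def consume(cache: dict, counts: Counter, node_id: str):
    # Read a cached output and evict it after its final consumer.
    value = cache[node_id]
    counts[node_id] -= 1
    if counts[node_id] == 0:
        del cache[node_id]
    return value
```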
* Allow numbers for all string inputs

* fix text append type
Make all iterator indexes numbers
* Add direct pytorch to ncnn conversion

* none
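
A direct PyTorch-to-NCNN path presumably maps PyTorch tensors straight into NCNN's weight blobs instead of round-tripping through ONNX. A heavily simplified sketch of just the tensor-to-bytes step; the real .bin format also carries per-blob flag/tag bytes for some layer types.

```python
import numpy as np
import torch

def tensor_to_bin_bytes(weight: torch.Tensor, fp16: bool = False) -> bytes:
    # Serialize one weight tensor as raw little-endian data for a .bin blob,
    # in fp16 or fp32 depending on the converter's FP mode.
    arr = weight.detach().cpu().numpy()
    return arr.astype(np.float16 if fp16 else np.float32).tobytes()
```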
* Caching optimization fixes

* garbage collection

* manually delete and garbage collect auto splitting

* use lazy init

* remove some logging

* Revert "use lazy init"

This reverts commit 9129db8.
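
The garbage-collection commits describe freeing intermediate GPU tensors between auto-split tiles instead of waiting for Python's collector. A simplified sketch of that pattern, assuming a CUDA device and ignoring the seam overlap a real tiled upscale needs:

```python
import gc
import torch

@torch.inference_mode()
def upscale_tiles(model: torch.nn.Module, tiles: list[torch.Tensor]) -> list[torch.Tensor]:
    results = []
    for tile in tiles:
        out = model(tile.cuda())
        results.append(out.cpu())
        # Drop the GPU reference and collect immediately so VRAM is
        # reclaimed before the next tile runs.
        del out
        gc.collect()
        torch.cuda.empty_cache()
    return results
```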
* Interpolate noop and onnx output bugfix

* Fixed interpolate tile mode issue
* Replace the ncnn logo with something closer to the official one

* replace with better ncnn icon
* Added Canny Edge Detection node (@jumpyjacko)

* Add Canny Edge Detection node (@jumpyjacko)
Added a newline to end of file for consistency between the files.

* Add changes suggested in #869

* Fix bad grammar in node (#869)

* Added Canny Edge Detection Node (#869)
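
The Canny node wraps OpenCV's edge detector. A minimal sketch of the underlying call, assuming an 8-bit input image; the threshold defaults here are illustrative, not the node's actual defaults.

```python
import cv2
import numpy as np

def canny_edges(img: np.ndarray, lower: int = 100, upper: int = 200) -> np.ndarray:
    # cv2.Canny expects an 8-bit single-channel image, so convert color
    # inputs to grayscale first. Returns a binary edge map (0 or 255).
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    return cv2.Canny(gray, lower, upper)
```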
* Add toast warning when modifying the chain during an execution

* use id and make it subtle

* put in middle
* Allow model save nodes to be used without onnx or ncnn installed

* better return

* fix import

* remove dividing line
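
Letting the ONNX/NCNN save nodes register without those packages installed amounts to probing for the optional dependency instead of importing it unconditionally. One way to do that probe; the package name below is a placeholder, not necessarily what chaiNNer checks for.

```python
from importlib import util

def has_package(name: str) -> bool:
    # True if the optional backend is importable, without actually importing it.
    return util.find_spec(name) is not None

NCNN_AVAILABLE = has_package("ncnn")  # placeholder package name
```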
* Use opencv imwrite when possible to save ram

* use rename and temp path
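
"use rename and temp path" describes writing the image to a temporary file and renaming it into place, so an interrupted save never leaves a truncated file; saving with cv2.imwrite directly also avoids holding an extra converted copy of the array in memory. A sketch of that pattern:

```python
import os
import cv2
import numpy as np

def save_image(img: np.ndarray, path: str) -> None:
    # cv2.imwrite picks the codec from the extension, so keep it on the
    # temporary name too, then atomically replace the target.
    ext = os.path.splitext(path)[1]
    tmp_path = path + ".partial" + ext
    if not cv2.imwrite(tmp_path, img):
        raise RuntimeError(f"Failed to write {tmp_path}")
    os.replace(tmp_path, path)
```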
* Model file iterator

* Update description
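
The model file iterator presumably walks a directory and yields files with supported model extensions. A sketch with an assumed extension set; the real node's list may differ.

```python
from pathlib import Path
from typing import Iterator

MODEL_EXTENSIONS = {".pth", ".onnx", ".param"}  # assumed, not exhaustive

def iter_model_files(directory: str) -> Iterator[Path]:
    # Yield model files in a stable order so iteration is reproducible.
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file() and path.suffix.lower() in MODEL_EXTENSIONS:
            yield path
```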
* Added toggle for checking for update on startup

* Fix linting errors
joeyballentine and others added 9 commits September 4, 2022 10:42
Disable FP16 processing for swinir

Found more models that don't support it, so fp16 is disabled for SwinIR until I figure out why.
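
Until the incompatibility is understood, the straightforward workaround is to force fp32 for SwinIR regardless of the user's fp16 setting. A hedged sketch of that kind of per-architecture gate; the names are illustrative.

```python
import torch

def apply_precision(model: torch.nn.Module, use_fp16: bool, arch: str) -> torch.nn.Module:
    # Some architectures (here: SwinIR) misbehave in half precision, so
    # override the global fp16 preference for them.
    if use_fp16 and arch.lower() != "swinir":
        return model.half()
    return model.float()
```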
* rely on estimated split depth

* possibly more optimizations

* oops

* Fixes

* Fix CPU
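
"rely on estimated split depth" refers to the automatic tiling used for upscaling: try the whole image, and on an out-of-memory error split into quadrants and recurse, starting from a depth presumably estimated from image size and available memory rather than always from zero. A simplified sketch that ignores both that estimate and the overlap padding a real implementation needs to hide seams:

```python
import numpy as np

def auto_split(img: np.ndarray, upscale, depth: int = 0, max_depth: int = 4) -> np.ndarray:
    # Try the whole tile first; on an out-of-memory error, split into four
    # quadrants and recurse.
    try:
        return upscale(img)
    except RuntimeError as e:
        if "out of memory" not in str(e).lower() or depth >= max_depth:
            raise
    h, w = img.shape[:2]
    quads = [
        img[: h // 2, : w // 2], img[: h // 2, w // 2 :],
        img[h // 2 :, : w // 2], img[h // 2 :, w // 2 :],
    ]
    tl, tr, bl, br = (auto_split(q, upscale, depth + 1, max_depth) for q in quads)
    return np.concatenate(
        [np.concatenate([tl, tr], axis=1), np.concatenate([bl, br], axis=1)], axis=0
    )
```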
* Add favorites section to node selection context menu

* Hide favorites if none
* Show execution time on nodes

* Refactor into separate component

* ts fixes
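
Showing execution time on nodes implies timing each node's run somewhere in the backend and sending it to the UI. A trivial sketch of the measurement side only; the names are hypothetical.

```python
import time
from typing import Any, Callable

def run_timed(run: Callable[[], Any]) -> tuple[Any, float]:
    # Wall-clock time for a single node execution, to be reported to the UI
    # alongside the node's result.
    start = time.perf_counter()
    result = run()
    return result, time.perf_counter() - start
```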
* Fixed blend shape bugs

* Removed unused import
Blend fix 2 electric blendaloo (#888)

Shifted copy
Successfully merging this pull request may close these issues:

How do I increment a number
[ Feature ] Pytorch Model to NCNN Model