
make part number part of yml configuration #29

Closed · nhanvtran opened this issue Jan 3, 2018 · 1 comment

@nhanvtran (Contributor)

trivial, but making an issue so we don't forget
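
A minimal sketch of the idea, assuming PyYAML and an illustrative XilinxPart key (the key name and default value are assumptions, not necessarily the final hls4ml config API): the FPGA part number moves into the project's yml configuration, and the conversion flow reads it from there instead of hard-coding it in the generated build scripts.

```python
# Sketch only: read the FPGA part number from the project's yml configuration.
# `XilinxPart` and the fallback value are illustrative, not the settled key name.
import yaml

config_text = """
OutputDir: my-hls-test
ProjectName: myproject
XilinxPart: xcku115-flvb2104-2-i   # part number now lives in the config
ClockPeriod: 5
IOType: io_parallel
"""

config = yaml.safe_load(config_text)
part = config.get('XilinxPart', 'xcku115-flvb2104-2-i')  # fall back to a default part

# The generated build script would then pick the part up from the config, e.g.:
print(f"set_part {{{part}}}")   # -> set_part {xcku115-flvb2104-2-i}
```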

@benjaminkreis (Member)

#30

thesps added a commit to thesps/hls4ml that referenced this issue May 12, 2020
* Increase precision of weight printing

* Get class name differently for profiling. Import profiling in model/__init__ for easier import elsewhere

* Make optimizer passes configurable; the API is a list in the config file. Splitting hls_model.py into hls_model.py and hls_layers.py was necessary to break the circular import that arose because the optimizers import Layers and utilities while hls_model now needs to import the optimizer

* QKeras' use of integer bits differs slightly from ap_fixed: e.g. quantized_bits(4,0).max() is 1.0, whereas it would be 0.5 with ap_fixed. So, add 1 bit to the integer part for ap_fixed types

* Add 1 bit elsewhere for QKeras as well, since QKeras does not count the sign bit (a sketch of this mapping follows below)
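
A small sketch of the bit-width bookkeeping described in the two QKeras bullets above, assuming the usual ap_fixed&lt;W,I&gt; convention where I includes the sign bit (the helper name is illustrative, not hls4ml's implementation):

```python
# QKeras' `integer` parameter does not count the sign bit, so a signed
# quantized_bits(4, 0) spans roughly [-1, 1), while ap_fixed<4,0> only spans
# [-0.5, 0.4375]. Mapping a QKeras quantizer onto ap_fixed therefore needs
# one extra integer bit.

def qkeras_to_ap_fixed(bits: int, integer: int) -> str:
    """Hypothetical helper: ap_fixed type string for a signed quantized_bits(bits, integer)."""
    return f"ap_fixed<{bits},{integer + 1}>"  # +1 because ap_fixed counts the sign bit in I

assert qkeras_to_ap_fixed(4, 0) == "ap_fixed<4,1>"    # max 0.875, matching quantized_bits(4,0).max() ~= 1.0
assert qkeras_to_ap_fixed(16, 6) == "ap_fixed<16,7>"
```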
calad0i pushed a commit to calad0i/hls4ml that referenced this issue Jul 1, 2023
(same commit message as the commit above)
GiuseppeDiGuglielmo pushed a commit that referenced this issue Oct 13, 2023
softsign example with parallel i/o