
Add quantized sigmoid, fix quantized tanh for QKeras #569

Merged (36 commits, Mar 23, 2023)

Conversation

jmitrevs (Contributor)

This is an attempt to fix quantized tanh and to add quantized sigmoid activation support for QKeras ingestion. In Quartus the tanh handling probably needs an optimizer pass to rename it to dense_tanh (that is why the tests fail; the hacky fix in the writer doesn't work), but for Vivado I think this generally works. Overall, though, I think this fix is only temporary. The method breaks if granularity='name' is not used in hls4ml.utils.config_from_keras_model, and this limitation is not specific to these quantizers.
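For reference, the option in question is the granularity argument of config_from_keras_model; per-layer QKeras information is only recorded when it is set to 'name' (standard hls4ml usage; model here stands for any Keras/QKeras model):

```python
import hls4ml

# granularity='name' produces one config entry per layer, which is
# where the per-quantizer precision and rounding mode are stored;
# with coarser granularity that information is lost
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
```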

Any suggestions?

@thesps (Contributor) commented Jun 16, 2022

> The method breaks if granularity='name' is not used in hls4ml.utils.config_from_keras_model, though this feature is not just for these quantizers.

You could move the new logic that sets the number of bits and the rounding mode into an optimizer pass instead.

@jmitrevs (Contributor, Author)

I agree about moving the logic to an optimizer pass, but that would apply to all the quantizers, not just quantized tanh and sigmoid. Basically all QKeras info is lost if granularity='name' is not used, so this is a more fundamental change we should discuss.

@thesps (Contributor) commented Jun 16, 2022

Also, can you push to a local branch (like pr/569) to trigger the GitLab CI pytests?

@jmitrevs (Contributor, Author)

Let me do a few more tests first, then I'll push to a local branch.

@jmitrevs (Contributor, Author)

The issue we are seeing is that the hls4ml sigmoid approximates 1/(1+exp(-x)), but the QKeras one is quite different, in a way I don't currently understand.

@jmitrevs (Contributor, Author)

So QKeras uses the hard sigmoid by default:
https://github.com/google/qkeras/blob/master/qkeras/quantizers.py#L192-L237
I have to check whether hls4ml implements it; otherwise, it's easy to add: clip(0.5 * x + 0.5, 0.0, 1.0).
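As a quick reference, the hard variants under discussion can be sketched in NumPy (the hard_tanh form assumes the common 2 * hard_sigmoid(x) - 1 relation; check the QKeras source linked above for the exact variant):

```python
import numpy as np

def hard_sigmoid(x):
    # piecewise-linear approximation from the comment above:
    # clip(0.5 * x + 0.5, 0.0, 1.0)
    return np.clip(0.5 * np.asarray(x) + 0.5, 0.0, 1.0)

def hard_tanh(x):
    # assumed relation: 2 * hard_sigmoid(x) - 1,
    # which reduces to clip(x, -1.0, 1.0)
    return 2.0 * hard_sigmoid(x) - 1.0
```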

@jmitrevs (Contributor, Author)

This resulted in more changes than I expected. Please take a look to see if this is reasonable.

@jmitrevs (Contributor, Author)

I am not sure how the matplotlib import got into this pull request. It should not be there; I will fix that.

@jmitrevs (Contributor, Author) commented Jul 12, 2022

Merged in master to fix pytest setup. Not sure why the tests aren't starting, though.

@jmitrevs (Contributor, Author)

I pushed the changes to pr/569.

@thesps (Contributor) commented Oct 13, 2022

I notice that in the qkeras pytest we get:

    TypeError: __init__() got an unexpected keyword argument 'use_real_tanh'

It's because that option is in the QKeras main branch but not yet in a release.
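A generic way to guard against this kind of version skew (a hypothetical helper, not the fix used in this PR) is to check the constructor signature before passing the newer keyword:

```python
import inspect

def accepts_kwarg(cls, name):
    """Return True if cls.__init__ accepts the given keyword argument.

    Useful for passing options like 'use_real_tanh' only when the
    installed release actually supports them.
    """
    params = inspect.signature(cls.__init__).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
```

With this, a test could pass use_real_tanh only when the installed quantizer class accepts it, and fall back to the default otherwise.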

@jmitrevs (Contributor, Author)

So the test fails with TensorFlow 2.4.1 (used by the CI) but passes with 2.9.1 and 2.11.0. I don't know why. Should I increase the allowed difference?

@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Feb 21, 2023
@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Mar 7, 2023
@jmitrevs (Contributor, Author) commented Mar 8, 2023

The pytests now succeed, but I am not sure why there are CI failures. Is it because we couldn't build the Docker image with Vivado?

@vloncar (Contributor) commented Mar 8, 2023

Looks like a real error; probably something was generated with a syntax error.

@jmitrevs (Contributor, Author) commented Mar 8, 2023

I was able to reproduce it manually. It is vivado_accelerator_writer not handling the pre-commit changes:

            elif 'void myproject(' in line:
                newline = f'void {model.config.get_project_name()}_axi(\n'

This basically drops the arguments that pre-commit moved onto the same line:

void myproject(input_axi_t in[N_IN], output_axi_t out[N_OUT]);

The fix should be easy. I'll try to fix it in vivado_writer as well, where it currently doesn't cause a problem but could.
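A sketch of the easy fix described above (hypothetical helper name; the actual writer code differs): replace only the function name and keep the rest of the line, so arguments folded onto it by pre-commit survive:

```python
def rename_prototype(line, project_name):
    # Split at the marker and keep the remainder of the line, so
    # arguments that pre-commit moved onto this line are preserved
    # instead of being overwritten wholesale.
    marker = 'void myproject('
    if marker in line:
        prefix, rest = line.split(marker, 1)
        return f'{prefix}void {project_name}_axi({rest}'
    return line
```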

@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Mar 8, 2023
@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Mar 10, 2023
@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Mar 23, 2023
@vloncar vloncar merged commit 6d8e4ca into fastmachinelearning:main Mar 23, 2023
3 checks passed
vloncar added a commit to vloncar/hls4ml that referenced this pull request Mar 24, 2023
* Add quantized sigmoid, fix quantized tanh for QKeras (fastmachinelearning#569)

* snapshot of beginnings

* make version that works for Vivado, error for Quartus

* Change order of precision from quantizer

* add hard sigmoid and tanh

* fix setting of slope and shift type

* revert config parsing--seems a little strange but works

* fix hard_sigmoid and hard_tanh for streaming

* update pytest for quantized tanh and sigmoid

* remove inadvertently included matplotlib

* add special case when W == min_width.

* fix merge of main

* Go back to having AP_TRN and AP_WRP as defaults

* handle case when use_real_tanh is not defined

* make the activations use AP_RND_CONV (and AP_SAT) by default

* remove use of use_real_tanh in test since not always supported

* fix incorrect default types for Keras (not QKeras) hard_sigmoid

* Mostly fix up things for Quartus

* get rid of intermediate cast

* fix an i++ compilation issue

* Quartus seems to not like ac_fixed<1,0,false>, so make 2 bits.

* fix activation quantizer

* make sat, round default activation parameters, don't need to set again

* Make the slope and shift not be configurable for HardActivation

* some pre-commit fixes

* pre-commit //hls to // hls fixes

* update CI version

* fixes for parsing errors from pre-commits

* remove qactivation from list of activation_layers

* print_vivado_report function for nicer reports (fastmachinelearning#730)

* print_vivado_report function for fancier reports

* Fancy reports (#51)

* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py

* exclude pregenerated reports

---------

Co-authored-by: Javier Duarte <jduarte@ucsd.edu>

---------

Co-authored-by: Jovan Mitrevski <jmitrevs@fnal.gov>
Co-authored-by: Vladimir <vloncar@users.noreply.github.com>
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023
Labels: please test (Trigger testing by creating local PR branch)
3 participants