Add quantized sigmoid, fix quantized tanh for QKeras #569
Conversation
You could move the new logic that sets the number of bits and rounding mode to an optimizer pass instead.
I agree about moving the logic to an optimizer pass, but that would be for all the quantizers, not just for quantized tanh and sigmoid. Basically all QKeras info is lost if `granularity='name'` is not used.
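(For illustration, a minimal sketch of what such a pass could look like, assuming hls4ml's `OptimizerPass` interface; the class name and the attribute handling below are hypothetical, not the actual PR code:)

```python
from hls4ml.model.optimizer import OptimizerPass


class PropagateQuantizerPrecision(OptimizerPass):
    """Hypothetical pass: move the bit-width/rounding bookkeeping out of the
    converter and apply it on the parsed model graph instead."""

    def match(self, node):
        # Fire only on nodes that still carry their QKeras quantizer
        return node.get_attr('quantizer') is not None

    def transform(self, model, node):
        quantizer = node.get_attr('quantizer')
        # Illustrative: use the precision implied by the quantizer as the
        # node's result type (rounding/saturation modes would be set here too)
        node.get_output_variable().type.precision = quantizer.hls_type
        return False  # graph structure unchanged, no need to rerun passes
```

A registered pass like this would visit every layer, which is what would let it cover all the quantizers rather than just quantized tanh and sigmoid.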
Also can you push to a local branch (like `pr/569`)?
Let me do a few more tests first, then I'll push to a local branch. |
The issue that we are seeing is that the hls4ml sigmoid really seems to approximate the true sigmoid.
So qkeras uses the hard sigmoid by default.
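(For reference, a NumPy sketch of the hard variants as I understand them from qkeras — slope 0.5 and shift 0.5, unlike Keras' own hard_sigmoid, which uses slope 0.2:)

```python
import numpy as np

def hard_sigmoid(x):
    # QKeras-style hard sigmoid: piecewise linear, saturating at 0 and 1
    return np.clip(0.5 * x + 0.5, 0.0, 1.0)

def hard_tanh(x):
    # hard tanh expressed through the hard sigmoid, saturating at -1 and 1
    return 2.0 * hard_sigmoid(x) - 1.0
```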
This resulted in more changes than I expected. Please take a look to see if this is reasonable. |
I am not sure how the matplotlib import came into this pull request. It should not be there. I will fix that. |
Merged in master to fix pytest setup. Not sure why the tests aren't starting, though. |
I pushed the changes to pr/569 |
I notice in the qkeras pytest we get a numerical mismatch: the test fails with tensorflow 2.4.1 (used by the CI) but passes with 2.9.1 and 2.11.0. I don't know why. Should I increase the allowed difference?
The pytests now succeed. I am not sure why there are the CI failures, though. Is it because we couldn't build the docker image with Vivado? |
Looks like a real error, probably something was generated with a syntax error. |
I was able to manually reproduce it. It is in the vivado_accelerator_writer not liking the pre-commit changes:

```python
elif 'void myproject(' in line:
    newline = f'void {model.config.get_project_name()}_axi(\n'
```

This basically removes the arguments that were added to the same line by the pre-commit:

```cpp
void myproject(input_axi_t in[N_IN], output_axi_t out[N_OUT]);
```

The fix should be easy. I'll try to fix it also in vivado_writer, where it currently doesn't cause a problem but could.
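(A sketch of the kind of fix I mean — keep the remainder of the matched line instead of replacing the whole line; this is illustrative, not the actual patch:)

```python
def rename_prototype(line, project_name):
    """Rename 'void myproject(' to 'void <project>_axi(' while preserving
    any arguments that pre-commit folded onto the same line."""
    prefix = 'void myproject('
    if prefix in line:
        rest = line.split(prefix, 1)[1]  # e.g. 'input_axi_t in[N_IN], ...'
        return f'void {project_name}_axi(' + rest
    return line
```

For example, `rename_prototype('void myproject(input_axi_t in[N_IN], output_axi_t out[N_OUT]);', 'myproject')` yields `'void myproject_axi(input_axi_t in[N_IN], output_axi_t out[N_OUT]);'` with the arguments intact.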
Add quantized sigmoid, fix quantized tanh for QKeras (fastmachinelearning#569)

* snapshot of beginnings
* make version that works for Vivado, error for Quartus
* Change order of precision from quantizer
* add hard sigmoid and tanh
* fix setting of slope and shift type
* revert config parsing--seems a little strange but works
* fix hard_sigmoid and hard_tanh for streaming
* update pytest for quantized tanh and sigmoid
* remove inadvertently included matplotlib
* add special case when W == min_width
* fix merge of main
* Go back to having AP_TRN and AP_WRP as defaults
* handle case when use_real_tanh is not defined
* make the activations use AP_RND_CONV (and AP_SAT) by default
* remove use of use_real_tanh in test since not always supported
* fix incorrect default types for Keras (not QKeras) hard_sigmoid
* Mostly fix up things for Quartus
* get rid of intermediate cast
* fix an i++ compilation issue
* Quartus seems to not like ac_fixed<1,0,false>, so make 2 bits
* fix activation quantizer
* make sat, round default activation parameters, don't need to set again
* Make the slope and shift not be configurable for HardActivation
* some pre-commit fixes
* pre-commit //hls to // hls fixes
* update CI version
* fixes for parsing errors from pre-commits
* remove qactivation from list of activation_layers

print_vivado_report function for nicer reports (fastmachinelearning#730)

* print_vivado_report function for fancier reports
* Fancy reports (fastmachinelearning#51)
* fix uram divide by 0
* add test
* fix parsing of vsynth in 2020.1; add test
* Update test_report.py
* exclude pregenerated reports

Co-authored-by: Javier Duarte <jduarte@ucsd.edu>
Co-authored-by: Jovan Mitrevski <jmitrevs@fnal.gov>
Co-authored-by: Vladimir <vloncar@users.noreply.github.com>
This is an attempt to fix quantized tanh and add quantized sigmoid activation support for QKeras ingestion. In Quartus the tanh handling probably needs an optimizer to rename it to dense_tanh (and that is why the tests fail--the hacky fix in the writer doesn't work), but for Vivado I think this generally works. Overall, though, I think this fix is only temporary. The method breaks if `granularity='name'` is not used in `hls4ml.utils.config_from_keras_model`, though this limitation is not specific to these quantizers. Any suggestions?
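(To make the granularity constraint concrete, here's a minimal usage sketch; `keras_model` is a placeholder for any QKeras model:)

```python
import hls4ml

# Per-layer ('name') granularity is needed so the QKeras quantizer
# information survives into the per-layer config; with the default
# coarse granularity the quantized_tanh/quantized_sigmoid handling breaks.
config = hls4ml.utils.config_from_keras_model(keras_model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(keras_model, hls_config=config)
```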