Tutorial file for Convolutional Networks #14
Comments
Hi,
Hi @rajeshpandit107! Can you have a look through the notebook here, part6_cnns.ipynb, specifically the section "CNNs in hls4ml", and make sure you are using the correct configuration settings?
@thaarres Thanks a lot for the CNN tutorial. When I try to run the notebook, I hit an error in the code block under "CNNs in hls4ml" (where the trained models are loaded). The error is:

ValueError Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile, options)
[... cut out intermediate calls ...]
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
ValueError: Unknown config_item: quantized_bits. Please ensure this object is passed to the `custom_objects` argument.

I was able to run all the cells before this successfully without modification; the only difference is that I trained for fewer epochs to save time. The cell that causes the error is also marked uneditable (not sure why); I made it modifiable to try to pass the missing object via custom_objects.
Sorry - it seems that was likely fixed by adding an entry of 'quantized_bits': quantized_bits to custom_objects. I'm not completely sure it did, since I now get an error when trying to run the line hls_config_q = hls4ml.utils.config_from_keras_model(qmodel, granularity='name'):

Exception Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/hls4ml/utils/config.py in config_from_keras_model(model, granularity, default_precision, default_reuse_factor)
Exception: ERROR: Unsupported layer type: QConv2DBatchnorm

The layer seems to use use_bias=True when the QKeras model is initially defined (again, I didn't modify any code except for training fewer epochs), so I'm not sure why that would be, unless my adding quantized_bits earlier messed something up - I'm not too familiar with the QKeras / hls4ml internals, though.
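The "Unknown config_item" error above comes from how Keras-style deserialization resolves serialized names: built-in layers are found in an internal registry, while anything custom (like QKeras's quantized_bits) must be supplied through custom_objects. A minimal sketch of that lookup mechanism, with no TensorFlow dependency; the names quantized_bits and QConv2DBatchnorm come from the tracebacks in this thread, and everything else is illustrative, not the real TensorFlow code:

```python
# Sketch of Keras-style name resolution during model loading.
# Mimics the behavior of class_and_config_for_serialized_keras_object,
# which raised "Unknown config_item: quantized_bits" in the traceback above.

def resolve(name, module_objects=None, custom_objects=None):
    """Look a serialized name up in the known-object registries."""
    registry = dict(module_objects or {})
    registry.update(custom_objects or {})  # custom_objects wins on collision
    if name not in registry:
        raise ValueError(
            f"Unknown config_item: {name}. Please ensure this object is "
            "passed to the `custom_objects` argument."
        )
    return registry[name]

# Built-in layers are found automatically in the framework's registry...
builtin = {"Conv2D": object}
# ...but QKeras objects must be supplied explicitly by the caller:
qkeras_objects = {"quantized_bits": object}

resolve("quantized_bits", builtin, qkeras_objects)  # resolves fine
try:
    resolve("quantized_bits", builtin)  # missing -> the error in the thread
except ValueError as err:
    print(err)
```

This is why adding 'quantized_bits': quantized_bits to the custom_objects dict passed to load_model makes the first error go away.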
Hi Yonatan!
Thanks for the e-mail and for finding this bug! Indeed, quantized_bits is missing from the custom_objects list.
About the second error: could it be that you are on hls4ml v0.5.0 and not on the master branch? That layer was implemented later[*]. The environment.yml[**] should point to the correct version, but it differs from the version used in the master branch of the hls4ml-tutorial repository.
[*] https://github.com/fastmachinelearning/hls4ml/blob/master/hls4ml/model/hls_layers.py#L1823
[**] https://github.com/thaarres/hls4ml-tutorial/blob/master/environment.yml
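If the conda environment needs to track hls4ml master rather than a release such as v0.5.0 (so that QConv2DBatchnorm support is included), the pip section of environment.yml can point at the git repository instead of a PyPI version. The fragment below is an illustration of that pattern, not necessarily the exact contents of the tutorial's file:

```yaml
# Fragment of an environment.yml: install hls4ml from the master branch
# instead of a pinned release, so newer layer support is picked up.
dependencies:
  - python=3.6
  - pip
  - pip:
      - git+https://github.com/fastmachinelearning/hls4ml.git@master
      - qkeras
```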
On 9 Jul 2021, at 02:49, Yonatan Nozik wrote: [quoted message above]
Hi @thaarres, running […] I haven't been using a conda environment, so I suppose I should probably try that.
Hi @thaarres - apologies for the repeated mentions. I revisited the notebook in the proper conda environment and was able to synthesize the default models (I did have to reduce the number of filters/neurons per layer to reduce the memory usage on my machine). However, there seems to be some error when modifying the models to use the Resource strategy; all my attempts to use it have resulted in the following:

***** C/RTL SYNTHESIS ***** […]

This appears to happen regardless of whether I set the strategy to Resource for the overall model or on a per-layer basis. It also seems to happen regardless of the associated ReuseFactor. Although I have no idea if it's related, I also tried to omit setting the […].

Keeping the Latency strategy while increasing the ReuseFactor results in successful synthesis, but without much of an effect on the results. I'm not sure what the expected synthesis behavior of hls4ml is when you specify a high ReuseFactor with the Latency strategy (e.g. can it mostly ignore the ReuseFactor, or does the ReuseFactor take priority?). Do you happen to know if specifying the Resource strategy requires any additional modifications besides simply changing […]?

Thanks very much.
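For reference, the strategy and reuse settings discussed above are plain entries in the config dict that hls4ml.utils.config_from_keras_model returns. A sketch of the edits, using a hand-built dict shaped like the granularity='name' output (the layer name 'conv2d_1' is a placeholder, and the values chosen are illustrative):

```python
# Sketch of editing an hls4ml config dict to switch synthesis strategy.
# The dict shape mirrors what hls4ml.utils.config_from_keras_model returns
# with granularity='name'; 'conv2d_1' is a hypothetical layer name.

hls_config_q = {
    "Model": {"Precision": "ap_fixed<16,6>", "ReuseFactor": 1},
    "LayerName": {
        "conv2d_1": {"Precision": "ap_fixed<16,6>", "ReuseFactor": 1},
    },
}

# Model-wide: Resource trades latency for fewer multipliers, and is
# typically paired with a ReuseFactor greater than 1.
hls_config_q["Model"]["Strategy"] = "Resource"
hls_config_q["Model"]["ReuseFactor"] = 8

# Or per layer, overriding the model-wide default:
hls_config_q["LayerName"]["conv2d_1"]["Strategy"] = "Resource"
hls_config_q["LayerName"]["conv2d_1"]["ReuseFactor"] = 64
```

With the Latency strategy, by contrast, the design is fully unrolled, which is why a large ReuseFactor there may have little visible effect on the result.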
Hi @ynozik!
Hi @thaarres, yes, that was resolved by changing the Vivado version.
Hi @thaarres, in part6_cnns.ipynb the accuracy is quite low after converting to the hls4ml model (it drops from about 80% to 20%).
Fixed in fastmachinelearning/hls4ml#378
Sorry, I just reinstalled hls4ml - thanks a lot!
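The "80% -> 20%" comparison above amounts to checking argmax agreement between each model's class probabilities and the labels. A dependency-free sketch of that check; the toy arrays stand in for the real predict() outputs of the Keras and hls4ml models:

```python
# Sketch of the accuracy comparison behind the "80% -> 20%" observation:
# score argmax predictions against integer labels for both models.

def argmax_accuracy(probs, labels):
    """Fraction of rows whose argmax matches the integer label."""
    correct = 0
    for row, label in zip(probs, labels):
        pred = max(range(len(row)), key=row.__getitem__)
        correct += int(pred == label)
    return correct / len(labels)

labels = [0, 1, 2, 1]
# Toy stand-ins for model.predict(X_test) and hls_model.predict(X_test):
keras_probs = [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6], [0.3, 0.6, 0.1]]
hls_probs   = [[0.4, 0.5, 0.1],   [0.1, 0.8, 0.1], [0.5, 0.3, 0.2], [0.7, 0.2, 0.1]]

print(argmax_accuracy(keras_probs, labels))  # 1.0 on this toy data
print(argmax_accuracy(hls_probs, labels))    # 0.25 - a large gap flags a conversion problem
```

A large gap between the two numbers usually points at precision or version issues in the conversion rather than the network itself, which matches the reinstall fixing it here.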
The CNN tutorial notebook is now online with the new hls4ml release; closing this issue.
Please post a tutorial for CNNs.