Hello, I'm working on a CNN and tried to implement it using hls4ml 0.7.1 and Vivado 2019.2, but when trying different reuse factors I get this error:
ERROR: [XFORM 203-504] Stop unrolling loop 'MultLoop' (firmware/nnet_utils/nnet_dense_resource.h:52) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 256u>, nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 128u>, config7>' because it may cause large runtime and excessive memory usage due to increase in code size. Please avoid unrolling the loop or form sub-functions for code in the loop body.
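For context on why this happens: from my reading of `nnet_dense_resource.h` (verify against your hls4ml version), `MultLoop` is unrolled by a block factor of roughly `n_in * n_out / ReuseFactor`, so a layer doing 131072 multiplications still asks Vivado HLS for tens of thousands of parallel multipliers when the reuse factor is small. A quick sketch of the arithmetic:

```python
import math

def mult_loop_unroll(total_mults, reuse_factor):
    # Approximate block factor Vivado HLS is asked to unroll MultLoop by
    # (assumption: block_factor = ceil(n_in * n_out / reuse_factor)).
    return math.ceil(total_mults / reuse_factor)

# A layer with 131072 multiplications under different reuse factors:
for rf in (1, 4, 64, 512):
    print(rf, mult_loop_unroll(131072, rf))
```

With `ReuseFactor = 4` the unroll factor is still 32768, which is what trips the XFORM 203-504 code-size guard; much larger reuse factors are needed on the big layers.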
import numpy as np

for layer in model_pruned.layers:
    if layer.__class__.__name__ in ['Conv2D', 'Dense']:
        w = layer.get_weights()[0]
        layersize = np.prod(w.shape)
        print("{}: {}".format(layer.name, layersize))  # 0 = weights, 1 = biases
        if layersize > 4096:  # assuming that shape[0] is batch, i.e., 'None'
            print("Layer {} is too large ({}), are you sure you want to train?".format(layer.name, layersize))
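Building on that size check, one hypothetical way to choose a per-layer reuse factor is to cap the number of unrolled multipliers per layer. This helper is my own sketch, not an hls4ml API (hls4ml additionally restricts reuse factors to certain valid divisors, so treat these as starting points):

```python
def suggest_reuse_factor(layer_mults, max_parallel=4096):
    """Smallest power-of-two reuse factor keeping layer_mults / rf <= max_parallel.

    max_parallel = 4096 is an arbitrary illustrative cap, not an HLS limit.
    """
    rf = 1
    while layer_mults / rf > max_parallel:
        rf *= 2
    return rf

# Layer sizes taken from the printout above
for name, size in [('conv_0', 4096), ('conv_1', 524288),
                   ('conv_2', 131072), ('dense_0', 131072)]:
    print(name, suggest_reuse_factor(size))
```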
conv_0: 4096
conv_1: 524288
Layer conv_1 is too large (524288), are you sure you want to train?
conv_2: 131072
Layer conv_2 is too large (131072), are you sure you want to train?
dense_0: 131072
Layer dense_0 is too large (131072), are you sure you want to train?
output_dense: 704
import hls4ml
import plotting

# First, the baseline model
hls_config = hls4ml.utils.config_from_keras_model(model_pruned, granularity='name')

# Set the precision and reuse factor for the full model
hls_config['Model']['Precision'] = 'ap_fixed<22,6>'
hls_config['Model']['ReuseFactor'] = 4
hls_config['Model']['Strategy'] = 'resource'

# Create an entry for each layer; here you can, for instance, change the strategy
# for a layer to 'resource' or increase the reuse factor individually for large layers.
for Layer in hls_config['LayerName'].keys():
    hls_config['LayerName'][Layer]['Strategy'] = 'resource'
    hls_config['LayerName'][Layer]['ReuseFactor'] = 4
    hls_config['LayerName'][Layer]['Precision'] = 'ap_fixed<22,6>'

# 'Stable' softmax gives the best numerical performance for high-accuracy models;
# the default latency strategy is faster but numerically less stable.
hls_config['LayerName']['output_softmax']['Strategy'] = 'Stable'
hls_config['LayerName']['dense_0']['ReuseFactor'] = 64
plotting.print_dict(hls_config)
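Rather than a single global `ReuseFactor` of 4, the large layers flagged earlier likely need individual overrides. Here is a sketch against a stand-in dict with the same shape as `hls_config` (the layer names come from the size printout above; the reuse-factor values are illustrative guesses, not verified against hls4ml's valid-divisor check):

```python
# Stand-in for the hls_config dict produced by config_from_keras_model
hls_config = {'LayerName': {name: {} for name in
                            ['conv_0', 'conv_1', 'conv_2', 'dense_0', 'output_dense']}}

# Give the biggest layers much larger reuse factors so MultLoop's
# unroll factor stays manageable (values are illustrative guesses).
overrides = {'conv_1': 512, 'conv_2': 128, 'dense_0': 128}
for name, rf in overrides.items():
    hls_config['LayerName'][name]['ReuseFactor'] = rf

print(hls_config['LayerName']['conv_1'])
```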
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'  # Must set this if using CNNs!
cfg['HLSConfig'] = hls_config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'pruned_cnn/'
cfg['XilinxPart'] = 'xcu250-figd2104-2L-e'

hls_model = hls4ml.converters.keras_to_hls(cfg)
hls_model.compile()
INFO: [HLS 200-489] Unrolling loop 'InitAccum' (firmware/nnet_utils/nnet_dense_resource.h:37) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 256u>, nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 128u>, config7>' completely with a factor of 128.
INFO: [HLS 200-489] Unrolling loop 'MultLoop' (firmware/nnet_utils/nnet_dense_resource.h:52) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 256u>, nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 128u>, config7>' completely with a factor of 131072.
ERROR: [XFORM 203-504] Stop unrolling loop 'MultLoop' (firmware/nnet_utils/nnet_dense_resource.h:52) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 256u>, nnet::array<ap_fixed<22, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 128u>, config7>' because it may cause large runtime and excessive memory usage due to increase in code size. Please avoid unrolling the loop or form sub-functions for code in the loop body.
ERROR: [HLS 200-70] Pre-synthesis failed.
command 'ap_source' returned error code
while executing
"source build_prj.tcl"
("uplevel" body line 1)
invoked from within
"uplevel \#0 [list source $arg] "
INFO: [Common 17-206] Exiting vivado_hls at Mon Oct 30 13:35:42 2023...
CSynthesis report not found.
Vivado synthesis report not found.
Cosim report not found.
Timing report not found.
Here is my code: I tried to prune this network, then loaded the pruned model; the whole flow is shown above.