ERROR: [XFORM 203-504] Stop unrolling loop 'MultLoop' #904
Vivado has a hardcoded unroll limit of 4096 that cannot easily be circumvented. The warning message (
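A back-of-the-envelope way to turn the 4096-iteration unroll limit into a reuse factor (a sketch; the helper name and example layer sizes are mine, not from the thread — hls4ml additionally requires the reuse factor to divide the layer's multiplication count evenly):

```python
# Sketch: smallest ReuseFactor that keeps the unrolled multiplication
# loop of a Dense layer (n_in * n_out multiplications) within Vivado's
# hardcoded 4096-iteration unroll limit.
import math

UNROLL_LIMIT = 4096

def min_reuse_factor(n_in, n_out, limit=UNROLL_LIMIT):
    """Smallest RF such that n_in * n_out / RF <= limit."""
    return max(1, math.ceil(n_in * n_out / limit))

# e.g. a Dense layer with 512 inputs and 256 outputs (131072 multiplications):
print(min_reuse_factor(512, 256))  # -> 32
```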
Thanks for your answer @calad0i. As you mentioned, and after reading the hls4ml documentation and other answers, I found that I should use the resource strategy with higher reuse factors. I tried a reuse factor of 1024 for all layers and 704 for the output dense layer (the maximum reuse factor for that layer). That got me past the error, but now I face a new one:
I also tried a reuse factor of 128 for all layers and 704 for the output dense layer, and I got this error:
It's very important for me to solve this problem, and I hope you can help me @vloncar @jmduarte @thesps
Is this a homework assignment? 😄 Your model is too big; you need to reduce the number of filters, a lot. There may be something else going on, since that loop shouldn't be unrolled, but even with that resolved I would not expect this model to work.
No, it's a part of my project :) :) @vloncar
```python
import numpy as np

for layer in model_pruned.layers:
    if layer.__class__.__name__ in ['Conv1D', 'Dense']:
        w = layer.get_weights()[0]  # 0 = weights, 1 = biases
        layersize = np.prod(w.shape)
        print("{}: {}".format(layer.name, layersize))
        if layersize > 4096:  # Vivado's hardcoded unroll limit
            print("Layer {} is too large ({}), are you sure you want to train?".format(layer.name, layersize))
```
Try with 10-50k parameters, not half a million. All weights will be stored on chip, so you can't really go large.
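To sanity-check the budget before converting, a quick tally of Dense-layer parameters (a sketch; the `count_dense_params` helper and the 784 → 48 → 32 → 10 architecture are illustrative, not the model from this thread):

```python
# Sketch: a Dense layer with n_in inputs and n_out outputs contributes
# n_in * n_out weights plus n_out biases, all of which end up on chip.
def count_dense_params(layer_shapes):
    return sum(n_in * n_out + n_out for n_in, n_out in layer_shapes)

total = count_dense_params([(784, 48), (48, 32), (32, 10)])
print(total)  # -> 39578, inside the suggested 10-50k budget
```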
Thank you so much @vloncar. I changed the model architecture and, as you said, reduced the model parameters to fewer than 50k. Here is my new model:
and the configuration that I used is:

```python
import hls4ml
import plotting

# First, the baseline model
hls_config = hls4ml.utils.config_from_keras_model(model_pruned, granularity='name')

# Set the precision and reuse factor for the full model
hls_config['Model']['Precision'] = 'ap_fixed<22,6>'
hls_config['Model']['ReuseFactor'] = 96
hls_config['Model']['Strategy'] = 'resource'

# Create an entry for each layer. Here the resource strategy and a reuse
# factor of 96 are applied to every layer; large layers can be given
# higher reuse factors individually below.
for layer_name in hls_config['LayerName'].keys():
    hls_config['LayerName'][layer_name]['Strategy'] = 'resource'
    hls_config['LayerName'][layer_name]['ReuseFactor'] = 96
    hls_config['LayerName'][layer_name]['Precision'] = 'ap_fixed<22,6>'
    # hls_config['LayerName'][layer_name]['Trace'] = True

# The 'Stable' softmax implementation gives the best numerical accuracy;
# the default latency implementation is faster but numerically less stable.
hls_config['LayerName']['output_softmax']['Strategy'] = 'Stable'
hls_config['LayerName']['dense_0']['ReuseFactor'] = 64
hls_config['LayerName']['dense_1']['ReuseFactor'] = 64
hls_config['LayerName']['output_dense']['ReuseFactor'] = 16
plotting.print_dict(hls_config)

cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'  # Must set this if using CNNs!
cfg['HLSConfig'] = hls_config
cfg['KerasModel'] = model_pruned  # the pruned model the config was built from
cfg['OutputDir'] = 'pruned_cnn/'
cfg['XilinxPart'] = 'xcu250-figd2104-2L-e'
cfg['strategy'] = 'resource'
# cfg['Interface'] = 'axi_stream'

hls_model = hls4ml.converters.keras_to_hls(cfg)
hls_model.compile()
```

The code passed the pre-synthesis part and all the errors disappeared, but after two days it is still running in the synthesis step and cannot get past the following warning:
Should I stop the run and try a different reuse factor for the conv layers? The code is running on a 12th Gen Intel® Core™ i9-12900K × 24 CPU.
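A rough way to estimate whether a conv-layer reuse factor keeps the unrolled multiplier loop manageable (a sketch; the per-output multiplication count `kernel_size * ch_in * ch_out` is an assumption about how hls4ml sizes Conv1D multiplier loops, and the layer dimensions are hypothetical):

```python
# Sketch: a Conv1D layer performs kernel_size * ch_in * ch_out
# multiplications per output position; dividing by the reuse factor gives
# the parallel multiplier count that Vivado must unroll (limit: 4096).
def conv1d_mults(kernel_size, ch_in, ch_out):
    return kernel_size * ch_in * ch_out

def unrolled_width(n_mults, reuse_factor):
    # hls4ml expects the reuse factor to divide the multiplication count
    assert n_mults % reuse_factor == 0, "reuse factor must divide n_mults"
    return n_mults // reuse_factor

n = conv1d_mults(3, 64, 128)  # 24576 multiplications per output position
print(unrolled_width(n, 96))  # -> 256 parallel multipliers, under 4096
```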
Thank you @vloncar, I was able to pass synthesis by reducing the model parameters. 👍 :):):)
@vloncar Regarding your suggestion to @behnamarefy, I also reduced the number of parameters in my model. However, I still encounter the same issue, as indicated in solution1.log. Additionally, here is the content of the yml configuration that I used:
Model: "LeNet5_MNIST"
The cmd used in the terminal:
Hello, I'm working on a CNN and I tried to implement it using hls4ml 0.7.1 and Vivado 2019.2, but with the different reuse factors I tried I get this error:
Here is my code:
I tried to prune this network and I load the pruned network; here is the whole code: