How to run inference from python interface #17
Comments
Any updates on this?
Yes, you can use the BNN-PYNQ repo for deploying the bit file. In https://github.com/Xilinx/BNN-PYNQ/blob/master/bnn/bnn.py you'll need to comment out "self.bnn.load_parameters(os.path.join(params, network))" on line 183 or 293, depending on the network you're deploying. The reason is that the FINN tool bakes the weights into the bit file, whereas BNN-PYNQ assumes they are loaded at runtime.
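For reference, the modification amounts to disabling the runtime parameter load in bnn/bnn.py. A minimal sketch of the edited spot is below; the surrounding comments are illustrative, and only the commented-out call is the line quoted above:

    # FINN bakes the weights into the generated bitfile, so skip the runtime
    # parameter load that BNN-PYNQ normally performs here:
    # self.bnn.load_parameters(os.path.join(params, network))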
I'm closing all issues relating to the v0.1 version in preparation for the new release. Please note that v0.1 is now deprecated and unsupported.
Hi,
I have successfully generated the bitstream and the .tcl file on my own by executing the following command (suggested in the guide of this repo):
python FINN/bin/finn --device=pynqz1 --prototxt=FINN/inputs/cnv-w1a1.prototxt --caffemodel=FINN/inputs/cnv-w1a1.caffemodel --mode=synth
My plan is to load the bitstream on the PYNQ-Z1 and try to run inference.
Can I use BNN-PYNQ (https://github.com/Xilinx/BNN-PYNQ), changing how the bitstream is loaded, to run inference on any network, or do you provide a Python interface like BNN-PYNQ's to run inference on a network?
Thanks,
Sara
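Loading the FINN-generated bitfile on the PYNQ-Z1 from Python could look roughly like the sketch below, using the standard pynq Overlay API; the file path is an assumption, and how the accelerator is driven afterwards depends on its interface (for example, via BNN-PYNQ's classifiers with the load_parameters call commented out as described above).

    from pynq import Overlay

    # Program the PYNQ-Z1's fabric with the FINN-generated bitfile.
    # The path is hypothetical; point it at wherever the .bit/.tcl pair was copied.
    overlay = Overlay("/home/xilinx/finn/cnv-w1a1.bit")

    # Weights are already baked into the bitfile by FINN, so no parameter
    # loading is needed before running inference.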