
VUnitCoSim #568 (Draft)

wants to merge 9 commits into base: master
Conversation


@umarcor umarcor commented Oct 13, 2019

Based on #578 and #581.

This PR contains features to co-simulate a VHDL design from Python and to inspect internal buffers in real time with a JavaScript (Vue.js) web frontend. This content is work in progress and might not fit in this repo, so it might end up in a separate location.


Python-GHDL co-execution is supported on any GNU/Linux platform, as long as GHDL is built with the option in ghdl/ghdl#805. A C file is the 'middleware' between Python and VHDL. All the functionality is implemented either in the testbench or in the Python script. The main difference compared to regular examples is that the binary is built with ghdl.elab_e. Then, its location is retrieved and it is loaded as a shared library in the Python script. This makes it possible to use ctypes to bind Python and C resources to each other, just as we do between VHDL and C with VHPIDIRECT.
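The loading step can be sketched in a few lines of ctypes. The binary path below is hypothetical (in practice it comes from the ghdl.elab_e output); the C runtime of the current process is used as a stand-in to show that the pattern works for any shared object:

```python
import ctypes

# Hypothetical path: in practice it is retrieved from VUnit after
# building with the ghdl.elab_e option.
BINARY = "vunit_out/ghdl/tb_external_string"

def load_simulation(path):
    """Load a GHDL-built executable binary as a shared library.

    RTLD_GLOBAL makes the binary's symbols visible, so ctypes can bind
    Python and C resources to each other (as VHPIDIRECT does between
    VHDL and C).
    """
    return ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)

# The same pattern works for any shared object; here the symbols of
# the current process stand in for the GHDL binary:
libc = ctypes.CDLL(None)
```

With the real binary, `load_simulation(BINARY)` returns a handle whose `main`/`ghdl_main` symbols can then be called from Python.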

The example implements two executions:

  • REGULAR EXECUTION: the binary is loaded as a shared library, but it is executed with default CLI arguments, similar to a regular VUnit execution.
  • PYTHON ALLOCATION: the buffer is allocated in Python (instead of C) after loading the binary as a shared library but before executing main. When the simulation is executed, the Python variable is modified directly by the testbench.

It is further possible to set callbacks between C and Python, i.e., GHDL triggers a C function through VHPIDIRECT, which in turn triggers a Python method through ctypes. This can be useful to, e.g., let a VHDL procedure push a value/message to a Python queue. I believe this is precisely one of the objectives of @LarsAsplund.
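The callback mechanism can be demonstrated with plain ctypes. The example below uses libc's qsort as a stand-in for the GHDL binary so it is self-contained: C code calls back into a Python function, which pushes a message to a Python queue each time it is invoked, mirroring how a VHDL procedure could push a value/message through a VHPIDIRECT-bound C function:

```python
import ctypes
import queue

libc = ctypes.CDLL(None)  # stand-in for the GHDL-built binary

messages = queue.Queue()

# C signature: int (*compar)(const void *, const void *)
CMPFUNC = ctypes.CFUNCTYPE(
    ctypes.c_int, ctypes.POINTER(ctypes.c_int), ctypes.POINTER(ctypes.c_int)
)

def py_cmp(a, b):
    # Called from C: push a message to a Python queue, as a VHDL
    # procedure could do through the C middleware.
    messages.put((a[0], b[0]))
    return a[0] - b[0]

values = (ctypes.c_int * 5)(33, 5, 99, 1, 7)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
```

After the call, `values` is sorted by C, and `messages` holds one entry per comparison made from the C side.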


As commented in #469 (comment), these enhancements only partially work with VHDL 1993. It is possible to load some data into memory from Python and execute GHDL, but GHDL exits with an abort, which prevents any Python post-check from being executed. This might be fixed in the future, since it is not the expected behaviour (see ghdl/ghdl#803). For now, I think it is enough to suggest using this feature with VHDL 2008 only.

Some notes about this are added to examples/vhdl/external_buffer/sigabrt.md.


A Python source, vunit/cosim.py, is added, which implements some utility functions to co-execute a GHDL binary that uses the external models. This includes functions to:

  • Open/load and close/unload a dynamically loadable object (executable binary or shared library).
  • Convert a list of strings to a format which is suitable for calling a foreign function (e.g. main or ghdl_main).
  • Convert a list of numbers to an array of bytes/integers, suitable to be shared with GHDL, and vice versa.
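The conversion utilities can be sketched as follows. The function names here are illustrative, not the actual API of vunit/cosim.py:

```python
import ctypes

def encode_args(args):
    """Convert a list of strings into (argc, argv), suitable for calling
    a foreign entry point such as main or ghdl_main."""
    argv = (ctypes.c_char_p * (len(args) + 1))()  # NULL-terminated
    argv[:-1] = [arg.encode("utf-8") for arg in args]
    return len(args), argv

def ints_to_buffer(values):
    """Copy a list of Python integers into a C int32 array that can be
    shared with GHDL through the external models."""
    return (ctypes.c_int32 * len(values))(*values)

def buffer_to_ints(buf):
    """Read a shared C array back into a plain Python list."""
    return list(buf)
```

A usage sketch: `argc, argv = encode_args(["tb", "--some-arg"])` yields arguments that can be passed straight to the loaded binary's entry point.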

Accordingly, a cosim.py file is added to the external_buffer example. It uses some of the functions above, along with the output of ghdl.elab_e. It can be tested with:

# cd examples/vhdl/external_buffer
# python3 run.py -v --build
# python3 cosim.py tb_external_string

Note that the simulation is executed twice. First, main is executed, which relies on the code in main.c to allocate, initialize and check the content of the buffers. Second, those tasks are done in Python and ghdl_main is executed directly, simply ignoring that main exists. This is a very interesting feature, because the same binary provides multiple entry points to the simulation (either in C or in Python).
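Choosing between entry points amounts to looking up a symbol by name in the loaded binary and declaring its signature. A sketch, demonstrated with a symbol from the C runtime since the GHDL-built binary is not at hand here (the helper name and the assumed ghdl_main signature are illustrative):

```python
import ctypes

def get_entrypoint(lib, name, argtypes, restype=ctypes.c_int):
    """Look up an entry point by symbol name and declare its signature.

    With the same loaded binary, 'main' or 'ghdl_main' can be chosen:
    the former runs the C application code, the latter skips it.
    """
    fn = getattr(lib, name)
    fn.argtypes = argtypes
    fn.restype = restype
    return fn

# With the real binary this would be, e.g.:
#   ghdl_main = get_entrypoint(lib, "ghdl_main",
#       [ctypes.c_int, ctypes.POINTER(ctypes.c_char_p)])
# Here a libc symbol stands in to keep the sketch runnable:
libc = ctypes.CDLL(None)
atoi = get_entrypoint(libc, "atoi", [ctypes.c_char_p])
```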


vunitcosim

The UUT is the external_buffer example explained above. It reads block_len bytes from a buffer and writes them back twice: once at addresses block_len to 2*block_len-1 and once at addresses 2*block_len to 3*block_len-1. The first time the numbers are incremented by one, and the second time they are incremented by two.
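The described behaviour can be modelled in a few lines of Python, which is also how a post-check on the shared buffer could be written (a sketch only; the actual checks live in main.c and the testbench):

```python
def expected_buffer(data):
    """Model of the external_buffer copy behaviour.

    The UUT reads block_len bytes and writes them back twice:
    incremented by one at addresses [block_len, 2*block_len-1] and
    incremented by two at [2*block_len, 3*block_len-1].
    """
    return (
        list(data)
        + [(x + 1) % 256 for x in data]  # first write-back: +1
        + [(x + 2) % 256 for x in data]  # second write-back: +2
    )
```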

In the screencast, first, the design/test is compiled with python3 run.py -v --build. This is a normal VUnit execution with the ghdl.elab_e option (#467). The output is an executable binary generated by GHDL, plus the CLI args that VUnit would use, which are saved in args.txt.

Then, with cosim.py, the binary is dynamically loaded in Python and the args are read from the txt file. First, main is executed, which allocates data and prints it from the C sources. Then, buffers are allocated in Python and ghdl_main is executed directly, effectively ignoring any C application code. You can see in the screencast that the C code prints lines of type %i: %i, while the Python code prints py %i: %i. Note that the initial values are different: C starts with 11, 22, 33, 44, 55 and Python starts with 111, 122, 133, 144, 155.

In this design, one external string buffer is used for the data, and one external integer vector buffer is used to pass parameters such as the block length.

Everything above is already implemented in this PR.

Last, in the screencast, a Flask server is started with serve.py. This provides a Vue.js frontend that allows interactive co-simulation. The executable binary and args are loaded as in cosim.py, but the simulation is executed step by step, instead of running continuously until the end. It is possible to run the simulation for a given number of clock cycles only, or to have it re-triggered. Each time it is triggered, the data in the frontend is updated. Both modes are shown in the screencast: first 'for' is used a couple of times, and then 'every' is used until the simulation ends.

Some notes:

  • Since GHDL does not yet support step-by-step execution, I achieved it through clock gating in the testbench. I implemented addition and comparison functions/procedures for a custom type composed of two 32-bit integers, which are defined as part of an external array/buffer of integers. As a result, there is a 64-bit counter and a limit. The limit can be modified from VHDL/GHDL, from Python and/or from Vue.js, so any of the actors in the co-simulation can make the simulation advance. These optional utilities are added to VUnit as a package and an entity in vunit/vhdl/run/src.
    • At some point, GHDL might support step-by-step execution: ghdl/ghdl#800.
  • The Vue.js project is the same for any example. I added it temporarily to examples/vhdl/vue. For each of the examples/designs, only a subcomponent needs to be added, so it is easy for non-JavaScript developers to add views for their designs.
  • In the screencast, the content of the buffer that is shared between Python and GHDL is shown as a table. This is exactly the buffer where GHDL is writing during simulation, not a (delayed) copy of it. There is a delay between Python and Vue.js, but not between GHDL and Python.
  • The view as a table is just a proof of concept. It is possible to represent the arrays of bytes/integers with any custom fancy animation/model that the user might want to implement.
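The arithmetic on the custom type mentioned in the notes above (a 64-bit counter stored as two 32-bit integers in the shared buffer) can be sketched as follows. The (low, high) word order is an assumption here; the actual layout in vunit/vhdl/run/src may differ:

```python
MASK32 = 0xFFFFFFFF

def compose(lo, hi):
    """Join two 32-bit words of the shared buffer into one 64-bit value."""
    return (hi << 32) | lo

def increment(lo, hi):
    """Add one to the 64-bit counter, carrying between the two words."""
    lo = (lo + 1) & MASK32
    if lo == 0:
        hi = (hi + 1) & MASK32
    return lo, hi

def below_limit(counter, limit):
    """Comparison used for clock gating: keep the clock enabled while
    the counter is below the limit."""
    return compose(*counter) < compose(*limit)
```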

vunitcosim_hls

This other example is based on an AXI Master core written in Vivado HLS and exported to VHDL. VUnit's verification components and the external memory from #470 are used. Furthermore, the std_logic signals in the top entity of the core are shown in the Vue frontend.

Note that the buttons/LEDs representing the std_logic signals show the status at the clock cycle when the content is updated. The status between updates is not recorded. However, it is possible to inspect it with GtkWave: since this is a VUnit simulation after all, a waveform can be generated by providing the corresponding CLI arg.

Instead of a single table, the memory is split into two views. This is just an example to show the flexibility: it is possible to allocate a single huge buffer to be shared between Python and GHDL, but only show some blocks of it at selected addresses.


vunitcosim_hsconv2

This last example shares a large amount of data through an AXI Stream Slave and reads it from an AXI Stream Master, using VUnit's verification components. Precisely, a synthetic hyperspectral cube in BIP (band interleaved by pixel) format of size 256 x 128 x 20 x 16 bit is shared. By 'synthetic' I mean that the example is not a real hyperspectral image; instead, I took 20 images, stacked them together and converted the cube to BIP format.

Since it is not possible to share 16-bit types between GHDL and a foreign language, an external integer vector is actually used (32 bits per value). Nonetheless, the point is that Python is used to read the shared buffer and encode each spatial image as a base64-encoded PNG. These PNG images are passed to the Vue frontend, which displays them directly. The user can interactively (during simulation) select which of the bands/layers to display.
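Extracting one band from the BIP-ordered shared vector can be sketched with the stdlib only. The real serve.py encodes the band as a PNG (with Pillow) before base64-encoding; here the raw 16-bit samples are base64-encoded directly to keep the sketch dependency-free, and the function name is illustrative:

```python
import base64
import struct

def band_to_base64(shared, band, num_bands, num_pixels):
    """Extract one spectral band from a BIP-ordered shared integer vector.

    In BIP format, the values of all bands for a pixel are consecutive,
    so band 'band' of pixel 'p' lives at index p*num_bands + band. Each
    32-bit integer in the shared vector carries one 16-bit sample.
    """
    samples = (shared[p * num_bands + band] & 0xFFFF for p in range(num_pixels))
    raw = struct.pack("<%dH" % num_pixels, *samples)
    return base64.b64encode(raw).decode("ascii")
```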

Four images are shown, from left to right: the input buffer, the expected reference output, the buffer where GHDL is writing the results, and the difference between the reference and the results.

Once again, this example shows the flexibility of the approach: the Vue project and the common cosim.py functions can be reused for quite different designs. It is also possible to show tables along with the images, for better inspection of the data.


All of the examples above read data from the simulation, but they also write some data back. Precisely, each time 'Run' is clicked, four integer values are modified in a shared integer vector. This means that it would be possible to let the frontend modify the content of the tables and write it back to GHDL while the simulation is waiting to advance. This is not implemented, though.

I am currently focused on providing structured access to internal buffers of the design. I want to use a single parameter to add a 'virtual copy' of internal BRAMs/FIFOs, so that they can be inspected during simulation. I find that all simulators have problems recording large memories in waveforms, because they need to store lots of metadata. This approach would work around that by allowing the memories to be inspected at different times during the simulation, without recording everything.

For future work, I think it would be very interesting to further develop the Vue frontend, in order to provide multiple views for application-specific cases. Since it is possible to match a VHDL peripheral with a Vue model, all kinds of possibilities arise: a VGA/HDMI receiver/monitor, a line-follower robot on a circuit, the 3D model of a BLDC motor (with e.g. Three.js)... However, I won't be able to work on this. I hope someone else finds it interesting enough.

Regarding VUnit, I think that this Vue frontend is just a fancy example of what is possible with external strings and integer vectors. I believe that the interesting part to upstream is the set of features that enable cosim.py, not serve.py, because cosim.py does not require any additional dependency, while serve.py needs Flask, and optionally Pillow and numpy.


To Do:
