Support up to N memory models for verification components #462

Open · umarcor opened this issue Mar 30, 2019 · 15 comments · May be fixed by #470

umarcor (Member) commented Mar 30, 2019

I have a C application that allocates some memory space. GHDL is then started inside the application, so the allocated space is available to the simulation. In the testbench, I execute ext_buf.init() in order to initialize a shared variable (a protected type wrapping an access to an array of integers) with the address pointer provided by the C application. From there on, I can read/write the application memory space with ext_buf.get(y) and ext_buf.set(y, to_integer(signed(o))).

I would like to integrate that with an AXI4 master UUT, using VUnit's verification components. I have successfully instantiated axi_write_slave and axi_read_slave. I have created a constant of type memory_t, then a signal buf : buffer_t := allocate(memory, 1024), and finally a constant axi_slave : axi_slave_t := new_axi_slave(memory => memory).
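A minimal sketch of that setup, assuming VUnit's vc_context and the memory_pkg/axi_slave_pkg APIs (tb_example and the instantiations are placeholders):

library vunit_lib;
context vunit_lib.vunit_context;
context vunit_lib.vc_context;

architecture tb of tb_example is
  -- One flat memory model; both AXI slave VCs are backed by it.
  constant memory    : memory_t    := new_memory;
  constant buf       : buffer_t    := allocate(memory, 1024);
  constant axi_slave : axi_slave_t := new_axi_slave(memory => memory);
begin
  -- axi_read_slave and axi_write_slave instances would be bound to
  -- axi_slave here and wired to the UUT's AXI4 master port.
end architecture;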

So far everything works as expected, but I need to copy all the data from ext_buf to memory/buf before starting the UUT and copy it back when the task is done. On the one hand, I am allocating twice the required space. On the other hand, it is not easy to use the shared memory for synchronization between the software and the UUT: since it is effectively duplicated, coherency becomes a problem.

Now, I would like to associate axi_slave with the buffer I have already allocated, instead of requiring buf. That is, I want the verification components to read from and write to the application memory space directly. Is it possible to wrap my ext_buf.get and ext_buf.set functions so that the verification components can use them directly?

kraigher (Collaborator) commented:

So the essence of this issue is that you want the VUnit memory model to remain in sync with an external memory model. Thus we could consider adding such a concept to the memory model (memory_t).

Unfortunately, the VHDL language up to 2008 does not support inheritance, nor injecting different behaviour by instantiating a different type of object sharing the same interface. We could make the memory model a generic package, but then all code that uses memory_t would need a package generic, which could cause compatibility problems. In my experience, package generics are poorly supported in simulators and the implementations have many bugs. VUnit's strategy has been to avoid bleeding-edge features that cause simulator incompatibility.

The classic approach to this problem in VHDL has been to extract the variation point into a separate package that can be replaced by another package via a compilation switch. In this case we could add an "external_memory" package with a default NOP implementation that does nothing. Via the Python API, the user could point to another file which would be compiled in place of the default implementation. The consequence is that all test benches within a run.py file would need to use the same external memory model. However, if the need arises, we could always add support for up to N external memory models, where N is finite. When creating the memory_t object, which is done dynamically, the user could select which external memory model (if any) should be active.
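A sketch of what such a variation-point package could look like; external_memory_pkg, write_byte_ext and read_byte_ext are illustrative names, not an existing VUnit API:

-- Default NOP implementation, compiled unless the user points the Python
-- API at a replacement file providing a different body for this package.
package external_memory_pkg is
  procedure write_byte_ext(address : natural; byte : natural range 0 to 255);
  impure function read_byte_ext(address : natural) return natural;
end package;

package body external_memory_pkg is
  procedure write_byte_ext(address : natural; byte : natural range 0 to 255) is
  begin
    null; -- NOP: no external memory model attached
  end procedure;

  impure function read_byte_ext(address : natural) return natural is
  begin
    return 0; -- NOP: no external memory model attached
  end function;
end package body;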

Would the solution outlined above fulfil your needs?

umarcor (Member, Author) commented Mar 30, 2019

You definitely know more about VHDL than I do, so please let me know if the following makes sense. I am using the following package as the 'glue' between C and VHDL:

package pkg_c is
  function get_p(f: integer) return integer;
    attribute foreign of get_p :
      function is "VHPIDIRECT get_p";

  constant stream_length : integer := get_p(0);

  type buffer_t is array(integer range 0 to stream_length-1) of integer;
  type buffer_p is access buffer_t;

  impure function get_b(f: integer) return buffer_p;
    attribute foreign of get_b :
      function is "VHPIDIRECT get_b";

  type buffer_t_prot is protected
    procedure init (i: integer);
    procedure set (i: integer; v: integer);
    impure function get (i: integer) return integer;
  end protected buffer_t_prot;

  -- 'buffer' is a reserved word in VHDL, so the shared variable needs
  -- another name.
  shared variable ext_buf : buffer_t_prot;

end pkg_c;

package body pkg_c is
  -- These bodies are never executed; they are replaced by the foreign C
  -- implementations declared above.
  function get_p(f: integer) return integer is begin
    assert false report "VHPI" severity failure;
  end get_p;

  impure function get_b(f: integer) return buffer_p is begin
    assert false report "VHPI" severity failure;
  end get_b;

  type buffer_t_prot is protected body
    variable var : buffer_p;
    procedure init (i: integer) is begin
      var := get_b(i);
    end procedure;
    procedure set (i: integer; v: integer) is begin
      var(i) := v;
    end procedure;
    impure function get (i: integer) return integer is begin
      return var(i);
    end get;
  end protected body buffer_t_prot;
end pkg_c;

Here, get_p is a function defined in C that returns some parameter (an integer); get_p(0) is the length of the already allocated buffer. get_b is another function defined in C that returns a pointer; get_b(0) retrieves the address of the buffer already allocated in C.

I was thinking that it might be possible to remove buffer_t_prot and use integer_vector_access_t, which is already defined in VUnit. To do so, I would just need a procedure/function equivalent to init above, i.e. one that allows me to set the base address of the access and the length. Would it be possible to have something like impure function preallocated(base: natural; length: natural) return integer_vector_access_t?

package pkg_c is
  function get_p(f: integer) return integer;
    attribute foreign of get_p :
      function is "VHPIDIRECT get_p";

  constant stream_length : integer := get_p(0);

  -- Returns the base address of the buffer allocated in C (as an integer,
  -- since buffer_p is no longer declared here).
  impure function get_b(f: integer) return integer;
    attribute foreign of get_b :
      function is "VHPIDIRECT get_b";
end pkg_c;

package body pkg_c is
  function get_p(f: integer) return integer is begin
    assert false report "VHPI" severity failure;
  end get_p;

  impure function get_b(f: integer) return integer is begin
    assert false report "VHPI" severity failure;
  end get_b;
end pkg_c;

And, in the testbench:

  variable buf : integer_vector_access_t := preallocated(get_b(0), get_p(0));

I don't quite understand how integer_vector_access_t relates to integer_vector_ptr_t, or which of them is used by the verification components. I'm sorry if this is too naive.

As you say, I think that generic packages are unfortunately a bad approach, due to the lack of support. VUnit's strategy is good.

The solution you propose (providing an alternative implementation) would fit. In fact, I was about to start doing exactly that before opening this issue. When I had a look at https://github.com/VUnit/vunit/blob/master/vunit/vhdl/data_types/src/integer_vector_ptr_pkg-body-200x.vhd#L17-L18, I thought that there might be an easier approach for this use case. I hope you can clarify.

kraigher (Collaborator) commented:

A few clarifications:

  1. buffer_t in the VUnit memory model is just an address range in the memory model, not a storage mechanism. The memory model is one big flat address space into which buffer_t instances are allocated by giving them an address range. The verification components interact only with the memory model, not with the buffers. The buffers exist to give better error messages and to allocate addresses in the test bench. Thus an individual buffer_t cannot use another external buffer; it has to be done for the entire memory model.

  2. integer_vector_ptr is a general-purpose data type for dynamic vectors of integers that is used in many places in VUnit. It is a workaround for limitations of the VHDL language. It is not possible to use an external pointer in place of an integer_vector_ptr without manually modifying the code.

Thus the only feasible solution is the one I proposed: adding an external memory concept to the memory model, where the user can compile in their own version. The interface to such an external model would have to be defined. In the simplest case it would just need functions to read/write a byte of external memory.
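Assuming the hypothetical external_memory_pkg sketched earlier, a user-provided replacement body could forward the byte operations to the C application through the VHPIDIRECT glue from the snippets above (for simplicity, the C-side buffer is assumed to be byte-addressable):

use work.pkg_c.all;

-- Replacement body, compiled in place of the default NOP body via the
-- Python API. ext_buf is the shared variable from pkg_c.
package body external_memory_pkg is
  procedure write_byte_ext(address : natural; byte : natural range 0 to 255) is
  begin
    ext_buf.set(address, byte);
  end procedure;

  impure function read_byte_ext(address : natural) return natural is
  begin
    return ext_buf.get(address);
  end function;
end package body;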

umarcor (Member, Author) commented Apr 1, 2019

Thanks a lot for the clarifications. I don't really know how complex building a 'new' memory model would be, as I am not sure what a 'memory model' comprises. If it is just a couple of functions to read/write a byte of memory, that is easy to achieve; the snippets above might suffice. It is also easy to provide similar functions for shorts and integers, if required.

> Via the Python API, the user could point to another file which would be compiled in place of the default implementation. The consequence is that all test benches within a run.py file would need to use the same external memory model.

This might be an issue for my use case. Currently, I have a single UUT that includes the verification components, and two top-level testbenches. One of them is a regular VUnit testbench. The other is meant to be built with GCC along with an external C wrapper. I first compile the VUnit testbench, and then manually analyse the remaining C-related files. Finally, I build the binary from the objects in vunit_out/ghdl/libraries/vunit_lib and the new objects.

If a single memory model must be used:

  • I'd need to recompile VUnit between testbenches.
  • I'd have to support a CLI argument in the Python script, in order to pass the option to VUnit.

> However, if the need arises, we could always add support for up to N external memory models, where N is finite.

This would be a much better approach. If I understand correctly, this would allow adding some files in the Python script which define an additional memory model. Then, in the VHDL testbench, when new_memory is executed, an optional argument could be used to select the model. This would support an actual use case (sketched after the list below):

  • Have multiple VCs in the testbench that use a VUnit memory, which represents an external DDR connected to the PL of a Zynq device.
  • Have other VCs that use the external memory, which represents the DDR in the PS of a Zynq device.
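A sketch of that use case; the selector argument to new_memory is the hypothetical extension discussed in this thread, not part of the current VUnit API:

-- PL-side DDR: regular internal VUnit memory model.
constant pl_ddr : memory_t := new_memory;
-- PS-side DDR: backed by the external (C-side) memory model.
constant ps_ddr : memory_t := new_memory("external");

constant pl_axi_slave : axi_slave_t := new_axi_slave(memory => pl_ddr);
constant ps_axi_slave : axi_slave_t := new_axi_slave(memory => ps_ddr);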

Regarding the allocation, I have three questions:

  • Currently, I am allocating with variable buf: vunit_lib.memory_pkg.buffer_t := allocate(memory, 1024);. However, I don't use buf at all, because I write/read with write_word(memory, 4*x, tmp); and read_word(memory, 4*(50+x), 4);. Would it be possible to add allocate as a procedure too? Alternatively, having it as an optional argument to new_memory would also be handy.

  • When new_memory("external") is used, how would the base address be provided? Should it be a specific function (like init above), or should an instance (variable) be provided to new_memory?

  • I think that VUnit currently allocates memory dynamically with no limit (that might be the underlying issue in 'Crash with latest VUnit' ghdl/ghdl#752). With an external memory, the size might be limited. Is an external memory model required to support reallocation? Independently of the limit, would it be possible to use buffer_t with either of the memory models?

slaweksiluk (Contributor) commented:

@umarcor @kraigher
I have spent some time thinking about extending VUnit's memory-mapped slave VCs to connect them to an external simulation model.

My first thought was to extend the VCs themselves; for example, splitting the Avalon-MM slave into two architectures: the current one, with a memory backend, and a new one with a VUnit message FIFO backend, which would simply send all read/write transactions to some predefined actor. I think this would be similar to pull request #425.

Anyway, this approach has a significant drawback: it creates two architectures for each memory-mapped slave, while the Avalon-MM interface is the same (in fact, the above PR adds new Avalon-MM models with different file names). Switching only the memory model does not have this disadvantage.

In my case I need to connect the Avalon-MM slave to an external model (via Linux pipes; I will make this project public soon). Please notice that the Avalon-MM slave https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/avalon_slave.vhd only uses the write_word and read_word subprograms. So in my case, all I need is a simple mock memory package https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/memory_pkg.vhd with the new_memory, read_word and write_word subprograms. No memory allocation, as I handle it in the external model.

@kraigher, in the case of memory_pkg.vhd being selected by a Python switch, will it still be possible to use the old memory_pkg.vhd with allocated memory alongside the user's custom memory_pkg.vhd in the same project? Wouldn't it generate a namespace conflict?

Just as a comment on the status of VHDL generic package support: please be aware that GHDL already supports generic packages with generic subprograms (https://github.com/slaweksiluk/misc/blob/master/ghdl-playground/gen-type-package/a.vhd), but I don't know the status in other simulators. I will try to rewrite memory_pkg.vhd as a generic package if I find some time.

kraigher (Collaborator) commented Apr 2, 2019

Firstly, I want to clarify to @umarcor what we call a memory model in VUnit, since you asked what it is comprised of. At its simplest, a memory model is just a fixed-length array of bytes forming a linear address space. The basic operations are to read or write one byte at one address, just like a real memory. Accessing words comprised of several bytes is an operation built on top of the single-byte operations, and it requires the endianness of the word to be defined for the access. Just like in real hardware, a DRAM or SRAM simply stores bytes; endianness comes into play in the interpretation of the bytes by the user of the memory, such as a CPU. At its core, this is what the VUnit memory model consists of: an array of bytes. In the VUnit case this array does not need a fixed, pre-determined size, but grows as needed. The VUnit memory model has procedures to read and write multi-byte words, and they require the user to specify the endianness for the call.
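As an illustration of that layering, a multi-byte read decomposes into single-byte reads, with the endianness supplied by the caller; read_byte_ext stands for the byte-level primitive (a hypothetical name), in a package-body fragment assuming ieee.numeric_std:

impure function read_word_le(address : natural; bytes : positive) return unsigned is
  variable word : unsigned(8 * bytes - 1 downto 0);
begin
  -- Little endian: the byte at the lowest address becomes the least
  -- significant byte of the word.
  for i in 0 to bytes - 1 loop
    word(8 * i + 7 downto 8 * i) := to_unsigned(read_byte_ext(address + i), 8);
  end loop;
  return word;
end function;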

On top of this basic byte-array model, the VUnit memory model has a few additional features, such as:

  • Automatic address allocation for buffers, so that the user does not have to hard-code addresses
  • Permission flags for each address, to catch unintended accesses by the DUT
  • An expected-value byte for each memory byte, to catch wrong data as it is written, rather than after the entire test sequence

What we are discussing here is sending the basic byte operations to an external byte array in a C library or similar, instead of the internal byte array. It does not affect the additional features listed above, since they build on top of this. Because every access to the VUnit memory model boils down to either reading or writing a byte, it would be simple to substitute those calls at the lowest level with calls that read and write the byte to the external memory model. The permission and expected-value bytes could still be stored in the original VUnit memory model, as they are an additional layer of functionality.

Regarding the specific questions from @umarcor:

> * Currently, I am allocating with `variable buf: vunit_lib.memory_pkg.buffer_t := allocate(memory, 1024);`. However, I don't use `buf` at all, because I write/read with `write_word(memory, 4*x, tmp);` and `read_word(memory, 4*(50+x), 4);`. Would it be possible to add allocate as a procedure too? Alternatively, having it as an optional argument to `new_memory` would also be handy.

Allocate would work exactly as it does today with an external memory model. Allocation just returns a free address range; a free range is one that was not previously allocated. In the case of the internal memory model, the byte array is re-sized if it is too small. In the case of a fixed-size external array, it could not be re-sized, so the fixed size must be enough. This could be reflected in the API towards an external memory model.

> * When `new_memory("external")` is used, how would the base address be provided? Should it be a specific function (like `init` above), or should an instance (variable) be provided to `new_memory`?

By convention, all memory models have a base address of 0. It is the buffers allocated within a memory that can have a non-zero base address. Does the memory itself need a base address? I see this as an issue orthogonal to having an external model: if a memory needs a base address, it would also need one with a VUnit internal memory model.

> I think that VUnit currently allocates memory dynamically with no limit (that might be the underlying issue in ghdl/ghdl#752). With an external memory, the size might be limited. Is an external memory model required to support reallocation? Independently of the limit, would it be possible to use buffer_t with either of the memory models?

I do not see a requirement that the memory model cannot be of a fixed size. In the internal model we have the opportunity to grow it dynamically as buffers are allocated. Having a fixed size for an external model is possible; it just needs to be reflected in the package API towards the external model.

Regarding the questions from @slaweksiluk:

> Anyway, this approach has a significant drawback: it creates two architectures for each memory-mapped slave, while the Avalon-MM interface is the same (in fact, the above PR adds new Avalon-MM models with different file names). Switching only the memory model does not have this disadvantage.

Having the flexibility within the memory model itself is more scalable, as it requires no extra work for each memory-mapped VC using the model and can be done in one place.

> @kraigher, in the case of memory_pkg.vhd being selected by a Python switch, will it still be possible to use the old memory_pkg.vhd with allocated memory alongside the user's custom memory_pkg.vhd in the same project? Wouldn't it generate a namespace conflict?

I think the goal must be that at least the regular internal model and an external model can co-exist in the same project (run.py). Ideally, any number of external models would be supported. I believe this is feasible with a small amount of generated code that dispatches the read/write byte calls to the different external memory packages via a case statement. It is upon creation of the memory_t object that the user would select, by passing a flag, which memory model to use for that specific instance. The memory-mapped VCs already take a memory_t generic, as I have foreseen the use case of having multiple independent memories; thus you could configure one VC with an internal memory_t and another VC with an external memory_t.
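A sketch of that generated dispatch; the model ids, write_byte_internal and the per-model package names are illustrative:

-- Generated from the run.py configuration: one branch per registered
-- external memory model, with id 0 reserved for the internal byte array.
procedure write_byte_dispatch(model_id : natural;
                              address  : natural;
                              byte     : natural range 0 to 255) is
begin
  case model_id is
    when 0      => write_byte_internal(address, byte);             -- VUnit's own array
    when 1      => work.ext_model_a_pkg.write_byte(address, byte); -- user model A
    when 2      => work.ext_model_b_pkg.write_byte(address, byte); -- user model B
    when others => report "unknown memory model id" severity failure;
  end case;
end procedure;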

> Just as a comment on the status of VHDL generic package support: please be aware that GHDL already supports generic packages with generic subprograms (https://github.com/slaweksiluk/misc/blob/master/ghdl-playground/gen-type-package/a.vhd), but I don't know the status in other simulators. I will try to rewrite memory_pkg.vhd as a generic package if I find some time.

Using generic packages is the alternative to generating code. If it works in all supported simulators, in versions going back two years, I can see it as an acceptable solution; at least for the commercial simulators, since for GHDL you could always say it is feasible to use the latest version.

umarcor (Member, Author) commented Apr 2, 2019

> In my case I need to connect the Avalon-MM slave to an external model (via Linux pipes; I will make this project public soon). Please notice that the Avalon-MM slave https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/avalon_slave.vhd only uses the write_word and read_word subprograms. So in my case, all I need is a simple mock memory package https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/memory_pkg.vhd with the new_memory, read_word and write_word subprograms. No memory allocation, as I handle it in the external model.

This is exactly the same use case as mine. The fact that you use Linux pipes and I use different mechanisms (either built-in or RPC) is negligible. Indeed, I believe you are using pipes because you want to co-execute the simulation with some other process; I do that with pthreads or golang goroutines.

> @kraigher, in the case of memory_pkg.vhd being selected by a Python switch, will it still be possible to use the old memory_pkg.vhd with allocated memory alongside the user's custom memory_pkg.vhd in the same project? Wouldn't it generate a namespace conflict?

According to this comment, it is not possible unless support for up to N external memory models is added.

umarcor changed the title from "Replace memory_t and/or buffer_t in an AXI Master simulation with axi_read_slave and axi_write_slave" to "Support up to N memory models for verification components" on Apr 4, 2019
slaweksiluk (Contributor) commented:

@umarcor, that is exactly what I do. I have already prepared a Linux pipe interface for driving the MM master VC from Python. I started to implement an external memory model for the MM slave VC, but it has to wait until there is proper generic package support in GHDL. Unfortunately, it won't be easy: there are multiple bugs. However, I will at least post issues in the GHDL repo, and hopefully Tristan will have some time to fix them. If not, implementing package switching in Python is the only way.

eine (Collaborator) commented Apr 5, 2019

@slaweksiluk, related to ghdl/ghdl#608, I'd like to ask you the opposite question: why are you using VUnit instead of cocotb, if you want co-execution/co-simulation between Python and GHDL? I am asking purely out of curiosity; I don't expect a comparison between the projects. Instead, I am trying to understand whether it makes any sense to:

  • Add co-execution capability to VUnit by generalizing this 'external memory model' approach in a structured manner. By 'generalizing' I mean providing a mechanism to co-execute GHDL with any other software, not only Python.
  • Combine VUnit and cocotb. Is it possible?

kraigher (Collaborator) commented Apr 5, 2019

My view is that any number of co-existing external memory models could be supported using a small amount of generated code, without needing generic packages. We would have to define a package API to which the external memory models would conform.

umarcor (Member, Author) commented Apr 5, 2019

@kraigher, are you willing to propose that API to which the additional models should conform, or should I give it a try?

kraigher (Collaborator) commented Apr 5, 2019

You are welcome to give it a try. My overall strategy is to get more people involved in VUnit: teach a man to fish rather than giving him a fish. You could make a prototype that replaces the calls to the internal byte array with calls to your external array. That way we ensure we are making something that solves your case. After this, we can talk about the next steps to make it "production ready" to be merged.

umarcor (Member, Author) commented Apr 5, 2019

That sounds like a plan. I will start by forking and setting up a testing environment, and I'll reach out to you when I have a working example.

umarcor (Member, Author) commented Apr 7, 2019

Hi @kraigher! I have a working prototype that allows using an external memory when new_memory(id: integer) is used; otherwise, it defaults to the standard implementation. However, I had to disable multiple checks to make it work. The main issue is that, since no allocation is required in VHDL, the helper checks in VUnit think the memory is empty, even if it is not.

I'd like to push it to a branch, so that I can ask several things. However, the UUT I am using is generated by Vivado HLS, so I don't think I can upload those VHDL sources. Do you know of any available UUT written in VHDL which uses an AXI master?

kraigher (Collaborator) commented Apr 7, 2019

There is the AXI DMA example that uses both the read and write slaves.
