Allow to provide a custom 'grt.ver' file as an argument to 'ghdl -e' #800

Closed
umarcor opened this issue Apr 21, 2019 · 15 comments · Fixed by #801

@umarcor
Member

umarcor commented Apr 21, 2019

Is your feature request related to a problem? Please describe.
I am wrapping GHDL in a C application: ghdl_main is called from main. Several C functions are bound to VHDL through VHPIDIRECT, but I believe that this is irrelevant here.

The executable is successfully built with ghdl -e. However, when I load it as a shared library (e.g., from a Python script), symbols are not found. This is because of /usr/local/lib/ghdl/grt.ver. If I modify it to add the names of the functions that I want to be visible (see #640 (comment)), it works.

I found no option to provide a custom grt.ver script to ghdl -e.
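For illustration, a minimal C equivalent of the failing load (the binary name ./tb is made up; dlopen on a position-independent executable works on recent glibc, and the program is linked with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the ghdl-built executable as if it were a shared object. */
    void *h = dlopen("./tb", RTLD_LAZY);
    if (!h) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* grt.ver marks everything outside its 'global' list as local,
       so user VHPIDIRECT functions such as this one are hidden and
       the lookup fails. */
    void *sym = dlsym(h, "set_string_ptr");
    printf("set_string_ptr: %s\n", sym ? "found" : dlerror());
    dlclose(h);
    return 0;
}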

Describe the solution you'd like

I think it would be desirable to provide a script through an argument such as --version-script, to avoid requiring every user on a system to keep a separate installation of GHDL just to avoid conflicts over this file.

Describe alternatives you've considered

It is possible to work around this issue by using gcc or llvm directly, instead of ghdl -e. As commented in #640, the output of --list-link can be customized through sed. Thus, the version script can be removed, or adapted.

Nonetheless, in the current use case the binary is compiled through VUnit, which is aware of neither gcc nor llvm. Therefore, I'd like to keep calling ghdl -e, to avoid adding 'compiler support' to VUnit.

@tgingold
Member

I am not sure this is going in the right direction, for at least two reasons:

  • It is not a complete solution. With ghdl -e, an executable is created. It is not supposed to be used as a shared library loaded by Python. If you want a shared library, all the generated code has to be compiled with -fpic.

  • It doesn't work with the mcode back end.

If you just want to export more symbols, I am pretty sure you can do it right now without new options: just write them in a linker script file and link with it.

@umarcor
Member Author

umarcor commented Apr 23, 2019

  • It is not a complete solution. With ghdl -e, an executable is created. It is not supposed to be used as a shared library loaded by Python. If you want a shared library, all the generated code has to be compiled with -fpic.

All the code is compiled with -fpic. As per #670, GHDL is built through:

mkdir -p /tmp/ghdl && cd /tmp/ghdl
curl -fsSL https://codeload.github.com/ghdl/ghdl/tar.gz/master | tar xzf - --strip-components=1
OPT_FLAGS=-fPIC LIB_CFLAGS=-fPIC ./dist/travis/build.sh -b llvm-6.0 -p ghdl-llvm-fPIC
tar -xzf ghdl-llvm-fPIC.tgz -C /usr/local

So, all the object files produced by GHDL are relocatable.

You are correct when you say that an executable file is not expected to be used as a shared library, but the fact is that, since both of them are ELF files with a similar format, it can be done (at least with GLIBC 2.27). This is not exclusive to Python. It is possible with C, Python or Golang, because all of them use dlopen.

Nonetheless, I get your point. Instead, is it possible to allow passing -shared as an option to ghdl -e? AFAIK, the difference between an executable ELF and a shared library is that the former contains a main function. Therefore, the same 'executable' without an explicit entry point would work. The actual entry point would be ghdl_main, as happens now when GHDL is wrapped in a C application.

  • It doesn't work with the mcode back end.

Yes, this is something exclusive to GCC or LLVM, because it is to be used with VHPIDIRECT.

If you just want to export more symbols, I am pretty sure you can do it right now without new options: just write them in a linker script file and link with it.

I am not linking explicitly. I am using neither GCC nor LLVM directly, but ghdl -e. Therefore, GHDL does the --bind and --list-link internally, which sets the 'default' --version-script. Do you mean that I should do ghdl -e --std=08 -Wl,--version-script=myfile.ver?

@tgingold
Member

So if ghdl and its libraries are built with -fpic, you can indeed load it.

I'd prefer not to add options/code like the -shared one. At some point the compile and link model may be deprecated.

Yes, using -Wl,-Wl,--version-script=file.ver might work.

@umarcor
Member Author

umarcor commented Apr 23, 2019

So if ghdl and its libraries are built with -fpic, you can indeed load it.

Is there any reason not to make this the default?

I'd prefer not to add options/code like the -shared one.

I'll try to find some alternative, such as LIEF (https://lief.quarkslab.com/doc/latest/tutorials/08_elf_bin2lib.html).

I will also try building GHDL with some foreign C sources, but without an explicit main. I think that GHDL will add its own in this context, but I am not sure about it. EDIT: it does not work.

At some point the compile and link model may be deprecated.

What do you mean? On the one hand, I am asking for features related to building a shared library that includes GHDL with some extensions. I understand if you consider this to be out of the scope of GHDL and that it should be done independently; I'm OK with that. On the other hand, I am building some tools that depend on being able to do so, even if it needs to be done with GCC/LLVM and is not possible with GHDL only. I would be worried if this were deprecated in the future, i.e., if it were no longer possible to link objects generated by ghdl -a into some foreign application.

Yes, using -Wl,-Wl,--version-script=file.ver might work.

I will try it.

@umarcor
Member Author

umarcor commented Apr 24, 2019

Yes, using -Wl,-Wl,--version-script=file.ver might work.

I will try it.

I tried it and it fails:

/usr/bin/ld: anonymous version tag cannot be combined with other version tags
collect2: error: ld returned 1 exit status
/usr/local/bin/ghdl: compilation error

This is because grt.ver is anonymous, so no additional file can be provided. However, it is possible to add any version name and then it works. E.g.:

ANY {
  global:
    vpi...
    ...;
  local:
    *;
};

and my own file:

VHPI {
  global:
    main;
    read_char;
    write_char;
    read_integer;
    write_integer;
    set_string_ptr;
    get_string_ptr;
    set_intvec_ptr;
    get_intvec_ptr;
  local:
    *;
};
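With the default script named like this, the custom script can presumably be passed on top of it through the linker forwarding discussed above, e.g. ghdl -e --std=08 -Wl,-Wl,--version-script=myfile.ver tb (the unit name tb is made up): named version tags, unlike anonymous ones, can be combined.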

@tgingold, would you accept a PR that adds some name/string/version (say ANY) to the default grt.ver file?

@tgingold
Member

Yes, a PR to add some version to grt.ver is OK.

The compile & link model used by the LLVM and gcc backends prevents doing post-elaboration optimizations. And this model is not used by commercial simulators. Improving the simulation speed means getting rid of that model. That's why it could be deprecated in the future.

I'd like to understand better why you need to dynamically load a ghdl binary. I have no problem with that, but I think this is a very particular use of ghdl and I'd prefer not to put too much effort into it.
How would your use case work with commercial simulators?

@umarcor
Member Author

umarcor commented Apr 24, 2019

Yes, a PR to add some version to grt.ver is OK.

Great.

The compile & link model used by the LLVM and gcc backends prevents doing post-elaboration optimizations. And this model is not used by commercial simulators. Improving the simulation speed means getting rid of that model. That's why it could be deprecated in the future.

If I don't get it wrong, this means that:

  • GCC and LLVM will be deprecated and the single backend will be mcode.
  • Support for VHPI direct will need to be added to mcode.
  • Support for non-x86 devices will need to be added to mcode.
  • Overall, GHDL itself will be the only tool able to generate a binary that can run with the GHDL runtime.

I understand why you want to get rid of the compile & link model, but I would feel sorry if these features were dropped without replacement. They make GHDL unique, just as Verilator is for Verilog.

I'd like to understand better why you need to dynamically load a ghdl binary. I have no problem with that, but I think this is a very particular use of ghdl and I'd prefer not to put too much effort into it.

There are several use cases, but these are the two most important I can think of:

On the one hand, seamless co-simulation with routines written in any language (C, C++, Python, Golang, etc.). There are a lot of examples of how to use GHDL + VHPIDIRECT + OS pipes/buffers + PickYourLanguage, where an (opinionatedly) unnecessary layer of complexity is added: the pipes. One needs to set the name of the pipe(s) in the VHPI C sources and in the external language, have one side create them and the other side pick them up.

The exception to the above is C, with which it is possible to provide a main function that calls ghdl_main at any point. In this case, the memory space of the main application is used by GHDL, so "sharing" implies just passing a reference. No middleware pipes are involved. This is not only easier to understand and implement; performance should also improve.
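A minimal sketch of such a wrapper (the extern declaration matches the entry point GHDL generates; the buffer and get_intvec_ptr are simplified stand-ins for the helpers named in the custom grt.ver above):

#include <stdio.h>

/* Entry point of the elaborated design, provided by GHDL. */
extern int ghdl_main(int argc, char **argv);

/* Buffer shared by reference: VHDL reaches it through a VHPIDIRECT
   binding of get_intvec_ptr; the C side accesses it directly. */
static int buf[256];

int *get_intvec_ptr(void) {
    return buf;
}

int main(int argc, char **argv) {
    buf[0] = 42;                      /* prepare data for the design */
    int rc = ghdl_main(argc, argv);   /* run the whole simulation    */
    printf("buf[1] = %d\n", buf[1]);  /* read back what VHDL wrote   */
    return rc;
}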

Well, it turns out that it is possible to follow the same approach with Python and/or Golang, if the GHDL binary is loaded as a shared library. Both languages have support for C types, so they can interact directly with main, ghdl_main or any of the other helper functions defined in the C sources.

For example, this is implemented in: https://github.com/dbhi/vunit/blob/feat-external-arrays/examples/vhdl/external_buffer. You will find that the C file contains the functions expected in a GHDL + VHPIDIRECT + C environment: https://github.com/dbhi/vunit/blob/feat-external-arrays/examples/vhdl/external_buffer/src/test/main.c. If the design is built and executed as usual, buffer allocation is done as shown in the main file. However, the same binary can be used from Python, as follows: https://github.com/dbhi/vunit/blob/feat-external-arrays/examples/vhdl/external_buffer/run.py#L93-L107. Note that the function set_string_ptr, and all other functions in the C file, are visible to GHDL (VHDL), to C, and to Python, because I used the custom grt.ver shown in my previous comment.

Overall, I think that this is a much easier approach than using cocotb (VPI) for basic co-simulation cases. Precisely, VUnit verification components (which can be used independently of any other VUnit features) allow GHDL to be used as a direct replacement for any C function. This is so because interactions with top-level interfaces (AXI, Wishbone, Avalon, UART) are converted to simple function calls. The range of possibilities that this offers is endless.

On the other hand, GHDL does not have multi-thread support, and it might be hard to handle at a low level. However, for large designs composed of multiple subsystems, it is common to test each subsystem independently and then integrate all of them. We can imagine a project with a few 'large' subsystems (let's say two) which are connected to each other through AXI interfaces, and which are expected to be deployed in a Zynq device (with two ARM cores). We have two VUnit projects, one for each subsystem, which use VHPIDIRECT to bind the AXI interfaces to C functions. Now, we just want to co-simulate four threads, hopefully attaching one to each of the cores in a 4-core workstation.

A naive approach is to execute two GHDL binaries (one for each of the subsystems) and a binary with two processes, each simulating one of the ARM cores. This requires at least two pipes to be set up, to allow the three binaries to communicate with each other. Depending on the requirements of the design, this can get complex pretty easily.

Alternatively, we might try to analyze the sources of the two subsystems separately, and then integrate them in a single GHDL binary, defining separate threads in the main function. I think this is simply impossible: each of the subsystems has its own ghdl_main and we cannot tell them apart during compilation.

Hence, the solution is, once again, to load the binary of each of the subsystems as a shared library. This provides a kind of 'namespacing', so two different ghdl_main symbols can coexist in the orchestrator. Moreover, as commented in the first point, all of these GHDL binaries loaded/executed in parallel can share the same user memory space. Say, for example, that we want to simulate the DDR in a Zynq board. We just allocate a region in the orchestrator and then pass the reference to the four parallel threads.
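A sketch of such an orchestrator, assuming both subsystems were already built as shared libraries (the library names and the two-thread layout are made up; link with -ldl -pthread):

#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>

typedef int (*ghdl_main_t)(int, char **);

/* Run one subsystem's simulation in its own thread. */
static void *run(void *entry) {
    char *argv[] = {"sim", NULL};
    ((ghdl_main_t)entry)(1, argv);
    return NULL;
}

int main(void) {
    /* RTLD_LOCAL keeps the two runtimes in separate namespaces, so
       each library carries its own ghdl_main and runtime state. */
    void *a = dlopen("./subsys_a.so", RTLD_NOW | RTLD_LOCAL);
    void *b = dlopen("./subsys_b.so", RTLD_NOW | RTLD_LOCAL);
    if (!a || !b) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    pthread_t ta, tb;
    pthread_create(&ta, NULL, run, dlsym(a, "ghdl_main"));
    pthread_create(&tb, NULL, run, dlsym(b, "ghdl_main"));
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}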

How would your use case work with commercial simulators ?

I think it would not. AFAIK, no commercial simulator supports either embedding a design in a foreign application or building it as a shared library. With commercial simulators, pipes are normally used.

Nevertheless, I think that the reference here should not be 'HDL simulators', but 'HDL compilers' instead. Verilator somehow supports generating shared objects, but that is because the output of the tool itself is C++ sources and a makefile; GCC/LLVM is then executed explicitly to generate the binary. See 'EXAMPLE C++ EXECUTION' in https://www.veripool.org/projects/verilator/wiki/Manual-verilator#VERILATION-ARGUMENTS and https://www.veripool.org/boards/2/topics/1910-Verilator-simulator-in-a-dynamic-object-so-.

The second use case (integrating independent, standalone pre-built objects/binaries) has been discussed but is not supported yet: https://www.veripool.org/boards/2/topics/412-Verilator-verilating-compiling-modules-into-separate-object-files. Note that this is precisely what I want to avoid by using multiple shared libraries. I'm including the reference just for completeness.


For the record, building GHDL with OPT_FLAGS=-fPIC LIB_CFLAGS=-fPIC ./dist/travis/build.sh -b llvm-6.0 -p ghdl-llvm-fPIC allows generating binaries with ghdl -e or building shared libraries with gcc -shared. Both of them can be successfully loaded dynamically on recent systems.

However, on older systems (Ubuntu Trusty), installing GHDL as above seems not to be enough. The binary generated with ghdl -e can be executed, but it cannot be dynamically loaded. Moreover, gcc -shared fails, so the shared library cannot be created: it complains about libgrt.a not being built with -fPIC.

@tgingold
Member

As a preliminary comment, getting rid of the normal compile & link flow is right now just a plan (motivated by optimization).

I think this would mean getting rid of the gcc build, but not of llvm, as llvm supports JIT.

I understand your use case (cosimulation) and I agree it would be stupid not to allow it.

Right now, it could be possible to create a shared library as a result of -e.

For the mcode backend, maybe a shared library of ghdl_mcode should be provided, just as Python provides a shared library of the interpreter.

I will think about it.

@umarcor
Member Author

umarcor commented Apr 24, 2019

As a preliminary comment, getting rid of the normal compile & link flow is right now just a plan (motivated by optimization).

Of course. I was not panicking. The point is that I am building some tools on top of GHDL, precisely because it provides these unique features. I should be careful when 'advertising' it if the underlying dependencies are expected to break, even if it will not happen in the near future.

I think this would mean getting rid of the gcc build, but not of llvm, as llvm supports JIT.

This is good news. As long as LLVM is supported, I believe that all the features will work, except for code coverage. Or is it possible to use gcov with the LLVM backend?

I understand your use case (cosimulation) and I agree it would be stupid not to allow it.

Glad to hear that. Note that all of the use cases I commented are already supported by GHDL on latest systems (e.g. Ubuntu 18). This issue was about making it easier to make additional symbols visible. But binaries generated by GHDL can already be dynamically loaded.

Right now, it could be possible to create a shared library as a result of -e.

I saw that you pushed some commits, but I cannot see a difference. Do you mean that it could be potentially supported but a user cannot do it now?

For the mcode backend, maybe a shared library of ghdl_mcode should be provided, like python is providing a shared library of the interpreter.

I will think about it.

Since VHPI is not supported by mcode, I am not sure how useful this could be. Sure, it would be interesting to set multiple groups of generic values and execute multiple tests. However:

  • It would be difficult to actually share data between Python (or any other loader) and the simulation without some explicit knowledge about the interfaces. VHPI provides a mechanism to define such interfaces using C syntax and semantics.
  • In the tests I have done, it is not possible to call main multiple times once the binary/library is loaded. I.e., it is required to unload it and load it again for each execution (see the sketch below). I don't know if this is something that can be improved in GHDL.
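A minimal sketch of that load/run/unload cycle (the binary name is made up; link with -ldl; whether dlclose really discards all runtime state may depend on the platform):

#include <dlfcn.h>
#include <stdio.h>

typedef int (*ghdl_main_t)(int, char **);

/* One complete simulation run: load, call ghdl_main once, unload. */
static int run_once(const char *path) {
    void *h = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!h) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1;
    }
    ghdl_main_t entry = (ghdl_main_t)dlsym(h, "ghdl_main");
    char *argv[] = {"sim", NULL};
    int rc = entry ? entry(1, argv) : -1;
    dlclose(h); /* unload so the next run starts from a clean state */
    return rc;
}

int main(void) {
    run_once("./tb"); /* first run */
    run_once("./tb"); /* a second run needs a fresh load */
    return 0;
}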

Nevertheless, none of these points are critical. As commented, it is currently possible to use GHDL on some systems for the purpose I have described. It's ok if you take as long as you want to fix this, or if you just focus on other more important features.

@tgingold
Member

tgingold commented Apr 26, 2019 via email

@umarcor
Member Author

umarcor commented Apr 26, 2019

Two features are missing from the LLVM backend: debugging and coverage. To be added.

Thanks for the remark.

I still think that with --bind/--list-link, you can create a shared library. Might not be trivial but I don't see showstopper.

Yes, you are correct. It is also possible with ghdl -e alone. I wrote the comment above before #804. Now that #804 is merged, it is possible for ghdl -e to generate a binary which is identified as DYN (on any platform).

I'm now trying to make it slightly easier by simplifying the syntax in #805.

About abort(): I don't think it is used during normal operations (even simulation failure). A reproducer would be useful.

#803 is a reproducer. Nevertheless, do not worry about it for now. This is only happening with VHDL 1993, which is a constraint that does not bother me for now, since I need VHDL 2008 in most of the practical use cases. I will open a more specific issue once I have solved #805.

Restarting the simulation could also be implemented.

That'd be interesting. I think it is tightly related to #154. I can imagine an API such as:

ghdl_main();
ghdl_runfor(10);
ghdl_restart();
ghdl_rununtil(100);
ghdl_destroy();

Is this something I might try to implement or is some deep understanding of GHDL internals required?

@tgingold
Member

tgingold commented Apr 27, 2019 via email

@umarcor
Member Author

umarcor commented Apr 7, 2020

@tgingold, after re-reading this discussion, I feel I did not understand it properly. Currently, I'm using ghdl -e to generate executable binaries with the LLVM backend. However, when I want to generate a shared library, I'm using GCC with --bind and --list-link. Would it currently be possible to pass something such as -Wl,-shared to ghdl -e instead, so as not to call GCC explicitly?

@tgingold
Member

tgingold commented Apr 8, 2020

Yes, you can try to use -Wl,-shared. I suppose it should work.

@radonnachie
Contributor

radonnachie commented Apr 9, 2020

I confirm this as working.
