# Allow to provide a custom 'grt.ver' file as an argument to 'ghdl -e' #800
I am not sure this is going in the right direction, for at least two reasons:

- If you just want to export more symbols, I am pretty sure you can do it right now without new options: just write them in a linker script file and link with it.
All the code is compiled with:

```sh
mkdir -p /tmp/ghdl && cd /tmp/ghdl
curl -fsSL https://codeload.github.com/ghdl/ghdl/tar.gz/master | tar xzf - --strip-components=1
OPT_FLAGS=-fPIC LIB_CFLAGS=-fPIC ./dist/travis/build.sh -b llvm-6.0 -p ghdl-llvm-fPIC
tar -xzf ghdl-llvm-fPIC.tgz -C /usr/local
```

So, all the object files produced by GHDL are relocatable. You are correct when you say that an executable file is not expected to be used as a shared library, but the fact is that, since both of them are ELF files with a similar format, it can be done (at least with GLIBC 2.27). This is not exclusive to Python; it is possible with C, Python or golang, because all of them use […]. Nonetheless, I get your point. Instead, is it possible to allow passing […]?
Yes, this is something exclusive to GCC or LLVM, because it is to be used with VHPIDIRECT.
I am not linking explicitly. I am using neither GCC nor LLVM, but […].
So if ghdl and its libraries are built with […]. I'd prefer not to add options/code like the […]. Yes, using […].
Is there any restriction that prevents making this the default?
I'll try to find some alternative, such as https://lief.quarkslab.com/doc/latest/tutorials/08_elf_bin2lib.html.
What do you mean? On the one hand, I am asking for features related to building a shared library that includes GHDL with some extensions. I understand if you consider this to be out of the scope of GHDL, and that it should be done independently; I'm ok with that. On the other hand, I am building some tools that depend on being able to do so, even if it needs to be done with GCC/LLVM and is not possible with GHDL only. I would be worried if this was deprecated in the future, i.e., if it was not possible to link objects generated after […].
I will try it.
I tried it and it fails:

```
/usr/bin/ld: anonymous version tag cannot be combined with other version tags
collect2: error: ld returned 1 exit status
/usr/local/bin/ghdl: compilation error
```

This is because […] and, my own file: […]
@tgingold, would you accept a PR that adds some name/string/version (say […]) to grt.ver?
Yes, a PR to add some version to grt.ver is OK.

The compile & link model used by the LLVM and gcc backends prevents doing post-elaboration optimizations, and this model is not used by commercial simulators. Improving the simulation speed means getting rid of that model; that's why it could be deprecated in the future.

I'd like to understand better why you need to dynamically load a ghdl binary. I have no problem with that, but I think this is a very particular use of ghdl and I'd prefer not to put too much effort into it.
Great.
If I don't get it wrong, this means that: […]
I understand why you want to get rid of the compile & link model, but I would feel sorry if these features were dropped without replacement. They make GHDL unique, just as Verilator is unique for Verilog.
There are several use cases, but these are the two most important I can think of:

On the one hand, seamless co-simulation with routines written in any language (C, C++, Python, Golang, etc.). There are a lot of examples of how to use GHDL + VHPIDIRECT + OS pipes/buffers + PickYourLanguage, where an (opinionatedly) unnecessary layer of complexity is added: the pipes. One needs to set the name of the pipe(s) in the VHPI C sources and in the external language, have one of them create the pipes and the other pick them up. The exception to the above is C, with which it is possible to provide a […]. Well, it turns out that it is possible to follow the same approach with Python and/or Golang, if the GHDL binary is loaded as a shared library. Both languages have support for C types, so they can interact directly with […].

For example, this is implemented in https://github.com/dbhi/vunit/blob/feat-external-arrays/examples/vhdl/external_buffer. You will find that the C file contains the functions expected in a GHDL + VHPIDIRECT + C environment: https://github.com/dbhi/vunit/blob/feat-external-arrays/examples/vhdl/external_buffer/src/test/main.c. If the design is built and executed as usual, buffer allocation is done as shown in the […].

Overall, I think that this is a much easier approach than using cocotb (VPI) for basic co-simulation cases. Precisely, VUnit verification components (which can be used independently of any other VUnit features) allow GHDL to be used as a direct replacement for any C function. This is so because interactions with top-level interfaces (AXI, Wishbone, Avalon, UART) are converted to simple function calls. The range of possibilities that this offers is endless. For example: […]
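The loading described above can be sketched with Python's `ctypes`, which wraps `dlopen`. Since no GHDL-built artifact is available here, the runnable part below calls into `libm` as a stand-in; the commented-out lines show the hypothetical call pattern for a GHDL binary (the path `./tb` and the two-argument `ghdl_main` signature are assumptions that depend on how the design was elaborated):

```python
import ctypes
import ctypes.util

# Load a shared object and call one of its exported symbols directly.
# libm is used only as a stand-in that is guaranteed to exist.
lib = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
lib.sqrt.restype = ctypes.c_double
lib.sqrt.argtypes = [ctypes.c_double]
print(lib.sqrt(2.0))

# Hypothetical pattern for a GHDL binary built as position-independent code
# (not runnable here without the elaborated artifact):
# ghdl = ctypes.CDLL("./tb")
# argv = (ctypes.c_char_p * 1)(b"tb")
# rc = ghdl.ghdl_main(1, argv)
```

The same idea carries over to Golang through cgo, since both languages can express C types and call into a loaded shared object.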
On the other hand, GHDL does not have multi-thread support, and it might be hard to handle at a low level. However, for large designs composed of multiple subsystems, it is common to test each subsystem independently and then integrate all of them. We can imagine a project with a few 'large' subsystems (let's say two) which are connected to each other through AXI interfaces, and which are expected to be deployed in a Zynq device (with two ARM cores). We have two VUnit projects, one for each subsystem, which use VHPIDIRECT to bind the AXI interfaces to C functions. Now, we just want to co-simulate four threads, hopefully attaching one to each of the cores in a 4-core workstation.

A naive approach is to execute two GHDL binaries (one for each of the subsystems) and a binary with two processes, each simulating one of the ARM cores. This requires at least two pipes to be set up, to allow the three binaries to communicate with each other. Depending on the requirements of the design, this can easily get pretty complex. Alternatively, we might try to analyze the sources of the two subsystems separately, and then integrate them in a single GHDL binary, defining separate threads in the […].

Hence, the solution is, once again, to load the binary of each of the subsystems as a shared library. This provides a kind of 'namespacing', so two different […].
I think it would not. AFAIK, no commercial simulator supports either embedding a design in a foreign application or building it as a shared library. With commercial simulators, pipes are normally used. Nevertheless, I think that the reference here should not be 'HDL simulators', but 'HDL compilers' instead. Verilator somehow supports generating shared objects, but that is because the output of the tool itself is C++ sources and a makefile; GCC/LLVM is then executed explicitly in order to generate the binary. See 'EXAMPLE C++ EXECUTION' in https://www.veripool.org/projects/verilator/wiki/Manual-verilator#VERILATION-ARGUMENTS and https://www.veripool.org/boards/2/topics/1910-Verilator-simulator-in-a-dynamic-object-so-. The second use case (having independent standalone pre-built objects/binaries integrated together) has been discussed but is not supported yet: https://www.veripool.org/boards/2/topics/412-Verilator-verilating-compiling-modules-into-separate-object-files. Note that this is precisely what I want to avoid by using multiple shared libraries; I'm putting the reference just for completeness.

For the record, building GHDL with […]. However, on older systems (Ubuntu Trusty), installing GHDL as above seems not to be enough. The binary generated with […].
As a preliminary comment, getting rid of the normal compile & link flow is right now just a plan (based on optimization purposes). I think this would mean getting rid of the gcc build, but not of llvm, as llvm supports JIT. I understand your use case (cosimulation) and I agree it would be stupid not to allow it. Right now, it could be possible to create a shared library as a result of -e. For the mcode backend, maybe a shared library of ghdl_mcode should be provided, like Python provides a shared library of the interpreter. I will think about it.
Of course. I was not panicking. The point is that I am building some tools on top of GHDL, precisely because it provides these unique features. I should be careful when 'advertising' it if the underlying dependencies are expected to break, even if it will not happen in the near future.
This is good news. As long as LLVM is supported, I believe that all the features will work, except for code coverage. Or, is it possible to use gcov with the LLVM backend?
Glad to hear that. Note that all of the use cases I mentioned are already supported by GHDL on recent systems (e.g. Ubuntu 18). This issue was about making it easier to make additional symbols visible. But binaries generated by GHDL can already be dynamically loaded.
I saw that you pushed some commits, but I cannot see a difference. Do you mean that it could be potentially supported but a user cannot do it now?
Since VHPI is not supported by mcode, I am not sure about how useful this could be. Sure, it would be interesting to set multiple groups of generic values and execute multiple tests. However: […]
Nevertheless, none of these points are critical. As commented, it is currently possible to use GHDL on some systems for the purpose I have described. It's ok if you take as long as you want to fix this, or if you just focus on other more important features.
Two features are missing from the LLVM backend: debugging and coverage. To be added.
I still think that with --bind/--list-link, you can create a shared library. It might not be trivial, but I don't see a showstopper.
About abort(): I don't think it is used during normal operations (even simulation failure). A reproducer would be useful.
Restarting the simulation could also be implemented.
Thanks for the remark.
Yes, you are correct. It is also possible with […]. I'm now trying to make it slightly easier by simplifying the syntax in #805.
#803 is a reproducer. Nevertheless, do not worry about it for now. This is only happening with VHDL 1993, which is a constraint that does not bother me for now, since I need VHDL 2008 in most of the practical use cases. I will open a more specific issue once I have solved #805.
That'd be interesting. I think it is tightly related to #154. I can imagine an API such as:

```c
ghdl_main();
ghdl_runfor(10);
ghdl_restart();
ghdl_rununtil(100);
ghdl_destroy();
```

Is this something I might try to implement or is some deep understanding of GHDL internals required?
> Is this something I might try to implement or is some deep understanding of GHDL internals required?
No, I don't think it requires a deep understanding. This is about the run time, so you only need to refer to the code of grt/.
@tgingold, after re-reading this discussion, I feel I did not understand it properly. Currently, I'm using […].
Yes, you can try to use […].
I confirm this as working.
**Is your feature request related to a problem? Please describe.**

I am wrapping GHDL in a C application: `ghdl_main` is called from `main`. Several C functions are bound to VHDL through VHPIDIRECT, but I believe that this is irrelevant. The executable is successfully built with `ghdl -e`. However, when I load it as a shared library (e.g., from a Python script), symbols are not found. This is because of `/usr/local/lib/ghdl/grt.ver`. If I modify it to add the names of the functions that I want to be visible (see #640 (comment)), it works. I found no option to provide a custom `grt.ver` script to `ghdl -e`.

**Describe the solution you'd like**

I think it would be desirable to provide a script through an arg such as `--version-script`, in order to avoid multiple users on a system being required to have separate installations of GHDL just to avoid conflicts with this file.

**Describe alternatives you've considered**

It is possible to work around this issue by using `gcc` or `llvm` directly, instead of `ghdl -e`. As commented in #640, the output of `--list-link` can be customized through `sed`. Thus, the version script can be removed, or adapted. Nonetheless, in the current use case the binary is compiled through VUnit, which is aware of neither `gcc` nor `llvm`. Therefore, I'd like to keep calling `ghdl -e`, to avoid adding 'compiler support' to VUnit.