NVC -M support doesn't work with set_sim_option #946
I hacked this to work in Docker by creating a bash wrapper script named "nvc" that inserts -M as the first command-line argument, ahead of the arguments VUnit generates. Looking at https://github.com/VUnit/vunit/blob/master/vunit/sim_if/nvc.py, there doesn't appear to be a way to do that without changing nvc.py. A mechanism was created for the heap size, but the heap size alone (without -M) didn't resolve the out-of-memory issue with my large design.
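For reference, that workaround can be sketched roughly as follows (the paths and the 64m value are illustrative assumptions, not the exact script used):

```shell
# Create a wrapper script named "nvc" earlier in PATH than the real binary.
# It splices the global -M option in before the arguments VUnit generates,
# since nvc only accepts global options before the -a/-e/-r command.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/nvc" <<'EOF'
#!/bin/sh
# Forward everything to the real nvc, injecting -M first (path is assumed).
exec /usr/local/bin/nvc -M 64m "$@"
EOF
chmod +x "$HOME/bin/nvc"
export PATH="$HOME/bin:$PATH"
```

With this in PATH, VUnit's generated `nvc -H 64m -e ...` invocations effectively become `nvc -M 64m -H 64m -e ...`, which is the ordering nvc requires.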
I see. /cc @nickg
So I tried sending it in via elab_flags, as in the commented-out line below:
But that's an interesting point: I don't think there's any error checking on nvc.heap_size, so I could probably pass "128m -M64m".
Perhaps we should add a `nvc.global_flags` option:

```diff
diff --git a/vunit/sim_if/nvc.py b/vunit/sim_if/nvc.py
index c3391fe05da2..de0e3ed1ecef 100644
--- a/vunit/sim_if/nvc.py
+++ b/vunit/sim_if/nvc.py
@@ -39,6 +39,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
     ]
     sim_options = [
+        ListOfStringOption("nvc.global_flags"),
         ListOfStringOption("nvc.sim_flags"),
         ListOfStringOption("nvc.elab_flags"),
         StringOption("nvc.heap_size"),
@@ -225,6 +226,8 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
             source_file.get_vhdl_standard(), source_file.library.name, source_file.library.directory
         )
+        cmd += source_file.compile_options.get("nvc.global_flags", [])
+
         cmd += ["-a"]
         cmd += source_file.compile_options.get("nvc.a_flags", [])
@@ -252,6 +255,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
         cmd = self._get_command(self._vhdl_standard, config.library_name, libdir)
         cmd += ["-H", config.sim_options.get("nvc.heap_size", "64m")]
+        cmd += config.sim_options.get("nvc.global_flags", [])
         cmd += ["-e"]
```
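To make the ordering concrete, here is a small self-contained sketch (simplified names, not the actual VUnit code) of how the elaborate command would be assembled with the proposed nvc.global_flags option. The global flags land between -H and -e, which is where nvc accepts them:

```python
# Simplified sketch of the elaborate-command assembly after the proposed
# patch: global flags are appended before the "-e" command.
def build_elaborate_cmd(sim_options):
    cmd = ["nvc"]
    cmd += ["-H", sim_options.get("nvc.heap_size", "64m")]
    cmd += sim_options.get("nvc.global_flags", [])  # proposed addition
    cmd += ["-e"]
    return cmd

print(build_elaborate_cmd({"nvc.global_flags": ["-M", "64m"]}))
# → ['nvc', '-H', '64m', '-M', '64m', '-e']
```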
Hi, I am hitting the same issue.
This can be set for compilation/simulation to pass additional global arguments to the nvc command. See issue VUnit#946.
Should be fixed by #948. @mschiller-nrao @Blebowski, can you please confirm that the current master branch works for your use cases?
(Now I just need to build a Docker image with these two combined for CI... or just wait until they filter into the existing Docker images.)
@mschiller-nrao There are also other container images relatively similar to the one used for CI in this repo; see https://github.com/VUnit/vunit/blob/master/.github/workflows/images.yml#L60-L70. The relation between all the container images is shown graphically in https://hdl.github.io/containers/dev/Graphs.html. Nevertheless, should you want/need something more specific, that can be arranged. Typically, images in hdl/containers are updated once a week. However, if I see any relevant issue, such as this one, I can manually trigger the workflows to have the desired set of images updated. In this case, NVC was not included in sim/osvb until yesterday, so all the images were generated in the last 12-24h and they include the latest NVC and VUnit.
@umarcor NICE! I wasn't aware of sim/scipy. I was originally using ghdl/vunit:llvm-master for GHDL and ghcr.io/vunit/dev/nvc:latest (in which I had to install VUnit in my CI script to make it work) for NVC, but I jury-rigged this to work with both GHDL and NVC with my own Dockerfile:

```dockerfile
FROM ghdl/vunit:llvm-master
```

But it'll be better to use a publicly available image than my jury-rigged local one.
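The rest of that Dockerfile wasn't posted; as a rough sketch (the NVC installation step here is a placeholder assumption — the original file may have installed NVC by other means), a combined image could look like:

```dockerfile
# Hypothetical combined image: GHDL + VUnit base, with NVC added on top.
FROM ghdl/vunit:llvm-master
# Placeholder: install NVC by whatever means fits your setup (distro package,
# release tarball, or a build from source). The package name is an assumption.
RUN apt-get update \
 && apt-get install -y --no-install-recommends nvc \
 && rm -rf /var/lib/apt/lists/*
```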
@mschiller-nrao For completeness: I maintain hdl/containers, ghdl/docker and the CI in this repo. To make it less cumbersome than it would otherwise be, I use the same base image "everywhere". That is currently Debian Bullseye "slim", but I have been transitioning to Debian Bookworm "slim" over the last few weeks. In ghdl/docker and hdl/containers, when a tool is built, it's saved as a "package" image.

NOTE: This is only true since yesterday. I had not combined the GHDL and NVC packages/pre-builts in a single image before, so I had not realised they were built with different versions of LLVM. Now both of them are built with LLVM 11 on Debian Bullseye and with LLVM 14 on Debian Bookworm.

So, the equivalent to your Dockerfile is:
As you can see, it's almost the same. Therefore, I would recommend that you use sim/scipy, but keep your Dockerfile around. Should you need to quickly test some update to NVC, VUnit or GHDL, your Dockerfile will let you do so with at most a 24h delay after GHDL's last update. Conversely, sim/scipy might need up to one week (depending on my availability).
sim/scipy worked out of the box on my CI system, so that's pretty effective. Too bad the commercial tools don't have convenient images like this; I still have to build my own for Questa and Vivado (and keep them internal for licensing reasons). But at least the open-source tools should be more reliable when I'm working on an open-source project and need both GitHub Actions and my internal GitLab runners to use an image.

(I tend to do first verification in Questa manually, and then get GHDL and now NVC working so my continuous integration doesn't require a Questa license, since CI might end up doing many, many runs depending on how prolific the engineers checking in files are. My CI is configured to allow Questa to be run, but it doesn't run automatically; an engineer has to manually run the pipeline for Questa and Vivado to avoid licensing issues. Though in the future, when I'm further along on my program, I do intend to make Vivado run automatically on production branches and such.)
It appears that the nvc simulator only supports -M as a global option (i.e. before the "command" that tells nvc to analyze, elaborate or simulate). This suggests that it needs to be implemented the way "heap_size" currently is, so that -M64m (or whatever value sets the maximum elaboration size) can be set for a design.

This is necessary to support large designs in nvc.

E.g. this does not work:

But it would have worked if -H 64m -e -M 64m had been -H 64m -M 64m -e.