
NVC -M support doesn't work with set_sim_option #946

Closed
mschiller-nrao opened this issue Jul 12, 2023 · 13 comments

mschiller-nrao commented Jul 12, 2023

It appears that the nvc simulator only supports -M as a global option, i.e. it must come before the "command" that tells nvc to analyze, elaborate, or simulate. This suggests it needs to be implemented the way heap_size currently is, so that -M 64m (or whatever value sets the maximum elaboration size) can be set for a design.

This is necessary to support large designs in nvc.

E.g., this does not work:

/usr/local/bin/nvc --work=wpfb_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/wpfb_lib --std=2008 --map=vunit_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/vunit_lib --map=xpm:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/xpm --map=altera_mf:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/altera_mf --map=ip_xpm_mult_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/ip_xpm_mult_lib --map=ip_stratixiv_mult_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/ip_stratixiv_mult_lib --map=ip_xpm_ram_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/ip_xpm_ram_lib --map=ip_stratixiv_ram_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/ip_stratixiv_ram_lib --map=common_components_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/common_components_lib --map=common_pkg_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/common_pkg_lib --map=dp_pkg_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/dp_pkg_lib --map=dp_components_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/dp_components_lib --map=technology_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/technology_lib --map=casper_counter_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_counter_lib --map=casper_adder_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_adder_lib --map=casper_multiplier_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_multiplier_lib --map=casper_multiplexer_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_multiplexer_lib --map=casper_requantize_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_requantize_lib --map=casper_ram_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_ram_lib --map=casper_filter_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_filter_lib --map=casper_mm_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_mm_lib --map=casper_sim_tools_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_sim_tools_lib --map=casper_pipeline_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_pipeline_lib --map=casper_diagnostics_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/casper_diagnostics_lib --map=r2sdf_fft_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/r2sdf_fft_lib --map=wb_fft_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/wb_fft_lib --map=wpfb_lib:/builds/8/sim/casper_dspdevel/vunit_out/nvc/libraries/wpfb_lib -H 64m -e -M 64m tb_tb_vu_wbpfb_unit_wide-tb -gg_diff_margin=3 -gg_coefs_file_prefix_ab=/builds/8/sim/casper_dspdevel/casper_wbpfb/data/mem/hex/run_pfb_m_pfir_coeff_fircls1_16taps_32points_16b -gg_coefs_file_prefix_c=/builds/8/sim/casper_dspdevel/casper_wbpfb/data/mem/hex/run_pfb_complex_m_pfir_coeff_fircls1_16taps_32points_16b -gg_data_file_a=UNUSED -gg_data_file_a_nof_lines=0 -gg_data_file_b=UNUSED -gg_data_file_b_nof_lines=0 -gg_data_file_c=/builds/8/sim/casper_dspdevel/casper_wbpfb/data/run_pfb_complex_m_noise_complex_8b_16taps_32points_16b.dat -gg_data_file_c_nof_lines=1600 -gg_data_file_nof_lines=1600 -gg_enable_in_val_gaps=True -gg_twid_file_stem=/builds/8/sim/casper_dspdevel/r2sdf_fft/data/twids/sdf_twiddle_coeffs -gg_wb_factor=4 -gg_nof_points=32 -gg_nof_chan=0 -gg_nof_wb_streams=4 -gg_nof_taps=16 -gg_fil_backoff_w=1 -gg_use_reorder=True -gg_use_fft_shift=False -gg_use_separate=False -gg_fft_out_gain_w=0 -gg_guard_w=2 -gg_guard_enable=True -grunner_cfg=active python runner : true,enabled_test_cases : 
,output path : /builds/8/sim/casper_dspdevel/vunit_out/test_output/wpfb_lib.tb_tb_vu_wbpfb_unit_wide.u_rnd_wb4_complex_noise_streams_6befaea9767119a6029df77bc93f88bd63940509/,tb path : /builds/8/sim/casper_dspdevel/casper_wbpfb/,use_color : true --no-save --jit -r --exit-severity=error --ieee-warnings=off
** Fatal: unrecognised elaboration option -M

But it would have worked if -H 64m -e -M 64m had instead been -H 64m -M 64m -e.

mschiller-nrao (Author) commented Jul 12, 2023

I hacked around this in Docker by creating a bash-script "nvc" executable that inserted -M as the first command-line argument, ahead of the arguments generated by VUnit. Looking at https://github.com/VUnit/vunit/blob/master/vunit/sim_if/nvc.py, there doesn't appear to be a way to do that without changing nvc.py. A mechanism was created for the heap size, but -H alone (without -M) didn't resolve the out-of-memory issue with my large design.
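For reference, the wrapper trick looks roughly like this. The original was a bash script; this is a minimal Python sketch of the same idea, and the nvc.real path is an assumption (the real binary renamed out of the way):

#!/usr/bin/env python3
# Hypothetical wrapper installed as /usr/local/bin/nvc, shadowing the real
# binary (assumed to have been renamed to /usr/local/bin/nvc.real).
# It prepends -M 64m as a global option before the VUnit-generated arguments.
import os
import sys

REAL_NVC = "/usr/local/bin/nvc.real"  # assumption: the renamed real binary

os.execv(REAL_NVC, [REAL_NVC, "-M", "64m"] + sys.argv[1:])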

umarcor (Member) commented Jul 12, 2023

I see where -H 64m -e is coming from: https://github.com/VUnit/vunit/blob/master/vunit/sim_if/nvc.py#L254-L256. But I don't see where -M 64m is coming from. @mschiller-nrao, did you set elab_flags to -M 64m? Did you try setting heap_size to 64m -M 64m instead? It might fail because the value is passed as a single argument rather than being split into a list on spaces, but it's worth a try.

/cc @nickg

mschiller-nrao (Author) commented Jul 12, 2023

So I tried sending it in via elab_flags, as in the commented-out line below:

from vunit import VUnit

vu = VUnit.from_argv()
vu.set_compile_option("ghdl.a_flags", ["-frelaxed", "-fsynopsys", "-fexplicit", "-Wno-hide"])
vu.set_compile_option("nvc.a_flags",["--relaxed"])
vu.set_sim_option("ghdl.elab_flags", ["-frelaxed", "-fsynopsys", "-fexplicit", "--syn-binding"])
vu.set_sim_option("ghdl.sim_flags", ["--ieee-asserts=disable","--max-stack-alloc=4096"])
vu.set_sim_option("nvc.heap_size", "64m")
#vu.set_sim_option("nvc.elab_flags", ["-M64m"])
vu.set_sim_option("disable_ieee_warnings",True)
vu.set_sim_option("modelsim.vsim_flags.gui",["-voptargs=+acc"])
vu.main()

But that's an interesting point: I don't think there's any error checking on nvc.heap_size, so I could probably pass "128m -M64m".

nickg (Contributor) commented Jul 12, 2023

Perhaps we should add an nvc.global_flags option? Something like:

diff --git a/vunit/sim_if/nvc.py b/vunit/sim_if/nvc.py
index c3391fe05da2..de0e3ed1ecef 100644
--- a/vunit/sim_if/nvc.py
+++ b/vunit/sim_if/nvc.py
@@ -39,6 +39,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
     ]
 
     sim_options = [
+        ListOfStringOption("nvc.global_flags"),
         ListOfStringOption("nvc.sim_flags"),
         ListOfStringOption("nvc.elab_flags"),
         StringOption("nvc.heap_size"),
@@ -225,6 +226,8 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
             source_file.get_vhdl_standard(), source_file.library.name, source_file.library.directory
         )
 
+        cmd += source_file.compile_options.get("nvc.global_flags", [])
+
         cmd += ["-a"]
         cmd += source_file.compile_options.get("nvc.a_flags", [])
 
@@ -252,6 +255,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
         cmd = self._get_command(self._vhdl_standard, config.library_name, libdir)
 
         cmd += ["-H", config.sim_options.get("nvc.heap_size", "64m")]
+        cmd += config.sim_options.get("nvc.global_flags", [])
 
         cmd += ["-e"]
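With an option like that, a run script could pass -M globally; a minimal sketch, assuming the option lands as in the diff above:

vu.set_sim_option("nvc.global_flags", ["-M", "64m"])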
 

Blebowski commented

Hi, I am hitting the same issue with the -L option, which makes NVC search for the vunit and osvvm libraries installed with nvc --install. nvc.global_flags sounds good.
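The same option would cover that use case too; for example (the library path here is purely illustrative):

vu.set_sim_option("nvc.global_flags", ["-L", "/path/to/installed/libraries"])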

nickg added a commit to nickg/vunit that referenced this issue Jul 16, 2023
This can be set for compilation/simulation to pass additional global
arguments to the nvc command. See issue VUnit#946.
eine pushed a commit that referenced this issue Jul 17, 2023
This can be set for compilation/simulation to pass additional global
arguments to the nvc command. See issue #946.
umarcor (Member) commented Jul 17, 2023

Should be fixed by #948.

@mschiller-nrao @Blebowski can you please confirm that the current master branch works for your use cases?

mschiller-nrao (Author) commented

I can confirm that the latest NVC (1.10.0.r2.g2372f19d) and the latest VUnit (c5edd32) work nicely for me.

Thanks @nickg and @umarcor

mschiller-nrao (Author) commented

(Now I just need to build a Docker image with these two combined for CI... or just wait until they filter into the existing Docker images.)

umarcor (Member) commented Jul 18, 2023

@mschiller-nrao the container image sim/osvb from hdl/containers includes master/main GHDL + NVC + Verilator + Icarus Verilog + VUnit + cocotb, plus a pinned OSVVM. It's available in the following registries: gcr.io/hdl-containers/sim/osvb, ghcr.io/hdl/sim/osvb and docker.io/hdlc/sim:osvb.

There are also sim/scipy, sim/octave, sim/gnuplot, which include matplotlib, numpy, octave... on top of sim/osvb. See https://hdl.github.io/containers/ToolsAndImages.html.

Those images are relatively similar to the one used for CI in this repo. See https://github.com/VUnit/vunit/blob/master/.github/workflows/images.yml#L60-L70. The relation between all container images is shown graphically in https://hdl.github.io/containers/dev/Graphs.html.

Nevertheless, should you want or need something more specific, such as an nvc/vunit image which just installs master VUnit on top of gcr.io/hdl-containers/nvc, that's an easy enhancement. Please open an issue in hdl/containers.

Typically, images in hdl/containers are updated once a week. However, if I see a relevant issue, such as this one, I can manually trigger the workflows to update the desired set of images. In this case, NVC was not included in sim/osvb until yesterday, so all the images were regenerated in the last 12-24 hours and they include the latest NVC and VUnit.

mschiller-nrao (Author) commented Jul 18, 2023

@umarcor Nice! I wasn't aware of sim/scipy. I was originally using ghdl/vunit:llvm-master for GHDL and ghcr.io/vunit/dev/nvc:latest for NVC (which I had to install VUnit into from my CI script to make it work), but I jury-rigged both GHDL and NVC to work with my own Dockerfile:

FROM ghdl/vunit:llvm-master
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y build-essential automake autoconf flex check llvm-dev pkg-config zlib1g-dev libdw-dev libffi-dev libzstd-dev git
RUN cd /tmp; git clone https://github.com/nickg/nvc.git; cd nvc; ./autogen.sh; mkdir build && cd build; ../configure; make; make install
RUN python3 -m pip install pytest --progress-bar off
RUN python3 -m pip install numpy --progress-bar off
RUN cd /tmp; git clone --recurse-submodules https://github.com/VUnit/vunit.git; cd vunit; pip3 install .

But it'll be better to use a publicly available image than my jury-rigged local one.

umarcor (Member) commented Jul 18, 2023

@mschiller-nrao for completeness: I maintain hdl/containers, ghdl/docker, and the CI in this repo. To keep that manageable, I use the same base image "everywhere". That is currently Debian Bullseye "slim", but I have been transitioning to Debian Bookworm "slim" over the last few weeks.

In ghdl/docker and hdl/containers, when a tool is built it's saved as a "package" image (a scratch container with pre-built binaries and assets). Then, whenever that tool is to be included in an image, the package is copied in instead of being rebuilt. As a result, all the images use a single build of GHDL and a single build of NVC. VUnit can always be installed last because it does not need compilation and depends only on colorama.

NOTE: this is only true since yesterday. I had not combined the GHDL and NVC packages in a single image before, so I had not realised they were built with different versions of LLVM. Now both are built with LLVM 11 on Debian Bullseye and with LLVM 14 on Debian Bookworm.

So, the equivalent of your Dockerfile is almost the same. Therefore, I would recommend that you use sim/scipy, but keep your Dockerfile around: should you need to quickly test an update to NVC, VUnit, or GHDL, your Dockerfile lets you do so with at most a 24-hour delay after GHDL's last update, whereas sim/scipy might need up to one week (depending on my availability).

mschiller-nrao (Author) commented Jul 18, 2023

sim/scipy worked out of the box on my CI system, so that's pretty effective... Too bad the commercial tools don't have convenient images like this; I still have to build my own for Questa and Vivado (and keep it internal for licensing reasons). But at least the open-source flow should be more reliable when I'm working on an open-source project and need both GitHub Actions and my internal GitLab runners to use the same image.

(I tend to do first verification manually in Questa, and then get GHDL and now NVC working so that continuous integration doesn't require a Questa license; CI can end up running many, many jobs depending on how prolifically engineers check in files. My CI allows Questa to be run, but it doesn't run automatically: an engineer has to trigger the pipeline manually for Questa and Vivado, to avoid licensing issues. Further along in the program I do intend to make Vivado run automatically on production branches and the like.)

mschiller-nrao (Author) commented

Working with sim/scipy!
