
Update CI to use VUnit for running VHDL testbenches #47

Closed · wants to merge 4 commits

Conversation

@LarsAsplund (Collaborator)

This PR updates the CI to use VUnit. I also changed the committed application image to processor_check, to match what is built by the CI. Is there an application that would test the design even better? I'm not sure how much the external memories are used by processor_check.

I made some printing to the UART optional to speed up the simulation.

@stnolting added the enhancement label May 31, 2021
@stnolting (Owner)

Thank you very much for your contribution! 👍

Unfortunately, I am not (very) familiar with the VUNIT framework. Right now I am reading through the documentation. Could you give some more information regarding the new testbench setup?

I found the VHDL context keyword. Does VUNIT require VHDL-2008(+)?
Maybe it would be better to set up a new/additional testbench "framework" that uses the improved VUNIT scheme and keep the original testbench + plain GHDL setup as legacy or even default setup.

I see you are checking the UART output for "<RTE> Illegal instruction @ PC=0x00000A42, MTVAL=0xFFF027F3 </RTE>". Why are you expecting an exception?

Is there an application that would test the design even better?

This is a complex question. The basic CPU features are tested using the official RISC-V architecture tests (see the riscv-arch-test GitHub workflow). The processor_check test program aims to test as many aspects of the SoC as possible. Of course, this program cannot test all possible scenarios... But I am open to any ideas on how to improve on that 😉

I'm not sure how much the external memories are used by processor_check.

Right. The default testbench uses a very simple external memory to test the general functionality of the external memory interface: Do read/write accesses work at all? Do accesses with variable latency work?
Of course it would be better to verify the actual Wishbone transfers in a first step and then check for "address space" issues, but right now I have no idea how to do that.

@LarsAsplund (Collaborator, Author)

I found the VHDL context keyword. Does VUNIT require VHDL-2008(+)?

The basic/oldest functionality of VUnit supports VHDL-93, since simulator support for VHDL-2008 was poor when VUnit was first released. VHDL-2008 support has improved since then, and we only maintain VHDL-93 for the packages that had it from the beginning; newer development assumes VHDL-2008. Much of it could probably be VHDL-93, but we don't verify that, since all the simulators we support are capable of VHDL-2008. Because VHDL-2008 is backward compatible with earlier versions, you can still use -93 for the design in case the synthesis tool has poor VHDL-2008 support (often the case).

Could you give some more information regarding the new testbench setup?

In the first of the 4 commits I make a minimal modification to the testbench such that it is recognized as a VUnit testbench:

library vunit_lib;
context vunit_lib.vunit_context;

Simply imports the VUnit functionality.

generic (runner_cfg : string);

This is the string through which the Python part of VUnit can control the testbench. In this simple setup you don't really see much of that control but it is still needed.

test_runner : process
begin
test_runner_setup(runner, runner_cfg);
wait for 15 ms; -- Just wait for all UART output to be produced
test_runner_cleanup(runner);
end process;

Every VUnit testbench needs a main controlling process. I often label it test_runner, but it can be labelled anything. Many testbenches already have such a process which can be reused, but here I added a new one. Whether such a process already exists or not, the important thing is that it starts with test_runner_setup(runner, runner_cfg); and ends with test_runner_cleanup(runner);. The first procedure sets up VUnit based on the "instructions" provided through runner_cfg. test_runner_cleanup is the simulation exit point. It is important to know that a simulation not ending with this call will be considered a failing testbench by VUnit. That is the reason why VUnit works with plain VHDL asserts and doesn't require special assert functions like all other existing frameworks I know of. A failing assert will stop the simulation before test_runner_cleanup is called, and that is what VUnit detects. We do not do any log parsing to figure out what is going on.
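
As a hedged illustration of that mechanism (the signal names here are invented), a plain assert anywhere in the testbench is all it takes:

-- Hypothetical check; uart_byte and expected_byte are placeholder signals.
-- If this assert fires, the simulation stops before test_runner_cleanup is
-- reached, which VUnit reports as a failing testbench.
assert uart_byte = expected_byte
  report "Unexpected UART output"
  severity failure;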

The second commit changes the application image to what is generated by the CI. At first I just added asserts in the testbench to match the application image originally committed, but that image seems to be replaced when the CI runs, and as a consequence the testbench fails.

I see you are checking the UART output for "<RTE> Illegal instruction @ PC=0x00000A42, MTVAL=0xFFF027F3 </RTE>". Why are you expecting an exception?

I didn't make any effort to understand what the application should be generating. I just ran the simulation and observed the output. That output was then used for the asserts in the next commit. Maybe you can spot why there is an illegal operation? I thought it was part of the verification to test the behavior for illegal instructions.

The purpose of the 3rd commit is to make the testbench self-checking by adding asserts where there used to be reports only. I also moved the UART checking code to its own entity to avoid code duplication.

The fourth commit is where I start using VUnit in the CI. VUnit will scan for VHDL files, figure out their dependencies and compile them in a valid order based on that.

Maybe it would be better to set up a new/additional testbench "framework" that uses the improved VUNIT scheme and keep the original testbench + plain GHDL setup as legacy or even default setup.

In the end there can only be one CI deciding if the project is in a good or bad state, so from that point of view it doesn't make sense to maintain two different CIs. Regardless of approach, the testbench must be self-checking or the CI wouldn't be fully automated. From that point of view commits 2 and 3 are independent of the CI approach. Commits 1 and 4 are the ones doing the VUnitification. This was done as non-intrusively as possible and the VHDL testbench is essentially reused as is. Still, your existing non-VHDL code for setting up the simulator, compiling, and running the simulation is fully replaced. The proposed value of doing so is less code to maintain locally and a framework allowing your CI to grow in complexity without you having to update your current setup to support more features.

This is a complex question. The basic CPU features are tested using the official RISC-V architecture tests (see the riscv-arch-test GitHub workflow). The processor_check test program aims to test as many aspects of the SoC as possible. Of course, this program cannot test all possible scenarios... But I am open to any ideas on how to improve on that

Since VUnit promotes starting with verification early in the development cycle, on the low-level units of the design, the typical project ends up having a lot of testbenches, each containing many individual test cases. Having tests at that level makes it easier to reach corners hard to reach from the system level. It doesn't make sense to start writing a lot of low-level testbenches for this project now that it is in a done-ish state. However, once you have a bug report you should really consider writing a testbench at the lowest possible level where the bug can be detected, and in the process see if there are other interesting tests missing in the same region of the code. Another approach is to see if there are parts of the code not well covered by current tests and start writing tests for those. The current system-level testbench is also rather slow. Having many smaller testbenches allows VUnit to run them in parallel in different threads to speed up the CI. I typically run my regression tests with 10 parallel GHDL threads.

@LarsAsplund (Collaborator, Author)

The two PRs building on this one are where I start changing the VHDL code a bit more and introduce more VUnit features. These are not strictly needed but provide some additional value, as outlined in the commit comments. There is no point in making changes to working code if the outcome is something done in another style but without any additional value. "Value" is somewhat subjective, so you would have to be the judge of that.

@umarcor (Collaborator) commented Jun 2, 2021

Unfortunately, I am not (very) familiar with the VUNIT framework. Right now I am reading through the documentation. Could you give some more information regarding the new testbench setup?

@stnolting, please, let me explain my very personal relation with VUnit.

I started using it in 2017 or so, because I was using ModelSim/QuestaSim at uni, and I wanted to try GHDL as a serious alternative. Until then, it was painful to maintain separate scripts for building/simulating with each of the tools. VUnit made it easy to handle, because:

  • The same 5-10 line run.py script works regardless of the simulator.
  • An environment variable allows selecting one or the other.
  • Only 5 VHDL lines need to be added to the testbench.

For several months, that was the only reason for using VUnit: having a non-intrusive and easy-to-maintain script that allowed me to use "any" of the simulators. How unintrusive it was proved very valuable to me: I did not have to "learn" VUnit, and I could ignore the majority of the project. I must say the requirement to install Python just for that was very annoying (I did not use Python for anything else back then), but the benefits were much more important.

Then, in 2018, I had to test/verify some accelerators with FIFO I/O. Hence, I learnt about VUnit's Verification Components, and I used the AXI-Stream ones as explained in #52. Again, I focused on instantiating just the VCs I needed. I could ignore all other VUnit features. Moreover, since I was testing a core with FIFO interfaces, I did not care about most AXI-Stream features, so a minimal VC would work.

In 2019, I started researching cosimulation features with GHDL. I contributed some modifications to some internal VUnit types, in order to expose a foreign API (see VUnit/cosim). That evolved into documenting ghdl/ghdl-cosim, and later into enhancing the VHDL standard (see IEEE-P1076/VHDL-Issues#10). That's when I started to actually understand and use other VUnit features, such as the Logging library, the Check library or the Communication library (see http://vunit.github.io/vhdl_libraries.html). The internals of VUnit (like OSVVM's) are a cathartic breakthrough for any "synthesis/RTL only hardware/FPGA designer". You never look at the language the same way again.

During that whole process, my main focus was using VUnit locally. I was not really interested in CI, because I did not have any private service setup where I could actually test the non-open source projects I was working on. Also, I am interested in "enhanced" testbench infrastructure, for "fancy real-time inspection" of internal data/signals (see VUnit/vunit#568). However, I was very interested in containers, since I use/used Windows 10 and the "latest" GHDL was not available on Windows back then. Therefore, I got involved in providing containers with GHDL, and with VUnit. In the end, I came to be a co-maintainer of both GHDL and VUnit. I'm no expert in their codebases (yet), but I try to take care of CI, documentation and cosimulation. No surprise those are the areas I brought up in this repo too 😄.

Therefore, the "problem" here is that we are providing you in 2 weeks all the knowledge that I needed 4-5 years to gain, and Lars twice as much, at least. Once again, take it easy! You will find that the sequence of PRs from Lars follows kind of the same steps as I mentioned above. I did it one step each year; here it's one step per PR, and even multiple in the same one 😄.

Regardless of approach, the testbench must be self-checking or the CI wouldn't be fully automated. From that point of view commits 2 and 3 are independent of the CI approach. Commits 1 and 4 are the ones doing the VUnitification.

@stnolting, so, feel free to e.g. ask this PR to be split in two. If Lars can't do it, I can help.

Right. The default testbench uses a very simple external memory to test the general functionality of the external memory interface: Do read/write accesses work at all? Do accesses with variable latency work?
Of course it would be better to verify the actual Wishbone transfers in a first step and then check for "address space" issues, but right now I have no idea how to do that.

@stnolting, VUnit provides Verification Components for Wishbone, AXI and Avalon. All of those are based on virtual buses. Virtual buses are based on the communication library, which also provides virtual queues. There is a virtual RAM model that can be connected to the bus. The memory model supports permissions (per byte, IIRC). All the specific VCs (Wishbone, AXI, Avalon) can talk to the same virtual bus. With "virtual" in all these cases I mean "non synthesisable VHDL code", i.e. VHDL software.

As a result, you can plug in just a Wishbone slave VC, in order to test your Wishbone interface; see the sketch below. That's similar to the usage of the UART VC that Lars contributed/proposed. However, you can go much further than that. You can model all the communication infrastructure in the SoC (multiple memories, peripherals, etc.) by using VCs and virtual components only. Some VCs have a configurable stall probability. As a result, you can simulate bottlenecks and unwanted behaviour in the interconnect infrastructure of the SoC (the one between NEORV32 and all the external cores/components). Note, again, that you can do all of this; you don't need to. That's the beauty of the non-intrusive and extensible design philosophy in VUnit. That philosophy is the strongest asset of the project.
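
For illustration, a testbench fragment showing what a memory-backed Wishbone slave VC could look like. This is written from memory of VUnit's VC library; the exact constructor, generic and port names (and the NEORV32-side signal names) are assumptions to be checked against the VUnit documentation:

library vunit_lib;
context vunit_lib.vunit_context;
context vunit_lib.vc_context;

-- In the architecture declarative region: a virtual memory and a Wishbone
-- slave VC serving it (assumed API; new_wishbone_slave also takes ack/stall
-- probabilities, which is how bottlenecks can be simulated).
constant memory   : memory_t         := new_memory;
constant wb_slave : wishbone_slave_t := new_wishbone_slave(memory => memory);

-- In the architecture body, wired to the NEORV32 Wishbone interface
-- (wb_* signal names are illustrative):
wb_mem : entity vunit_lib.wishbone_slave
  generic map (wishbone_slave => wb_slave)
  port map (
    clk   => clk,
    adr   => wb_adr,
    dat_i => wb_dat_w,
    dat_o => wb_dat_r,
    sel   => wb_sel,
    cyc   => wb_cyc,
    stb   => wb_stb,
    we    => wb_we,
    stall => wb_stall,
    ack   => wb_ack
  );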

Overall, VHDL is a hardware description language, so it can naturally model any highly concurrent system. A highly concurrent system is not only the RTL of a design for an FPGA, but any abstraction level, from gates or nuclear/chemical processes to "agents" passing messages to each other. We are all too biased toward thinking that VHDL is for synthesis of digital electronics only. That's unarguably the main use case, but not the only one.

In the end there can only be one CI deciding if the project is in a good or bad state, so from that point of view it doesn't make sense to maintain two different CIs.

@LarsAsplund I think it does make sense for didactic purposes. We can have a single workflow with multiple jobs. So, we have a single point for deciding the good or bad state (the workflow). In there, we have multiple procedures for testing the same sources. That is useful for people to understand how to use NEORV32, regardless of their tooling background. Moreover, it allows us to detect regressions in the VUnit configuration/usage and/or in the custom scripts.

I do agree that using manual build/testing procedures is not desirable, and in the long term VUnit testbenches will contain many more tests. As I commented above, I believe that VCs are very valuable in this project, because there are several standard interfaces to be tested. So, all the actual unit and system testing ought to be done with VUnit (and/or with OSVVM). Yet, I think it's good to have a minimal bash script that runs just one or two testbenches. That can be useful for any user willing to pick NEORV32 as a black box and just replace the software. It will also be useful for very constrained development environments where Python is not available/allowed.

By the same token, if someone added support for FuseSoC/edalize, tsfpga, PyFPGA or any other project manager, I believe it makes sense to have them in a tools subdir, rather than neglect them because of "duplication". Time will tell which of them are worth keeping. Thanks to CI, we will know when any of them has rotted.

@tmeissner (Collaborator)

I found the VHDL context keyword. Does VUNIT require VHDL-2008(+)?
Maybe it would be better to set up a new/additional testbench "framework" that uses the improved VUNIT scheme and keep the original testbench + plain GHDL setup as legacy or even default setup.

@stnolting The RTL code of neorv32 is already VHDL-08 compliant. The open-source implementation flow for the UPduino v3 uses GHDL with the --std=08 option to analyse the RTL sources. The only needed change was renaming force, done by fixing #24.

@LarsAsplund (Collaborator, Author)

@umarcor

It will also be useful for very constrained development environments where Python is not available/allowed.

You're right. This happens from time to time and I keep forgetting about it. To avoid having every bash-supported testbench in two versions, one with the five VUnit lines and one without, we could use the VUnit standalone mode, which was added to support that use case. I can add the default value needed for runner_cfg to support that.
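
Concretely, that should only require giving the runner_cfg generic a default value; runner_cfg_default is provided by the VUnit VHDL libraries for exactly this purpose, so the same testbench can also be launched without the Python runner:

generic (runner_cfg : string := runner_cfg_default);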

To fully support that use case we should also make VUnit a submodule/subtree to make neorv32 more self-contained.

I had better leave the bash scripting to someone else, but note that GHDL isn't the most commonly used simulator out there, so those scripts should probably support some other simulators as well. At the very minimum, they should provide a compile order list. That can be generated by the CI with the help of VUnit.

@umarcor (Collaborator) commented Jun 4, 2021

To fully support that use case we should also make VUnit a submodule/subtree to make neorv32 more self-contained.

Since the expected main use case is to use VUnit in the natural mode, I don't think it's worth adding it as a submodule. Most users should get it through pip or as a container. We can provide a script that clones it if some user wants to use the standalone mode.

After the latest changes to master, this PR has conflicts. @LarsAsplund, shall I rebase it for you?

@LarsAsplund (Collaborator, Author)

Since the expected main use case is to use VUnit in the natural mode, I don't think it's worth adding it as a submodule. Most users should get it through pip or as a container. We can provide a script that clones it if some user wants to use the standalone mode.

People starting to use VUnit in standalone mode are usually not doing it for themselves. It's more a way to introduce VUnit into a larger organisation without friction. As long as you're only introducing a pile of VHDL code it's almost like adding a new external IP and everyone knows how to deal with that. If you're the first to create a dependency on Python then there can be a whole bunch of management decisions in the way. Whether or not you use Python on your local machine is no issue. I think containers would face the same problems so I guess cloning the repo from the bash script seems like the only option.

Note also that my initial intent with suggesting the standalone mode was to support the minimal bash script CI without duplicating testbenches:

Yet, I think it's good to have a minimal bash script that runs just one or two testbenches.

Nevertheless, I can add the standalone mode option and rebase while I'm at it.

@LarsAsplund (Collaborator, Author)

@stnolting How should we handle the application image VHDL file? The one in the repo is not the same as the one created when I run the CI. This means that creating a testbench that passes both in the CI and when you clone the repo becomes more complicated. The blinking test is good because it's fast. The processor check is slower but has better functional coverage. Should we have two different applications and solve it with a generic to the testbench containing the expected output, or is there a single application that serves both purposes sufficiently well? The generic would default to the blinking test, but the CI would provide another value.

@stnolting linked an issue Jun 5, 2021 that may be closed by this pull request: Possible enhancements to management of test bin/hex files
@stnolting (Owner) commented Jun 5, 2021

How should we handle the application image VHDL file? The one in the repo is not the same as the one created when I run the CI.

That is a problem related to #35.

Right now the neorv32_application_image.vhd in the repository contains the sw/example/blink_led example program. I thought it would be nice to have a minimal example that shows some kind of life signs when you just synthesize/simulate the processor without any modifications.

However, the default configuration of the processor uses the built-in bootloader, so this minimal example fallback is irrelevant.

For "verification" I am using the sw/example/processor_check program, which is automatically compiled and inserted into neorv32_application_image.vhd by sw/example/processor_check/check.sh when running the CI workflow.

This means that creating a testbench that passes both in the CI and when you clone the repo becomes more complicated. The blinking test is good because it's fast. The processor check is slower but has better functional coverage.

As far as I understand, the new testbench setup checks the UART output for some fixed string (expected => "Blinking LED demo program" & cr & lf,) to assert simulation success/failure, correct? Of course this is very specific to the actual application software being executed.

💡 How about this:
Any simulation scenario involving the default testbench includes the CPU executing some program. That means there will always be some main() function that returns at some time. We could write main()'s return value to some new CPU hardware "thingy" that communicates this to the testbench: if the application ran successfully, main() returns zero and the simulation passed; if there was an error, main() exited with a return code != 0 and the simulation failed.
(Most example programs in sw/example already do this, but the returned value is completely ignored by the CPU.)

So this would require no software changes at all as the return value is handled by low-level crt0.S start-up code.

From a hardware point of view, the simplest solution would be a new write-only CPU register that does some VHDL assert depending on the value (= the main() return value) written to it. But I am not sure if the testbench framework can detect this. 🤔
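
A minimal sketch of that idea (all names are invented for illustration; this is not existing NEORV32 RTL). Going by Lars' explanation earlier in the thread, VUnit would detect this without further help, since any failing assert stops the simulation before test_runner_cleanup is reached:

-- Simulation-only status check: crt0.S would write main()'s return value
-- to the (hypothetical) SIM_STATUS_ADDR register. A non-zero value fails
-- the assert, which stops the simulation and thus fails the VUnit run.
sim_status_check : process(clk_i)
begin
  if rising_edge(clk_i) then
    if (wren_i = '1') and (addr_i = SIM_STATUS_ADDR) then
      assert wdata_i = x"00000000"
        report "main() returned a non-zero exit code"
        severity failure;
    end if;
  end if;
end process;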

Another viable solution would be to add a new top entity signal (something like a 2-bit memory-mapped GPIO) that is reserved for simulation only and let crt0.S output the return value via this:

sim_status_o : std_ulogic_vector(1 downto 0); -- "0-": NOP, "10": simulation success, "11": simulation failed

@stnolting (Owner)

if the application ran successfully, main() returns zero and the simulation passed; if there was an error, main() exited with a return code != 0 and the simulation failed.

Of course this is a relatively poor metric for assessing simulation success/failure. It only shows the CPU's "opinion" on whether everything went fine or not.

But I think the VUnit framework provides some great options that could be added, too. For example, checking CPU-external actions like (Wishbone) bus transactions or the actual UART output.

@LarsAsplund (Collaborator, Author)

Yes, that is a bit of a self-fulfilling prophecy. What I did now as a first step is to provide a --ci-mode option to run.py which is passed to the testbench with a generic. Based on that the testbench decides what to expect from the UART. A better solution would be to pass the expected string as a generic to the testbench, but that is not trivial when the string contains control characters. Generic values are passed to the simulation on the command line, and having control characters in such a command confuses the simulator. To do this the string has to be encoded first and then decoded in the testbench. VUnit has functionality for this, but I wanted the first commit to stick to the basics. Once any expected string can be passed to the simulation, we can decide how that expectation is generated: by the CPU or explicitly provided by us.

@umarcor (Collaborator) commented Jun 5, 2021

So I guess cloning the repo from the bash script seems like the only option.

That is/was used in this repo for other purposes. It's not ideal, but it's acceptable enough. In the end, the expected use case is for people to be able to install open source EDA tooling either through their system package manager, or through containers. Custom installation through shell scripts is just for proving that it can be done.

It's a similar case with Makefiles. We are using Makefiles only for now, because that's what @stnolting is more familiar with, and the priority now is for him to understand the tools (GHDL, VUnit, Yosys, nextpnr, asciidoctor...). Yet, it is starting to be obvious where make/bash syntax is problematic, and why most open source projects tend towards using some Python-based infrastructure for managing simulation and synthesis. Hence, we will get there after we feel comfortable with the tools.

@umarcor (Collaborator) commented Jun 5, 2021

With regard to @LarsAsplund's last comment, I agree. We can find mechanisms for the testbench to communicate with both the hardware and the software without requiring any additional RTL in the UUT. The purpose of verification is to test a black box with known interfaces. It's ok to use checks and logging features in each component, for unit testing purposes. However, something such as adding an explicit memory-mapped register feels like overkill.

Moreover, VHDL (since VHDL-2008) already allows accessing any signal in the hierarchy through external names. That feature is not implemented in GHDL yet. As soon as it is available, it will be obvious how to solve the challenge we have here.
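
For reference, a hedged sketch of what that could look like in the testbench once GHDL supports it (the hierarchical path, signal name and type are invented for illustration):

-- VHDL-2008 external name: the testbench peeks at a signal deep inside the
-- UUT without adding any ports or RTL. Path, name and type are hypothetical.
alias sim_status is
  << signal .neorv32_tb.neorv32_top_inst.sim_status : std_ulogic_vector(1 downto 0) >>;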


neorv32 = prj.add_library("neorv32")
neorv32.add_source_files(root / "*.vhd")
neorv32.add_source_files(root / "../rtl/**/*.vhd")
Review comment (Collaborator):

Should be root / ".." / "rtl" / "**" / "*.vhd", so it works on Windows (not MSYS2).

Reply (Collaborator, Author):

I'll fix that

@stnolting (Owner)

@LarsAsplund

Yes, that is a bit of a self-fulfilling prophecy

Well, yeah somehow it is 😄
But somehow I like the idea of printing a message in the simulation console when the software main application returns and with what return code. I like that this approach is completely independent of the actual SoC configuration - so even setups without UART could use this. But I also see that this is just a very basic information that cannot / should not be used alone to tell if simulation was successful or not.

Based on that the testbench decides what to expect from the UART.

So this would be the main criterion for whether the simulation went fine, right? 🤔

Generic values are passed to the simulation on the command line, and having control characters in such a command confuses the simulator.

That would be very handy!


@umarcor

We are using Makefiles only for now, because that's what @stnolting is more familiar with, and the priority now is for him to understand the tools (GHDL, VUnit, Yosys, nextpnr, asciidoctor...).

True 😅 and thank you!

We can find mechanisms for communicating the testbench with both the hardware and the software without requiring any additional RTL in the UUT. The purpose of verification is to test a black-box with known interfaces. It's ok to use checks and logging features in each component, for unit testing purposes. However, something such as adding an explicitly memory mapped register feels overkill.

Actually, the current method already uses "additional" hardware to communicate with the simulation: the simulation mode of the UART, which boils down to another read/write flag in the UART's control register. This flag will also be synthesized but remains without an actual function.
(The flag provides read capability since the test program uses this information to run specific tests that should not be executed on real hardware because they might have SoC-external side effects.)

In contrast, implementing a simulation-only side channel for printing main()'s return code would result in no additional hardware (a write-only register: the written value is never used by any further logic, only for generating the content of an assert note).

@stnolting (Owner)

I have (finally) dived a little deeper into VUnit and I really have to say it is a very nice framework! 👍
I am looking forward to using its capabilities for testing the AXI-Lite interface (as well as the hopefully upcoming AXI-Stream and AXI-Full interfaces).

Well, yeah somehow it is 😄
But somehow I like the idea of printing a message in the simulation console when the software main application returns and with what return code. I like that this approach is completely independent of the actual SoC configuration - so even setups without UART could use this. But I also see that this is just a very basic information that cannot / should not be used alone to tell if simulation was successful or not.

Let's ignore this. I have to admit this is not the way to go.
I do not have a background in "advanced" verification, so I am used to testbenches that output "ok" at the end if everything went fine and just crash if there was an error. But I am very happy to get the chance to get into more sophisticated simulation/verification concepts 👍

Anyway. I am not sure, but your PR looks quite complete. VHDL-2008 support needs to be enabled in the GHDL simulation scripts, but I think that is almost all?! 🤔

The actual concept of "verifying" the processor should not be the main concern right now. As soon as there is a stable VUnit setup running one can improve on that.

@stnolting mentioned this pull request Jun 8, 2021
@LarsAsplund (Collaborator, Author) commented Jun 8, 2021

@stnolting

So this would be the main criteria if simulation went fine, right?

Right now it is. A step which I haven't performed yet is to use the Wishbone verification components and memory models from VUnit. That would enable us to do some verification of the memory traffic, if that traffic is predictable.

@LarsAsplund (Collaborator, Author)

I do not have a background in "advanced" verification, so I am used to testbenches that output "ok" at the end if everything went fine and just crash if there was an error. But I am very happy to get the chance to get into more sophisticated simulation/verification concepts

This is a good start as long as "ok" can't happen if something went wrong: at least you know when you're in a bad state. An improved approach would also be more specific about what went wrong and what went well.

@LarsAsplund (Collaborator, Author)

Anyway. I am not sure, but your PR looks quite complete. VHDL-2008 support needs to be enabled in the GHDL simulation scripts, but I think that is almost all?! 🤔

Right now the VUnit setup works. Now the "other CI" needs to compile VUnit such that it can handle these testbenches in standalone mode.

The actual concept of "verifying" the processor should not be the main concern right now. As soon as there is a stable VUnit setup running one can improve on that.

Agree

@umarcor (Collaborator) commented Jun 8, 2021

Right now the VUnit setup works. Now the "other CI" needs to compile VUnit such that it can handle these testbenches in standalone mode.

Since the software of the complete test does print quite a lot of information, I think it would be useful to keep a testbench which does nothing other than provide a clock and print the UART output. That would not need to use VUnit, and that might be executed in the bash script. Then, we will have multiple testbenches which are to be used with VUnit, where we use the VCs (UART, Wishbone, AXI-Stream, etc.). So, I propose duplicating sim/neorv32_tb.vhd to sim/neorv32_raw_tb.vhd, and removing the VUnit specific features from there. In upcoming VUnit related PRs, we will update neorv32_tb.vhd, and neorv32_raw_tb.vhd can be kept as a minimal testbench example. That will provide less experienced users some example they are familiar with, and which they can use as a reference for understanding the others.

@stnolting (Owner)

So, I propose duplicating sim/neorv32_tb.vhd to sim/neorv32_raw_tb.vhd, and removing the VUnit specific features from there. In upcoming VUnit related PRs, we will update neorv32_tb.vhd, and neorv32_raw_tb.vhd can be kept as a minimal testbench example. That will provide less experienced users some example they are familiar with, and which they can use as a reference for understanding the others.

I really like this approach! 👍
But I would recommend: neorv32_raw_tb.vhd -> neorv32_simple_tb.vhd or neorv32_minimal_tb.vhd

@umarcor (Collaborator) commented Jun 10, 2021

@LarsAsplund see #64 and #65.

@LarsAsplund (Collaborator, Author)

Yes, I see that you're already working on it. I'll back off so that you can complete it.

@umarcor (Collaborator) commented Jun 10, 2021

Basically, I split this PR in 3 steps. #64 was merged, #65 is ready to merge, and the remainder is branch PR1 in my fork: umarcor/neorv32@sim-vunit...umarcor:PR1.

@umarcor (Collaborator) commented Jun 11, 2021

We are all done here 🎉

@LarsAsplund, do you want to rebase #48 or shall I give it a try?

@umarcor mentioned this pull request Jun 11, 2021