
UVM support: allow SV testbenches to specify tests in different approaches #328

Closed
krishnan-gopal opened this issue Apr 20, 2018 · 51 comments

Comments

@krishnan-gopal

krishnan-gopal commented Apr 20, 2018

UVM testbenches are widely used for verification, and VUnit could be used as a flow controller for such methodologies if some adaptations are made.

UVM testbenches have a top-level SystemVerilog module with a slightly different style than a VUnit-based SystemVerilog testbench. Nevertheless, it would be nice to add support for this style, either through SystemVerilog or Python-based adaptations.
Here are the differences:

  • The top-level module that is passed to the simulator calls run_test() (from the UVM base), which dynamically creates the class-based testbench hierarchy. The argument for this function is specified at run time using the simulator argument +UVM_TESTNAME=. Testcases/testsuites are not declared within the top module in the SystemVerilog world.
  • Constrained-randomization-based testbenches require a seed value to be passed on the simulation command line at run time. This cannot be specified within the SystemVerilog/HDL code and therefore needs to come from the Python world.
  • The simulation stop command is issued by a $finish task deep in the UVM base classes. So a testcase run actually ends the simulation without returning control to the VUnit runner, and the VUnit runner assumes that the test simulation just stopped unexpectedly.
  • UVM has its own messaging/logging/verbosity. A pass/fail test result is decided by parsing the UVM_ERROR and UVM_FATAL messages in the simulation transcript/log.
  • A test-suite description contains the testcase name along with the number of random seeds it has to be simulated with. The list of seeds may also be specified with fixed values. The testcase together with the seed can be seen as a test configuration in VUnit terms. This would ideally live in the Python world (for example in a UVM-TB-specific run.py file).
  • UVM has an internal timeout mechanism which triggers a UVM_ERROR message and then exits the simulation. This timeout value can be set either in the testbench code or with the simulation command-line option +UVM_TIMEOUT=<>. It should be supported by all simulators and allows the user to extend the timeout for specific tests which may run longer than the default.
  • The UVM messaging mechanism also comes with its own verbosity handling. All messages of type UVM_INFO are affected by the verbosity level set for that particular message or scope. The verbosity level is specified by the simulation command-line option +UVM_VERBOSITY=<>. All messages of type UVM_WARNING, UVM_ERROR and UVM_FATAL are displayed regardless of the verbosity level.
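
As a rough sketch of how these items could be driven from a run.py (illustrative only: it assumes Questa/ModelSim, VUnit's modelsim.vsim_flags sim option and the Verilog flavour of the VUnit class; tb_top, my_test1 and the values are placeholders):

  # run.py sketch (illustrative only, assumes Questa/ModelSim)
  from vunit.verilog import VUnit

  prj = VUnit.from_argv()
  lib = prj.add_library("lib")
  lib.add_source_files("tb_top.sv")

  tb = lib.test_bench("tb_top")

  # UVM controls passed down as simulator plusargs, plus the Questa seed option:
  #   +UVM_TESTNAME  selects the test class at run time
  #   +UVM_TIMEOUT   overrides the global UVM timeout
  #   +UVM_VERBOSITY sets the UVM_INFO verbosity filter
  tb.set_sim_option("modelsim.vsim_flags", [
      "+UVM_TESTNAME=my_test1",
      "+UVM_TIMEOUT=1000000",
      "+UVM_VERBOSITY=UVM_MEDIUM",
      "-sv_seed", "12345",
  ])

  prj.main()
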
@kraigher
Collaborator

I think it sounds like a good idea to add first-class support for UVM to VUnit. To make it work it would help a lot if an experienced UVM user like yourself provided us with a basic example test bench using UVM so we can start experimenting with integrating it into VUnit. It would be good if the test bench contained tests which pass and fail in various ways. Do you think you could contribute that?

We can use this issue to discuss pros and cons of different ways of integrating UVM.

@krishnan-gopal
Author

krishnan-gopal commented Apr 23, 2018

Ok. I have added a very basic example of a UVM testbench. Please extract the zip file and refer to the README.txt file inside it. The examples have been tested with Questasim 10.6c but should work in previous versions as well. Please let me know in case of doubts/issues/comments.

Example uvm_tb_1 uses a very VUnit-compliant format, where the testcases are declared within the tb_top module; basically, the adaptations are done in the SystemVerilog world. The run.py is very similar to most VUnit Python run scripts.

Example uvm_tb_2 is one where I took the liberty of trying out different UVM features as well as regression/flow support. I did not change the SystemVerilog/UVM world, so I went a little wild with the Python code in the run.py file and got much of the fancier functionality working fine. I am a novice with Python, so please feel free to modularize it or make it more elegant/user-friendly.

Both examples can be run with two testcases: my_test1 (should always pass) and my_test_failing (should always fail). I have added a list of issues with descriptions in ISSUES.txt.

vunit_uvm_example.zip

PS: If you want to try it out with Modelsim (with a SystemVerilog HDL license), you can set SIMULATOR = "MODELSIM" instead of "QUESTASIM" in the run.py file.

@kraigher
Collaborator

Thanks, this will make it easier for us. I will have a look.

@LarsAsplund
Collaborator

@krishnan-gopal

  • The simulation stop command is issued by a $finish task deep in the UVM base classes. So a testcase run actually ends the simulation without returning control to the VUnit runner, and the VUnit runner assumes that the test simulation just stopped unexpectedly.

Are there any other somewhat established methods of exiting an UVM testbench? This is a bit related to #293

  • UVM has its own messaging/logging/verbosity. A pass/fail test result is decided by parsing the UVM_ERROR and UVM_FATAL messages in the simulation transcript/log.

Is the parsing functionality part of UVM or is this something that the simulator vendors provide? Is this the only way to find out the status of a test run? Is there no way to programmatically ask UVM for the number of errors?

@krishnan-gopal
Author

@LarsAsplund

Are there any other somewhat established methods of exiting an UVM testbench? This is a bit related to #293

Not that I know of. To prevent the $finish from being executed automatically, you can add this code before the run_test() call in tb_top.sv, so that control goes back to the VUnit SV runner. Still, I see this only as a work-around for development purposes. Btw, VUnit still returns a fail here, so this needs to be investigated.

uvm_root root;
root = uvm_root::get();
root.finish_on_completion = 0; // keep UVM from calling $finish when the test completes

It's not related to #293, which deals with recognizing known failures and flagging them as an OK instead of a FAIL.
The requirement here is that VUnit does not assume any PASS/FAIL on its own, and the evaluation of the errors is left completely to an external, non-VUnit mechanism.

Is the parsing functionality part of UVM or is this something that the simulator vendors provide? Is this the only way to find out the status of a test run? Is there no way to programmatically ask UVM for the number of errors?

UVM does not do any parsing of the transcript. It has an internal counter/registry which keeps track of all the UVM_<> messages issued within the testbench. Using that, it's possible to query this within the UVM/SystemVerilog code towards the end (in a post-simulation check phase) in order to evaluate a PASS/FAIL for the test. This is left to the user.
The transcript parsing is done outside the UVM/SystemVerilog code after the simulation actually exits. Some vendors provide support commands for it, but there's no established mechanism. By specifying rules/patterns, the user can parse the transcript for error/warning/fatal messages (including those which are not issued by the UVM world and do not contain UVM_<> tags). This also allows the user to 'exclude' certain error messages which are actually known failures and shouldn't contribute to a PASS/FAIL decision (similar to #293).

BTW, I have added two more UVM fundamentals to the list of differences in the first message. Please have a look.

@kraigher
Collaborator

@krishnan-gopal

Do you think it is reasonable to add some VUnit-specific stuff to a UVM test bench, or do you think VUnit needs to run unmodified UVM test benches for it to be useful? Since UVM does not standardize test pass/fail and leaves it up to the user, it seems we should add a standardized mechanism for it.

Regarding test case discovery, it would be convenient if test case names could be extracted robustly via some simple parsing of the Verilog files, such that the test case names are known without running any simulation. This is needed to make, for example, the --list command work or the per-test-case configuration of generics. By the way, does UVM want test cases to run in individual simulations or in the same simulation?

Regarding the pass/fail mechanism, it is not good practice to just rely on the absence of errors to say a test passed. There must also be a completion message for a test to be considered a pass. In VUnit this means that any test that does not finish and call the cleanup code will fail. Is there something similar in the UVM world where the test must reach a final statement to be deemed successful?

@LarsAsplund
Collaborator

@krishnan-gopal
Just like @kraigher mentions in point 3, I prefer that we exit in a VUnit-controlled manner. This is how we handle all other frameworks in VHDL. They are not allowed to stop when a test completes successfully. When they fail they either stop or they provide a way to extract error information such that VUnit knows the status.

Your suggestion indicates that it should be possible to do something similar.

The relation to #293 is that a UVM testbench would be an "expected failure" that must be handled in a special way if we can't do something similar, and this is what I would like to avoid.

@kraigher
Collaborator

@krishnan-gopal
Regarding:

  // NOT NICE:
  // all the testcases used in the TB must be declared and listed here
  // In UVM this is usually done outside the HDL world

What is the problem with listing the tests inside the test bench in UVM? In this case they are always kept together with the file through which they are run. In VUnit this is parsed so that the test names are also programmatically accessible in the Python world. In this way typical simple test benches would need no Python configuration. This could be made the single source of information, such as:

  // NOTE: Proposal only
  import vunit_uvm_pkg::*;
  `include "vunit_uvm_defines.svh"
  `UVM_TEST_SUITE begin
    `UVM_TEST_CASE(my_test1);
    `UVM_TEST_CASE(my_test_failing)
  end

Here UVM_TEST_SUITE and UVM_TEST_CASE would be VUnit-specific macros. Beneath some of the macros there could be the option to notify UVM to disable $finish, if possible, and let VUnit handle that instead. There might also be cleanup code that asks UVM for the test status programmatically.

If desirable, the seed could even be an argument to the macro when a specific seed is wanted for a test. This could then be overridden in the Python world, or a test could be run for several values of the seed.

@krishnan-gopal
Author

krishnan-gopal commented Apr 24, 2018

Hi. Here are my thoughts on your comments. I have also tried to summarize them to answer your questions.

Do you think it reasonable to add some VUnit specific stuff to an UVM test bench or do you think VUnit needs to run unmodified UVM test benches for it to be useful? Since it seems UVM does not standardize test pass/fail and leaves it up to the user it seems we should add a standardized mechanism for it.

In general, it should be noted that the HDL part of the SystemVerilog testbench (tb_top.sv, which also contains the instance of the design) is not OOP-based. This means that every user would have to write additional VUnit-specific code within the top module for every testbench used. I would prefer writing a single adapter/layer which can be re-used for all class-based OOP testbenches like UVM, and of course Python is the ideal candidate for that code. I am also a fan of keeping testbench code independent of the flow/process in case the project decides to move to another tool. However, maybe it's possible to put the VUnit-specific UVM code into a separate SystemVerilog module/class/scope, keeping the rest of the TB code intact (for example using a second tb_top module). Let me try out some options.

By the way does UVM want test cases to run in individual simulations or in the same simulation?

Each testcase runs in an individual simulation.

Regarding test case discovery it would be convenient if test case names can be extracted robustly via some simple parsing of the verilog files such that the test case names are known without running any simulation. This is to make for example the --list command work or the per-test-case configuration of generics.

I would like the --list to work too. Functionally, I would define a testcase as a combination of the test class (the argument specified with +UVM_TESTNAME) and a seed. Remember that the test class is usually run over several seeds, sometimes thousands, and these seed values may not be known prior to launching the regression run. I would like VUnit to recognize a test-class+seed combination as a single testcase, so that I can get dedicated simulation results, debug waveforms, etc. There's no way I can pre-calculate all possible seed values for each testcase and specify them with compile-time HDL code. Is there some way I can declare only the test classes in the SystemVerilog code (so that --list works, for example) and then specify seed values dynamically at simulation time through the Python code?

Regarding pass fail mechanism it is not good practise to just rely on the absence of errors to say a test passed. There must also be a completion message for a test to be considered a pass. In VUnit this means that any test that does not finish and call the cleanup code will fail. Is there something similar in the UVM world where the test must reach the final statement for a test to be deemed sucessful?

There is no completion message explicitly needed in UVM. It's all handled implicitly deep within the base classes. The testcase is only responsible for creating and sending the stimuli and does not usually check anything. The checking is done passively by dedicated component(s) which monitor transactions and issue error messages.
UVM fundamentals (you can read about UVM phasing and objections in the UVM reference):
Once the testcase simulation is launched and the run_test() task is called, the testbench is brought through several so-called 'UVM phases'. In the UVM methodology, each component in the testbench is allowed to 'raise an objection' which needs to be eventually 'lowered' in order for the entire testbench simulation to move from one phase to the next. All the UVM components run in parallel with the testcase code, and after the testcase run_phase() task (where all the sequences/stimuli are executed) is complete (or if a timeout is triggered), the clean-up phases are automatically executed, where the number of error messages can also be fetched, and a summary of the different message types is displayed. The final phase can be reached only after all the previous phases are successfully completed (either with or without error messages). Only after that is the $finish system task called (without returning control to the VUnit runner). In case of a timeout, the clean-up phase will register and display a UVM_ERROR message which mentions the timeout information.
SUMMARY: Regardless of any error type or timeout, the clean-up phases will always be executed and the summary/report is displayed. After that, and before the final phase, the error count can be obtained (or checked externally in the transcript), and an absence of errors implies a PASS.

@krishnan-gopal krishnan-gopal changed the title UVM support: alllow SV testbenches without test cases UVM support: allow SV testbenches without test cases Apr 24, 2018
@LarsAsplund
Collaborator

@krishnan-gopal

I can declare only the test-classes in the SystemVerilog code (so the --list works for example) and then specify seed values dynamically during simulation time through the python code ?

If you use TEST_CASE to specify the test cases, you can use configurations to generate the seeds. When using --list you'll see all combinations of test case and configuration.
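
For illustration, a minimal run.py sketch along those lines (assuming Questa's -sv_seed flag and that add_config accepts a per-configuration sim_options dict; the seed count and names are placeholders):

  from random import randint

  from vunit.verilog import VUnit

  prj = VUnit.from_argv()
  lib = prj.add_library("lib")
  lib.add_source_files("tb_top.sv")
  tb = lib.test_bench("tb_top")

  # One VUnit configuration per seed: --list then shows every
  # test case / seed combination.
  for seed in [randint(0, 2**31 - 1) for _ in range(5)]:
      tb.add_config(
          name="seed_%d" % seed,
          sim_options={"modelsim.vsim_flags": ["-sv_seed", str(seed)]},
      )

  prj.main()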

SUMMARY: Regardless of any error type or timeout, the clean-up phases will always be executed and the summary/report is displayed.

In VUnit we also have the concept of phases and it's also possible to raise an objection for the VHDL test runner although we call it locking a phase. However, in VHDL we will not reach the cleanup phase if something happens that stops the simulation. That's why the Python test runner considers an exit without passing the cleanup phase an error. What happens with a UVM testbench if you do something like dereferencing a null pointer?

@krishnan-gopal
Author

@LarsAsplund

If you use TEST_CASE to specify the test cases you can use configurations to generate the seeds. When using --list you'll see all combinations of test case and configuration

You mean that I use SystemVerilog code to generate randomized seed values at compile time? How can I do that? I thought that a configuration has to be fixed at compile time in order to be recognized?

That's why the Python test runner considers an exit without passing the cleanup phase an error. What happens with a UVM testbench if you do something like dereferencing a null pointer?

Let me check this and try to find an easier way. Maybe that's why checks are done externally by parsing the transcript. The report() phase should display UVM_ERROR : 0 for a PASS, which can be seen in the transcript.

@LarsAsplund
Collaborator

@krishnan-gopal

You mean that I use SystemVerilog code to generate randomized seed values at compile time? How can I do that? I thought that a configuration has to be fixed at compile time in order to be recognized?

No, I meant configurations created in Python but I think I misunderstood you. Are you looking for the ability to specify a test case that will be listed as a single entry when doing --list and then, during the SV simulation, decide that the test case needs to run n times with different seeds so that the test report will list n entries for that test case?

@krishnan-gopal
Author

krishnan-gopal commented Apr 25, 2018

Exactly. But the simulation doesn't need to run n times. It only has to take the seed from the command line and treat it as a dedicated test case, running a single simulation. However, in regression mode, each test case would be run over each specified/random seed, and in that case the test report will list all those seeds.

@LarsAsplund
Collaborator

@krishnan-gopal
Based on what you know about adding Python configurations and custom command line options, what do you feel is missing?

@krishnan-gopal
Author

I think that in the uvm_tb_2 example (though it's rather dirty Python code) I have tried to add the basic necessities of running a UVM testbench. I don't think anything more is needed for basic UVM testbench usage. There are a few things left to do:

  • I am still checking if there's a simple way to fetch the PASS/FAIL result. I feel that it's better to parse the transcript for tags/labels of error messages. It should be noted that not all the error messages in a UVM testbench have the UVM_ERROR tag. There could be a non-UVM component which has its own format. So, a customizable list of error patterns could be specified and parsed.
  • Map the VUnit verbosity argument to the simulator argument +UVM_VERBOSITY=<>.

@kraigher
Collaborator

@krishnan-gopal The VUnit -v/--verbose flag traditionally has no impact on the RTL-level verbosity. It just controls whether the simulator output is shown on stdout even for passing tests or not.

@kraigher
Collaborator

kraigher commented Apr 25, 2018

@krishnan-gopal The philosophy in VUnit has always been to stop on errors and failures such that the test is failed due to not reaching the test runner cleanup. Maybe that is not feasible in SystemVerilog.

@krishnan-gopal
Author

@kraigher

The VUnit -v/--verbose flag traditionally has no impact on the RTL-level verbosity. It just controls whether the simulator output is shown on stdout even for passing tests or not.

Sorry, I think it's --log-level then.

The philosophy in VUnit has always been to stop on errors and failures such that the test is failed due to not reaching the test runner cleanup. Maybe that is not feasible in SystemVerilog.

Exactly. I need to look into different options here. Maybe we can make it more VUnit-compatible or UVM-compatible, but I feel we should leave this to the user to decide. Is there a way to do that? Maybe using hooks/callbacks?

@kraigher
Collaborator

@krishnan-gopal --log-level just controls the logging within the Python framework. The purpose is mainly to debug VUnit's Python code. There has never been a way to control verbosity within the HDL domain from Python. In the VHDL world we have a logging framework, but its verbosity is controlled via HDL code calls. The Python side is completely unaware of our VHDL logging framework.

@krishnan-gopal
Author

krishnan-gopal commented Apr 26, 2018

@kraigher
I see. Can you suggest some way we can pass this UVM_VERBOSITY from the command line over to the simulation call?
I think we need a --log-level argument for deciding the HDL-world verbosity, separate from the Python-world verbosity. I can imagine this is needed regardless of whether we use UVM or not.

@kraigher
Collaborator

@krishnan-gopal It is really easy to add custom command line arguments (https://vunit.github.io/python_interface.html#adding-custom-command-line-arguments) to your run.py script if you want a CLI option that, for example, sets a sim_option controlling the UVM_VERBOSITY.
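
A hedged sketch of that pattern, mapping a custom option onto +UVM_VERBOSITY (the --uvm-verbosity option name and the Questa-specific vsim flag are just examples):

  from vunit import VUnitCLI
  from vunit.verilog import VUnit

  # Extend the standard VUnit command line with a UVM verbosity option
  cli = VUnitCLI()
  cli.parser.add_argument("--uvm-verbosity", default="UVM_MEDIUM",
                          help="Value passed to the simulator as +UVM_VERBOSITY")
  args = cli.parse_args()

  prj = VUnit.from_args(args=args)
  lib = prj.add_library("lib")
  lib.add_source_files("tb_top.sv")

  lib.test_bench("tb_top").set_sim_option(
      "modelsim.vsim_flags", ["+UVM_VERBOSITY=%s" % args.uvm_verbosity])

  prj.main()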

@LarsAsplund
Collaborator

@krishnan-gopal

I think we need a --log-level argument for deciding the HDL-world verbosity, separate from the Python-world verbosity. I can imagine this is needed regardless of whether we use UVM or not.

I wouldn't mind having a standard option to do verbosity control from the command line. Ideally that option would accept a fixed set of values, but the problem is that VUnit supports the creation of custom levels in addition to the standard ones and our levels differ from the levels in UVM. Another issue is that VUnit has started to move away from the concept that log levels have an order. The reason is that our log levels contain error-related levels (warning, error, failure) and non-error-related levels (trace, debug, info). It's a bit awkward to think of orders when you're mixing different things. Errors are more severe than warnings, but there is no severity order between info and debug. Debug contains more details than info, but it's not clear if warning is more detailed information than debug or vice versa. And what happens if you add a custom status level? We still support the concept of a verbosity level, but the primary concept of verbosity control is that levels are shown or hidden on an individual basis. What is the UVM approach?

Coming back to the discussion regarding exit strategy. Do you see a fundamental drawback with handing over the exit to the top level? Doing that doesn't prevent you from parsing the transcript in a post-hook and failing if errors are found. Did you test what happens when dereferencing a null pointer? I'm sure a transcript parser could find that but wouldn't it be simulator specific what error message to look for?

@krishnan-gopal
Author

@LarsAsplund

What is the UVM approach?

Log levels do have an order. In UVM all error, warning and fatal messages (and messages which don't use the UVM messaging methodology, like $display) will be displayed regardless of verbosity.
The +UVM_VERBOSITY=<> argument (verbosity level) from the command line controls the printing of all UVM_INFO type messages. This verbosity level can also be set within the testbench code for a particular hierarchy/scope.
The UVM_INFO message function in the SystemVerilog code accepts a verbosity as an argument (message verbosity). The UVM base classes are full of such UVM_INFO messages with varying verbosities. If the message verbosity of a UVM_INFO message does not exceed the verbosity level of its scope, the message will be printed.
The verbosity level and the message verbosity use the enumerated type 'uvm_verbosity', so you can use either a numeric value or the enum to set the value.

typedef enum {
  UVM_NONE   = 0,
  UVM_LOW    = 100,
  UVM_MEDIUM = 200,
  UVM_HIGH   = 300,
  UVM_FULL   = 400,
  UVM_DEBUG  = 500
} uvm_verbosity;

Coming back to the discussion regarding exit strategy. Do you see a fundamental drawback with handing over the exit to the top level? Doing that doesn't prevent you from parsing the transcript in a post-hook and failing if errors are found.

I am trying out different means of doing that without writing any additional invasive code, but I don't see a drawback of doing that either. I feel it's a reasonable deviation from the regular UVM approach, but then I am a lone UVM user.

Did you test what happens when dereferencing a null pointer? I'm sure a transcript parser could find that but wouldn't it be simulator specific what error message to look for?

I did. In such a case, the cleanup phases are not invoked, and the simulator exits with a simulator-specific error message. So, UVM is not involved here anymore.

@LarsAsplund
Collaborator

@krishnan-gopal

Log levels do have an order. In UVM all error, warning and fatal messages (and messages which don't use the UVM messaging methodology, like $display) will be displayed regardless of verbosity.
The +UVM_VERBOSITY=<> argument (verbosity level) from the command line controls the printing of all UVM_INFO type messages.

By removing warning, error and fatal from verbosity control, UVM avoids the ordering concern I mentioned. I guess you may end up with many warnings though. Makes me think about Vivado, which can issue warnings for making optimizations which in most cases is exactly what I want it to do. They've solved it by limiting the number of warnings of the same type. A thought for the future maybe...

Also, this verbosity-level can be set within the testbench code for a particular hierarchy/scope.

VUnit also supports hierarchical logging, so it is similar to UVM in that respect.

I feel it's a reasonable deviation from the regular UVM approach, but then I am a lone UVM user.

From what you explain it seems like you would improve over the regular UVM approach by relying less on parsing to catch error messages which you might not have seen before. I understand your concern though, and I've seen it before. Since VUnit is free and non-invasive it doesn't require a corporate decision to start using it. The normal case is that an individual starts using it, then it spreads to the rest of the team members and then the company. A common concern for these individuals (in addition to not having to make their testbenches too "weird looking") is how they can make use of VUnit and commit such code to the code repository without forcing others to learn about the Python test runner. The answer has been that the testbench must be "well-behaved" if you run it without the Python test runner. This is something you can do with VUnit testbenches in VHDL, but currently that's not possible with the SV testbench. What are your thoughts on this?

@krishnan-gopal
Author

@LarsAsplund

This is something you can do with a VUnit testbenches in VHDL but currently that's not possible with the SV testbench. What are your thoughts on this?

In the end, the runner setup must be done one way or another. It's possible to encapsulate this additional SV code into a dedicated module, so that the rest of the testbench still looks the same. I have been trying out different approaches. Let me get back to you with a suggestion once I've figured it out.

@krishnan-gopal
Author

@LarsAsplund, @kraigher

I am now working on making VUnit understand my errors during the clean-up phase. The way I see it, errors fall into three categories:

  1. UVM_ERROR - messages that are generated and tracked by the UVM base. These can be 'fetched' during the clean-up phase quite easily to make VUnit register a FAIL.
  2. Internal errors - for example, a null pointer or a memory allocation failure which causes the simulation to stop with a tool-specific message. In this case VUnit recognizes it as a FAIL since its clean-up phase was not called.
  3. Untraceable functional errors - error messages that do not cause the simulation to stop and cannot be 'fetched' within the SystemVerilog code. This includes errors generated by third-party IPs/blackboxes, VIPs or any HDL code that uses the $error system task.

It seems that I can make VUnit's PASS/FAIL recognize category 3 only by parsing the transcript, meaning I have to do this in the Python world. Do you have an example where this is done?

I see that the file test_output/<Some long name+HASH>/output.txt is the name assigned to the simulation log. I am looking for a way to access this from the Python script. Is there some vendor-independent way I can get the pointer to this file within the Python world?

@kraigher
Collaborator

kraigher commented May 2, 2018

The post_check hook takes an output_path argument which is the folder containing the output.txt file. You can read about it on the website.

@krishnan-gopal
Author

@kraigher
I already tried this method, but it looks like the output.txt file is empty (i.e. not yet flushed) at the time when post_check is invoked for that simulation. So I am not able to read its contents. Is there a way around that?

@kraigher
Collaborator

kraigher commented May 2, 2018

It is because the output from the post check itself also goes to the output.txt. Maybe we need to change that by moving the post check to an outer layer.

@krishnan-gopal
Author

krishnan-gopal commented May 2, 2018

I created a new issue for that #332

@krishnan-gopal
Author

krishnan-gopal commented May 4, 2018

@kraigher , @LarsAsplund
Coming back to the discussion on PASS/FAIL, these are the methods I tried (I will update this section as I find out more):

  1. I found that the easiest and least invasive way is to let the simulation simply break on an error so that VUnit's cleanup is not called and therefore we get a fail. You can do this by passing +UVM_MAX_QUIT_COUNT=1 as an argument on the simulation command line. It is specific to UVM and supported by all simulators.
  2. The cleanest and most feature-rich way is to use the post_check(output_path) hook and parse the output for error message patterns (a sketch follows below). The advantage here is that we can use an 'error pattern' file/strings which specifies the errors that can be ignored, so that we can mask out known errors.
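
A minimal sketch of method 2, assuming the post_check(output_path) hook and the output.txt file name from the VUnit docs (the pattern lists are placeholders):

  import os
  import re

  # Patterns that mark a failure, and known/benign messages to ignore
  ERROR_PATTERNS = [r"UVM_FATAL", r"UVM_ERROR", r"\*\* Error"]
  EXCLUDE_PATTERNS = [r"UVM_ERROR\s*:\s*0", r"UVM_FATAL\s*:\s*0",
                      r"known_benign_message"]

  def post_check(output_path):
      """Return False (test fails) if any non-excluded error pattern is found."""
      with open(os.path.join(output_path, "output.txt")) as log:
          for line in log:
              if any(re.search(pat, line) for pat in EXCLUDE_PATTERNS):
                  continue
              if any(re.search(pat, line) for pat in ERROR_PATTERNS):
                  return False
      return True

  # Hooked up per test or configuration, e.g.:
  # tb.add_config(name="seed_123", post_check=post_check, ...)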

@LarsAsplund
Collaborator

@krishnan-gopal That would be in line with the default behavior for how we do it in the VHDL version. We support counting the different log levels, globally as well as on subsystem level, but the default is to stop on the first error or failure and continue on all other levels.

For those wanting to continue on an error you can always extract the logs programmatically or maybe in your case parse a log file and then assert on those counters just before you end the simulation. This is how it's done when other non-VUnit error checking mechanisms for VHDL are used.

@krishnan-gopal
Author

krishnan-gopal commented May 8, 2018

@kraigher, @LarsAsplund
I just tried out the fix for #332. It works great. I've updated the section above where I've listed out the possible error parsing/detection methods.
I would say we have reached a point of full UVM support (with a few reasonable deviations in usage)

@krishnan-gopal
Author

By the way, one general question: is there a specific purpose for using parameters in the Verilog/SystemVerilog test configuration to pass arguments from the command line?
In VHDL you of course need generics for this purpose, but Verilog has the $plusargs system functions which can be used to pass arguments from the command line into the code. The advantage of doing this is that $plusargs can be processed without re-elaborating the whole design+TB, whereas parameters/generics are evaluated during elaboration.
Let me know your thoughts on that.

@kraigher
Collaborator

kraigher commented May 8, 2018

@krishnan-gopal I am not against using plusargs instead of generics. In our current SystemVerilog test benches the parameter is a private implementation detail, so it could be changed without any test bench modifications. The reason we used a parameter was that it would work the same as in VHDL, and parts of the Python code can be unaware of whether it is a VHDL or Verilog test bench since the same mechanism is used. If we use plusargs we need to introduce special handling of VHDL and Verilog test benches.

@krishnan-gopal krishnan-gopal changed the title UVM support: allow SV testbenches without test cases UVM support: allow SV testbenches to specify tests in different approaches May 8, 2018
@kraigher
Collaborator

kraigher commented May 8, 2018

@krishnan-gopal So what more help do you need to achieve a workable solution?
I think the strategy should be to first solve what is blocking you from using the Public API to achieve what you want. Once that is done we can see which concepts from the custom code you write in your run.py file above the Public API are commonly useful enough to be merged into VUnit.

@kraigher
Collaborator

kraigher commented May 8, 2018

@krishnan-gopal It is important to note that there is loose coupling between the Python and HDL worlds in VUnit. Basically, the Python part provides a string to the test bench containing all configuration in a dictionary-like structure. The HDL part writes the results to an output file which is specified by the configuration string. Once simulation is done, the Python side reads the output file and determines if test cases were passed, failed or skipped. The format of the output file is:

test_start:test1
test_start:test2
test_suite_done

All tests pass if all tests start and test_suite_done is reached. A test is considered failed if it is the last test started and test_suite_done was not reached. A test is considered skipped if it was not started.
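
Read literally, those rules could be applied like this (an illustration only, not VUnit's actual implementation):

  # Illustration only: how the three rules above classify tests given the
  # lines of the output file.
  def classify(output_lines, all_tests):
      started = [line.split(":", 1)[1] for line in output_lines
                 if line.startswith("test_start:")]
      suite_done = "test_suite_done" in output_lines

      results = {}
      for test in all_tests:
          if test not in started:
              results[test] = "skipped"
          elif test == started[-1] and not suite_done:
              results[test] = "failed"
          else:
              results[test] = "passed"
      return results

  # With the example file contents but no test_suite_done line:
  # classify(["test_start:test1", "test_start:test2"], ["test1", "test2", "test3"])
  # -> {"test1": "passed", "test2": "failed", "test3": "skipped"}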

@krishnan-gopal
Author

@kraigher

I think the strategy should be to first solve what is blocking you from using the Public API to achieve what you want. Once that is done we can see what concepts from the custom code you write in your run.py file above the Public API that is commonly useful enough to be merged into VUnit.

The only change I had to do was in vunit_pkg.sv at line 143. This was suggested by Lars.

      // original code:
      if (phase == init) begin
         if (test_cases_to_run[0] == "__all__") begin
            test_cases_to_run = test_cases_found;

      // modified code:
      if (phase == init) begin
         if (test_cases_to_run[0] == "__all__") begin
            test_cases_to_run = test_cases_found;
         end else if (test_cases_to_run[0] == "") begin
            $fwrite(trace_fd, "test_suite_done\n");
            cleanup();
            return 0;

@LarsAsplund
Collaborator

@krishnan-gopal That was a bit of a quick fix to move forward. However, the ability to run a test suite without test cases is a change that we need.

@krishnan-gopal
Author

@LarsAsplund Yes, I agree with that.
I also think that the definition of what's really a testcase should be made flexible (look in the run.py of example 2), so that the user can easily specify the mapping between the test name given on the VUnit command line and how it is used on the simulator's command line. In my case it's +UVM_TESTNAME=, but it could be different for other users.

@kraigher
Collaborator

@krishnan-gopal I think the minimal thing we need to add is "test less" SystemVerilog test benches. Then you can create your test cases in Python using add_config and set simulator flags with the correct +UVM_TESTNAME.

@kraigher
Collaborator

@krishnan-gopal I added support for "test less" SystemVerilog test benches now. You can have a look at this example to see how you could use it: https://github.com/VUnit/vunit/tree/uvm/examples/verilog/uvm

@krishnan-gopal
Author

I just tried it out. It works great. Thanks!

@krishnan-gopal
Author

@kraigher, @LarsAsplund
Is there some way for me to remove the "onerror quit" from the default simulation options without manually re-declaring the whole set of default options in my run.py?

The problem with constrained-random testbenches is that it's not possible to predict which testcase (test+seed combination) will generate a known error message. So, known error messages are rather testbench-specific and not testcase-specific in UVM, and #293 doesn't help here.

I was able to use the output file stream in my post_check function and parse it for UVM-specific as well as HDL $error-generated error patterns from an "error pattern" file.
I was also able to exclude specific patterns from an "exclusion rules" pattern file, so that a known error message (UVM-specific or other) can be excluded and doesn't contribute to a FAIL decision.
However, whenever a $error message is triggered, the testcase always quits with a FAIL due to the "onerror quit", without taking my "error exclusion" rules into account.

@kraigher
Collaborator

@krishnan-gopal You should set the sim_option (http://vunit.github.io/python_interface.html?highlight=sim_options#simulation-options) vhdl_assert_stop_level to failure. On ModelSim this setting also affects $error in Verilog, which otherwise causes the simulation to stop and prevents the test_suite_done criterion from being fulfilled.
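
For reference, a sketch of setting it from a run.py (lib being the usual VUnit library object):

  # Relax the stop level so a $error does not end the simulation; the
  # pass/fail decision is then left to the post_check exclusion rules.
  lib.test_bench("tb_top").set_sim_option("vhdl_assert_stop_level", "failure")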

@kraigher
Collaborator

@krishnan-gopal Have you had any success with the vhdl_assert_stop_level sim option?

@krishnan-gopal
Author

Yes, it works as expected. Now I have full control over error patterns and can exclude them from the PASS/FAIL decision, and this is a more elegant alternative to #293.

@krishnan-gopal
Author

I am also working on a way to build testbenches for uVCs. This is another use case where VUnit could help out a lot. I will come back to you once I have a good methodology/flow.

@LarsAsplund
Collaborator

@krishnan-gopal Can we close this issue for now and maybe open a new one if your uVC experiments come up with new issues? The initial use case seems covered.

@krishnan-gopal
Author

I agree. We can close this issue.
With the latest pull from VUnit, I was able to run the entire flow with my UVM testbench. The only change needed was in the SystemVerilog code in the top module which calls run_test:

  import vunit_pkg::*;
  `include "vunit_defines.svh"

  `TEST_SUITE begin
    uvm_root root;
    root = uvm_root::get();
    root.finish_on_completion = 0;
    run_test();
  end 

@LarsAsplund
Collaborator

Good. I think it looks clean, it sticks with the public APIs of UVM and it's idiomatic VUnit.
