
Sigasi VUnit Integration Discussion #347

Closed

davidmedinasigasi opened this issue Jun 25, 2018 · 35 comments

Comments

@davidmedinasigasi (Contributor)

I propose adding a '--vunitFiles' flag to list all the libraries and files that an external tool like Sigasi needs to import this project.

The difference from the '-f' parameter is that it would only list the files that are built into VUnit.

This is one of the steps for the VUnit integration with Sigasi Studio.

@kraigher (Collaborator)

I would prefer if such an interface used the Python API. The command line interface is for end-user usability, not for tool integration.

For example:

from vunit import get_all_builtins
for library_name, file_name in get_all_builtins():
    print(library_name, file_name)

@lievenlemiengre commented Jun 25, 2018

We don't want all the builtin libraries; we only want to find the libraries that are actually used.

Which files are included and which libraries are available depends on the VHDL version in use (and perhaps there may be more options in the future?).

Both are configured in run.py.

How about this: we could run a script like the one above to get the list of builtin library names. Using that list we could run "py run.py -f" and filter out the known builtin libraries.
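The filtering step described above could be sketched as follows. This is only an illustration: the builtin library names and the (library, file) entries are made-up placeholders, and the actual output format of "py run.py -f" is an assumption.

```python
# Sketch of the proposed filtering: given the builtin library names,
# drop those entries from the file listing. All names are illustrative.
BUILTIN_LIBRARIES = {"vunit_lib", "osvvm"}  # hypothetical builtin-library list

def filter_user_files(entries):
    """Keep only (library, file) pairs whose library is not a VUnit builtin."""
    return [(lib, path) for lib, path in entries if lib not in BUILTIN_LIBRARIES]

entries = [
    ("vunit_lib", "run_pkg.vhd"),
    ("lib", "tb_example.vhd"),
]
print(filter_user_files(entries))  # → [('lib', 'tb_example.vhd')]
```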

@kraigher (Collaborator)

Is your use model that the user maintains a run.py file that Sigasi does not touch?

@lievenlemiengre

Yes

@kraigher (Collaborator)

In this case, would it not be better to add a general --export-to-json flag that exports, in JSON format, all the information that a higher-level tool would want to know?

By the way, what is the need to differentiate between built-in files and user-added files? Is it not between Sigasi-managed and VUnit-managed files that you want to differentiate? If you provided a project.csv to VUnit like you suggest in #348, then you could subtract those files from the ones reported by --files to get the set of built-in, non-"Sigasi-managed" files.
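The subtraction kraigher suggests could be sketched like this. The file names and the project.csv content are illustrative placeholders, and the "library,file" CSV layout is an assumption, not a documented format.

```python
# Sketch of the suggested subtraction: the set of files VUnit reports
# via --files, minus the Sigasi-managed files from project.csv, leaves
# the VUnit-managed (builtin) files. All file names are illustrative.
import csv
import io

vunit_reported = {"run_pkg.vhd", "tb_example.vhd", "my_design.vhd"}

# Hypothetical project.csv content: one "library,file" row per line.
project_csv = "lib,tb_example.vhd\nlib,my_design.vhd\n"
sigasi_managed = {row[1] for row in csv.reader(io.StringIO(project_csv))}

builtin_files = vunit_reported - sigasi_managed
print(sorted(builtin_files))  # → ['run_pkg.vhd']
```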

@FranzForstmayr (Contributor)

What about a Python method that exports the VHDL files in a specific format? For example, I would like a .tcl script that I can load in Vivado, so that all source files are added to my project. The same method could be used for other vendor tools.

@kraigher (Collaborator)

After further reflection I do not think using --list and --files for communicating with Sigasi is a good idea going forward, and I would rather see a --export-to-json flag to export data from VUnit to other tools. This means I do not think we should merge #365.

My reasons are as follows:

  1. The --list and --files flags are mutually exclusive, which means you need two invocations of run.py to get both. For large projects run.py can be a bit sluggish, especially if users add a lot of data generation.

  2. The --list and --files flags are machine-to-user, not machine-to-machine, and their output can change based on aesthetics etc. Maybe we would like to add color or extra information in the future. This would break the Sigasi integration.

Based on these reasons I think we should define a machine-to-machine data format for --export-to-json that conveys all the information Sigasi needs, as well as a version of the format in case we need to change it in the future.

Sorry to make this more complicated for you, but I think we will be thankful in the future.

@kraigher (Collaborator)

Regarding the format of the JSON export I think it should be something like this:

{
    "version": "3.7.1",

    "files": [
        {
            "name": "tb_example.vhd",
            "library_name": "lib"
        },
        {
            "name": "tb_example_many.vhd",
            "library_name": "lib"
        }
    ],

    "test_cases": [
        {
            "name":  "lib.tb_example.all",
            "file_name": "tb_example.vhd",
            "lineno": 18,
            "multi_test_simulation": false
        },
        {
            "name":  "lib.tb_example_many.test_pass",
            "file_name": "tb_example_many.vhd",
            "lineno": 22,
            "multi_test_simulation": false
        },
        {
            "name":  "lib.tb_example_many.test_fail",
            "file_name": "tb_example_many.vhd",
            "lineno": 25,
            "multi_test_simulation": false
        }
    ]
}

The version would be the version of VUnit.
The names of the test cases correspond to what you provide on the command line.
In case you are wondering, the "multi_test_simulation" boolean would indicate that the test was part of a test suite where all tests run in the same simulation.

Using this format we can hide a lot of the complexity in how VUnit creates test case names based on configurations and such.
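A consumer of this format only needs the flat mapping from full test name to (file, line); it never has to re-implement how VUnit composes those names. A minimal sketch of such a consumer, using the JSON proposed above:

```python
# Build a test-name -> location lookup from the proposed export format.
# The embedded JSON mirrors the example in the comment above.
import json

export = json.loads("""
{
  "version": "3.7.1",
  "files": [{"name": "tb_example.vhd", "library_name": "lib"}],
  "test_cases": [
    {"name": "lib.tb_example.all", "file_name": "tb_example.vhd",
     "lineno": 18, "multi_test_simulation": false}
  ]
}
""")

locations = {tc["name"]: (tc["file_name"], tc["lineno"])
             for tc in export["test_cases"]}
print(locations["lib.tb_example.all"])  # → ('tb_example.vhd', 18)
```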

@kraigher (Collaborator)

I can also think of another reason why using --files and --list is not a good idea: the user may add print statements to their run.py file, and that output will end up on stdout along with the --files and --list output.

@kraigher (Collaborator)

I will have an implementation of --export-json ready by today so you do not have to implement it yourself.

kraigher added a commit that referenced this issue Aug 26, 2018
@kraigher (Collaborator)

I have implemented the --export-json command line argument on the export-to-json branch (https://github.com/VUnit/vunit/tree/export-to-json). You can read about the feature in the non-rendered documentation on the branch. https://raw.githubusercontent.com/VUnit/vunit/export-to-json/docs/cli.rst

@eine (Collaborator) commented Aug 26, 2018

@kraigher, since all the files must belong to one and only one library, would it be sensible to use this format instead?

 "files": {
  "lib": [
    { "name": "tb_example_many.vhd" },
    { "name": "tb_example.vhd" }
  ]
}

or

 "files": {
  "lib": {
    "tb_example_many.vhd": { },
    "tb_example.vhd": { }
  }
}

Equivalently, for tests:

    "test_cases": {
        "lib.tb_example.all": {
            "file_name": "tb_example.vhd",
            "lineno": 18,
            "multi_test_simulation": false
        },
        "lib.tb_example_many.test_pass": {
            "file_name": "tb_example_many.vhd",
            "lineno": 22,
            "multi_test_simulation": false
        },
        "lib.tb_example_many.test_fail": {
            "file_name": "tb_example_many.vhd",
            "lineno": 25,
            "multi_test_simulation": false
        }
    }

or

    "test_cases": {
        "lib": {
          "tb_example.all": {
            "file_name": "tb_example.vhd",
            "lineno": 18,
            "multi_test_simulation": false
          },
          "tb_example_many.test_pass": {
            "file_name": "tb_example_many.vhd",
            "lineno": 22,
            "multi_test_simulation": false
          },
          "tb_example_many.test_fail": {
            "file_name": "tb_example_many.vhd",
            "lineno": 25,
            "multi_test_simulation": false
          }
        }
    }

@kraigher (Collaborator)

@1138-4eb I see the files and tests as lists of objects with fields. In your files proposal I do not see a nice way of adding more information about each file object without breaking backwards compatibility of the format or adding some other information on the side. With the format I implemented new fields can be added without breaking backwards compatibility.

@eine (Collaborator) commented Aug 26, 2018

@kraigher the main idea was to use library names to group objects. I have now edited it according to your comment, to allow adding more fields without breaking backwards compatibility.

@kraigher (Collaborator)

After your edit of the files proposal you have removed the disadvantage I mentioned. I cannot think of any objective difference between your current proposal and what I implemented. It is a matter of preference if you want to group files by libraries or view the library as a property of the file.

Regarding your proposal to break the library name out of the test cases, I would prefer to use only full test names in the export, to avoid the IDE having to duplicate the VUnit logic for building full test names from the fragments, which would prevent VUnit from changing how names are created in the future.

@eine (Collaborator) commented Aug 26, 2018

I cannot think of any objective difference between your current proposal and what I implemented. It is a matter of preference if you want to group files by libraries or view the library as a property of the file.

My use case is to generate a list of files to be used in a TCL script that runs a synthesis tool. E.g.

for {set i 0} {$i < [llength $libs]} {incr i} {
  eval "set alib $[lindex $libs $i]"
  foreach j $alib {
    add_input_file [lindex $dirs $i]/hdl/$j.vhd -format {VHDL_2008} -work [lindex $libs $i]
  }
}

In order to get all the files that belong to a library, with your current proposal the name must be parsed, so the whole list/array must be processed to select or discard the files corresponding to a library, e.g. to ignore those in library 'test' or 'sim'.

A different context where grouping files/tests by library is useful is showing a tree view in a GUI. If the user wants to see all of them without hierarchy, it is easier for a tool to append fields than to split them. However, as you say, changing how names are created might be a breaking change.

Note that these suggested changes are related to:

import json

# vu is assumed to be the vunit.VUnit instance created earlier in run.py
files_dict = {}
for source_file in vu.get_compile_order():
    files_dict.setdefault(source_file.library.name, []).append(source_file.name)

with open('compile_files.json', 'w') as outfile:
    json.dump(files_dict, outfile, indent=2)

@FranzForstmayr (Contributor)

I have to generate a tcl script for the same reason too.
However, I would like the files to have a vhdl_standard property, because Vivado handles 2008 files differently.

I am using a JSON config file and a 'generic' run.py; see config.json and cfg_run.py.

This, or an import_json method, would even solve my already closed issue #353.
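Deriving such a Tcl script from the JSON export could look roughly like this. Note the caveats: the "vhdl_standard" field is the suggested addition, not part of VUnit's actual export, and the Vivado-style Tcl commands are illustrative, not taken from any VUnit output.

```python
# Sketch: turn an exported file list into Vivado-style Tcl commands.
# The "vhdl_standard" field is hypothetical (the suggestion above).
import json

export = json.loads("""
{
  "files": [
    {"name": "pkg.vhd", "library_name": "lib", "vhdl_standard": "2008"},
    {"name": "old.vhd", "library_name": "lib", "vhdl_standard": "93"}
  ]
}
""")

tcl_lines = []
for f in export["files"]:
    tcl_lines.append("add_files %s" % f["name"])
    tcl_lines.append("set_property LIBRARY %s [get_files %s]"
                     % (f["library_name"], f["name"]))
    if f["vhdl_standard"] == "2008":
        # Vivado needs VHDL-2008 sources marked explicitly.
        tcl_lines.append("set_property FILE_TYPE {VHDL 2008} [get_files %s]"
                         % f["name"])

print("\n".join(tcl_lines))
```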

@kraigher (Collaborator) commented Sep 5, 2018

@davidmedinasigasi @lievenlemiengre Do you have any thoughts on my recent comments regarding --export-json?

kraigher added a commit that referenced this issue Sep 5, 2018
@davidmedinasigasi (Contributor, Author)

@kraigher, we think --export-json is the best option and we will look into using that instead of the console commands in a future release.
Because we want full integration with VUnit, I have some questions regarding the export-json:

  • Are you planning to add the VUnit version to the json?
  • Do you think it's possible to list the configurations in the json file?

@kraigher (Collaborator) commented Sep 6, 2018

@davidmedinasigasi

  1. Sure, we can add the VUnit version to the export file in addition to the export format version.

  2. I guess we could list the configurations in the JSON export, but with the current format my hope was that it would not be necessary. With the current format, the advantage is that the IDE does not really have to know much about how VUnit creates tests; it only needs a simple mapping from test name to file name and line number. With configurations you will get multiple test names mapping to the same file and line number, and the IDE can offer a drop-down menu to select which of those tests to run, or whether all should be run. The IDE does not really need to know that there were several configurations.
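The drop-down behaviour described above can be sketched as a grouping by source location: with configurations, several exported test names share one (file, lineno) pair. The test names below are illustrative placeholders.

```python
# Group exported test names by their (file, lineno) location, as an
# IDE might do to offer a drop-down of configurations at one location.
from collections import defaultdict

test_cases = [
    {"name": "lib.tb.cfg_a.test_x", "file_name": "tb.vhd", "lineno": 10},
    {"name": "lib.tb.cfg_b.test_x", "file_name": "tb.vhd", "lineno": 10},
    {"name": "lib.tb.test_y", "file_name": "tb.vhd", "lineno": 20},
]

by_location = defaultdict(list)
for tc in test_cases:
    by_location[(tc["file_name"], tc["lineno"])].append(tc["name"])

# Two configuration names map to the same source location:
print(by_location[("tb.vhd", 10)])
```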

kraigher added a commit that referenced this issue Sep 6, 2018
kraigher added a commit that referenced this issue Sep 7, 2018
kraigher added a commit that referenced this issue Sep 10, 2018
kraigher added a commit that referenced this issue Oct 3, 2018
@kraigher (Collaborator) commented Oct 3, 2018

@davidmedinasigasi @lievenlemiengre I just pushed the export-json support to the master branch. I also changed the test case line number location into a character offset + length to make it more precise.

@kraigher changed the title from "Interface to query Vunit files" to "Sigasi VUnit Integration Discussion" on Oct 5, 2018
@davidmedinasigasi (Contributor, Author)

@kraigher I'm currently updating the VUnit integration in Sigasi with the new export-json. So far everything is working as expected with the JSON, but I'm getting wrong offsets on Windows and I think it's because of the newlines.

@kraigher (Collaborator)

You are probably correct. I will investigate.

@kraigher (Collaborator)

@davidmedinasigasi I tried here with files containing CRLF (\r\n) and I could not reproduce the problem. Are you sure the problem is not on your side?

@tivervac (Contributor) commented Nov 6, 2018

@kraigher I just tested the run example project on both Windows and Linux, both with Linux and Windows newlines. No matter which configuration you run this command with, it always returns the following entry in the JSON:

# This is python 3
python run.py --export-json test.json
cat test.json
...
        {
            "attributes": {},
            "location": {
                "file_name": "/home/titouanvervack/git/vunit/examples/vhdl/run/tb_running_test_case.vhd",
                "length": 15,
                "offset": 677
            },
            "name": "lib.tb_running_test_case.Test scenario B"
        },
...

Obviously this offset should change depending on the kind of newlines used (CRLF or just LF), but it doesn't.
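The discrepancy is easy to demonstrate in isolation: the character offset of the same token differs between LF and CRLF versions of a file, so an offset computed for one line-ending style is wrong for the other.

```python
# The same token sits at different character offsets in LF vs CRLF text:
# each preceding line contributes one extra character under CRLF.
lf_text = "line one\nline two\nTEST"
crlf_text = lf_text.replace("\n", "\r\n")

print(lf_text.index("TEST"))    # → 18
print(crlf_text.index("TEST"))  # → 20
```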

@kraigher (Collaborator) commented Nov 6, 2018

@tivervac I could reproduce the problem and pushed a fix to master.

@davidmedinasigasi (Contributor, Author)

@kraigher Thanks! When can we expect to see this in a release?

@kraigher (Collaborator) commented Nov 7, 2018

@davidmedinasigasi If you can confirm that it works I will make a release now.

@davidmedinasigasi (Contributor, Author)

@kraigher It works perfectly

@kraigher (Collaborator) commented Nov 9, 2018

Good, it is part of the latest release.

@davidmedinasigasi (Contributor, Author)

@kraigher, I just noticed that when VUnit generates a JSON for a SystemVerilog test, the macro offsets are included in the test offset. Is that intended?

@kraigher (Collaborator)

What do you mean exactly?

@davidmedinasigasi (Contributor, Author)

Does VUnit take the macro into account when calculating the test offset?

@kraigher (Collaborator)

I am still not sure what you mean. The test case scanning, if I recall correctly, just finds the TEST_CASE string with a regex, and the lexical position will be that of the test case name string, not including the TEST_CASE macro token. VUnit will not have done any Verilog preprocessing to find the test case.

@LarsAsplund (Collaborator)

The Sigasi integration is mature enough that this issue can be closed.
