What is next? #12

Open
JimLewis opened this issue Apr 18, 2020 · 11 comments
Labels: Question (Further information is requested)

Comments

@JimLewis

The problem as I see it is that if you ask vendors such as
Mentor/Siemens' Ray Salemi, FPGA Solutions Manager for Mentor's Questa,
the question, "Any word on Mentor supporting the new features in VHDL-2019?"

The response we get is:
"We tend to add features as customers start using them.
Do you have any that stand out?"

We need a means to document what "stands out" to the
entire user community. I think what you have here is a start
for doing this.

My vision is:
We need a means for the user community to demonstrate
their support for features and to give users confidence that
they are not the only ones expressing interest in a feature.

We need a set of test cases that test feature capability
and that can be added to by users who find bugs that were
not exposed by the initial tests. That sounds like where this
group is headed anyway.

Next, we need a way for people to express interest in a given
test case, demonstrating user support - but with only one vote
per user/GitHub account.

We need scripts for running the tests on different tools.
We need individual users to be able to run said tests on
their local platform.

We need a means for the user to indicate whether a
test has passed or failed on their platform.
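
A minimal sketch of what such a local run script could look like - purely illustrative, assuming each tool is driven through its command-line analyzer and that pass/fail is taken from the exit code (the tool list and options below are assumptions, not an existing script):

```python
#!/usr/bin/env python3
"""Illustrative sketch: analyze one VHDL test case with whichever simulators are installed."""
import shutil
import subprocess
import sys

# Assumed analyze commands; a real script would also need per-tool elaboration/run steps.
TOOLS = {
    "ghdl": ["ghdl", "-a", "--std=08"],
    "nvc": ["nvc", "-a"],
}

def run_test(vhdl_file: str) -> dict:
    """Return {tool: 'pass' | 'fail' | 'not installed'} for a single VHDL file."""
    results = {}
    for tool, cmd in TOOLS.items():
        if shutil.which(cmd[0]) is None:
            results[tool] = "not installed"
            continue
        proc = subprocess.run(cmd + [vhdl_file], capture_output=True)
        results[tool] = "pass" if proc.returncode == 0 else "fail"
    return results

if __name__ == "__main__":
    for tool, status in run_test(sys.argv[1]).items():
        print(f"{tool}: {status}")
```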

When a test passes, we need a mechanism so a user
can indicate this and we can at a minimum add it to
our internal tracking.

When a test case fails, we need a mechanism to indicate
it failed, to internally track what tool the test failed for,
and a means for an individual user to produce a vendor
product support request using their user name and have
the information for the report automatically collected.
In addition to submitting the issue to tech support,
it would also be nice to submit the issue to the vendor's
discussion boards to help generate additional support
for the feature - or to add to an existing discussion.

We need some level of tabulated reporting regarding
interest level and support of a particular feature.

My only concern is what a vendor considers to be
benchmarking. I always thought it meant performance.
If it does not include language feature support, then
it would be nice to have each tool listed in a matrix
indicating the number of passing and failing tests for a
particular feature.

Even if we cannot report against a particular vendor,
we can tabulate:

  • user support of a feature,
  • number of tools for which a particular test case passes,
  • number of tools for which a particular test case fails,
  • total number of times this test case has been run and passed,
  • total number of times this test case has been run and failed,
  • and total number of bug reports submitted.
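
As a rough illustration only, the tabulation above amounts to keeping a small record per test case; the field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseTally:
    """Hypothetical per-test-case record holding the counts listed above."""
    feature: str                                      # feature the test case exercises
    user_votes: int = 0                               # users expressing support for the feature
    tools_passing: set = field(default_factory=set)   # tools on which the test has passed
    tools_failing: set = field(default_factory=set)   # tools on which the test has failed
    runs_passed: int = 0                              # total reported passing runs
    runs_failed: int = 0                              # total reported failing runs
    bug_reports: int = 0                              # vendor bug reports submitted
```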

As part of supporting a feature, it would also be nice to have
an extended listing of why that feature is important to you
and/or your project team.

Even without listing which vendor does or does not support
a feature, one thing we can clearly demonstrate to both
users and the vendors is the user support for the implementation
of a feature.

Given this objective, I think we need a sexier name than
compliance tests - something like VHDL User Alliance Language Support Tests.

@LarsAsplund
Contributor

I think it would be nice if we could separate features from the tests. Features are the things people vote for, and tests are only used to verify vendor support. A feature would typically have more than one test case.

Issues have voting capabilities. They are not ideal for feature voting because issues are something that you can complete and close. I know there have been discussions on adding voting capabilities to other parts of GitHub, but I don't think that has been done yet. I'm thinking the wiki would be the best place.

Running the test cases locally with your own simulator, or with many different simulators/versions in a CI, and tracking the status is what VUnit does. The problem is how/where we run the commercial tools. GitHub's CI has the ability to let people run CI jobs on their local computers (self-hosted runners). I'm not sure if GitLab has a solution like that, but it would be a way to distribute the CI tasks to those having the required licenses while still having an automated solution.
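
For reference, running a set of test sources through VUnit locally is roughly this small; the `vhdl_2019/` source location is an assumption, and the simulator is picked up from the local installation (or via the VUNIT_SIMULATOR environment variable):

```python
# run.py - minimal VUnit script; assumes the test sources live in ./vhdl_2019/
from vunit import VUnit

vu = VUnit.from_argv()                    # parses CLI options, detects the simulator
lib = vu.add_library("lib")
lib.add_source_files("vhdl_2019/*.vhd")   # hypothetical location of the compliance tests
vu.main()                                 # compiles, runs and reports pass/fail per test
```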

Creating a bug report is just a matter of pointing to the failing CI run. Everything needed to recreate the bug is there.

I'm ok with changing the name as suggested.

@JimLewis
Author

@LarsAsplund
Agree with separating features from tests - it is what I had in mind too.

Reporting test errors to vendors is only a side issue - my main goal is to give individual users a means to express and tabulate interest in a feature - and then report it to the vendor. Tabulating it ourselves allows us to quantify interest and promote the feature to the community; reporting it to the vendors gives them a means to believe our numbers - if they are actually keeping any of the reports.

Currently, from a user perspective, a vendor receives a feature request, denies that it is actually a VHDL feature, and then deletes it.

@JimLewis
Author

Is tabulating requests from multiple people WRT the same issue something we can automate?

@Nic30

Nic30 commented Apr 19, 2020

@JimLewis I was hoping that https://github.com/VHDL/Compliance-Tests would have an interface like https://github.com/SymbiFlow/sv-tests

@JimLewis
Author

@Nic30 That is ok; however, it misses tabulating the number of users who have noted that the feature does not work. This is important to do.

Vendors claim to be "market driven". However, they have people who are paid to transition the market to SystemVerilog - this is where they make more money.

They claim that their market is happy with VHDL-2008 and has not asked for anything in the new standard. How do you prove this is a bogus claim? How do you help your customers trust you when you say it is a bogus claim?

In one presentation, a vendor claimed that OSVVM was not a methodology. They claimed there are more SystemVerilog engineers available - even in Europe. Considering that in the European FPGA market 30% use VHDL + OSVVM and only 20% use SystemVerilog + UVM, that is a fairly egregious claim.

If we have numbers, we can refute their claims. Without numbers, we lose customers to their continuous FUD.

@Nic30

Nic30 commented Apr 21, 2020

@JimLewis I sent the sv-tests link to show you the test reports and its GUI, which seems nice to me. The second thing that seems like a good idea to me is a test for each code construct based on the formal syntax from the language standard. This is good because it tests the tool completely, and passing tests can be seen as some kind of reward.

This covers the points you asked for:

  • number of tools for which a particular test case passes,
  • number of tools for which a particular test case fails,

This is not related to the VHDL/SV war or to any vendor interest or claim. (However, I may be a vendor in your eyes, but I am just a PhD student.)

@JimLewis
Author

JimLewis commented Apr 21, 2020

@Nic30 For me it is not a V vs. SV type of thing.

How does the community (users and vendors) know if a language addition is really relevant or not? Simple: provide them with a tally of how many people tested the feature and submitted a bug request for it. If they are not submitting bug reports, then they are not so interested in it.

OTOH, this sort of web-based bug report submission and counting is not a strength in my skill set, so I am hoping to find someone else who is willing to implement it. In trade, of course, I am contributing where I am stronger - the VHDL language and VHDL verification libraries.

I can also make sure that the VHDL language committee produces use models for all new language features.

@JimLewis
Author

@Nic30
WRT you being a vendor: personally, I am grateful for anything an open-source contributor is willing to make.

OTOH, I expect a commercial vendor to support standards. Some are good. Others are playing a passive-aggressive game of tool support - sometimes making things up, sometimes outright lying.

@eine
Contributor

eine commented Jul 20, 2020

Given this objective, I think we need a sexier name than
compliance tests - something like VHDL User Alliance Language Support Tests.

I'd propose something less verbose: VHDL Language Support Tests; the repo name would be Language-Support-Tests, shortened to LST. This VHDL group is already a user alliance, so that info is already conveyed by the owner of the repo (the org).

Is tabulating requests from multiple people WRT the same issue something we can automate?

Yes, as long as we use reactions to issues as the measuring mechanism. I think we can decide whether to take into account all reactions or only some kinds, and whether to count reactions on all the comments in each issue or on the first comment only. I believe issues can be reacted to even if closed. Hence, we can use the open/closed state to track whether we have implemented tests/examples for that feature in this repo, and the reactions to count the demand.
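
A sketch of how that count could be automated against the GitHub REST API, assuming we tally the thumbs-up (+1) reactions on the opening post of each open issue and that the reactions summary is included in the issue objects returned by the API:

```python
import requests

REPO = "VHDL/Compliance-Tests"  # this repository

def feature_demand():
    """Return {issue title: number of +1 reactions on the opening post} for open issues."""
    url = f"https://api.github.com/repos/{REPO}/issues"
    issues = requests.get(url, params={"state": "open", "per_page": 100}).json()
    return {
        issue["title"]: issue.get("reactions", {}).get("+1", 0)
        for issue in issues
        if "pull_request" not in issue  # the issues endpoint also returns pull requests
    }

if __name__ == "__main__":
    for title, votes in sorted(feature_demand().items(), key=lambda kv: -kv[1]):
        print(f"{votes:4d}  {title}")
```

Counting reactions on every comment in an issue would need an extra request per issue to the comments endpoint.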

However, if we want to track the demand for each feature and vendor, that might be harder to achieve. On the one hand, we would need a separate issue (or a separate comment in the same issue) for each vendor. Similar to VHDL/Interfaces#27. On the other hand, we might not be allowed to do it.

I can also make sure that the VHDL language committee produces use models for all new language features.

This is currently the problem with this repo. The body and mwe fields of most VHDL 2019 LCSs are empty: https://github.com/VHDL/Compliance-Tests/blob/LCS2019/issues/LCS2019.yml. I don't think we have the capacity to do that for VHDL 2008. However, I believe that a similar file should be one artifact/outcome of the next revision of the standard.
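
For what it's worth, a small sketch of how those gaps could be reported automatically - assuming LCS2019.yml is a top-level mapping whose entries carry `body` and `mwe` fields (the exact layout is an assumption here):

```python
import yaml  # pip install pyyaml

def missing_fields(path="issues/LCS2019.yml"):
    """Print LCS entries whose 'body' or 'mwe' field is empty or absent."""
    with open(path) as f:
        entries = yaml.safe_load(f)
    for key, entry in entries.items():
        empty = [name for name in ("body", "mwe") if not (entry or {}).get(name)]
        if empty:
            print(f"{key}: missing {', '.join(empty)}")

if __name__ == "__main__":
    missing_fields()
```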

@bpadalino
Contributor

This is a very old issue, but I'd like to resurrect conversation around it given that there are some outstanding pull requests which help alleviate some of the deficiencies listed previously, specifically:

Adding the VHDL-2019 tests should provide the mwe. I am unsure what the body is supposed to be for that field. Moreover, I don't find it unreasonable for the current VHDL-2008 tests to have a similar file which has a body and mwe to describe what the test is doing.

I like the table similar to sv-tests and I understand there may be license issues with posting something like that for commercial simulators, but could the overall test count be posted without issue for the commercial ones - instead of broken out? For example, if we grey'd out the test results individually, but just said it received a score of X/Y - would you be comfortable with that?

I am willing to do more work on making this better and trying to drive better support.

So, to reiterate @JimLewis, after those pull requests are merged in - what is next in 2023?

@umarcor
Member

umarcor commented Mar 7, 2023

@bpadalino merged #19 and #21 and updated #22. I'm unsure about #20, since we might want to discuss what to do in such cases (tools implementing features differently). With regard to #13, I didn't read the latest updates. I'll go through them now.

Adding the VHDL-2019 tests should provide the mwe. I am unsure what the body is supposed to be for that field.

The body is a multi-line string expected to be written in markdown. It is to be used as the description when a page is generated in the doc or an issue is created in some repo. So, for each LCS:

  • A unique key/id.
  • A title in plain text.
  • A body/description in markdown.
  • A code-block or a list of files in VHDL.
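
For illustration, a single entry following that schema might look like the following once parsed; the key, title, description and file names are all made up:

```python
# Hypothetical parsed representation of one LCS entry; every value here is invented.
lcs_entry = {
    "LCS_XXXX": {                         # unique key/id (placeholder)
        "title": "Title of the language change in plain text",
        "body": "Short *markdown* description used for the doc page or the issue.",
        "mwe": ["example.vhd", "tb_example.vhd"],  # or an inline VHDL code block
    }
}
```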

Moreover, I don't find it unreasonable for the current VHDL-2008 tests to have a similar file which has a body and mwe to describe what the test is doing.

Fair enough. Let's wait until we merge #13. Then, we can move issues/LCS2019.yml to ./LCS2019.yml or vhdl_2019/LCS.yml; and create a similar file for 2008.

I like the table similar to sv-tests and I understand there may be license issues with posting something like that for commercial simulators, but could the overall test count be posted without issue for the commercial ones - instead of broken out? For example, if we grey'd out the test results individually, but just said it received a score of X/Y - would you be comfortable with that?

There are several strategies we could use to work around the issue. For instance, we could have a table with columns G, N, Q, M, R and A (and C, S or X in the future). Then, we would add a large warning saying: "the results in this table are computed from results lists provided by users; we don't run any tests on non-free tools and we don't check the source of the lists provided by users".
That would put the responsibility on the users who provide the lists: i.e. we share it among many so that lawyers potentially have a harder time tracking the origin. An alternative strategy is the one used for documenting the bitstream of Xilinx devices: have a large enough corporation sponsor us so that they can provide the lawyers in case any not-as-large EDA company wants to sue us.

IMO we should not waste any time on that. We should not play any game based on hiding data/facts for dubious ethical marketing strategies. We are in no position to confront power. Our working and knowledge-sharing model has been gaining ground over the last three decades, particularly in the last one, and very particularly in the last 5 years. Momentum is with us. We'd better put effort into improving GHDL and/or NVC and/or any other tool whose developers do not ignore their user base.

Also, it's 2023; we have 1-2 years to do the next revision of the standard. There is still much work to be done to make the LRM and the IEEE libraries open-source friendly. The libraries are open source, but not as friendly as they should be, and the LRM is not open source yet.
