
code-coverage validation statistics (feature request) #13

Closed
kwatsen opened this issue Jan 9, 2019 · 3 comments

kwatsen (Contributor) commented Jan 9, 2019

Precondition:

  • a YANG module (example.yang)
  • a bunch of instance example documents (a.xml, b.json, etc.)
  • a script that validates each instance example document against the YANG module
    • this script would run yangson multiple times (distinct invocations); a rough driver sketch is given after these lists
    • presumably, there would be some special directory (e.g. ./.ycov/) used to accumulate the statistics across runs (the user would be responsible for removing this directory before each fresh run)

Postcondition:

  • a report providing code-coverage like validation statistics.
    • presumably, this report would be provided by a final invocation of yangson that would just output the report (e.g., tree diagram)
  • options (sorted by complexity: easiest to hardest):
    1. a single number representing the percentage of nodes tested (e.g., 30% or 80%)
    2. a per-top-level-statement (data, rpc, notification, yang-data, etc.) percentage
      • perhaps inlined notifications and actions could be included here as well
    3. a tree-diagram-like output that tags each node with the number of times it was tested
      • top-level nodes would have the highest numbers
      • the value here would be in seeing which parts are not tested much, or at all
    4. some combination of all of the above
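
A rough sketch of such a driver, under the assumptions above (the YANG library file name and the JSON instance documents are illustrative; the coverage bookkeeping and the final report appear only as comments, since they are exactly the feature being requested):

    # Hypothetical driver sketch for the requested workflow. Only the
    # validation loop uses existing yangson calls; the coverage bookkeeping
    # and the final report are the requested feature.
    import json
    from yangson import DataModel

    # "yang-library.json" is assumed to list example.yang (illustrative name).
    dm = DataModel.from_file("yang-library.json")

    for doc in ["a.json", "b.json"]:          # instance example documents
        with open(doc) as f:
            inst = dm.from_raw(json.load(f))  # parse raw JSON into an instance tree
        inst.validate()                       # validate against the data model
        # requested: record which schema nodes this validation exercised,
        # e.g. by persisting per-node counters under ./.ycov/

    # requested: a final invocation that prints the coverage report,
    # e.g. a tree diagram annotated with per-node hit counts
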
llhotka (Member) commented Mar 8, 2019

Hi Kent, what would be the expected outcome? To see the extent to which the instance data exemplify the schema?

This would require collecting the statistics during the validation procedure. It is an interesting idea, though; I will think about it. It should be possible to write it so that the intermediate results are kept in memory.

kwatsen (Contributor, Author) commented Mar 8, 2019

Hi Lada,

The goal is to facilitate formal YANG module reviews. For example, as a Doctor or a Shepherd, I'd like to know which parts of a YANG schema were NOT exercised by any of the instance example documents included in the draft.

Case in point: I see drafts that include examples for configuration, but no examples for any RPC, action, or notification statements. In another example, I see drafts that include a config example, but it only represents a small part of the config, leaving what the rest of the config might look like to the imagination of the reader.

When developing code, there is a general rule of thumb to shoot for 80% code coverage. I think a similar rule should apply to the number and breadth of the instance examples included in drafts. I further believe that the rule should be extended to assert that 100% of the top-level statements have an example.

However, without a tool to measure/accumulate coverage statistics, such rules cannot be automated (by xiax, for instance) and hence will be difficult to enforce.

PS: while much of the above concerns drafts, such a tool would be generally useful in any context.

llhotka (Member) commented Mar 17, 2019

Implemented in b84d017 and PyPI release 1.3.36.

Basic usage is illustrated in Quick Start, including a printout of the ASCII tree displaying the counters.

If a sequence of instance documents is validated against the same data model, the counters keep accumulating. All counters can be cleared with the clear_val_counters method of the DataModel class.
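
A minimal usage sketch of this accumulation, assuming a YANG library file named yang-library.json and two JSON instance documents (illustrative names); clear_val_counters is the method named above, and the Quick Start shows how the counters appear in the ASCII tree printout:

    import json
    from yangson import DataModel

    dm = DataModel.from_file("yang-library.json")

    # Validate several instance documents against the same data model;
    # the validation counters keep accumulating across these calls.
    for doc in ["a.json", "b.json"]:
        with open(doc) as f:
            dm.from_raw(json.load(f)).validate()

    print(dm.ascii_tree())   # schema tree; see the Quick Start for the counter display
    dm.clear_val_counters()  # reset all validation counters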

llhotka closed this as completed Mar 17, 2019