Produce DT validation in a structured format #397
The following is just a snippet of the output from `make dt_binding_check`:

    CHKDT   Documentation/devicetree/bindings/media/renesas,vin.yaml
Great, there's even an error to give an example of what needs to be parsed:
First of all, are there any environment variables or options that could be used with the validation tool to store the results in a structured format? If not, are the errors being output on stderr and the regular DTC/CHECK/... messages on stdout? That could help with detecting errors. If we have to parse this ourselves, it would typically be done in Python. We could have a structure like this (not matching the actual format for sending results, but just to give an idea):
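For instance, one entry per checked file (purely illustrative; the field names here are hypothetical):

```python
# Purely illustrative structure: one entry per checked binding file,
# with a status and the messages captured for it.  Field names are
# hypothetical, not the actual submission format.
results = {
    "Documentation/devicetree/bindings/media/renesas,vin.yaml": {
        "status": "PASS",
        "warnings": [],
        "errors": [],
    },
    "Documentation/devicetree/bindings/some,other-binding.yaml": {
        "status": "FAIL",
        "warnings": [],
        "errors": ["..."],  # the parsed error messages would go here
    },
}
```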
We could also keep the error messages associated with each test case and submit those. It's already a supported feature in kernelci-backend, although I don't think it has been used much for tests not run in LAVA labs. But that can be added as a follow-up; having the status is already a great first step. Test case names can use dots.
OK, so it turned out that the errors were being sent to stderr. Using output redirection, I stored them in a file, and this is the result of printing that file:

    /home/iduncan/linux/Documentation/devicetree/bindings/bus/baikal,bt1-apb.example.dt.yaml: example-0: bus@1f059000:reg:0: [0, 520458240, 0, 4096] is too long
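For reference, a minimal sketch of doing that same separation from Python rather than shell redirection (the log file name is arbitrary):

```python
import subprocess

# Run the binding check from a kernel source tree and keep the two
# streams apart: stdout carries the DTC/CHECK/CHKDT progress lines,
# stderr the validation errors (as observed above).
proc = subprocess.run(
    ["make", "dt_binding_check"],
    capture_output=True,
    text=True,
)

with open("dt_errors.log", "w") as log:
    log.write(proc.stderr)  # equivalent to `2> dt_errors.log` in the shell
```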
OK, that's a lot more verbose :) The first step is still to be able to determine whether each file being checked passed the test or not. We need to decide whether warnings count as a failure; it's probably a typical case where they could initially be discarded to focus on actual errors, then included once all the errors are fixed. One option is also to provide a "measurement" value with the number of warnings for each test case, so reporting tools can make use of that independently of the pass/fail status. So, maybe to keep things simple: ignore warnings and determine the pass/fail status based on the presence of error messages?
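A sketch of what a test case entry with such a measurement could look like (field names are assumptions, not the actual kernelci-backend schema):

```python
def to_test_case(name, error_count, warning_count):
    # Hypothetical shape: pass/fail is derived from errors only, while
    # the warning count is attached as a measurement so reporting tools
    # can use it independently of the status.
    return {
        "name": name,
        "status": "FAIL" if error_count else "PASS",
        "measurements": [{"name": "warnings", "value": warning_count}],
    }
```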
We're making good progress; moving to Phase 2, as there were some delays, mostly due to issues with setting up a local development instance.
OK, so planning out what I will do to make it clear whether a file passed or failed: when I see lines such as "DTC fileA" and "CHECK fileA" sequentially in that order, with no mention of fileA in the following line of output, that can be taken as an indicator of a pass. I also see that the very first line begins with "CHKDT", and it seems to indicate a pass since there was no error message following it. I will try to print the pass/fail results using a Python script, and the output will hopefully be in the following format:
OK, sounds good. Just printing the output from a Python script is a good way to check the logic is producing valid pass/fail results. Storing that in JSON later on will be easy. |
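A sketch of that detection logic, assuming log lines shaped like the snippets quoted above (this is illustrative only, not the actual format.py):

```python
import re

def parse_check_log(log_text):
    """Derive a pass/fail status per file from the check output.

    Assumes 'DTC/CHECK/CHKDT <file>' progress lines, and error lines
    that mention the offending file path (illustrative heuristic).
    """
    results = {}
    for line in log_text.splitlines():
        match = re.match(r"\s*(?:CHKDT|CHECK|DTC)\s+(\S+)", line)
        if match:
            # Progress line: assume a pass until an error mentions the file.
            results.setdefault(match.group(1), "PASS")
            continue
        # Error lines start with a path rather than a tool name.  Drop the
        # .yaml suffix before matching so errors against the generated
        # .example.dt.yaml files still match the binding file.
        for path in results:
            if path.rsplit(".", 1)[0] in line:
                results[path] = "FAIL"
    return results
```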
All done with the formatting of the output. GitHub wasn't letting me attach the files to this comment, so I made a new branch on kernelci-core and put them in there. The formatting file puts all of the errors and warnings at the bottom of the output; everything before that is files that passed.
Output from running the file:

    Filename: Documentation/devicetree/bindings/net/nxp,tja11xx.yaml
    CHKDT PASS
    Warnings:
    Errors:
    Warning Count: 12
    Error Count: 11
@isaiahduncan There were some errors which appear to have been parsed correctly by the script, but I don't see any FAIL for that file in the results. Should this not have caused the result to be FAIL?
I just made the change to fix that problem in the Python file and committed it. Also, I replaced the old output in the earlier comment with the new output from running the script.
OK great, now I see this with the same example as in my previous comment:
which is what I was expecting :) I think the next step is to create a test case name using the file name. This is essentially about replacing some characters to fit in the schema for test case names. I'm not entirely sure what a test case name should look like for the example above, but I know that it's better to avoid slashes:

    {
        "Documentation.devicetree.bindings.bus.baikal_bt1-axi_example_dt_yaml.check": "FAIL"
    }

with this mapping: `/` becomes `.`, while `,` and `.` in the file name become `_`. Then, I'm not sure if it's worth keeping the full path, since all the device tree data is stored in Documentation/devicetree/bindings, so it could be shortened to:

    {
        "bus.baikal_bt1-axi_example_dt_yaml.check": "FAIL"
    }

I believe this is probably a good place to start; we might need to adjust things slightly as we start producing more results and submitting them.
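A sketch of that mapping as a helper function (hypothetical; keeping or dropping the directory prefix is just one option):

```python
import posixpath

def to_test_case_name(path, keep_full_path=False):
    # Map e.g. "Documentation/devicetree/bindings/bus/baikal,bt1-axi.example.dt.yaml"
    # to "bus.baikal_bt1-axi_example_dt_yaml.check": slashes become dots,
    # commas and dots in the file name become underscores.
    if not keep_full_path:
        path = posixpath.relpath(path, "Documentation/devicetree/bindings")
    directory, filename = posixpath.split(path)
    filename = filename.replace(",", "_").replace(".", "_")
    parts = directory.split("/") if directory else []
    return ".".join(parts + [filename]) + ".check"
```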
To be precise, all the device tree bindings are in that directory. The device tree files for each platform are in arch/*/boot/dts.
New resulting output snippet:

    "Documentation.devicetree.bindings.hwmon.pmbus.ti_ucd90320_example_dt_yaml.check": "PASS"
Here is another link to the formatting file, format.py: https://github.com/isaiahduncan/kernelci-core/blob/formatDTV/format.py
All the items in the checklist have been completed, so this issue can now be closed.
In order to be able to send the device tree validation results to the backend API, they need to be stored in a structured format and follow the expected schema. Essentially, this means a JSON file with a list of test case names and a status associated with each of them (pass, fail, skip).
It also needs to contain some metadata, such as the kernel revision (tree, branch, git commit) and the version of the validation tool.
As there currently aren't any tests of this kind being run in KernelCI, some changes to the schema may be needed. These changes are outside of the scope of this issue.
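As an illustration only (the field names below are assumptions, since the exact schema may still change as noted above), such a file could look roughly like this when generated from Python:

```python
import json

# Illustrative payload: field names are assumed, not taken from the
# actual kernelci-backend schema.
report = {
    "tree": "mainline",
    "branch": "master",
    "git_commit": "0123abcd",        # placeholder kernel revision
    "dt_validation_version": "v1",   # placeholder tool version
    "test_cases": [
        {"name": "bus.baikal_bt1-apb_example_dt_yaml.check", "status": "FAIL"},
        {"name": "media.renesas_vin_yaml.check", "status": "PASS"},
    ],
}

with open("dt-validation.json", "w") as out:
    json.dump(report, out, indent=2)
```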
Checklist: