
Conversation

@da1910 (Collaborator) commented Feb 23, 2022

With some sed magic, the coverage reports are now actually parsed and uploaded. There is a warning about the <sources> element, since the path is incorrect, but this does not seem to cause problems. I did look at removing the element entirely, since it's optional according to the DTD; however, editing XML with sed is dangerous.

It might be worth farming this out to a bash script and maybe some XSLT, we could also look at getting a switch added to coverage.py to disable the sources element entirely.
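A minimal sketch of what such a script might do, assuming the `<sources>` warning comes from an absolute CI path that SonarCloud can't resolve (the file contents and all paths below are invented for illustration, not taken from this repo):

```shell
# Hypothetical coverage.xml fragment of the kind coverage.py might
# emit on a CI runner; the /home/runner path is an assumption.
cat > coverage.xml <<'EOF'
<?xml version="1.0"?>
<coverage>
  <sources>
    <source>/home/runner/work/repo/src</source>
  </sources>
</coverage>
EOF

# Rewrite the path in place rather than deleting the element, so the
# XML stays well-formed without needing a real XML tool (XSLT, xmlstarlet).
sed -i 's|<source>/home/runner/work/repo/src</source>|<source>src</source>|' coverage.xml

grep '<source>' coverage.xml
```

Anchoring the sed pattern on the full element text keeps the substitution from touching anything else, which is about as safe as sed-on-XML gets; anything more structural really would be better done with XSLT.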

@da1910 da1910 requested a review from Andy-Grigg February 23, 2022 16:33
@Andy-Grigg (Contributor) left a comment


I'm on the fence about this.

Generally speaking, I think it makes sense to scan test code with (almost) the same rigor as you do production code. The failure mode is a bit more indirect, but it's possible that a bug in a test could cause a bug in the production code to be missed; avoiding that testing bug would mean you catch the real bug.

Sonar seems to be doing multiple things though.

For coverage, I think there's benefit in including the test code in the output. Apart from possibly some platform-specific functionality, the test code should generally have 100% coverage. If not, it might be because of a mistake (returning early, not starting with 'test_', etc.).

For general code quality/security though, I agree it should be skipped. All the security issues relate to test code that creates ephemeral servers/objects in secure environments, and so are perfectly safe. This should be ignored.

To summarize, I think ignoring tests is fine from a security perspective, but it would be nice to be able to still get coverage reports for it if possible.
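A hedged sketch of how that split could be expressed in `sonar-project.properties` (the property keys are standard SonarCloud/SonarQube analysis parameters; the directory layout and report path are assumptions, not taken from this repo):

```ini
# Marking tests via sonar.tests analyzes them with test-specific rules
# rather than the full production rule set, while the coverage report
# is still fed to Sonar. Paths below are illustrative only.
sonar.sources=src
sonar.tests=tests
sonar.python.coverage.reportPaths=coverage.xml
```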

@da1910 (Collaborator, Author) commented Feb 24, 2022

> I'm on the fence about this.
>
> Generally speaking, I think it makes sense to scan test code with (almost) the same rigor as you do production code. The failure mode is a bit more indirect, but it's possible that a bug in a test could cause a bug in the production code to be missed; avoiding that testing bug would mean you catch the real bug.
>
> Sonar seems to be doing multiple things though.
>
> For coverage, I think there's benefit in including the test code in the output. Apart from possibly some platform-specific functionality, the test code should generally have 100% coverage. If not, it might be because of a mistake (returning early, not starting with 'test_', etc.).
>
> For general code quality/security though, I agree it should be skipped. All the security issues relate to test code that creates ephemeral servers/objects in secure environments, and so are perfectly safe. This should be ignored.
>
> To summarize, I think ignoring tests is fine from a security perspective, but it would be nice to be able to still get coverage reports for it if possible.

In that case, we can manually mark issues as safe when they're specific to "this test is testing insecure behaviour"; that's not an unreasonable burden, and as you say, there are benefits.

I do still need to work out why it's not reporting test coverage though.
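One possible culprit worth checking: coverage.py only measures the packages listed under `[run] source`, so if only the production package is listed there, the test files never appear in the report at all. A hedged `.coveragerc` sketch, assuming a `src`/`tests` layout (the directory names are assumptions):

```ini
# Illustrative .coveragerc: measuring the tests directory itself so
# its files show up in the XML report that Sonar consumes.
[run]
source =
    src
    tests
```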

@da1910 changed the title from "[CI] Add test exceptions to sonarcloud" to "[CI] Get coverage to report with SonarCloud" Feb 24, 2022
@da1910 da1910 merged commit 53d276d into main Feb 24, 2022
@da1910 da1910 deleted the ci/fix-sonarcloud branch February 24, 2022 14:28