
Question: Calculation of Comment Ratio #83

Closed
wienans opened this issue Jan 27, 2024 · 2 comments

Comments

@wienans
Contributor

wienans commented Jan 27, 2024

Hey, I tried to double-check the results of the tool with radon and stumbled across some huge differences in the comment ratio.
I checked the source code and found this:

def parse_tokens(self, language, tokens):
    super().parse_tokens(language, [])
    _n = MetricBaseComments._needles
    if language in MetricBaseComments._specific:
        _n += MetricBaseComments._specific[language]  # pragma: no cover - bug in pytest-cov
    for x in tokens:
        self.__overall += len(str(x[1]))
        if str(x[0]) in _n:
            self.__comments += len(str(x[1]))  # pragma: no cover - bug in pytest-cov

I wanted to ask: is my understanding correct that you compare the literal character length of the comments against the literal character length of the whole program?

@priv-kweihmann
Owner

Yes, that is correct. Other tools might base the ratio on lines or something else, but for me the essence is the characters belonging to comments measured against the characters of the whole file.

The ratio should be correct, but maybe not comparable to other tools.

In a nutshell, that applies to all of the metrics: they should only be used to compare two or more runs of the tool against each other to decide what is okay and what is not.
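To illustrate the distinction for other readers: the following sketch (not the project's actual code; `char_comment_ratio` and `line_comment_ratio` are hypothetical helpers) computes a character-based comment ratio with Python's tokenizer, as described above, and contrasts it with a crude line-based ratio like the one radon-style tools report. The two can diverge noticeably on the same source.

```python
import io
import tokenize

def char_comment_ratio(source: str) -> float:
    """Characters inside comment tokens divided by characters of all tokens."""
    comment_chars = 0
    total_chars = 0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        total_chars += len(tok.string)
        if tok.type == tokenize.COMMENT:
            comment_chars += len(tok.string)
    return comment_chars / total_chars if total_chars else 0.0

def line_comment_ratio(source: str) -> float:
    """Non-blank lines containing a comment divided by all non-blank lines."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    comment_lines = [ln for ln in lines if "#" in ln]  # crude, for illustration
    return len(comment_lines) / len(lines) if lines else 0.0

sample = "x = 1  # short comment\ny = x + 1\n"
print(char_comment_ratio(sample))  # weights the comment by its length
print(line_comment_ratio(sample))  # weights it as one line out of two
```

A long comment on a short line inflates the character-based ratio relative to the line-based one, which is why the two approaches are not directly comparable.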

@wienans
Contributor Author

wienans commented Jan 27, 2024

Okay, thanks for your answer. Yeah, that's also an understandable decision.

Sure, I also only expected it to be compared with itself. But since the tool calculates the maintainability index, and the Microsoft variant, for example, provides thresholds, I wanted to roughly check whether the calculations of the sub-metrics overlap with what I would expect. And since I actually found it a good idea of the SEI index to increase the score via the comment ratio, I bumped into the comments calculation.

@wienans wienans closed this as completed Jan 27, 2024