
Add LGTM code quality badges #558

Closed

Conversation


@xcorail xcorail commented Sep 4, 2018

Hi there!

I thought you might be interested in adding these code quality badges to your project. They will indicate a very high code quality to potential users and contributors.
You can also check the alerts discovered by LGTM.

N.B.: I am on the team behind LGTM.com. I'd appreciate your feedback on this initiative, whether you're interested or not, if you find time to drop me a line. Thanks.

@manuel-mauky
Collaborator

Hi,
thanks for your suggestion but I don't really understand how your score is calculated.

I've looked at the alerts and found that 3 out of 4 alerts aren't correct:

[1] tells me that the unit test will always be true. However, this method is not a unit test; it's a helper method in a test class, and the method is perfectly valid.

In [2] it says that two lists aren't accessed. However, the reason for this is described in full detail in the JavaDoc above the lists: the lists are used to keep references to objects to prevent them from being garbage collected unintentionally.
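For illustration, a minimal sketch of that pattern (the class and member names are hypothetical, not taken from the project):

```java
import java.util.ArrayList;
import java.util.List;

public class GcGuardExample {

    // This list is never read. Its only purpose is to hold strong
    // references: the framework the objects are handed to keeps only
    // weak references, so without this list they could be garbage
    // collected unintentionally.
    private final List<Object> strongReferences = new ArrayList<>();

    public void register(Object listener) {
        strongReferences.add(listener);
        // ... pass the listener to an API that stores it weakly ...
    }
}
```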

The only alert that might be correct is [3], where it says that the class overrides hashCode but not equals. However, the equals method is overridden in the super-class, and the hashCode method matches this behaviour. Furthermore, the class is a private inner class that is only used internally by the outer class, so it's not possible for anyone to (mis-)use the class by accident.
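For illustration, a minimal sketch of that constellation (class and field names are hypothetical):

```java
abstract class Key {
    protected final String id;

    Key(String id) {
        this.id = id;
    }

    // equals() is defined once in the super-class for all subclasses.
    @Override
    public boolean equals(Object other) {
        return other instanceof Key && id.equals(((Key) other).id);
    }
}

final class CacheKey extends Key {
    CacheKey(String id) {
        super(id);
    }

    // Flagged as "overrides hashCode but not equals", although the
    // inherited equals() already matches this implementation.
    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```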

I know some parts of our code aren't optimal. For example, the complexity of the FxmlViewLoader is too high and its logic is too hard to understand. I'm aware of this negative example, and every time I visit this class I think about how I can reduce the complexity.
But your tool hasn't found this.

So my impression is that in its current state your tool can find easy code smells but not the really hard problems. And with a false-positive rate of 3 out of 4, I don't know if it really provides any benefit for us.

On the other hand: We have always tried our best to keep our code quality as good as possible, so of course it's harder for tools to find code smells. I've also used tools like SonarQube in the past to check our code, so it's not surprising that your tool doesn't find many problems.
To be fair, I think your tool can be really beneficial for some projects, and maybe my summary is a bit unfair. I'm interested to see how your tool evolves, but at the moment I don't see why it's necessary to use it in our project.


xcorail commented Sep 5, 2018

Hi @lestard

Thanks a lot for this detailed report. Much appreciated!

> I don't really understand how your score is calculated

The code quality grade is an estimate of the relative quality of a project compared to similar reference projects. It is a letter between A+ and E that puts the score in perspective.
You can read more in our documentation, and I also recommend this long but very interesting blog post.

However, the value of LGTM is more in the alerts than in these badges.

> So my impression is that in its current state your tool can find easy code smells but not the really hard problems. And with a false-positive rate of 3 out of 4, I don't know if it really provides any benefit for us.

Our added value compared to similar tools is precisely the ability to perform deeper analyses with fewer false positives, and to fix false positives very quickly when they are reported. Our product also allows you to tweak our analyses to adapt them to your specific context, for example to address your cases 2 and 3.
Allow me to pass your false positive reports to my Java analysis colleagues so that they can check them.

> On the other hand: We have always tried our best to keep our code quality as good as possible, so of course it's harder for tools to find code smells

Indeed, this is what I can see!

> maybe my summary is a bit unfair

Your feedback is very valuable and I thank you for the time you spent on it

> I'm interested to see how your tool evolves

If so, I (or my Java colleagues) will give you an update on the false positives you reported

Thanks again


xcorail commented Sep 6, 2018

Hello @lestard

I missed something about your case 1: LGTM tells you "Test is always true", but when you mouse over the alert (the giant red box), it highlights the particular problem, and you see that the "Test" it refers to is not the method itself but the expression DELAY_IN_MS > 0, which happens to be a test, and which is always true.
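For illustration, a minimal sketch of such a situation (the constant's value and the surrounding method are hypothetical):

```java
public class DelayHelper {

    private static final long DELAY_IN_MS = 100;

    static void delay() throws InterruptedException {
        // DELAY_IN_MS is a compile-time constant, so this condition is
        // always true; the alert highlights this expression, not the
        // enclosing method.
        if (DELAY_IN_MS > 0) {
            Thread.sleep(DELAY_IN_MS);
        }
    }
}
```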

@manuel-mauky
Collaborator

Hi @xcorail
thank you for the correction. You are right. I misread the alert. Now the alert makes a lot more sense to me.


xcorail commented Sep 6, 2018

It's more of a UX issue on our side, one that we are willing to solve. We should write the highlighted part directly into the alert, so that it reads "Test DELAY_IN_MS > 0 is always true".


xcorail commented Sep 21, 2018

Hey @lestard

Just wanted to let you know that for six weeks, for each LGTM alert fixed, we'll make a donation to the WWF. You can also win a free ticket to GitHub Universe, with travel and accommodation. Details here: https://competitions.lgtm.com/ghu-2018

@xcorail xcorail closed this Oct 14, 2019