Add LGTM code quality badges #558
Conversation
Hi, I've looked at the alerts and found that 3 out of 4 aren't correct:

[1] tells me that the unit test will always be true. However, this method is not a unit test. It's a helper method in a test class, and the method is perfectly valid.

[2] says that two lists aren't accessed. However, the JavaDoc above the lists describes the reason for this in full detail: the lists are used to keep references to objects to prevent them from being garbage collected unintentionally.

The only alert that might be correct is [3], where it says that the class overwrites

I know some parts of our code aren't optimal. For example, the complexity of the FxmlViewLoader is too high, and its logic is too hard to understand. I'm aware of this negative example, and every time I visit this class I think about how I can reduce its complexity.

So at the moment my impression is that, in its current state, your tool can find easy code smells but not the really hard problems. And with a false-positive rate of 3 out of 4, I don't know if it really provides any benefit for us. On the other hand, we have always tried our best to keep our code quality as good as possible, so of course it's harder for tools to find code smells. I've also used tools like SonarQube in the past to check our code, so it's not surprising that your tool doesn't find many problems.
Hi @lestard Thanks a lot for this detailed report. Much appreciated!
The code quality grade is an estimate of the relative quality of a project compared to similar reference projects. It is a letter between A+ and E that puts the score in perspective. However, the value of LGTM lies more in the alerts than in these badges.
Our added value compared to other similar tools is precisely the ability to perform deeper analyses with fewer false positives, and to fix false positives very quickly when they are reported. Our product also allows you to tweak our analyses to adapt them to your specific context, for example to address your cases 2 and 3.
Indeed, this is what I can see!
Your feedback is very valuable, and I thank you for the time you spent on it.
If so, I (or my Java colleagues) will give you an update on the false positives you reported. Thanks again
Hello @lestard, I missed something about your case 1: LGTM tells you
Hi @xcorail |
It's more a UX issue on our side, one that we are willing to solve. We should be able to write the part we are now highlighting directly in the alert, so that it reads
Hey @lestard, just wanted to let you know that for 6 weeks, for each LGTM alert fixed, we'll make a donation to WWF. You can also win a free ticket to GitHub Universe, with travel and accommodation. Details here: https://competitions.lgtm.com/ghu-2018
Hi there!
I thought you might be interested in adding these code quality badges to your project. They will indicate a very high code quality to potential users and contributors.
You can also check the alerts discovered by LGTM.
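For context, LGTM badges of this era were typically added to a project's README as shields.io image links. A minimal sketch of what such an addition might look like, using placeholder `OWNER/REPO` values since the thread does not show the actual badge URLs proposed for this project:

```markdown
[![Total alerts](https://img.shields.io/lgtm/alerts/g/OWNER/REPO.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/OWNER/REPO/alerts/)
[![Language grade: Java](https://img.shields.io/lgtm/grade/java/g/OWNER/REPO.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/OWNER/REPO/context:java)
```

The first badge shows the total number of open alerts; the second shows the per-language grade (A+ to E) discussed below.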
N.B.: I am on the team behind LGTM.com. I'd appreciate your feedback on this initiative, whether you're interested or not, if you find time to drop me a line. Thanks.