
Scapegoat support #8

Closed
mwz opened this issue Feb 10, 2018 · 40 comments


@mwz mwz commented Feb 10, 2018

Here is a task list with high-level items so we can keep track of implementation progress:

Here is an example of an existing Scapegoat plugin: https://github.com/arthepsy/sonar-scala-extra.

@mwz mwz added the scapegoat label Feb 18, 2018
@BalmungSan BalmungSan commented Feb 19, 2018

Hi,

Did you know that Scapegoat can generate a "scalastyle" report?
I've been looking, but I can't find any further documentation about it.

Maybe it's possible to integrate the results of Scapegoat with the Scalastyle sensor, which would probably be easier than updating the old Scapegoat Sonar plugin that you are using.

@mwz mwz commented Feb 19, 2018

That's interesting and definitely worth looking into. I've never seen this setting before, but if that's really the case, then it would save us a lot of effort.

My idea originally was to integrate the sonar-scala-extra plugin and set up two additional quality profiles - standalone Scapegoat and Scalastyle + Scapegoat for those who would like to use rules from both. I'm not sure if that would be possible though if we were to use scapegoat to generate a Scalastyle report.

@mwz mwz added the enhancement label Feb 24, 2018
@BalmungSan BalmungSan commented Mar 23, 2018

By now I'm thinking the best option is to address #35 first; that will give us confidence about what to do with this.
Probably the best approach will be to create another sensor (com.mwz.sonar.scala.scapegoat.ScapegoatSensor) using the new SonarQube API; later we can discuss how the sensor will work.

@BalmungSan BalmungSan commented Apr 9, 2018

After looking at the SonarQube documentation and the source code of the scapegoat plugin and the scalastyle plugin, I realized that to create the new Scapegoat plugin we need to define two major components: a rules repository and a sensor.

Rules Repository
This will probably be the most difficult part: we need to create a repository with all Scapegoat rules.
Currently, the scapegoat plugin generates an XML file with all rules by parsing the README of the Scapegoat project. This approach (IMHO) is completely unsafe and difficult to maintain.

I propose two alternatives:

  1. Create each rule manually, maybe as unique instances (objects) of a case class. Pros: simple and completely type-safe. Cons: tedious, long, error-prone, tedious, a lot of work every time we need to update the rules, tedious... did I already say tedious?

  2. Use macros to do the same automatically: instead of reading all rules from an XML file, we can use compile-time reflection to get all rules (Inspections) from the scapegoat jar (adding it as a compile dependency). Pros: powerful, and it's a one-time job - any time we need to update the rules to a new Scapegoat version, we only need to bump the dependency version and re-compile. Cons: much more difficult than option 1 (especially for me, because I have zero experience with Scala macros).

@mwz, what do you think? Do you have any other ideas?

Sensor
This is easy, as all the hard work is already done by the Scapegoat scalac plugin; we only need to parse the scapegoat.xml report to extract all warnings and create an issue in the source code for every warning found.
The only difficulty I found is that to create a NewIssue we need to associate the issue with a RuleKey. We can get that rule key from an ActiveRule, and we can get the active rule from all ActiveRules (which are returned to us by the SensorContext) using the findByInternalKey(String repository, String internalKey) method.
As you can see, the method requires the repository key, which will be some Scala constant like ScapegoatRepositoryKey, and the rule's internal key. We need to find a way to map a Scapegoat warning in the XML report to that internal key. And that will be all for the sensor.
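To illustrate, the mapping could be as simple as deriving the internal key from the inspection's fully qualified class name as it appears in the report (a hypothetical sketch; whether the report actually stores the class name, and what exactly we use as the internal key, are still to be decided):

```scala
// Derive a candidate internal key from an inspection's fully qualified
// class name (assumed here to be what the scapegoat.xml report contains).
def inspectionToInternalKey(fullyQualifiedName: String): String =
  fullyQualifiedName.split('.').last

// e.g. "com.sksamuel.scapegoat.inspections.collections.ArrayEquals" -> "ArrayEquals"
```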

Thanks for reading; I'll be waiting for your thoughts. 👍

@mwz mwz commented Apr 9, 2018

Just a few thoughts on generating the rules while I have a minute and I'll probably have a think about the rest tomorrow.

It might be worth investigating writing an sbt task which downloads the scapegoat jar, uses the java.util.jar API to collect the entries that we're interested in, and filters out the rules which we blacklisted because e.g. some rules are not implemented yet, like FruitlessMatch or NoClone.

I've never done this before but here is a reference - https://stackoverflow.com/a/3429366/3551902.
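For reference, listing class entries with java.util.jar could look roughly like this (a self-contained sketch: a real task would download the scapegoat jar, whereas here we write a tiny jar ourselves so the example runs on its own, and the entry name is made up for illustration):

```scala
import java.io.{File, FileOutputStream}
import java.util.jar.{JarEntry, JarFile, JarOutputStream}
import scala.collection.JavaConverters._

// Write a tiny jar with one fake class entry, standing in for the
// downloaded scapegoat jar.
val jar = File.createTempFile("scapegoat-demo", ".jar")
val out = new JarOutputStream(new FileOutputStream(jar))
out.putNextEntry(new JarEntry("com/sksamuel/scapegoat/inspections/ArrayEquals.class"))
out.closeEntry()
out.close()

// Collect the names of all .class entries in the jar.
val classEntries = new JarFile(jar).entries.asScala
  .map(_.getName)
  .filter(_.endsWith(".class"))
  .toList
```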

I'd probably also look into fast-classpath-scanner or a similar project which allows you to look up subclasses at runtime, e.g. Finding subclasses or superclasses - if this approach actually works and you can just get a list of subclasses of the Inspection class, this might turn out to be the cleanest solution.

@BalmungSan BalmungSan commented Apr 9, 2018

Ok, I'll take a look at those links tomorrow.

@BalmungSan BalmungSan commented Apr 10, 2018

I've been playing for a while with Scala macros and with FastClasspathScanner, and my conclusion is that the easiest way to go is to create a custom sbt task which uses FastClasspathScanner to get all sub-classes of the Inspection abstract class (you can see a little PoC here), and uses that information to generate Scala code.

Now, for managing the generated code, I see two options:

  1. The generated code becomes part of the project source code (e.g. in "src/main/scala/com/mwz/sonar/scala/scapegoat/autogenerated/"), and we only run the sbt task whenever we want to update the generated code, e.g. when updating the Scapegoat version.
  2. The generated code goes to the "src_managed" folder, and we configure the compile task to always execute the generation task first. Additionally, we could add some checks so the generation task only runs if there have been changes (like the normal compiler).
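For illustration, option 2 could look roughly like this in the build (a hypothetical sketch only: the task name and the generation logic are placeholders, not actual project code):

```scala
// build.sbt - sketch of option 2: generate into src_managed before compiling.
lazy val generateScapegoatInspections =
  taskKey[Seq[File]]("Generates the Scapegoat inspections file into src_managed")

generateScapegoatInspections := {
  val file = (sourceManaged in Compile).value /
    "com" / "mwz" / "sonar" / "scala" / "scapegoat" / "ScapegoatInspections.scala"
  // A real implementation would build this source from the scanned inspections.
  IO.write(file, "// auto-generated, do not edit")
  Seq(file)
}

// Hook the task into compilation so the file is regenerated before every compile.
sourceGenerators in Compile += generateScapegoatInspections.taskValue
```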

@mwz what do you think?

@mwz mwz commented Apr 10, 2018

That's nice @BalmungSan - FastClasspathScanner stuff looks promising!

Thanks for exploring some of the options. Having given this some thought, I'm totally pro automating the whole process - ideally relying on the compilation as much as possible, without having to run any manual sbt tasks or generate any source code which you then need to compile separately.

I think that scalameta might be just what we're looking for given the rather poor state of macros in Scala (with the old-style macros being really disgusting and getting phased out and an unknown future of the new scala.macros). With scalameta we should be able to generate all the code that we need at compile-time without any sources. Unfortunately scalameta paradise and the inline macros are no longer under development, but I found them working well for me in the past and they're definitely way easier to work with and understand than the old macros. The paradise docs have been removed from the online scalameta documentation 😢, but you can generate those yourself from the tutorial repo for a reference and examples on how to use the inline macro annotations. I hope that paradise still works ok with the latest scalameta. 🤞

This sounds to me like the best way we can approach this, although if you have any other ideas I'm happy to discuss them.

@BalmungSan BalmungSan commented Apr 10, 2018

@mwz, well, I was looking into macros to get the sub-classes of the Inspection class, but I couldn't make a simple demo of that, so that's why I ended up proposing FastClasspathScanner.

Regarding the code generation, I was thinking that we could simply write the files using the standard Scala/Java IO APIs, using interpolated strings to fill in a template with the information we extract from Scapegoat, and then compile those files together with the plugin sources. I think that is simple and useful enough, but of course it could get out of hand once we start to implement it.
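For example, the template-filling could be as simple as this (a hypothetical sketch; the inspection data is hard-coded here, whereas the real task would extract it from the scapegoat jar):

```scala
// Pairs of (name, description) that the generation task would have
// extracted from scapegoat (hard-coded here for illustration).
val inspections = List(
  ("ArrayEquals", "Checks for comparison of arrays using =="),
  ("NoClone", "Checks for overriding of clone")
)

// Fill in the template with string interpolation.
val generatedSource = inspections
  .map { case (name, description) =>
    s"""  ScapegoatInspection(name = "$name", description = "$description")"""
  }
  .mkString("val AllScapegoatInspections = List(\n", ",\n", "\n)")
```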

I have no experience with scala-meta either, but I've looked at it a few times. Correct me if I am wrong, but to use scala-meta we would end up with those templates in Scala files that modify their ASTs at compile time, given the information extracted from Scapegoat. The problem here is that those files would have to be compiled first, in a sub-module, and then the main module would be compiled using this sub-module as a dependency.

So, I don't know which approach will be simpler and more maintainable.

PS: I hope I made clear what my thoughts are 😄.

@BalmungSan BalmungSan commented Apr 10, 2018

Just to clarify what I had in mind, I'm thinking of something like this:

 //this file is automatically generated, don't modify it.
 package mwz.sonar.scala.scapegoat.inspections

 /** Represents a Scapegoat [[Inspection]] */
 final case class ScapegoatInspection(id: Int, name: String, description: String, defaultLevel: Level)

 /** All Scapegoat inspections available */
 val AllScapegoatInspections = List(
  ScapegoatInspection(id = 0, name = "ArrayEquals", description = "Checks for ...", defaultLevel = Level.Info),
   ...
 )

And that's all. We would then import this file in the source code, and we only need to iterate over the AllScapegoatInspections list to create the Sonar repository and so on. That's why I think we don't need something as robust as scala-meta.
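For reference, consuming the generated list could be as simple as this (a sketch with a simplified stand-in for the generated case class):

```scala
// Simplified stand-in for the generated case class and list.
final case class ScapegoatInspection(id: Int, name: String, description: String)

val AllScapegoatInspections = List(
  ScapegoatInspection(0, "ArrayEquals", "Checks for comparison of arrays using ==")
)

// The consuming code can index the list however it needs, e.g. by name.
val inspectionsByName: Map[String, ScapegoatInspection] =
  AllScapegoatInspections.map(i => i.name -> i).toMap
```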

I'm open to discussing this architectural decision.

@mwz mwz commented Apr 11, 2018

I see what you mean - perhaps that's the easiest and cleanest solution, and as a plus it avoids using macros, which is always good unless there is no other way and you absolutely have to use them! :)

So in that case, if you're thinking about generating the sources using an sbt task, we should definitely check this file into git once generated initially, and make sure that we re-run the task manually as and when required, i.e. when we upgrade the Scapegoat dependency. It doesn't seem to get updated too often, so I think we can get away with doing that - my scalameta proposal was probably a bit too heavyweight.

We'll still probably want some sort of a blacklist of inspections so that we can skip those which are either not working or are simply not implemented yet like some of those that I mentioned earlier.

Regarding the collection used for the repository itself, a Map indexed by the inspection name would probably be a better fit than a List, as we'll need to do a lookup for each issue from the Scapegoat report, so it makes sense to me to use a Map in this case.

@BalmungSan BalmungSan commented Apr 11, 2018

"Regarding the collection used for the repository itself, a Map indexed by the inspection name would probably be a better fit."

Oh, yeah, I just wanted to show a draft of the generated file; we should discuss in more detail what it should contain. In any case, I was thinking of a Map too, keyed by name, because we need to retrieve the rule by name. But I was thinking of building that map in the source code using the list from the generated file.

Update: oh, sorry, I misread your comment the first time and understood that the map was keyed by the id of the rule.

"We'll still probably want some sort of a blacklist of inspections so that we can skip those which are either not working or are simply not implemented yet like some of those that I mentioned earlier."

Maybe the blacklist idea would be useful, but we need to review it carefully; for example, the two rules you mentioned earlier do not extend the Inspection class yet, so the scanner will omit them anyway. But I'm aware that we may need to blacklist some inspections in the future, so implementing the mechanism now would be nice.

"we should make sure that we run this manually as and when required (...)"

Well, I was looking into how to generate the files and I found this; as you can see there, it is pretty simple to add a task that is executed automatically before compile, so we can leave the file out of the repo to avoid any human error (like my dyslexia haha 😋). Additionally, we could add some caching mechanism to avoid regenerating the file if there were no changes.

Conclusion:
I think we are reaching a maturity level in terms of the general idea of this, so I'm proposing the following:

First, I'll continue the PoC I showed you earlier, adding the sbt task that generates the files in the "src_managed" folder, plus some code that uses the generated files.
The aim of the PoC is that at the end I should be able to generate an executable JAR (using sbt-assembly) that contains both the source code and the generated code, but not the Scapegoat sources (we only need them for compiling).

After that, we should draft the high-level tasks to implement this, so instead of having another massive PR (which we saw is not the best idea), we will have many relatively small PRs. Maybe we should create another branch for this development, make the PRs against that branch, and only merge it into master when it is production-ready.

What do you think?
If you like the idea of the PoC, I'll do it in the next few days.

@mwz mwz commented Apr 13, 2018

But I'm aware that we could need to blacklist some inspections in the future, so implementing the mechanism now would be nice.

Let's see if there is anything that we need to blacklist from the current version of Scapegoat, but if there isn't then we can probably reduce the scope for now and come back to this and implement it when we actually need it.

+1 for generating the managed sources - that's probably a better way than I suggested because it automates the whole process.

First, I'll continue the PoC I showed you earlier, adding the sbt task that generates the files in the "src_managed" folder, plus some code that uses the generated files.

Feel free to do this directly on a branch in this project as it might save you some time later when you want to port the code.

After that, we should draft the high-level tasks to implement this, so instead of having another massive PR (which we saw is not the best idea), we will have many relatively small PRs.

Yeah, that's definitely a good idea.

Maybe we should create another branch for this development, make the PRs against that branch, and only merge it into master when it is production-ready.

I'd be quite happy to integrate those small PRs into master as long as we keep the new sensor behind a feature flag so it's switched off in future releases until we're ready to release it.

@BalmungSan BalmungSan commented Apr 13, 2018

I really like to do PoCs in empty projects when I'm learning something new (as is the case here); integrating it here later will be just copy-pasting with minor changes, so not a big deal.

Regarding the feature flag, I think it can be done here; then, if we don't add the Scapegoat plugin there until it is ready, everything should work without any problem (I hope so).

So, for now I'll be working on the PoC; I will ping you when it is ready.

Additionally, I think we should start discussing the high-level design for this: what the initial set of tasks will be (and maybe open them as issues?), what the generated file should contain, where the map from name to id should be defined, etc. I don't know if you have any other concerns?

@mwz mwz commented Apr 13, 2018

So, for now I'll be working on the PoC; I will ping you when it is ready.

That's awesome, thanks!

Additionally, I think we should start discussing the high-level design for this: what the initial set of tasks will be (and maybe open them as issues?), what the generated file should contain, where the map from name to id should be defined, etc. I don't know if you have any other concerns?

Yeah, we can come up with a task list and I could add it to the description of this issue - that way we can keep track of the work.

@mwz mwz changed the title Integrate Scapegoat plugin from arthepsy/sonar-scala-extra Scapegoat support Apr 13, 2018
@BalmungSan BalmungSan commented Apr 13, 2018

Ok, I have the following task list (ordered) in mind; please modify it as you see fit:

  1. Design the generated file structure.
  2. Implement the file generator.
  3. Design & Implement the Scapegoat Rules Repository (Including unit-tests).
  4. Design & Implement the Scapegoat Sensor (Including unit-tests).
  5. Test the whole plugin in runtime.
  6. Address any issue the plugin identifies over itself.
  7. Release.

Extras: I consider these tasks useful, but not a priority for a first release.

  • Add the blacklist mechanism (unless we find something we need to blacklist before release).
  • Add a caching mechanism to avoid regenerating the file unless necessary.
  • Add some tests to the generated file.
  • Add a Scapegoat Quality Profile.
  • Add a Scalastyle+Scapegoat Quality Profile.
@mwz mwz commented Apr 13, 2018

Hi @BalmungSan, I've slightly tweaked your list and added it to the description - please let me know if I missed anything.

I think that it would be good to provide a default Scapegoat quality profile, otherwise, the rules won't appear in Sonar as active which might make the initial set-up a bit more difficult for some of the inexperienced Sonar users.

Ideally, in addition to the default Scapegoat quality profile, we should also provide Scalastyle + Scapegoat and our recommended profile, but as you mentioned, those are probably not necessary for the first release and we can implement those afterwards.

@BalmungSan BalmungSan commented Apr 14, 2018

Hi @mwz, everything looks perfect to me.
I suppose you're including the testing process in the release step.

@mwz mwz commented Apr 15, 2018

Yes, unit tests and manual e2e tests obviously go without saying :)

@BalmungSan BalmungSan commented Apr 17, 2018

Hello @mwz, I'm pleased to announce that the PoC was a complete success.

Now, before proceeding to implement the generation task in this project, and making the respective PR, I would like to discuss some things related to the implementation.

  1. Should I add the task dependencies (Scapegoat and fast-classpath-scanner) in the project/plugins.sbt file or in the project/build.sbt file?
  2. Should I add the code of the generation task in the build.sbt file or in a separate file like project/Generator.scala?
  3. What should be the package of the generated file? I'm thinking of com.mwz.sonar.scala.scapegoat.inspections.
  4. What should be the structure of the generated file? I still think the one I proposed last time is good enough, and that we should build the map in the project source code.
@mwz mwz commented Apr 17, 2018

That's great news @BalmungSan!

  1. Should I add the task dependencies (Scapegoat and fast-classpath-scanner) in the project/plugins.sbt file or in the project/build.sbt file?

Both of those are library dependencies, so project/build.sbt is probably a much better place than project/plugins.sbt.

  2. Should I add the code of the generation task in the build.sbt file or in a separate file like project/Generator.scala?

I'd definitely extract your task definition to a separate file, preferably containing Scapegoat in the name so we know it's related to Scapegoat.

  3. What should be the package of the generated file? I'm thinking of com.mwz.sonar.scala.scapegoat.inspections.

Yeah, that sounds good 👍

  4. What should be the structure of the generated file? I still think the one I proposed last time is good enough, and that we should build the map in the project source code.

I think the structure which you suggested previously works fine, although you might need to wrap the map in a companion object as vals are not allowed at the top-level.
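For instance, the generated file could wrap everything in an object along these lines (a rough sketch; the names are illustrative):

```scala
// Top-level vals are not allowed in Scala 2, so the generated values
// live inside an object.
object ScapegoatInspections {
  final case class ScapegoatInspection(id: Int, name: String, description: String)

  val AllScapegoatInspections: List[ScapegoatInspection] = List(
    ScapegoatInspection(0, "ArrayEquals", "Checks for comparison of arrays using ==")
  )

  // By-name lookup built from the generated list.
  val InspectionsByName: Map[String, ScapegoatInspection] =
    AllScapegoatInspections.map(i => i.name -> i).toMap
}
```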

@BalmungSan BalmungSan commented Apr 17, 2018

I think the structure which you suggested previously works fine, although you might need to wrap the map in a companion object as vals are not allowed at the top-level.

Oh yes, I forgot that little detail 👍

I'll make the PR in the next few hours.

@BalmungSan BalmungSan commented May 12, 2018

Hello @mwz, sorry for taking so long, but I finally finished my little investigation for addressing the second task (Scapegoat Rules Repository).
Following is a summary of what I found, and some questions I have about implementation details.

To add the Scapegoat rules repository to the plugin, we need to create a class that extends the RulesDefinition interface and overrides the define method. In this method we will create all rules of the repository using the NewRule builder class.
As you can see, this class provides a lot of setters to specify the properties of the rules; some of them are very straightforward, but I have a couple of doubts.

  • For the rules' names, should they have some prefix or suffix to indicate that they come from Scapegoat? E.g. Scapegoat-ArrayEquals.
  • Does any Scapegoat rule have params? I think I remember not, but I'm not sure.
  • Is any Scapegoat rule a template? I'm almost sure not.
  • Should any rule be activated by default? Which ones?
    Should all of them? Should none of them?
  • Which RuleType should we use for each rule? The default value is CODE_SMELL; are you ok with that?
  • Do we have a way to provide the debt remediation time? Or are we going to ignore it, at least for now?

Additionally, I have two more questions about the implementation of the repository in general.
The first one is regarding the unit tests; I can only think of the following tests:

  • Check that the properties of the repository are ok (name, key, language).
  • Check that the number of rules in the repository is the same as the number of Scapegoat inspections.
  • Take some Rule/Inspection and check that its properties are ok (name, severity, type, etc).

If you have ideas for more or better tests, I will be happy to hear them.

The second one is about the activation of the repository. IMHO we should leave it deactivated in production for now (I don't see any value in displaying rules that don't do anything yet in the web interface). However, we would need to activate it for manual testing. To make that easier, I propose to add the code that activates the repository but keep it commented out in production. Nevertheless, I will leave this decision up to you.

Edit
Most of these questions could be answered while reviewing #59, and I would prefer that. Thanks in advance for taking the time to review and answer all this.

@BalmungSan BalmungSan mentioned this issue May 13, 2018
5 of 5 tasks complete
@mwz mwz commented May 13, 2018

Hi @BalmungSan, thanks for doing the research and capturing the outcome along with all of your questions. I hope that you don't mind if I add quick comments here while they are fresh in my head before I address your latest PR?

For the rules' names, should they have some prefix or suffix to indicate that they come from Scapegoat? E.g. Scapegoat-ArrayEquals.

We should come up with a naming convention for both new versions of scalastyle and scapegoat rules which doesn't collide with the existing names, so maybe something like this: scala:scalastyle:RuleName and scala:scapegoat:RuleName?

Does any Scapegoat rule have params? I think I remember not, but I'm not sure.
Is any Scapegoat rule a template? I'm almost sure not.

Not as far as I'm aware.

Should any rule be activated by default? Which ones?

I would be tempted to activate all of the rules by default so it's easier to get started for users who set it up the first time.
(When we look into the Quality Profiles after this, I think it would be good to create separate quality profiles for scalastyle, scapegoat, scalastyle + scapegoat and a profile with rules which are recommended by us, with some rules potentially disabled, based on what we went through a little while ago when we discussed which rules we find useful in our projects.)

Which RuleType should we use for each rule? The default value is CODE_SMELL; are you ok with that?

Yes, let's start with that and if we discover that something should be labeled differently we can always tweak it later.

Do we have a way to provide the debt remediation time? Or are we going to ignore it, at least for now?

I guess the way to go about it would be to provide a guesstimate on how long it might take to fix a particular issue, but this isn't that important from my point of view and I wouldn't worry about it for now - we can come back to this in the future.

Additionally, I have two more questions about the implementation of the repository in general.
The first one is regarding the unit tests; I can only think of the following tests:

  • Check that the properties of the repository are ok (name, key, language).

👍

  • Check that the number of rules in the repository is the same as the number of Scapegoat inspections.

👍

  • Take some Rule/Inspection and check that its properties are ok (name, severity, type, etc).

👍

Another thing you can check is that there are no rules without a name, etc.

The second one is about the activation of the repository. IMHO we should leave it deactivated in production for now (I don't see any value in displaying rules that don't do anything yet in the web interface). However, we would need to activate it for manual testing. To make that easier, I propose to add the code that activates the repository but keep it commented out in production. Nevertheless, I will leave this decision up to you.

Yeah, I agree - a feature flag would be a good way to go about this. I'd put this behind sonar.scala.scapegoat.enable and set it to false by default. I think we could enable it by default when we release it, but we would probably still want to keep the flag around to give everyone the option to disable it if they choose so.

@BalmungSan BalmungSan commented May 14, 2018

" I hope that you don't mind if I add quick comments here while they are fresh in my head before I address your latest PR?"

Sure, no problem 👍

"scala:scalastyle:RuleName and scala:scapegoat:RuleName?"

Yeah, absolutely! They provide good scoping and make it easier to search for a specific rule.

"I would be tempted to activate all of the rules by default so it's easier to get started for users who set it up the first time."

Oh ok, I thought you would prefer the opposite. But well, that was the idea of asking questions instead of making assumptions.

"I guess the way to go about it would be to provide a guesstimate on how long it might take to fix a particular issue, but this isn't that important from my point of view and I wouldn't worry about it for now - we can come back to this in the future."

Yeah, I totally agree that it should be done later in a separate issue.

"Another thing you can check is if there are no rules without a name, etc."

Ok, I will add a few more tests; later, when you review the PR, you can give me more specific feedback about the tests.

"Yeah, I agree - a feature flag would be a good way to go about this. I'd put this behind sonar.scala.scapegoat.enable and set it to false by default. I think we could enable it by default when we release it, but we would probably still want to keep the flag around to give everyone the option to disable it if they choose so."

Uhm... I see what you mean, and it is absolutely worth it. But what I can't see is how it will work without needing a recompilation of the plugin (which, from an end-user's perspective, wouldn't be very convenient). Also, it should enable/disable the entire Scapegoat module (it obviously doesn't make any sense to activate the rules repository but deactivate the sensor, for example). And if we add such functionality, it should be available for each module of this plugin (ScalaSensor, ScoverageSensor, ScalastyleSensor & ScapegoatSensor).
So, IMHO, this feature should be addressed in its own issue, and probably later (at least once this module is ready).

I'll push another update to the PR with your suggestions tomorrow morning 👍.

@BalmungSan BalmungSan commented Jun 10, 2018

Hello @mwz, continuing with task 3 - the Scapegoat quality profile. After a brief investigation I can conclude it is a pretty simple and straightforward task.

We only need to create a class that extends the BuiltInQualityProfilesDefinition interface and overrides the define method; there, using the Context, we activate the rules we want to be present in the NewProfile.

Just a few questions:

  • Should the profile contain all Scapegoat rules, or are we planning to include/exclude some? (In the meantime I will be working on this assuming all rules will be present.)
  • Are we planning to change/override the severity of any rule, or are we using the defaults? (For the moment I will assume that we're using the defaults.)

That's all, I will be pushing a first draft of the PR in a few hours. 👍

@BalmungSan BalmungSan commented Jun 16, 2018

Hello @mwz, now, to almost finish this, I'll proceed with a first draft of the sensor.
The idea is to have something working to review and test; later we can tune any details before merging.

As already mentioned here, the basic functioning of the sensor is somewhat simple.
However, there are some special cases that are worth mentioning:

  • A rule that is not activated in the quality profile used in the scan, but that is present in the report. Probably just logging an error/warning would be enough.
  • A rule that has a different severity in the report than in the quality profile. I'm thinking of using the one in the report, but also logging an info/warning.
  • A rule that is activated in the quality profile, but is not present in the report. Unfortunately, we can't do anything about it.

Do you have any other situations to consider?

PS: This time I will first open a PR with only the implementation, and later I will push some tests. The reason for this is that I will need a little more time to finish a robust set of tests, and I don't want to postpone your code review and any manual testing of the sensor for so long.

Contributor
@BalmungSan BalmungSan commented Jun 17, 2018

By the way, while implementing the Scapegoat Sensor, I found this. This may be the simplest way to add a runtime feature flag to the modules of the plugin.
The bad news is that it only works for Sensors, thus the Rules Repositories and Quality Profiles would still be present in the web interface.

Owner Author
@mwz mwz commented Jun 17, 2018

Hi @BalmungSan,
Thanks, I really appreciate you looking into this. I've not had much spare time recently, but hopefully I should find some more time soon to contribute a bit more.

A rule that is not activated in the Quality Profile used in the scan, but that is present in the report. Probably just logging an Error/Warning would be enough.

Maybe we should log a warning only for those rules which don't exist in the profile? This would be a good indication that either the user needs to upgrade sonar-scala or we need to update to the latest version of Scapegoat.
I'd probably ignore the deactivated rules - we don't want to warn people about those because most likely someone made a decision to deactivate the rules and we should respect that.

A rule that has a different severity in the report than in the Quality Profile. I'm thinking of using the one in the report but also logging an Info/Warning.

I think that your SonarQube configuration should be driving the rule severity. If you configured your profile and changed the severity of some rules, you probably wouldn't want it to be overwritten by the default scapegoat severity.

A rule that is activated in the quality profile, but is not present in the report. Unfortunately we can't do anything about it.

Yes I agree - that's correct because your report might not contain a particular issue or might not even have any issues, so there is no reason to log anything in that case.

Do you have any other situations to consider?

No, nothing else comes to my mind at this moment.

PS: This time I will first open a PR with only the implementation, and later I will push some tests. The reason for this is that I will need a little more time to finish a robust set of tests, and I don't want to postpone your code review and any manual testing of the sensor for too long.

Sure, whatever works best for you 👍

By the way, while implementing the Scapegoat Sensor, I found this. This may be the simplest way to add a runtime feature flag to the modules of the plugin.
The bad news is that it only works for Sensors, thus the Rules Repositories and Quality Profiles would still be present in the web interface.

I think the easiest way to put the whole Scapegoat functionality behind a feature flag would be to call context.addExtensions again after the existing extensions are added based on the value of the feature flag and add the Scapegoat extensions there. I think I suggested before that we use sonar.scala.scapegoat.enable and set it to false by default for now. When we release it, we can toggle it to true and keep the flag to allow people to opt-out if they don't want to use it.
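
A hedged sketch of that idea (the extension class names are illustrative, and this assumes the plugin Context exposes the bootstrap configuration, which may vary between SonarQube API versions):

```scala
// Hedged sketch: register the Scapegoat extensions only when the feature flag is on.
// The core extensions are assumed to have been added to the context just before this.
val scapegoatEnabled: Boolean = context.getBootConfiguration
  .getBoolean("sonar.scala.scapegoat.enable")
  .orElse(false) // disabled by default until the module is released

if (scapegoatEnabled) {
  context.addExtensions(
    classOf[ScapegoatRulesRepository],
    classOf[ScapegoatQualityProfile],
    classOf[ScapegoatSensor]
  )
}
```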

Contributor
@BalmungSan BalmungSan commented Jun 17, 2018

Hi @mwz,

I think the main difference in our opinions is what we're considering the source of truth: for you it's SonarQube, and for me it's the Scapegoat report.

To be honest, I have mixed feelings about this. If the user wants to configure everything in Sonar and just import a plain report, the best thing is to do what you're proposing.
But since the scapegoat scalac plugin can be configured to exclude inspections and change their severity (Level), maybe (and just maybe) that's what the team wants to be used. And after all, this is closer to the development team than tuning Sonar rules.

Regarding the feature flag, I think we should discuss it after finishing the sensor.

Owner Author
@mwz mwz commented Jun 21, 2018

You make a valid point and it's definitely worth having a think about how we want to handle this.

I personally think there are more pros than cons of treating the Sonar rules as a source of truth for rule severity, but I'd be interested to hear your thoughts on this and why you think that using a report will work better for you.

In my opinion overriding the severity of Sonar rules with what is in the report might work well for individual projects, but if you have multiple projects, having to make the same changes separately in each project will make it hard to maintain and ensure consistency across your projects - changing the severity in the UI will not have any effect on your analysis unless you make those changes in each project. I think that it's much more convenient to review the rules, enable/disable them and adjust the severity in SonarQube rather than having to do this in the build config. In fact, making changes to Scalastyle rules is only possible via the UI, so if we were to do it in a different way for Scapegoat, that would most likely be confusing to users. It also appears that the old sonar-scala-extra plugin does it that way too, see here.

If you do insist on having it the other way round because it makes it convenient for you and your team, perhaps we could add this functionality as another config flag and make it an opt-in feature e.g. sonar.scala.scapegoat.useReportSeverity with default set to false?

What do you think?

Regarding the feature flag, I think we should discuss it after finishing the sensor.

👍

Contributor
@BalmungSan BalmungSan commented Jun 22, 2018

Well, the main reason I thought about it was that, from a "devops" perspective, you should have all your configuration in "code". Thus the UI is not the best place for defining the parameters of the rules.

But I agree that it is easier and, probably, better to configure those rules in Sonar instead of in the build config, especially because doing otherwise would be inconsistent with the current scapegoat plugins and with the scalastyle module.
Also, the "devops" argument falls apart, since you can import/export the SonarQube quality profiles - and that's probably a better way of automating and versioning them.
So now I agree with you: Sonar should be the source of truth.

Finally, I think the idea of adding a config flag to determine this behavior is interesting, but realistically it probably will not add much value now; maybe later, if more users ask for it, we could implement it.
I really don't need it, I just wanted to open a discussion about it.

Contributor
@BalmungSan BalmungSan commented Jul 1, 2018

Hi @mwz,
We're only missing the documentation, examples & release.
I'll leave it up to you 👍.

If you need help with something, have any problems, or want to discuss something else before releasing, don't hesitate to write me here.

Owner Author
@mwz mwz commented Jul 1, 2018

That's great, thanks @BalmungSan for doing all of the work by yourself!
I'd like to implement the feature flag and the quality profiles before we release the whole thing - I'll be raising some PRs with those changes soon, so stay tuned 😄

Contributor
@BalmungSan BalmungSan commented Jul 5, 2018

Hey @mwz, I came up with (what I think is) a great idea for implementing the sonar-scala recommended Quality Profile.

The first idea I had was to write an XML file with all the profile's rules and their properties, and load it.

<!-- Example of the structure of the file -->
<profile name="sonar-scala recommended way">
  <scapegoat-rules>
    <rule id="" severity="[INFO|MINOR|MAJOR|CRITICAL|BLOCKER]"/>
    (...)
  </scapegoat-rules>
  <scalastyle-rules>
    <rule id="" severity="[INFO|MINOR|MAJOR|CRITICAL|BLOCKER]" isTemplate="[true|false]">
      <!-- only if isTemplate is true -->
      <param key="" value=""/>
      (...)
    </rule>
    (...)
  </scalastyle-rules>
</profile>

The big problem with it is that if we modify the file and introduce an invalid value, it will only fail at runtime instead of at compile time (of course, a good set of tests could catch a good amount of mistakes/bugs).
But what if we make another source generator that creates a plain profile from the XML file? (By "plain profile" I mean a big Scala file that registers each rule one by one, without any loops or ifs.)
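
A rough sketch of what such a generator could look like (assuming scala-xml on the classpath; the repository key in the emitted code is a placeholder):

```scala
import scala.xml.XML

// Hedged sketch: read the profile XML above and emit plain Scala source that
// activates each Scapegoat rule one by one, with no loops in the generated code.
def generateScapegoatActivations(xmlPath: String): String = {
  val profile = XML.loadFile(xmlPath)
  (profile \ "scapegoat-rules" \ "rule").map { rule =>
    val id = rule \@ "id"
    val severity = rule \@ "severity"
    s"""profile.activateRule("sonar-scala-scapegoat", "$id").overrideSeverity("$severity")"""
  }.mkString("\n")
}
```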

What do you think about it? Or what other ideas do you have?

Owner Author
@mwz mwz commented Jul 6, 2018

Hi @BalmungSan, I've been looking into the "Scalastyle + Scapegoat" quality profile and I haven't given this too much thought yet, but I don't think we should use XML for configuring our recommended profile - as you said it's too error prone and adds unnecessary loading/parsing steps - we should aim to have the config managed in the code.

There are two different approaches we could take for determining which rules should be activated in our profile - a whitelist or a blacklist. I would personally prefer blacklisting because I think there would be fewer rules to exclude than to include from the collection of all the rules from Scapegoat and Scalastyle repositories.
I think it's also fine to warn at runtime if we detect any rule key mismatches while activating the rules in our profile since every time we update either Scapegoat or Scalastyle we would be able to catch those warnings during testing, so ideally those wouldn't get exposed to the users.

The second aspect of it is overwriting rule severity - this could be as simple as a mapping from a rule key to the new severity for those rules which we want to have a different severity level.
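
For example, the overrides could be kept as a plain map consulted while activating each rule (a sketch; the rule key is illustrative, and this assumes the standard SonarQube plugin API):

```scala
import org.sonar.api.server.profile.BuiltInQualityProfilesDefinition.NewBuiltInQualityProfile

// Hedged sketch: rule keys mapped to the severity we want instead of the default.
val severityOverrides: Map[String, String] = Map(
  "com.sksamuel.scapegoat.inspections.unsafe.IsInstanceOf" -> "MAJOR" // illustrative
)

def activateRule(profile: NewBuiltInQualityProfile, repoKey: String, ruleKey: String): Unit = {
  val rule = profile.activateRule(repoKey, ruleKey)
  // Only rules present in the map get their severity overridden.
  severityOverrides.get(ruleKey).foreach(rule.overrideSeverity)
}
```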

The last part is probably defining custom rules from the Scalastyle templates. I think we should create rules from all of the existing Scalastyle templates with default parameters and enable them all in the Scalastyle profile. If we want to make any changes to those rules in the sonar-scala profile, we should create new rules and activate them only in that profile.

After having briefly looked at implementing the "Scalastyle + Scapegoat" profile, I think the Scalastyle config parser needs to be refactored to make it easier for us to reference the predefined rules without having to duplicate some of the parsing/processing logic in the new quality profile definitions, e.g. keys of the Scalastyle rules are generated at the time of creating the rules and are not exposed anywhere, so this is the key thing that needs to be improved.

@BalmungSan, you made it very easy to activate all of the Scapegoat rules since we keep the list of inspections in memory and we convert them all into rules, however the old Scalastyle module doesn't make it easy and needs some ❤️ to make it work for what we need. I'm happy to have a look into that next before we can implement the additional quality profiles, but in the meantime, I think we should release Scapegoat as is since in its current form it's already a great replacement for the old sonar-scala-extra!

Contributor
@BalmungSan BalmungSan commented Jul 6, 2018

Hi, @mwz
I agree we should release the scapegoat module as it is right now. We should add documentation to the README and update the examples with Scapegoat.

Additionally, since I'm blocked on #70, I'll take a look at how to refactor the Scalastyle module in a few days (#35).

Owner Author
@mwz mwz commented Jul 6, 2018

I'll release it over the weekend 👍

@BalmungSan, since I already started looking into Scalastyle, it probably doesn't make sense for us both to work on the same thing unless you really want to pick it up? 😄

Contributor
@BalmungSan BalmungSan commented Jul 6, 2018

Oh, sorry, I misread and did not realize you were going to look into it.

I would really like to refactor the whole scalastyle module at once (probably in a couple of consecutive PRs). If you want, we can discuss it in #35 👍

@mwz mwz added this to To do in sonar-scala Jul 8, 2018
@mwz mwz moved this from To do to In progress in sonar-scala Jul 8, 2018
Owner Author
@mwz mwz commented Jul 8, 2018

I extracted the additional quality profiles into separate issues: #79 and #80.

@mwz mwz moved this from In progress to Done in sonar-scala Jul 9, 2018
@mwz mwz closed this Jul 9, 2018