One of the things they talk about, and had great success with, is a tool they made for Java called Error Prone, which extends the Java compiler (javac) with additional checks. As they put it:
> Error Prone …
>
> - hooks into your standard build, so all developers run it without thinking
> - tells you about mistakes immediately after they're made
> - produces suggested fixes, allowing you to build tooling on it
I think the compiler allows configuring additional checks, but Error Prone might also be a separate tool that calls the compiler and hooks into its internals; I'm unclear on the details here.
Anyway, a lot of people don't want to run additional tooling (some of my colleagues, for instance, which the article from Google confirms), and only follow what the compiler and maybe their editor tell them.
What I am thinking is that we could have a CLI that mostly mimics the behavior of elm make. I don't yet know whether this CLI should be provided by elm-review itself, under a new flag, a new subcommand or a different binary, or whether it should be a new tool.
This is roughly what I expect it to do:

1. Run elm make with the elm make arguments from the user.
2. If the compilation fails, display the errors just like elm make does, and exit the process.
3. If the compilation succeeds, keep the generated HTML/JS file in memory instead of writing it to where the user would expect.
4. Run elm-review with the elm-review arguments from the user.
5. If the review reports errors, display them just like elm-review does, and exit the process.
6. Otherwise, write the file that elm make generated.
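The steps above could be sketched roughly as follows (in Python for illustration; the function names, argument handling and temporary-output strategy are my assumptions, not an existing elm-review feature):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def pipeline_outcome(make_exit: int, review_exit: int) -> str:
    """Pure decision logic for the proposed flow: which result the user sees."""
    if make_exit != 0:
        return "compiler-errors"  # show elm make's errors and stop
    if review_exit != 0:
        return "review-errors"    # show elm-review's errors and stop
    return "write-output"         # both passed: expose the compiled file

def make_then_review(elm_make_args, review_args, final_output):
    # Compile to a temporary path so the user-visible file stays untouched
    # unless both the compiler and the review pass.
    tmp_output = Path(tempfile.mkdtemp()) / "output.js"
    make = subprocess.run(["elm", "make", *elm_make_args, f"--output={tmp_output}"])
    review_exit = 0
    if make.returncode == 0:
        review_exit = subprocess.run(["npx", "elm-review", *review_args]).returncode
    if pipeline_outcome(make.returncode, review_exit) == "write-output":
        shutil.move(str(tmp_output), final_output)
        return 0
    return 1
```

The interesting part is the ordering: the compiled file only reaches its final destination after both steps succeed.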
This would work well in a watch mode like the ones elm-review --watch and elm-live already provide (as far as I understand elm-live). Instead of exiting the process, we would only interrupt this pipeline and start it again when a file changes. I also think the current behavior would work when combined with --fix and --fix-all.
This makes inherent sense to me, since one of elm-review's assumptions is that the analyzed project compiles, so that we (rule authors) can avoid re-doing a lot of checks that the compiler already does. In other words, you should not trust elm-review reports when compilation fails, because you may get false positives.
For editors, this makes a lot of sense too. We discussed this at some point with @klazuka with regard to integrating elm-review into IntelliJ, and we figured it would make sense to only display the elm-review errors when there are no compilation errors.
I don't know if this would make it easier, but editors could then integrate elm-review by re-using their current integration with elm make: instead of running elm make, they would run elm-review in the proposed way (perhaps only if the user checked a box saying they want to use elm-review like this). The JSON output of elm-review already resembles the output of elm make anyway, and we can adapt it if necessary. @klazuka @razzeee I'd love to know whether you think this makes sense.
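To illustrate why the resemblance between the two JSON outputs matters, here is a hypothetical helper an editor could use to render both kinds of reports through a single code path. The dictionary shapes below are simplified stand-ins for illustration only, not the exact formats either tool emits:

```python
def extract_messages(report: dict) -> list:
    """Pull (path, title) pairs out of a simplified elm-make-style report.

    Both tools are assumed here to emit a top-level "type" and an "errors"
    list grouping problems per file; the real formats carry more fields.
    """
    messages = []
    for error in report.get("errors", []):
        for problem in error.get("problems", []):
            messages.append((error["path"], problem["title"]))
    return messages

# Invented sample reports, shaped alike on purpose:
compile_report = {
    "type": "compile-errors",
    "errors": [{"path": "src/Main.elm",
                "problems": [{"title": "TYPE MISMATCH"}]}],
}
review_report = {
    "type": "review-errors",
    "errors": [{"path": "src/Main.elm",
                "problems": [{"title": "NoUnused.Variables"}]}],
}
```

An editor that already parses the first shape would get the second one nearly for free, which is the appeal of keeping the formats aligned.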
An issue with this is that even though some rules in a configuration make a lot of sense to enforce at compilation time, like detecting invalid regexes or noticing problematic code structures, not all rules do, such as the ones that detect unused code. Some workarounds I can think of:
- Adding a flag that does not prevent generating the compiled file when there are review errors. The downside is that it may look like there are a lot of errors that you will, at least temporarily, ignore, which might be cognitively hard on the user.
- Using a different configuration while developing than while cleaning up (the latter being the regular one). This is not great because it makes the ergonomics for editors quite complicated, and because the people who don't want that extra tooling step won't do this anyway.
- Categorizing each rule, and allowing elm-review to run only some categories (or to exclude some). I see this in quite a few linters, but I imagine they did it for other reasons (I would love research/insight into this). I would prefer to avoid this kind of option though, especially as it would mean figuring out useful categorizations, which I doubt we'd get right from the start.
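As a sketch of that last workaround, assuming hypothetical category tags that elm-review does not have today (the rule names and categories below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    category: str  # hypothetical tag, e.g. "correctness" vs "cleanup"

# Invented examples: rules worth enforcing at compile time vs. cleanup-only.
RULES = [
    Rule("NoInvalidRegex", "correctness"),
    Rule("NoProblematicStructure", "correctness"),
    Rule("NoUnused.Variables", "cleanup"),
]

def rules_for(categories):
    """Keep only the rules whose category the user opted into."""
    return [rule.name for rule in RULES if rule.category in categories]
```

A compile-time run would then request only the "correctness" category, leaving unused-code detection for a separate cleanup pass. The hard part remains choosing categorizations that hold up over time.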
Alternative solution: forking the compiler
When I came up with this idea, I originally thought of forking the compiler and adding the elm-review capabilities to it. This would have the benefit of giving us more information, like type inference, and of not having to re-parse the source code. But that would be a huge endeavor, especially since I don't know any Haskell and am unfamiliar with the compiler codebase.
This would be pretty cool, but it would also have some downsides, such as:

- Tests would change a lot: since we would need to spawn a command to analyze anything, we could not write tests using elm-test. The result would likely be something like one test file per test, as Scalafix does, or, since elm-review works with projects, one folder per test. The ergonomics would be pretty terrible IMO compared to what we currently have.
- We couldn't keep --fix-all really fast: today it finds and applies all fixes in a single function call, whereas we would have to interrupt the process and spawn the compiler to get the information we'd want.
- Web demos would not be possible, or at least not as easy as they are currently.
- elm-review rules would become their own ecosystem, working only with this tool. You would not be able to integrate elm-review checks into other tools that can't spawn commands (I know of an attempt to integrate it into elm-ui-explorer, for instance).
- Keeping up to date with new versions of the compiler would also become a huge chore. Seeing that Lamdera is still using a version of 0.19.0, I don't want to imagine the work we'd need to do.
The type inference information would be cool though 😅
Please tell me what you think of all this!