[core] Add integration tests against real projects #360
I'm experimenting with a Gradle build script to try this, since it's pretty easy to pull down and analyze dependencies dynamically, allowing multiple versions of PMD and real projects. My thought would be to run a combination matrix of sorts, with some analysis of results between PMD versions. I'm not far into it; I'll share when I have something interesting. Anyone, feel free to reach out if you'd like to pick it up in the meantime.
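A minimal sketch of that combination-matrix idea (all versions, coordinates, and paths here are illustrative assumptions, not taken from the actual script discussed in this thread): one `JavaExec` task per (PMD version, target project) pair, each with its own configuration so different PMD versions don't collide on the classpath.

```groovy
// Hypothetical sketch: one analysis task per (PMD version, target project) pair.
// Versions, project names, ruleset, and paths are illustrative only.
def pmdVersions = ['5.0.0', '5.6.1']
def targets = ['spring-core', 'solr-core']   // assumes sources unpacked under sources/<name>
def cfgName = { String v -> 'pmd' + v.replace('.', '_') }

configurations {
    // an isolated configuration per PMD version
    pmdVersions.each { v -> create(cfgName(v)) }
}

dependencies {
    pmdVersions.each { v ->
        add(cfgName(v), "net.sourceforge.pmd:pmd-java:${v}")
    }
}

pmdVersions.each { v ->
    targets.each { t ->
        tasks.create("analyze_${cfgName(v)}_${t}", JavaExec) {
            classpath = configurations.getByName(cfgName(v))
            main = 'net.sourceforge.pmd.PMD'   // PMD 5.x CLI entry point
            args '-d', "sources/${t}",
                 '-R', 'rulesets/java/basic.xml',
                 '-f', 'xml',
                 '-r', "${buildDir}/reports/${v}-${t}.xml"
        }
    }
}
```

Keeping one configuration per PMD version means each task resolves exactly one PMD distribution, so the matrix can grow by just appending to the two lists.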
I've got something working that produces a report. There are many things I could keep polishing and improving, but I'd like to get feedback before putting further effort in. To that end, this weekend I'm going to clean it up, collect and organize the loose ends, and submit a PR for discussion purposes. In the meantime, I've attached an example report, produced using PMD versions 5.0.0 to 5.6.1, against some Hibernate, Solr, and Spring Framework dependencies, using all the Java rules grabbed from
No analysis is performed, although if a report file is empty its link is omitted from the table. Feel free to share thoughts! Also, the good news is I was able to reliably reproduce the problem from #364 in 5.6.0 when running with more than 1 thread. The attached file (reports.zip) was compressed with 7-Zip to get it under 3 MB; it's over 30 MB as a standard ZIP. Extract it and open the index.html file.
@ryan-gustafson sorry it took me this long to look at it, but that report is amazing! It would be of great help both to avoid regressions, and to battle-test fixes and improvements beyond our test cases before a release. One interesting thing about those diffs is the number of differences between builds for DFA results... especially considering we haven't touched that code directly in quite some time (check this and this)... it seems that module is more fragile than I ever thought. We definitely need to move this forward so that every PR gets a master-vs-PR comparison. We could upload diffs to chunk.io for free from Travis :) Please contact me if you need help to set this up.
@jsotuyod I've been so busy lately I've not been able to get back to this. This weekend however is looking rather clear, so I'll see about getting a PR up, which should enable progress on other fronts. I've not looked at all into chunk.io or Travis; I assume one of you guys could work on that. Glad you found it interesting. My hope is it has practical promise for greatly expanding coverage and regression detection. And not just between releases, but for comparing your local dev against the latest CI SNAPSHOT build, or on a per-PR basis. The two things I know I'd like to add, but likely not before I send a PR, would be:
As for DFA, it's heavily dependent upon analysis of the symbol table data (here), so changes there could indirectly change DFA results (for better or worse).
That's exactly the kind of thing I was offering my assistance with. Let me know if you need anything.
Definitely, as I said, I'm really looking forward to having this on all PRs by making Travis do PR vs master.
Not sure how you are getting the sources now, but for JS at least there are several big open-source projects to look at. For Apex, Visualforce, PLSQL, Apache Velocity, XML, and XSL things may be harder... But we can ask our Salesforce guys if there is a good OSS project for Apex / VF to use as a benchmark.
Definitely, but as you said, at a later stage. Just rolling this out as-is is most valuable.
I had no idea, good to know. Reports should be better then, assuming the DFA code is right, since we improved the symbol table a lot for some scenarios such as anonymous inner classes.
@ryan-gustafson any chance we can get our hands on this, whatever state it's in? We would love for this to see the light, maybe even as part of GSoC 2018, and what you had shown us would be an amazing starting point.
This could be extended to other languages once we tackle pmd#360
@jsotuyod I totally missed the ask for the code on this. My apologies! GSoC is in flight already; is it too late for the code to be useful? I could dig it up sometime this week.
@ryan-gustafson it's never too late! @djydewang has already started on his own version, but yours may give him some insight or ideas.
See attached ZIP. It is Gradle (Groovy) based, version 3.5 using the Gradle wrapper. Depending on the available source dependencies/configurations and PMD versions, it will dynamically create the appropriate tasks (a lot of them!). The
The code isn't pretty, but it worked. There's no shortage of kludges and workarounds; not all PMD versions worked right. The
@ryan-gustafson Amazing! I had never thought of using dependencies to generate PMD reports. Maybe I can refactor my code to generate PMD reports in this way. But since I'm not familiar with Gradle, I may not be able to use the code directly. There is no doubt that your code has inspired me a lot, e.g. the TODOs and FIXMEs in the code are worth thinking about. Thank you for showing us an amazing starting point again :)
Ideally we would run the current PMD against a few other (open-source) projects like Spring, Solr, OpenJDK, ...
This should help in finding NPEs, ClassCastExceptions, and parser errors earlier.
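As a sketch of what that early-failure check could look like (the log path, task name, and error patterns are assumptions for illustration, not an existing PMD or build feature), a Gradle task could scan a run's log for those exception types and fail the build when any appear:

```groovy
// Hypothetical post-run check: fail the build if a PMD run's log shows
// the internal errors this issue wants to surface. Log path is illustrative.
def errorPatterns = ['NullPointerException', 'ClassCastException', 'ParseException']

task checkPmdLog {
    doLast {
        def log = file("${buildDir}/reports/pmd-run.log")
        def hits = log.readLines().findAll { line ->
            errorPatterns.any { line.contains(it) }
        }
        if (!hits.empty) {
            throw new GradleException(
                "PMD run hit ${hits.size()} internal error(s):\n" + hits.join('\n'))
        }
    }
}
```

Wiring such a task to run after each analysis task would turn "an exception scrolled by in the log" into a hard CI failure, which is the point of running against real projects in the first place.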