
New checker for JavaScript language. #10

Merged
merged 2 commits into klen:develop from lukaszpiotr:gjslint_checker on Oct 14, 2013

Conversation

lukaszpiotr
Contributor

Scope of this pull request:

  • added the Closure-Linter JavaScript checker (code: https://code.google.com/p/closure-linter/source/browse/) to the Pylama checkers,
  • added a def gjslint callback in pylama/utils.py to handle gjslint linter requests,
  • extended the file extensions in main.py with the .js suffix (the default .py is always present, given the main purpose of the tool),
  • introduced unittests in tests.py covering the added functionality (dummy.js was added to the root of Pylama to test JavaScript code),
  • fixed failing tests on the Win platform by adding a platform check and ignoring platform-dependent errors,
  • extended the README with gjslint entries,
  • extended the requirements section in the README with python-gflags and curses for Win platform users (after issues when running on a clean Python 2.7 virtualenv).
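The gjslint callback mentioned above has to translate Closure-Linter's textual output into Pylama's error records. As an illustrative sketch only (the function name and dict keys below are assumptions, not Pylama's actual utils.py interface), parsing a gjslint-style message line might look like this:

```python
import re

# Hypothetical sketch, NOT pylama's actual code: gjslint prints messages in
# the form "Line <n>, E:<code>: <text>"; a checker callback could translate
# each such line into an error dict. All names here are illustrative.
GJSLINT_LINE = re.compile(r'^Line (?P<lnum>\d+), (?P<code>E:\d{4}): (?P<text>.+)$')

def parse_gjslint_line(line):
    """Return an error dict for one gjslint message line, or None."""
    match = GJSLINT_LINE.match(line.strip())
    if match is None:
        return None
    return {
        'lnum': int(match.group('lnum')),   # line number in the checked file
        'col': 0,                           # gjslint reports lines, not columns
        'text': match.group('text'),
        'type': match.group('code'),        # e.g. E:0010, matches --ignore codes
    }
```

Keeping the E:NNNN code verbatim lets options such as --ignore=E:0010 (see Usage) filter on it.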

Attention:

  • no Python 3 support for gjslint due to issues like the usage of StringIO in closure_linter/testutil.py, etc.

Requirements:

  • python-gflags
    • Linux: sudo apt-get install python-gflags and verify installation using: dpkg -s python-gflags
    • Windows: pip install python-gflags and verify installation using: pip freeze

Tests:

  • Installing Pylama with the integrated Closure-Linter checker via python setup.py install ended successfully.
  • The new solution was successfully tested (unittests + end-user tests) on Win (v.7) and Unix/Linux (Ubuntu).

Notes:

  • No modifications besides the removal of printing utilities were made to the Closure-Linter code.
    Printing utilities were removed due to the common printing interface provided by Pylama.

Usage:

  • pylama.exe --linters=gjslint --ignore=E:0010 --report=report.txt E:\path\to\dir_or_file
  • If Python and JavaScript linters are used at the same time, but the given directory
    contains only JavaScript code or only Python code, then only the linter matching the
    code that was found will be used. Example:
    If E:\bootstrap\js contains only *.js files, then pylama.exe --linters=pep8,gjslint E:\bootstrap\js will use just gjslint. It works exactly the same the other way round.

@coveralls

Coverage Status

Coverage remained the same when pulling 8f3c2a5 on lukaszpiotr:gjslint_checker into abca0a0 on klen:develop.

@lukaszpiotr
Contributor Author

I am pretty sure that the tests are failing on Travis due to the lack of python-gflags; however, I have never worked with Travis, so please feel free to instruct me on how to fix the build.

@pwilczynskiclearcode

If you think that a system dependency is missing, you can extend .travis.yml with a proper before_install section.

@coveralls

Coverage Status

Coverage remained the same when pulling be8f230 on lukaszpiotr:gjslint_checker into abca0a0 on klen:develop.

@lukaszpiotr
Contributor Author

Ok, so I did a clean run (python -m unittest discover) on a Python 2.7 virtualenv, and this resulted in exactly the same failing tests as on Travis. After adding python-gflags to the virtualenv via pip install python-gflags everything is fine. So... please tell me whether my modification to .travis.yml is correct, or did I do something wrong?

@pwilczynskiclearcode

https://travis-ci.org/klen/pylama/jobs/11958164#L26 python-gflags is installed correctly:

$ sudo apt-get install -qq python-gflags
Selecting previously unselected package python-gflags.
(Reading database ... 62342 files and directories currently installed.)
Unpacking python-gflags (from .../python-gflags_1.5.1-1build1_all.deb) ...
Processing triggers for man-db ...
Setting up python-gflags (1.5.1-1build1) ...

however I don't really know what other differences there are between your env and Travis...

@lukaszpiotr
Contributor Author

pip freeze results in python-gflags==2.0 on a clean (just Python 2.7 and python-gflags) virtualenv. I tested this on Win 7 and Ubuntu 12 LTS (on Ubuntu I installed the dependency as described in the pull request: sudo apt-get install python-gflags). I did not experience any trouble. However, now I can see that on Travis there is python-gflags (1.5.1-1build1), so if 1.5 stands for the version, then maybe this is the issue? Can I force Travis to use gflags 2.0?

@lukaszpiotr
Contributor Author

On Travis: Press [ENTER] to continue or ctrl-c to cancel adding it. Can I interact somehow with Travis? By the way, could you @pwilczynskiclearcode try to check out this branch, install python-gflags, and tell me whether you encounter the same troubles as Travis?

@pwilczynskiclearcode

Can I interact somehow with Travis?

no, but you can say 'yes'...

$ add-apt-repository --help
Options:
  -y, --yes             Assume yes to all queries

But I've looked into the problem now... and I don't get why you need a system package instead of a Python egg.
I think it would be nice if we could call:

python setup.py develop[test]

during the Travis build, so it would install all the deps needed during tests...

@lukaszpiotr
Contributor Author

So I can reproduce and fix (even with gflags 1.5) what is happening on Travis:

lukas@lukas-VirtualBox:~/pylama$ python2.7 -m unittest discover
....FFF...........dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
WARNING:pylama:dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
WARNING:pylama:dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
WARNING:pylama:dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
dummy.py:1:0: C0110 All modules should have docstrings. [pep257]
WARNING:pylama:dummy.py:1:0: C0110 All modules should have docstrings. [pep257]
dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
WARNING:pylama:dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
WARNING:pylama:dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
dummy.py:15:0: C0301 Line too long (91/80) [pylint]
WARNING:pylama:dummy.py:15:0: C0301 Line too long (91/80) [pylint]
dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
WARNING:pylama:dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
dummy.py:100:4: C0110 Class docstring should have 1 blank line around them. [pep257]
WARNING:pylama:dummy.py:100:4: C0110 Class docstring should have 1 blank line around them. [pep257]
dummy.py:100:4: C0110 First line should end with a period. [pep257]
WARNING:pylama:dummy.py:100:4: C0110 First line should end with a period. [pep257]
dummy.py:100:4: C0110 Blank line missing after one-line summary. [pep257]
WARNING:pylama:dummy.py:100:4: C0110 Blank line missing after one-line summary. [pep257]
dummy.py:100:4: C0110 Multiline docstring should end with 1 blank line. [pep257]
WARNING:pylama:dummy.py:100:4: C0110 Multiline docstring should end with 1 blank line. [pep257]
unknown.py:0:0: [Errno 2] No such file or directory: 'unknown.py'
WARNING:pylama:unknown.py:0:0: [Errno 2] No such file or directory: 'unknown.py'
.
======================================================================
FAIL: test_gjslint (tests.LamaJsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/lukas/pylama/tests.py", line 76, in test_gjslint
    self.assertEqual(len(errors), 1231)
AssertionError: 0 != 1231

======================================================================
FAIL: test_ignore_gjslint (tests.LamaJsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/lukas/pylama/tests.py", line 81, in test_ignore_gjslint
    self.assertEqual(len(errors), 584)
AssertionError: 0 != 584

======================================================================
FAIL: test_select_gjslint (tests.LamaJsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/lukas/pylama/tests.py", line 86, in test_select_gjslint
    self.assertEqual(len(errors), 1231)
AssertionError: 0 != 1231

----------------------------------------------------------------------
Ran 19 tests in 4.442s

FAILED (failures=3)
lukas@lukas-VirtualBox:~/pylama$ sudo apt-get install python-gflags
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  python-gflags
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.8 kB of archives.
After this operation, 283 kB of additional disk space will be used.
Get:1 http://de.archive.ubuntu.com/ubuntu/ oneiric/main python-gflags all 1.5.1-1 [39.8 kB]
Fetched 39.8 kB in 0s (138 kB/s)   
Selecting previously deselected package python-gflags.
(Reading database ... 176250 files and directories currently installed.)
Unpacking python-gflags (from .../python-gflags_1.5.1-1_all.deb) ...
Processing triggers for man-db ...
Setting up python-gflags (1.5.1-1) ...
lukas@lukas-VirtualBox:~/pylama$ python2.7 -m unittest discover
..................dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
dummy.py:1:0: C0110 All modules should have docstrings. [pep257]
dummy.py:9:5: E301 expected 1 blank line, found 0 [pep8]
dummy.py:15:80: E501 line too long (91 > 79 characters) [pep8]
dummy.py:15:0: C0301 Line too long (91/80) [pylint]
dummy.py:68:13: E126 continuation line over-indented for hanging indent [pep8]
dummy.py:100:4: C0110 Class docstring should have 1 blank line around them. [pep257]
dummy.py:100:4: C0110 First line should end with a period. [pep257]
dummy.py:100:4: C0110 Blank line missing after one-line summary. [pep257]
dummy.py:100:4: C0110 Multiline docstring should end with 1 blank line. [pep257]
unknown.py:0:0: [Errno 2] No such file or directory: 'unknown.py'
.
----------------------------------------------------------------------
Ran 19 tests in 12.660s

@lukaszpiotr
Contributor Author

I do not need a system package; it can be a Python egg too. I had just read that this way you are "more sure" things will work across Linux distributions. It worked locally for me, and it works on a different machine with a different Ubuntu right now, as you can see in the log above, so I decided to take this "recommended way".

Your suggestion is to extend requirements:

pyflakes
pep8
pylint
python-gflags==2.0

right?
Do I also have to tell Travis in .travis.yml to use this via:

install:
 - pip install -r pylama/requirements.txt --use-mirrors

or not?

Sorry for so many questions, but this is my "first time" with Travis and I did not expect such issues.

@pwilczynskiclearcode

I thought that the project uses setuptools' setup.install_requires so you could later:

$ python setup.py develop  # for regular dependencies pyflakes pep8 pylint
$ # and
$ python setup.py develop[test]  # for test dependencies

but we have to explicitly call 'pip install -r' instead. It's a bit of an inconsistency on the library author's part.

Using an egg has the advantage over a system requirement that it is system independent, so we won't need to tell Travis to install different things on different systems.

lukaszpiotr
Sorry for so many questions, but this is my "first time" with Travis and I did not expect such issues.

I have never used Travis either, no need to be sorry :)

@lukaszpiotr
Contributor Author

Ok, so my idea was to keep it lightweight: the core purpose of PYLAMA is to check Python code, so let's say 80% of users will use it for Python only (so why should they have to install packages required by gjslint, right?), and 10% for Python + JavaScript. That is why I wanted to leave it up to the user and not pollute his/her env unnecessarily. Now I can see I forgot about the tests :(.

Did I get it right that you suggest integrating python-gflags in setup.py:

install_requires = ['python-gflags']
if version_info < (2, 7):
    install_requires += ['argparse', 'ordereddict']

?
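One middle ground, sketched here with hypothetical contents (the exact dependency list belongs to the project's setup.py), would be to keep python-gflags out of the default install and expose it as a setuptools extra, so only users who want the JavaScript checker pull it in:

```python
# Hypothetical sketch of keeping gjslint's dependency optional via a
# setuptools "extra" instead of a hard requirement. Package names are taken
# from the discussion above; the exact layout is an assumption.
from sys import version_info

install_requires = ['pyflakes', 'pep8', 'pylint']
if version_info < (2, 7):
    install_requires += ['argparse', 'ordereddict']

# Installed only on demand, e.g.: pip install pylama[gjslint]
extras_require = {
    'gjslint': ['python-gflags==2.0'],  # pinned, per the Travis findings above
}
```

This would keep the default install light for Python-only users while still declaring the test dependency in one place.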

@coveralls

Coverage Status

Coverage remained the same when pulling 83cd487 on lukaszpiotr:gjslint_checker into abca0a0 on klen:develop.

@coveralls

Coverage Status

Coverage remained the same when pulling 8f50f63 on lukaszpiotr:gjslint_checker into abca0a0 on klen:develop.

@lukaszpiotr
Contributor Author

update: The Travis CI build passed

@pwilczynskiclearcode

Ok, please rebase it so that it contains only one commit with a meaningful message.

@coveralls

Coverage Status

Coverage remained the same when pulling 5df03d8 on lukaszpiotr:gjslint_checker into abca0a0 on klen:develop.

@lukaszpiotr
Contributor Author

All modifications (commits) done on this pull request (driven by the Travis issues) are squashed into the add gjslint checker commit (see above). This is the final state of this pull request (one commit with all changes on top); from now on, please review the code.

@pwilczynskiclearcode

All modifications (commits) done on this pull request (driven by the Travis issues) are squashed into the add gjslint checker commit (see above). This is the final state of this pull request (one commit with all changes on top); from now on, please review the code.

Are you sure that all commits are squashed into one commit, so that this pull request contains only one commit?

@lukaszpiotr
Contributor Author

I understood "so it contains only one commit with meaningful message" as meaning you want to have
one commit with all the modifications done in this pull request, so that you can see all the work
in a diff to master within this one commit (not distributed over all the commits). This is
reflected in the last commit above, done on top of the pull request.

  1. I am not willing to remove all the other commits, because this is like removing
    the real effort which went into this pull request. What is more, in case of failure you
    cannot go back to certain places (commits). The "simple linear history" is a lie.
    Having a clear picture of what was done, how it was done, and how much effort went
    into this pull request is more important to me. Later on you can use git log to
    see the real effort and the distribution of knowledge within a software project.
  2. Why git rebase is evil: http://geekblog.oneandoneis2.org/index.php/2013/04/30/please-stay-away-from-rebase

Please review the source code.

@ghost

ghost commented Oct 1, 2013

Hi, I'll be your reviewer for this pull request. Sorry for my English; just ask if something is not clear.
General practice for working with large pull requests:

  1. While the pull request is being discussed, you commit new changes, as the reviewer should see your changes between his change requests. IMPORTANT: don't rebase while you're under review, as this makes review comments disappear.
  2. When the pull request is marked as good to merge, you squash all commits into one with an overall description of the changes inside.

One commit is better for the merger/upstream because it's easier to maintain/revert later. "Showing effort" isn't necessary, as the end result of your work is much more important for everybody. The "commit often" practice is good when you're working on changes locally, but the overall pushed result should be contained in one commit, with a commit message in the format used by the project.

This is more of an information/guideline on how we should work.

@lukaszpiotr
Contributor Author

Hi Jarek :) I see your point but I just disagree. Please let me try to explain why:

  1. I think showing effort is extremely important, because thanks to this you can see
    how many developers spend how many "commits" when working on a project. And
    then you have a clear view of where the bottlenecks within your project are.
    And you can discuss with your team whether the effort is a reflection of
    low quality code or other issues.

  2. Imagine you are working on a feature, and besides implementing the feature,
    you also have to introduce tests and update the manual or introduce renaming.
    I think it is better to do 1 commit for the renaming, 1 commit for the feature,
    1 commit for the manual update, etc., and provide this as a pull request.
    Then you have a dedicated commit for each dedicated area and you can e.g. assign
    reviewers to certain commits.

  3. Imagine you forked a repo, introduced some changes, and sent a pull
    request, and your friend thought that you did a good job and forked your
    branch before it was merged to master. And then a guy from GitHub
    asks you to squash your commits (rebase) for the final pull request, and
    your friend is not able to pull your branch anymore because the history is a mess.
    It is one big lie to me.

  4. There are more things...

But if you need me to follow some rules, I will try to do my best.

class LamaJsTest(unittest.TestCase):

    def test_gjslint(self):
        if version_info < (3, 0):

I think we can use unittest.skipIf. It's more explicit than an if inside the test body.

Contributor Author


Agreed, if we are thinking about the same thing: @unittest.skipIf(sys.version_info < (3, 0), "not supported in this version"), then you are right. But when you take a look at the Pylama author's code, you can find an already existing convention for skipping tests on unsupported Python versions. I just followed the existing convention. But sure, I agree with what you've pointed out.


Contributor Author


Could you please be more verbose? I just found a link to a package. Or please discuss my proposition. However, I still think that participating in shared code should not change the author's conventions. Maybe the switch to the "new skipping style" could be proposed to the author for all cases as a separate PR; what do you think?


I mean: unittest2 is a backport of 2.7's unittest which you can use to implement this the right way.
I think it's really important to distinguish a test's state between skipped and executed, but without conditionals inside the test body.
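For illustration, a decorator-based version of the skip discussed here might look as follows (a sketch, assuming the guard should exclude Python 3, where gjslint is unsupported):

```python
import sys
import unittest

class LamaJsTest(unittest.TestCase):

    # Skipping at decorator level makes the runner report the test as
    # "skipped" rather than silently passing an empty body.
    @unittest.skipIf(sys.version_info >= (3, 0),
                     'gjslint is not supported on Python 3')
    def test_gjslint(self):
        self.assertTrue(True)  # placeholder for the real gjslint assertions
```

The skipped test then shows up as "s" in the runner's output instead of a misleading pass.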

@ghost

ghost commented Oct 1, 2013

I've described the way in which I prefer to work. I don't want to cause too much discussion, which isn't the point of this PR :)
IMHO it shouldn't damage your code in any way, so I recommend you follow it.

@lukaszpiotr
Contributor Author

Sorry, Jarek, for following up on this topic in this PR, but I think it is really important. Maybe you can take a look at this:

after Rober Conckling:

The problem with rebase is that it corrupts the true history of the commits. This doesn't show up on a smaller repo, but if you have a busy repo, with lots of contributors, untangling a mess becomes much harder if you no longer have the true parentage of a given commit.

Git rebase destroys the context of the commit, leaving basically a diff apply instead of the much more contextually rich merge commit. Yes, your repo looks messier, but it more accurately reflects the lifecycle of the code, and what the developer intended at each commit. If you really want a straight-line for your repos, why aren't you using something like SVN? Is the distributed nature of git really the big selling factor?

I've run teams that have used git how you are describing, always making people use rebase (in this case it was because we were syncing to SVN). Reconstructing what a developer did 4 months ago is much simpler with a merge vs a rebase. You can go to the merge just before the developer's first commit and see what the repo actually looked like when they started work. You can never do that with a rebase. The context in which the developer was working has been lost. Also, with multi-commit branches, you can see what the repo looked like after each commit. With a rebase, those intermittent commits are almost useless, as the repo doesn't look like it did when they made that commit.

To sum up, don't subvert git just because you want to see a straight line in your commit graph. It's not worth it, and it is ultimately a lie.

I am just curious what you think, since we are both using git, right? So why not learn something new from each other?

@michalr

michalr commented Oct 1, 2013

Guys, this is getting out of hand.

I understand Lukasz's point, but think that it is mainly influenced by the job that he's currently doing.

Right - rich log is critical to reason about the large code repository, which is the main goal of the tool that Lukasz is developing right now.

However, I think that something needs to be highlighted here.

What we tend to do here is to create small and atomic issues, so that they fit into one commit.

If you need to split it into more meaningful and atomic commits - split the issue accordingly.

What is more, I prefer code to speak for itself, not relying on commit messages. Code itself should be clear and if an important assumption needs to be made that is far from obvious - it's a perfect use case for "in code comments", not commit messages.

A commit of: "extend setup.py and clear travis.yml" is of limited use to anybody - you missed something, which was a part of the issue that you try to resolve - that's all this commit is for.

We extensively use issue trackers / wiki pages / sphinx doc pages to keep track of the history of the project not the git log.

I don't want to fight here - simply different approaches to the task.

Let's agree to disagree here and not follow up on the topic anymore.

Changes made to the code to solve the task assigned are clear - you can leave them as they currently are - no rebase needed.

@lukaszpiotr
Contributor Author

"...but think that it is mainly influenced by the job that he's currently doing" - I think you got me here ;p I am too much influenced by my own way. Sorry for trying to force my thing on top. From now on I will stick to the rules without discussion.

@@ -0,0 +1,2280 @@
/* ===================================================


Why is this file in the main directory? It looks like the bootstrap.js library. Could you explain the purpose of this file?

Contributor Author


You have the dummy.py file used by the Python checker tests in the main directory, so, following the convention proposed by the author, a dummy.js file used by the JavaScript tests is in the main directory too. Why bootstrap? In general a random pick, but it is open source, popular, and I found it to be a good test case for gjslint.



I think you can replace it with a much smaller js file, as we don't want to test the internals of gjslint. Just upload a file with a hello world and a comment about the file's purpose. Also, I would move these kinds of files into tests/data, as I think they shouldn't pollute the global codespace.

@ghost

ghost commented Oct 1, 2013

Ok, I think we can go into the next round of review :)

  1. You should implement and push changes to the current PR -> then I can verify it's done according to my notes.
  2. I think you should look at pylama as a whole again and then create a page in the project's wiki about the next refactorings.
  3. In my personal view, the first thing to refactor should be the checkers directory. Including these linters in the repo (they should be defined as dependencies, not "statically" uploaded to the repo) looks really bad from a maintenance point of view.

@klen klen merged commit 5df03d8 into klen:develop Oct 14, 2013
@klen
Owner

klen commented Oct 14, 2013

Sorry for the long time; I've been very busy in the last weeks. And thank you for the work. So, I saw the request and made some decisions.

I will not include gjslint in the pylama module by default. Not many people need to check js files with pylama.

BUT, I created something like a plugin system. The basic version of pylama will consist only of the 'pep8,pyflakes,mccabe,pep257' checkers. For example, for pylint support you should install the 'pylama_pylint' package.

So, I suggest you move gjslint into a "pylama_gjslint" package and become its maintainer. I could do it myself, but it's your plugin. It's very easy; see the realization in pylama_pylint: https://github.com/klen/pylama_pylint and look at Pylama's README. If you have any problems, questions, etc., I would be glad to help.
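To make the suggestion concrete, a standalone pylama_gjslint plugin might be shaped roughly like this. Every name below is an assumption for illustration; the real interface to follow is whatever pylama's plugin system and the pylama_pylint example define:

```python
# Purely illustrative sketch of a "pylama_gjslint" plugin (class name, method
# signature, and error-dict keys are assumptions; follow pylama_pylint for
# the real interface).

class Linter(object):
    """Hypothetical pylama-style wrapper around the gjslint checker."""

    @staticmethod
    def allowed_file_types():
        # The plugin would only be offered .js files.
        return ['.js']

    def run(self, path, **meta):
        # A real plugin would invoke closure_linter on `path` here and
        # translate its messages into dicts such as
        # {'lnum': 12, 'col': 0, 'text': 'Missing semicolon', 'type': 'E:0010'}.
        return []
```

Packaged separately, this keeps the core pylama install free of the python-gflags dependency, as intended.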


5 participants