
WSCheck


WSCheck is a static analysis tool for whitespace.

Installation

pip install wscheck

Usage

Check multiple files:

wscheck orange.sh pineapple.xml kiwi.js

Exclude rules:

wscheck --exclude WSC002 --exclude WSC003 orange.sh
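Excluding a rule simply drops any finding whose rule ID is on the excluded list. A minimal sketch of the idea (not wscheck's internals; the findings below are taken from the example output in this README):

```python
# Rule IDs passed via --exclude; findings whose ID is in this set are dropped.
EXCLUDED = {'WSC002', 'WSC003'}

# Illustrative findings as (line, rule_id, message) tuples.
findings = [
    (2, 'WSC007', 'File begins with newline'),
    (6, 'WSC002', 'Tailing whitespace'),
    (9, 'WSC003', 'Indentation is not multiple of 2'),
]

kept = [finding for finding in findings if finding[1] not in EXCLUDED]
print(kept)
```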

Get list of available rules:

wscheck --list-rules

For details about the rules, see the Rules documentation.

Additionally write a CheckStyle XML report:

wscheck --checkstyle results.xml pineapple.xml
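CheckStyle XML is a widely supported report format, so CI tools can consume the result directly. Assuming wscheck follows the common `<checkstyle><file><error/>` layout (the sample below is hand-written for illustration; the exact attributes may differ), such a report can be read with the standard library:

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the common CheckStyle report layout.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<checkstyle version="4.3">
  <file name="pineapple.xml">
    <error line="2" column="15" severity="warning"
           message="Tailing whitespace" source="WSC002"/>
  </file>
</checkstyle>
"""

root = ET.fromstring(SAMPLE)
for file_elem in root.iter('file'):
    for error in file_elem.iter('error'):
        # Print one "path:line: rule message" summary per finding.
        print('{}:{}: {} {}'.format(
            file_elem.get('name'), error.get('line'),
            error.get('source'), error.get('message')))
```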

Example

wscheck examples/multiple_problems.py
In examples/multiple_problems.py line 2:
class LabelPrinter:
^-- WSC007: File begins with newline

In examples/multiple_problems.py line 6:
        self.print_to_pdf()
                           ^-- WSC002: Tailing whitespace

In examples/multiple_problems.py line 9:
   def __generate_pdf(self):
   ^-- WSC003: Indentation is not multiple of 2

In examples/multiple_problems.py line 10:
        pdf_generator = _LabelPdfGenerator()
                                            ^-- WSC001: Bad line ending '\r\n'

In examples/multiple_problems.py line 16:
--->--->os.makedirs(self.__cache_dir, exist_ok=True)
^-- WSC004: Indentation with non-space character

In examples/multiple_problems.py line 22:
        return os.path.join(self.__cache_dir, pdf_name)
                                                       ^-- WSC006: Too many newline at end of file (+1)
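To make the checks above concrete, here is a minimal sketch of two of them: trailing whitespace (WSC002) and a file that begins with a newline (WSC007). This is an illustration of what the rules flag, not wscheck's actual implementation:

```python
def check_text(text):
    """Return (line, rule_id, message) tuples for two whitespace rules."""
    problems = []
    # WSC007: the very first character of the file is a newline.
    if text.startswith('\n'):
        problems.append((1, 'WSC007', 'File begins with newline'))
    # WSC002: a line has whitespace before its line break.
    for lineno, line in enumerate(text.split('\n'), start=1):
        if line != line.rstrip():
            problems.append((lineno, 'WSC002', 'Trailing whitespace'))
    return problems

print(check_text('\nclass LabelPrinter:\n    pass  \n'))
```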

Bugs

Bugs or suggestions? Visit the issue tracker.

Benchmark

  • Run a quick benchmark:

    tox -- tests/performance --quick-benchmark
  • Run the benchmarks and generate a histogram comparing the calls with each other:

    tox -- tests/performance --benchmark-histogram
  • Run the benchmarks and save the results for later comparison:

    tox -- tests/performance --benchmark-save=foo
  • Run the benchmarks and compare them with the last saved results, with a fail threshold:

    tox -- tests/performance --benchmark-histogram --benchmark-compare --benchmark-compare-fail=mean:5% --benchmark-sort=name
  • Run the benchmarks and compare them with the last saved results, grouped:

    tox -- tests/performance --benchmark-histogram --benchmark-compare --benchmark-group-by=func
    
    tox -- tests/performance --benchmark-histogram --benchmark-compare --benchmark-group-by=name