How to Report a Brakeman Issue
Submitting an Issue
Before submitting an issue, please run Brakeman in debug mode (brakeman -d). This will provide much more information and full stack traces for any exceptions.
Issues should be submitted to the Brakeman GitHub repo.
See below for how to report different types of problems.
The worst bugs are when an error causes Brakeman to exit prematurely and not generate a report. Please report these!
When reporting a crash, please include the stack trace.
"Hanging" or "Stuck" Scans
If Brakeman appears to hang (output is not updating), please read these instructions.
If Brakeman reports a parse error, please read these instructions.
The best way to view errors is in the HTML report. If errors are encountered during the scan, click "Exceptions raised during the analysis (click to see them)" to view the errors.
When reporting an error, please include all information about the error, including the full stack trace (run Brakeman with the -d option and then click the "Location" column to view the full stack trace).
False Positives/False Negatives
False positives are warnings which are reported as problems, but are actually safe. (False negatives are warnings which should have been reported, but were not.) False positives are expected in nearly any kind of security automation tool, but that doesn't mean we shouldn't try to minimize them.
False positives/negatives typically fall into four categories:
- Ones Brakeman should detect, but doesn't (bugs, ignorance, or just haven't gotten to it)
- Ones Brakeman could detect, but would be very difficult
- Ones for which there is no way to statically detect
- Ones which are actually problems (false false positives?)
Determining which of these categories a given issue falls into can be difficult, so the best thing to do is to report suspected false positives/negatives. For bonus points, submit a pull request with a failing test that demonstrates the issue.
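To make the "failing test" idea concrete, here is a minimal, self-contained sketch in plain Ruby. This is not Brakeman's actual test infrastructure; the warning data and the matching helper are illustrative stand-ins showing how a false-positive test pins down the exact warning it expects to go away:

```ruby
# Illustrative sketch (not Brakeman's real internals): a warning that
# Brakeman currently reports for code the reporter believes is safe.
warnings = [
  { warning_type: "Command Injection",
    message: "Possible command injection" },
]

# A minimal stand-in for an assert_no_warning-style check: select
# warnings matching the given type and message pattern. An empty
# result would mean the code is considered safe.
def matching(warnings, type:, message:)
  warnings.select { |w| w[:warning_type] == type && w[:message] =~ message }
end

hits = matching(warnings, type: "Command Injection",
                message: /^Possible command injection/)

# A real assert_no_warning test would fail here, because the (false
# positive) warning is still reported -- which is exactly what a
# demonstration test in a pull request should show.
puts hits.empty? ? "no warning reported" : "warning still reported"
```

Once the false positive is fixed, the same test passes unchanged, documenting the expected behavior.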
Submitting a Pull Request
Pull requests are welcome!
Please follow the typical GitHub flow:
- Fork Brakeman
- Clone locally: git clone your_new_fork
- Create a new branch: git checkout -b fix_some_broken_stuff
- Add new tests
- Make fixes, following the coding conventions of the project
- Run tests with ruby test/test.rb
- Push your changes: git push origin fix_some_broken_stuff
- Go to your fork, click "Submit pull request"
- Provide a description of the bug and fix
These are some code conventions to follow so your code fits into the rest of Brakeman.
- Must use typical Ruby 2 space indentation
- Must work with Ruby 1.9.3
- Prefer to wrap lines near 80 characters but it's not a hard rule
Tests are very important to ensure fixes actually work, and to make it obvious what your changes are supposed to fix. They also protect against breaking features in the future.
To run Brakeman tests, run ruby test/test.rb from the root of the repository.
To check test coverage, install simplecov before running tests, then open coverage/index.html in a browser. For a correct report, run the tests from the root directory.
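For reference, SimpleCov has to be required and started before any of the code under test is loaded. A sketch of what that looks like at the top of a test entry point (the exact location in Brakeman's test setup may differ):

```ruby
# Configuration fragment: at the very top of the test entry point
# (e.g. test/test.rb), before any Brakeman code is required:
require 'simplecov'
SimpleCov.start

# ...the rest of the test setup loads here. After the run, the HTML
# report is written to coverage/index.html.
```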
Add a Test Case
Brakeman has several Rails applications in the test/apps directory. Choose the one that best matches your situation and modify it to reproduce the issue. It is preferable to modify the application in such a way that the fewest existing tests are broken. In particular, the tests for the expected number of reported warnings will probably change, but no other tests should, unless those tests or the expected behavior were already broken.
In the test/tests directory, each application has its own set of tests. Most of these consist of assert_warning calls, which test for warnings generated by Brakeman.
When adding a test for a false positive, use assert_no_warning so the expected behavior is clear.
assert_warning tests can be tedious to write, especially in bulk. There is a tool in tests/to_test.rb which will convert Brakeman reports to tests. It takes exactly the same options as Brakeman. This makes it easy to generate a smaller set of tests (as opposed to tests for every Brakeman warning, most of which probably already have tests).
For example, ruby to_test.rb apps/rails2 -t Execute will generate some boilerplate and then a set of methods:
#...
def test_command_injection_1
  assert_warning :type => :warning,
    :warning_type => "Command Injection",
    :line => 34,
    :message => /^Possible\ command\ injection/,
    :confidence => 0,
    :file => /home_controller\.rb/
end

def test_command_injection_2
  assert_warning :type => :warning,
    :warning_type => "Command Injection",
    :line => 36,
    :message => /^Possible\ command\ injection/,
    :confidence => 0,
    :file => /home_controller\.rb/
end
#...
The boilerplate is unnecessary unless you are adding a whole new test application.
When adding a single test or set of tests, copy the tests from here, change the names to something descriptive, and you are done!
Note that when adding an assert_no_warning test for false positives, you can still generate the test with the false positive, then change the assertion.
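As a sketch, flipping a generated test looks like the following. The method name is illustrative, and the field values are the ones from the generated example above:

```ruby
# Generated by to_test.rb as an assert_warning test; renamed to
# describe the issue and the assertion flipped to assert_no_warning
# so the test documents the false positive being fixed.
def test_command_injection_false_positive
  assert_no_warning :type => :warning,
    :warning_type => "Command Injection",
    :line => 34,
    :message => /^Possible\ command\ injection/,
    :confidence => 0,
    :file => /home_controller\.rb/
end
```

The test fails while the false positive is still reported, and passes once the fix lands.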