Ignoring false positives #17

Closed
philcox opened this Issue · 22 comments

6 participants

@philcox

Is there any way to annotate or ignore false positives? Maybe the ability to add a comment in the .rhtml and .erb files to ignore certain known false positives.

@presidentbeef

There is no system (yet).

First, where are false positives tracked?

You mention adding comments to the source code of the project. Assuming people are willing to modify their code to hide some warnings, at some point do you end up with a bunch of meaningless and outdated comments just to manage the use of an external tool?

Alternatively, there could be a database of marked false positives. Perhaps it could be stored in a hidden file in the application's root directory. This would allow it to be checked into source control without changing the code of the project.

Then there needs to be a way of getting the false positives into the database (or creating the comments to ignore them in the source code). What would be the interface for this?

Lastly, there needs to be a reliable way of identifying and matching the reported false positives when doing the scans. This takes some work, because you don't want simple things like changing line numbers or whitespace to undo your work hiding the false positive.

Anyhow, that is kind of why there is no system yet for dealing with false positives.

If you think you have a false positive that is more general (for example, an attribute of a model that would never be an XSS problem), then I would be happy to work towards eliminating those entirely from Brakeman's warnings.

@philcox

A couple thoughts (from one of our developers, xegar - check out lint_fu) for consideration whenever you get to this ...

What I've learned from doing this stuff:

Annotating the source code to assist static analysis can be helpful, but agreed that simply adding comments at the site of the FP that say "this is not a bug" is ugly -- and in the long run leads to dangerous false negatives. More to say on annotations below.

Most commercial tools have a simple database to track false positives (or for that matter, false negatives). It may live parallel to the app being scanned, or it may be on a server somewhere. For Brakeman I'd probably use DataMapper in order to make my existing domain-model objects serializable into a SQLite DB which lives as a hidden file in the app root. Later on I would be free to allow for config files that point DataMapper at a real mysql/postgres/whatever server.

Identifying an issue in a way that lets you classify it as the same issue later is a classically tricky problem. I have had great success with storing the unique issue identifier as a tuple of (enclosing_class+method, checker_that_found_problem, erasure_of_source_code_fragment), where the "erasure" is a function that operates on the AST and replaces its easily varying nodes with nils or a dummy value. For instance, all integer literals get translated to 0, and so forth. Serialize the AST to an s-expression, and you have a compact "fingerprint" of the issue.
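
As a rough illustration of the erasure idea (a minimal sketch, not lint_fu's or Brakeman's actual code; the s-expressions are simplified nested arrays and all names are made up), literal values are collapsed to placeholders and the result is hashed together with the enclosing context and the checker name:

require 'digest'

# Collapse easily varying nodes (literals) to placeholders so that
# cosmetic changes to the code do not alter the fingerprint.
def erase(sexp)
  case sexp
  when Array   then sexp.map { |node| erase(node) }
  when Integer then 0     # all integer literals become 0
  when String  then ""    # all string literals become ""
  else sexp               # node types, method names, etc. are kept
  end
end

# Fingerprint = (enclosing class+method, checker, erased s-expression).
def fingerprint(context, checker, sexp)
  Digest::SHA1.hexdigest([context, checker, erase(sexp).inspect].join("|"))
end

# Simplified, hypothetical s-expression for: render :text => params[:name]
issue = [:call, nil, :render,
         [:hash, [:lit, :text], [:call, [:call, nil, :params], :[], [:str, "name"]]]]
puts fingerprint("UsersController#show", "CheckCrossSiteScripting", issue)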

What is the role of source code annotations? They're not a good way to suppress false positives, but they make an excellent mechanism for configuring scanners. For instance, I might decorate a method in a controller with:

@brakeman xss disable

or

@brakeman safe

This doesn't suppress false positives; rather, it keeps the false positives from being generated in the first place, because it informs the checker about "external" factors such as a controller that's part of a restricted internal admin interface, or one that's part of a back-end REST service and doesn't care about XSS, or whatever.
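
A minimal sketch of what honoring such annotations could look like (hypothetical, not an existing Brakeman feature; the comment format and method names are assumptions): a checker scans the source for a magic comment and records that it applies to the line immediately below it.

ANNOTATION = /#\s*@brakeman\s+(.+)/

# Map each annotation to the line it applies to (the line below the comment).
def annotations_for(source)
  notes = {}
  source.each_line.with_index(1) do |line, lineno|
    if (m = line.match(ANNOTATION))
      notes[lineno + 1] = m[1].strip
    end
  end
  notes
end

src = <<~RUBY
  # @brakeman xss disable
  def internal_admin_debug
    render :text => params[:debug]
  end
RUBY

p annotations_for(src)  # => {2=>"xss disable"}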

@philcox philcox closed this
@presidentbeef presidentbeef reopened this
@presidentbeef

Thank you for the great suggestions.

@philcox

One other point, brought up for consideration by the person taking the Brakeman output and fixing issues:

There are many cases where we have this in a .rhtml (or .html.erb) file:

<%=  instance_of_some_object.method %>

And this in a .rb file:

class SomeObject
    def method
        ERB::Util.h( value )
    end
end

So, calling the "method" method in the .rhtml file is completely safe in this case. However, the tool marks it with a high-confidence warning; it doesn't seem to inspect the SomeObject#method code. This is a common issue with Ruby tools, since everything is dynamic and untyped, so I'm sure it's a very hard problem to solve. Many IDEs have yet to solve it.

Any thoughts?

@presidentbeef

If it were obvious that instance_of_some_object was an instance of SomeObject and the method were really that simple, it would not be so hard, but that is rarely the case in Ruby :)

However, if instance_of_some_object.method does not involve user input or database values, then there should be no warning about it. Please let me know if you are seeing warnings for arbitrary method calls.

If user input or database values are used as arguments to a method whose result is output, there will be a warning for that, but it should be at weak confidence (unless it is a method known to not escape its output). If these warnings are not useful to you, --report-direct will only warn on instances of user input being output directly.

@daveworth

My quick, half-baked idea for ignoring "known safe" false-positive scenarios is to have a .yml file somewhere in the app that simply denotes Object/Module, Method, and (optionally) the type of safety. For example, I know that User#update does not actually have an unprotected mass-assignment bug, so the file could describe just that. It's really coarse, because there could be one non-bug and one real bug near each other in the same method (say, on different branches) and this approach would silence both, but it could be a first step. Annotating in the code goes against the elegant principles of Ruby: the code should be obvious and not require comments. Comments themselves are often a code smell.

User:
  update:
    no-SQL-injection
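
A tiny sketch of how such a coarse file might be consumed (the file name, warning fields, and matching logic here are assumptions for illustration, not part of Brakeman): a warning is hidden only when its class, method, and safety label all appear in the file.

require 'yaml'

# Hide a warning only when its class, method, and safety label are all
# listed in the coarse ignore file sketched above (hypothetical format).
def ignored_by_coarse_file?(warning, path = ".brakeman_ignore.yml")
  rules = YAML.load_file(path) || {}
  labels = Array(rules.dig(warning[:class], warning[:method]))  # scalar or list
  labels.include?(warning[:type])
end

warning = { class: "User", method: "update", type: "no-SQL-injection" }
# With the YAML above saved as .brakeman_ignore.yml:
# ignored_by_coarse_file?(warning)  # => true
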
@presidentbeef

Hi Dave,

This is an interesting approach, but I agree that it needs to be a little more fine-grained. I would rather report false positives than potentially hide real problems.

I do like the idea of having a separate file for describing what warnings to ignore, though. I'm warming to the idea of having something in the app's directory for Brakeman to read.

@cktricky

Any progression on this topic?

@daveworth

Ken and Justin, I've been a little short on time lately but am hoping to get some time on this unless someone else has already wrangled it in the meantime. If not, definitely count me in and I will carve out some time to at least get an MVP done and ready in a pull request.

@cktricky

@daveworth - Awesome. Well, I was asking because it's important enough to us that we'd contribute to writing that code, but there's no need to do it if somebody else is biting that bullet :-). Besides, it probably wouldn't be until after April 5 that it would get a lot of TLC from me.

So I guess the question is: when do you plan on working on it, and is Justin already working on it (no need to duplicate efforts, I'd imagine)?

Cheers

@daveworth

@cktricky Well, let's wait until we hear back from Justin and then make some decisions. By "us" do you mean the whole Carnale0wnage crew? If so, that's just kinda exciting... :-)

@cktricky

hehe, no, I meant more in terms of my professional life/crew :-)

Agreed, let's see where Justin is at and if he needs us to pitch in, I'm up to helping out wherever it is needed.

Thanks!

@presidentbeef

I'm not currently doing any work on a system for supporting this. However, what I will be working on is better support for "matching" warnings - that is, if a warning's code has not changed but its line numbers have, it should not be reported as a new warning. I think this essentially means storing the "context" of the code (module/class/method) instead of just the line number.

My thoughts:

  • This should be a post-scan filtering step. It should only hide warnings after they are found.
  • I like the idea of a .brakeman_ignore file (or something similar) in the app's root directory (a rough sketch of how these two ideas could fit together follows this list).
  • How will users "mark" something as a false positive? What's the interface for this?
  • What are the chances that code marked as a false positive will somehow occlude vulnerable code in the future?
  • It needs to be obvious that Brakeman is ignoring warnings based on users' directions (i.e., there should be a message saying "x ignored warnings" or "Ignoring warnings based on .brakeman_ignore file").
  • There should be an option to report all warnings, even those that would be ignored.
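
A rough sketch of how the first two points might fit together (the file format, field names, and message wording are assumptions, not Brakeman's implementation): warnings whose fingerprints are listed in .brakeman_ignore get hidden after the scan, with a note about how many were suppressed.

require 'yaml'

# Hide warnings whose fingerprints are listed in .brakeman_ignore,
# after the scan has already produced the full set of warnings.
def filter_ignored(warnings, app_root)
  ignore_file = File.join(app_root, ".brakeman_ignore")
  return warnings unless File.exist?(ignore_file)

  ignored = (YAML.load_file(ignore_file) || {})["fingerprints"] || []
  kept, hidden = warnings.partition { |w| !ignored.include?(w[:fingerprint]) }
  warn "#{hidden.size} warnings ignored based on .brakeman_ignore" unless hidden.empty?
  kept
end

# Example: with a .brakeman_ignore containing "fingerprints: [abc123]",
# only the second warning below would be reported.
warnings = [
  { fingerprint: "abc123", message: "Possible SQL injection" },
  { fingerprint: "def456", message: "Unescaped model attribute" }
]
# filter_ignored(warnings, ".")
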
@cktricky

Q: What are the chances that code marked as a false positive will somehow occlude vulnerable code in the future?
A: Could there be a unique hash attached to the finding itself, generated by Brakeman?

I'm all for a .brakeman_ignore file. Maybe comma-separated? The values would be those unique hashes attached to the findings?

@presidentbeef

Could there be a unique hash attached to the finding itself, generated by Brakeman?

That's what I'm hoping to do.

@daveworth

Justin,

These are all good points, and I've mentally addressed many of them when thinking about the problem at hand.

  • Post-filtering - absolutely correct.
  • .brakeman_ignore - I think the .brakeman_ignore file should be a YAML file so we can put rich data inside to address occlusion concerns
  • Interface - If we're doing YAML, the interface would probably have to be ${EDITOR}
  • Vulnerability occlusion/aliasing - This is hard. My thought on addressing this was to hash some amount of context, so that changing a given function (potentially exposing more vulnerabilities, or patching a previously marked "ok" function) causes the annotation to be ignored. For example, suppose someone has a chunk of code marked as a SQL injection, but they believe they have sanitized the data sufficiently to prevent the vuln and have annotated it as such. If they then wrap that code in something else that changes the context, it should be alerted on again. Does this make sense? (See the sketch after this list.)
    • My big worry here is that the data needed for annotations is complicated. My feeling is that if Brakeman generates a file on every run that contains all the necessary information to annotate a vulnerability then a user can copy/paste the correct block from the generated file into their .brakeman_ignore file. When they change context as described above they will likely have to do the same once again. This is probably better than getting compromised later.
  • The latter two points (stating that X warnings were ignored, an option to report all warnings, and maybe even a "report only ignored warnings" option) should go in with the above changes. The big meat of the problem is preventing vulnerability aliasing through annotation.
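
One way the "ignore entries should expire when context changes" idea could work, as a sketch under the assumption that each entry also stores a digest of its enclosing method (the field names are made up): any edit to the method changes the digest, so the previously ignored warning resurfaces.

require 'digest'

# Digest the enclosing method's source (whitespace-normalized) so that
# any substantive edit to the method invalidates the ignore entry.
def context_digest(method_source)
  Digest::SHA1.hexdigest(method_source.gsub(/\s+/, " ").strip)
end

def still_ignored?(entry, current_method_source)
  entry[:context_digest] == context_digest(current_method_source)
end

original = "def find(q)\n  User.where(\"name = '\#{q}'\")\nend"
entry = { fingerprint: "abc123", context_digest: context_digest(original) }

edited = "def find(q)\n  User.where(name: q)\nend"
p still_ignored?(entry, edited)  # => false, so the warning is reported again
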
@daveworth

Just a status update. Work is awesome and demands that we do FOSS contributions. As such, I have carved out some time. At this point I have hooks in place on my branch (locally, not on GitHub yet) that produce annotations. I am biasing towards action, and while the first pass will not be amazingly simple, I think in time it will produce good results. I hope to have an alpha branch soon.

@cktricky

ping :-)

Justin, any forward progress on this front in terms of incorporating Dave's work into the product? I'm going to ask for time so that our appsec team can help out if need be. Let me know if we can help.

Cheers,

Ken

@oreoshake

We've had some discussion lately on the pull request. Chime in? #73

@cktricky

Taking a look now. Also sending some info to link up on this.

@cmer

+1

@presidentbeef

Added with #368.
