
Time to process a long set/list of strings increases quadratically with the number of strings #484

@MartinFalatic

Description


Describe the bug
Bandit's run time grows superlinearly with the number of string literals in a single set or list: doubling the number of strings roughly quadruples the user time.

To Reproduce
Steps to reproduce the behavior:

  1. Run bandit on any of the files attached in Examples.zip

  2. Notice how the run time grows quadratically: user time approximately quadruples as the number of strings in the set doubles.

(This also occurs if the large sequence of short strings is a list rather than a set.)
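In case Examples.zip is unavailable, a file of the same shape can be generated with a small script. This is a sketch, not the exact contents of the attachment; the module name, variable name, and string format below are assumptions.

```python
import os
import tempfile

def write_string_set_module(path, n):
    """Write a Python module whose body is one set literal of n short strings."""
    with open(path, "w") as f:
        f.write("DATA = {\n")
        for i in range(n):
            f.write('    "s%06d",\n' % i)
        f.write("}\n")

# If growth is quadratic, doubling n should roughly quadruple bandit's
# user time on the generated file.
out_dir = tempfile.mkdtemp()
for n in (1000, 2000, 4000):
    write_string_set_module(os.path.join(out_dir, "strings_%d.py" % n), n)
```

Running `bandit` on each generated file and comparing user times should reproduce the scaling described above.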

Python 2 versus Python 3: though the latter runs a little faster overall, the superlinear growth is still evident in both.

Expected behavior
Run time scales linearly with the number of strings, even for very long set or list literals.

Additionally, it'd be useful to see exactly which file is being processed, to help locate such bottlenecks, rather than the `242 [0.. 50.. 100.. 150..` progress output. Debug output is far too noisy for this purpose when scanning hundreds of files.
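Until bandit grows a per-file progress option, a small wrapper can time each file individually to locate the bottleneck. This is a generic sketch; the exact bandit invocation (command name, flags) is an assumption and may need adjusting.

```python
import subprocess
import time

def time_per_file(cmd_prefix, files):
    """Run cmd_prefix + [file] once per file, returning (file, seconds) pairs."""
    results = []
    for f in files:
        start = time.perf_counter()
        # Output is discarded; only the wall-clock duration matters here.
        subprocess.run(cmd_prefix + [f],
                       stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
        results.append((f, time.perf_counter() - start))
    return results
```

Example use (assumed invocation): `time_per_file(["bandit"], paths)` then sort the pairs by the second element to find the slowest files.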

Bandit version

For Python 2:

bandit 1.5.1
  python version = 2.7.15 (default, Jan 12 2019, 21:07:57) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]

For Python 3 (slightly faster):

bandit 1.5.1
  python version = 3.6.8 (default, Jan 25 2019, 14:34:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]

Additional context
n/a
