Closer telnet-brute integration with the brute library #1269

nnposter opened this Issue Jul 10, 2018 · 3 comments



nnposter commented Jul 10, 2018

The current version of the telnet-brute script predates some brute.lua library features, such as setReduce. This issue is to track which changes to the script make sense at this point.

@dmiller-nmap: I replied to your PM last night. Please let me know your thoughts.



dmiller-nmap commented Jul 11, 2018

I'll start by listing what I think are the existing features of telnet-brute that the setReduce functionality of brute.lua ought to be able to replace. Please correct me if I miss anything or incorrectly interpret some of it:

  1. "Autosizing," reducing the number of concurrent connections by making some of them "dummy" connections that just wait forever.
  2. Connection backoff, retrying a socket connect that fails after waiting 1, then 2, then 4 seconds.
  3. Connection pooling, sending multiple login attempts across the same connection.
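Feature 2 above amounts to something like the following (a hedged Python sketch; the 1, 2, 4 second delays come from the list above, and the sleep function is injectable only so the logic can be exercised without real waiting):

```python
import time

def connect_with_backoff(connect, delays=(1, 2, 4), sleep=time.sleep):
    """Retry a failing connect(), waiting 1, then 2, then 4 seconds
    between attempts before giving up."""
    for delay in delays:
        sock = connect()
        if sock is not None:
            return sock
        sleep(delay)     # back off before the next attempt
    return connect()     # one final attempt after the last wait
```

A connect that fails twice would then succeed on the third attempt, after backing off for 1 and then 2 seconds.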

The brute.lua library starts with 5 (or brute.start) threads and increases slowly up to 20 (or brute.threads). If any of those threads returns an error with setReduce(true), then that thread is killed. I propose using setReduce to replace the "autosizing" feature of telnet-brute, since both should keep the number of concurrent connections below the maximum supported by the target service.
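The engine behavior just described can be modeled like this (a minimal Python sketch; the setReduce/isReduce names mirror the brute.lua Error API, but the engine loop and the 3-connection limit are invented for illustration):

```python
class Error:
    """Stand-in for brute.Error with its setReduce flag."""
    def __init__(self, msg):
        self.msg = msg
        self._reduce = False
    def setReduce(self, flag):
        self._reduce = flag
    def isReduce(self):
        return self._reduce

def run_engine(num_threads, attempt):
    """Give each worker one attempt; retire any worker whose error
    is flagged with setReduce(true), shrinking the pool."""
    alive = num_threads
    for worker in range(num_threads):
        err = attempt(worker)
        if err is not None and err.isReduce():
            alive -= 1      # this thread is killed, not retried
    return alive

def attempt(worker):
    """Pretend the target only supports 3 concurrent connections."""
    if worker >= 3:
        err = Error("connection limit reached")
        err.setReduce(True)
        return err
    return None
```

Starting with 5 workers, `run_engine(5, attempt)` ends with 3 alive, settling at the target's concurrency ceiling, which is exactly the effect "autosizing" aims for.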

For connection backoff, it seems like we could increase the connect timeout and allow the connection to die. This would reduce the number of threads again and slow the rate of retry, because threads that connect too quickly would be left waiting longer. I admit I'm not quite sure of the exact behavior this is trying to work around, so there may be a better approach; it may simply be a call to sleep, as you used before. If you describe the behavior of the telnet daemons this addresses, I'd have a better idea.

Connection pooling is a really neat idea, and I completely missed it in my initial attempt. I'll have to look at it again and see if it's something we could build into brute.lua, since I think other services would also benefit. As I understand it, this is to avoid closing and restarting the TCP connection if the service allows multiple authentication attempts in a single connection.



dmiller-nmap commented Jul 13, 2018

Well, I spotted an interesting detail in brute.lua that could make connection reuse a much easier thing to implement. Ignoring a bunch of other code, the basic structure of brute.lua is like this:

  1. Start some number of worker threads using the Engine.login function.
  2. Each thread repeatedly calls Engine.doAuthenticate and checks the results and whether it should stop.
  3. Each doAuthenticate gets the next set of credentials, instantiates a new Driver (the class defined by the script), calls the Driver's connect and login functions, then disconnect and destroys the Driver.
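Stripped of error handling, that per-attempt lifecycle is roughly the following (Python used as a sketch language; this Driver is a toy stand-in, not the real NSE class):

```python
class Driver:
    """Toy driver; counts instantiations and connections."""
    instances = 0
    connects = 0

    def __init__(self):
        Driver.instances += 1

    def connect(self):
        Driver.connects += 1     # a fresh TCP connection every time

    def login(self, username, password):
        return (username, password) == ("root", "toor")

    def disconnect(self):
        pass

def do_authenticate(cred):
    driver = Driver()            # step 3: new Driver per credential
    driver.connect()
    ok = driver.login(*cred)
    driver.disconnect()          # the Driver is then discarded
    return ok

creds = [("root", "admin"), ("root", "toor"), ("admin", "admin")]
results = [do_authenticate(c) for c in creds]
# one Driver instance and one connection per guess
```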

To make it even more awkward, if the connect or login functions return a setRetry error, the Driver is still disconnected and destroyed and a new one instantiated inside the loop within doAuthenticate. I think it should be easy enough to change the workflow as follows:

  1. Start some number of worker threads using the Engine.login function.
  2. Each thread instantiates a new Driver, then repeatedly calls Engine.doAuthenticate and checks the results and whether it should stop, destroying the Driver when it finally does.
  3. Each doAuthenticate gets the next set of credentials, calls the Driver's connect, login, and disconnect.

We would have to verify that all the existing scripts can tolerate a single Driver instance having its connect, login, and disconnect functions called repeatedly, but this does not seem too difficult. In fact, I would guess that most script authors would be as surprised as I was to find that the Driver was not more persistent.

With this new setup, the Driver would be able to keep track of its own internal state and use the connect and disconnect functions to monitor whether the existing connection could be reused or not.
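Under the proposed workflow, the same guesses would share one Driver, which can then decide for itself whether the underlying connection is still usable (again a hedged Python sketch, not the real API):

```python
class Driver:
    instances = 0

    def __init__(self):
        Driver.instances += 1
        self.conn = None

    def connect(self):
        if self.conn is None:    # reuse the connection while we still have it
            self.conn = object()

    def login(self, username, password):
        return (username, password) == ("root", "toor")

    def disconnect(self):
        pass                     # keep self.conn for the next attempt

def worker(creds):
    driver = Driver()            # one Driver for the thread's whole lifetime
    results = []
    for cred in creds:
        driver.connect()
        results.append(driver.login(*cred))
        driver.disconnect()
    return results

results = worker([("root", "admin"), ("root", "toor"), ("admin", "admin")])
# Driver.instances == 1: a single Driver (and connection) for all guesses
```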



nnposter commented Jul 14, 2018

Thread Auto-sizing

I am guessing that this could turn out to be a very simple replacement with setReduce. I will take a look at it.

Connection Back-off

The reason behind the current code is that in some cases I have observed the daemon become unresponsive, i.e. behave way outside its typical latency, and, interestingly enough, the reaction was an RST, not a timeout. Think of it as the daemon getting periodically restarted. I assume that in such cases a retry with increasing back-off time is the way to go, while a timeout specifically would not work.
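That failure mode, immediate RSTs while the daemon restarts, can be modeled to show why a longer connect timeout would not help but waiting between attempts does (the timings and the 3-second outage are made up for illustration):

```python
def make_daemon(down_until):
    """Returns connect(now) -> bool; the daemon refuses (RST)
    until `down_until` seconds have elapsed."""
    def connect(now):
        return now >= down_until
    return connect

def retry(connect, delays):
    """Attempt at t=0, then wait each delay before the next attempt.
    Returns the time of the first success, or None."""
    now = 0
    if connect(now):
        return now
    for delay in delays:
        now += delay
        if connect(now):
            return now
    return None

connect = make_daemon(down_until=3)
retry(connect, delays=(1, 2, 4))   # with back-off, succeeds at t=3
retry(connect, delays=(0, 0, 0))   # a longer timeout changes nothing:
                                   # each RST still returns instantly at t=0
```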

Connection Reuse

I have not implemented true connection pooling in the traditional sense, where threads check connections in and out, specifically to preserve socket-thread affinity. In telnet-brute the threads instead individually maintain their own single connections, which they reuse across Driver instantiations. The Target object is where this state is persisted, and the Driver connect/disconnect methods are mapped to my Target attach/detach, so that a new Driver instance merely picks up its own connection, instead of creating a new one, whenever possible.
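In outline, that attach/detach mapping might look like this (a Python sketch; the Target and Driver names come from the description above, but the internals are guessed for illustration):

```python
class Target:
    def __init__(self):
        self.conns = {}           # thread id -> open connection
        self.opened = 0

    def attach(self, tid):
        conn = self.conns.get(tid)
        if conn is None:          # no reusable connection: open one
            self.opened += 1
            conn = self.conns[tid] = object()
        return conn

    def detach(self, tid, keep=True):
        if not keep:              # server closed it; drop the socket
            self.conns.pop(tid, None)

class Driver:
    def __init__(self, target, tid):
        self.target, self.tid = target, tid
    def connect(self):            # connect is mapped to attach
        self.conn = self.target.attach(self.tid)
    def disconnect(self, keep=True):   # disconnect is mapped to detach
        self.target.detach(self.tid, keep)

target = Target()
for attempt in range(4):          # four guesses on one thread
    d = Driver(target, tid=1)     # new Driver each time, as brute.lua does
    d.connect()
    d.disconnect(keep=(attempt != 1))  # the server drops us once
# target.opened == 2: one initial socket plus one reopen after the drop
```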

Back when I was coding telnet-brute, I scouted brute.lua for any kind of useful storage for these connections, specifically hoping to use the driver instance for this, but, as you realized, the current driver concept is very simplistic, which almost raises the question of why so many different methods are needed for what is essentially a single-pass execution.

So, in my opinion, refactoring brute.lua so that a Driver instance roughly coincides with a worker thread would make a lot of sense. One area to pay attention to is that the various driver implementations in *-brute might not necessarily leave the login interaction in a clean state. Single-request or single-packet logins, such as HTTP, would likely work fine, but multi-step logins, such as FTP, would need to be tested.

Upfront Analysis

One other feature that I have implemented is that the presence of TLS is tested only once. All threads then simply either do or do not request TLS, instead of invoking comm.tryssl over and over.
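The optimization reduces to caching the probe result, roughly like this (a Python sketch; the probe function stands in for the real comm.tryssl call, and its always-true answer is made up):

```python
probes = 0

def try_ssl_probe():
    """Stand-in for the real TLS probe; pretend the service speaks TLS."""
    global probes
    probes += 1
    return True

_tls = None
def service_uses_tls():
    """Probe the service only on the first call; reuse the answer after."""
    global _tls
    if _tls is None:
        _tls = try_ssl_probe()
    return _tls

results = [service_uses_tls() for _ in range(10)]
# probes == 1 even though ten connections asked
```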

Connection reuse does reduce the impact but not as greatly as one would hope. In my experience each telnet connection tends to be useful only for a small number of login attempts, before it gets closed on the server side.
