Regression: 'Too Many Files Open' in MITM component #265

Closed
alxbl opened this issue Nov 18, 2020 · 4 comments

@alxbl
Collaborator

alxbl commented Nov 18, 2020

It looks like #48 has cropped up again in the MITM component. We should spend some time figuring out how to trace open handles.

A good way to reproduce quickly seems to be a SYN flood against the PyRDP instance (maybe try masscan, or write a simple asyncio flooder in Python; a sketch follows below).
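For reference, a minimal flooder sketch, assuming a local listener on 127.0.0.1:3389 (both values are assumptions). This is not a true SYN flood (that would need raw sockets), just an asyncio connection flood that opens sockets and never speaks RDP:

```python
# Sketch of an asyncio connection flooder; host, port and connection count
# are assumptions. Opens many TCP connections to the MITM listener and holds
# them open without sending any RDP negotiation.
import asyncio

TARGET_HOST = "127.0.0.1"  # assumed PyRDP MITM address
TARGET_PORT = 3389         # assumed listener port
CONNECTIONS = 2000

async def hold_connection():
    try:
        reader, writer = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)
        await asyncio.sleep(60)  # keep the descriptor busy on the MITM side
        writer.close()
        await writer.wait_closed()
    except OSError:
        pass  # refused/reset connections are expected under load

async def main():
    await asyncio.gather(*(hold_connection() for _ in range(CONNECTIONS)))

asyncio.run(main())
```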

We'd need to investigate how to debug file descriptor leaks and determine if this is a leak or just denial of service.
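One low-tech way to watch for a descriptor leak on Linux is to poll /proc/&lt;pid&gt;/fd for the MITM process (roughly what `lsof -p` reports). A minimal sketch; the PID is passed on the command line:

```python
# Periodically count the open file descriptors of a given PID on Linux by
# listing /proc/<pid>/fd. A count that grows steadily across connections
# suggests a leak rather than plain socket exhaustion.
import os
import sys
import time

def count_fds(pid: int) -> int:
    return len(os.listdir(f"/proc/{pid}/fd"))

if __name__ == "__main__":
    pid = int(sys.argv[1])  # PID of the PyRDP MITM process
    while True:
        print(time.strftime("%H:%M:%S"), count_fds(pid), "open descriptors")
        time.sleep(5)
```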

Off the top of my head, possible sources of the leak/exhaustion are:

  • Session Log files
  • Socket Descriptors (inbound, certificate request, outbound)
  • Replay file sink

One successful RDP connection would generate:

  • Session Log (2x)
  • JSON Session Log (2x)
  • Replay File (2x)
  • Mapping.json file (1x)
  • 4 socket descriptors (in/out x2)

These numbers are doubled because of the initial certificate request, which causes a disconnect.

Things to check:

  • Does Twisted clean up connections?
  • Do we properly close log handles upon client disconnection? (see the sketch after this list)
  • Do we have reasonable connection timeouts for dangling SYNs?
  • Do we have a reasonable accept queue for Twisted?
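Regarding the last two points, here is a minimal Twisted sketch of the pattern to verify; `SessionProtocol` and `session_log` are illustrative names, not PyRDP's actual classes:

```python
# Illustrative only: close per-session file handles in connectionLost() and
# give the listener an explicit accept backlog. The names and the port are
# assumptions, not PyRDP internals.
from twisted.internet import protocol, reactor

class SessionProtocol(protocol.Protocol):
    def connectionMade(self):
        # One log handle per session; it must be released when the peer goes away.
        self.session_log = open(f"session_{id(self)}.log", "w")

    def connectionLost(self, reason):
        # If this is skipped (e.g. an exception earlier in teardown), the
        # descriptor leaks and eventually triggers "too many open files".
        self.session_log.close()

factory = protocol.Factory.forProtocol(SessionProtocol)
reactor.listenTCP(3389, factory, backlog=128)  # explicit accept queue size
reactor.run()
```

Note that half-open SYNs normally never reach connectionMade at all, since the kernel only hands Twisted fully established connections, so dangling-SYN timeouts are mostly an OS/listen-queue concern.
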
@obilodeau
Collaborator

I think @xshill identified the issue. See #264.

@alxbl
Collaborator Author

alxbl commented Nov 18, 2020

There's a good chance that the RDPDR stuff is part of the issue, but it looks like it's possible to reproduce this behaviour without any file transfers or mapped drives, so I doubt that's all there is to it.

@obilodeau
Collaborator

I'll reproduce the "too many files open" stack trace with a scanner, then test whether @xshill's fix takes care of it, at least for simple cases. We should plug the leaks, but I don't think we should lose too much time trying to survive 10,000 concurrent connections.

@obilodeau
Collaborator

Tested with rdpassspray and masscan this morning and couldn't crash PyRDP. However, these tools are gentle: rdpassspray spawns an xfreerdp process and waits for it to finish before retrying, and masscan waits 1 second before retrying the same host/port combo.

While googling for nmap and stress testing, I stumbled upon https://www.rapid7.com/db/modules/auxiliary/dos/tcp/synflood/. Using it doesn't negatively affect PyRDP on the current master branch; the SYN connections don't even trigger logging.

For file transfers, @yraiesets ran stress tests transferring whole directories, with and without executables in them, and nothing went wrong.

I think the file mapping changes and the linked issues fixed this, so I'm closing it. Let's open a new ticket if we have a clear reproducer.
