
Process dies on large NTDS.DIT file #20

Open
b1gbroth3r opened this issue Nov 8, 2022 · 3 comments


b1gbroth3r commented Nov 8, 2022

Commands tried:
./gosecretsdump_linux_v0.3.1 -enabled -ntds ntds.dit -system SYSTEM

Result:

[1]    16956 killed     ./gosecretsdump_linux_v0.3.1 -enabled -ntds ntds.dit -system SYSTEM

The NTDS.DIT file in this case is 5GB+ in size. I wish I could provide more info but that's all I can share at the moment :/

Owner

C-Sto commented Nov 9, 2022

It looks like the OS is killing the process, though I have encountered this issue when the .dit is 20 GB+ as well. I'll look into having the .dit read from disk rather than loaded into memory. In the meantime you can use something like this, which should be quicker than the impacket dumper and won't have the memory limit:
https://www.dsinternals.com/en/dumping-ntds-dit-files-using-powershell/
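
For illustration, a minimal Go sketch of the whole-file-in-memory pattern described above and why a 5 GB+ .dit can get the process OOM-killed. This is a simplified sketch, not the project's actual code, and the file path is hypothetical:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Pull the entire .dit into RAM. Resident memory is at least the file
	// size, so a 5 GB+ database on a smaller box is a likely OOM kill.
	buf, err := os.ReadFile("ntds.dit") // hypothetical path
	if err != nil {
		panic(err)
	}
	r := bytes.NewReader(buf) // io.ReadSeeker backed entirely by memory

	// ... database parsing would Seek/Read through r here ...
	fmt.Printf("loaded %d bytes into memory\n", r.Size())
}
```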


lwears commented Aug 14, 2023

Hey. I am learning Go, but not programming in general. I've cloned the repo to try and understand this issue and maybe have a look into fixing it. Is this issue still a thing?
I am getting the impression that maybe the project is no longer maintained.
I would be happy to learn some Go and have a look into this if so.

Owner

C-Sto commented Aug 28, 2023

hey @lwears - the issue is still a thing. This project is not actively maintained; I come back to it whenever there is something I need in an engagement. I have spent some time in the past trying to find a decent solution for this problem, and ran into many roadblocks along the way.

The first (easy) issue is that in order to extract the data quickly, we first load the whole .dit into memory and scan from there - hopping around the database is much quicker in memory than doing disk seeks (even on a MacBook with fancy NVMe drives). This can be relatively easily addressed by adjusting the ReadSeeker to work on-disk, and in fact this is how it originally worked.
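
A minimal sketch of that swap, assuming the parsing code only needs an io.ReadSeeker; the parseESE name and file path below are placeholders, not this repo's actual API:

```go
package main

import (
	"io"
	"os"
)

// parseESE stands in for the database-parsing code; it only needs Seek + Read.
func parseESE(r io.ReadSeeker) error {
	// ... hop around the database via r.Seek / r.Read ...
	_, err := r.Seek(0, io.SeekStart)
	return err
}

func main() {
	// *os.File already satisfies io.ReadSeeker, so the in-memory bytes.Reader
	// can be swapped for the open file handle; every Seek/Read then goes to
	// disk (or the OS page cache) instead of a multi-gigabyte in-RAM buffer.
	f, err := os.Open("ntds.dit") // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := parseESE(f); err != nil {
		panic(err)
	}
}
```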

We still need to build an in-memory version of all of the data in the database (each record links to the next, and it's not always clear which columns a record has), so even if the data is read from disk, some bad assumptions I made early on in the project (like building maps to represent tables 🤦) made it quicker, but they balloon memory very quickly, even when I'm very particular about deleting unused map entries.
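
One reason deleting entries doesn't help much (an assumption about the Go runtime rather than a measurement of this repo): delete() makes the values collectable, but a Go map never shrinks its bucket array, so a map that once held every record keeps its peak footprint. A rough demonstration:

```go
package main

import (
	"fmt"
	"runtime"
)

func heapMiB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc / (1 << 20)
}

func main() {
	fmt.Println("baseline:", heapMiB(), "MiB")

	records := make(map[int][]byte) // stand-in for a map-of-records table
	for i := 0; i < 1_000_000; i++ {
		records[i] = make([]byte, 64)
	}
	fmt.Println("after insert:", heapMiB(), "MiB")

	// Deleting every key lets the []byte values be collected, but the map's
	// bucket array stays at its peak size.
	for i := 0; i < 1_000_000; i++ {
		delete(records, i)
	}
	runtime.GC()
	fmt.Println("after delete:", heapMiB(), "MiB") // lower, but well above baseline
	runtime.KeepAlive(records)
}
```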

To avoid soaking up memory so quickly, I suspect a full reimplementation will be required, which is why it's on the back burner. Since building the tool, the ESE database spec has been released publicly - most of the code here was more or less translated from the Python impacket version (which was built by reverse engineering how ESE works) - so realistically, the most useful and practical contribution would be to build an ESE library that is performant and sensible with regard to memory.
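
If someone wanted to pick that up, the rough shape might be a streaming reader that yields one record at a time rather than materialising tables in maps. Purely a hypothetical interface sketch - none of these names exist in this repo or any existing library:

```go
// Package esestream is a hypothetical sketch of what a memory-sensible ESE
// reader could look like; nothing here exists in gosecretsdump today.
package esestream

import "io"

// Record exposes only the columns actually present on one row.
type Record interface {
	Column(name string) (value []byte, ok bool)
}

// Table iterates rows in on-disk order, one at a time.
type Table interface {
	Next() (Record, error) // returns io.EOF after the last row
}

// DB is an opened ESE database (e.g. an ntds.dit).
type DB interface {
	OpenTable(name string) (Table, error)
	io.Closer
}

// Open would parse only the header and catalog up front, so memory stays
// proportional to the catalog rather than to the table data.
func Open(path string) (DB, error) {
	panic("not implemented") // sketch only
}
```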
