Limit number of partitions to be processed #64
This is true, I know. 😢 Most of the computation time (and also part of the memory usage) should be consumed in trying to find the boundaries of partitions whose boot sector is lost. This is the main distinguishing feature between RecuperaBit and other tools. You might want to try forcefully disabling that boundary search: if the boot sector of the partition you care about is OK, this should cut the running time significantly.
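To illustrate the idea (a minimal sketch only; every name below is hypothetical and not RecuperaBit's actual code), the edit amounts to skipping the expensive boundary reconstruction whenever a partition's boot sector was recovered intact:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Partition:
    """Illustrative stand-in for a detected partition (hypothetical)."""
    offset: int
    boot_sector_ok: bool
    bounds: Optional[Tuple[int, int]] = None

def find_boundaries(part: Partition) -> None:
    """Placeholder for the expensive boundary search."""
    part.bounds = (part.offset, part.offset + 2048)  # dummy result

def scan(partitions, skip_search_when_boot_ok=True):
    for part in partitions:
        if skip_search_when_boot_ok and part.boot_sector_ok:
            # Boot sector intact: the geometry is already known, so the
            # costly boundary reconstruction can be skipped entirely.
            continue
        find_boundaries(part)

parts = [Partition(0, boot_sector_ok=True), Partition(10**9, boot_sector_ok=False)]
scan(parts)
print([p.bounds for p in parts])  # only the damaged partition was searched
```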
I tried the recommendation from another issue to disable searching after 50 partitions. It seemed to work, and I was able to recover the most important files I needed.
Given that RecuperaBit also restores "dollar files" with their original names, it is advised not to restore into an NTFS partition for now.
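If writing onto NTFS cannot be avoided, one possible workaround (a sketch of my own, not a RecuperaBit feature; the helper below is hypothetical) is to rename any `$`-prefixed file before copying, since names like `$MFT` or `$Bitmap` can collide with NTFS's own reserved metadata files:

```python
import shutil
from pathlib import Path

def safe_restore(src: Path, dest_dir: Path) -> Path:
    """Copy a recovered file, renaming NTFS metadata-style names.

    Hypothetical helper, not part of RecuperaBit: files whose names
    start with '$' get a harmless prefix so they cannot clash with
    the destination filesystem's reserved metadata names.
    """
    name = src.name
    if name.startswith("$"):
        name = "dollar_" + name[1:]
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / name
    shutil.copy2(src, target)
    return target
```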
For reference, to anyone who needs it: lots of memory is required, and RAM will never be enough, so it will need swap. Use an SSD drive as swap; it will be a lot faster.
Many thanks to @Lazza for this program.
Yes, I see it "can" require lots of resources and time. But for comparison, I just recovered some 4 TB of data from a 6 TB partition that showed up as 54 partitions, and it never needed more than 9 GB of RAM. The processing took maybe 24 h, then the restore took another 24 (working with a pair of WD Gold drives).
Indeed these are two things that I plan to address. I will speak about this on March 3rd, 2021. The online webinar is organized by Basis Technology. In case you'd like to attend, here's a link with more information:
Interesting, I suppose it depends on how many files are in the partition and on MFT fragmentation/damage. My recommendation now would be: if it takes too much time, don't worry, it is normal; if it uses too much memory, be prepared to add more RAM/swap. Still, it is normal and expected.
Hello, I found your program in an answer on Super User, I think. It seems to be very promising; aside from its current limitations, it seems very well done.
I'm trying to recover data from an NTFS partition about 1 TB in size. The hard disk "died" while deleting some files through Linux; it appears that the MFT was corrupted because of failing sectors on the drive. So I made an image of the partition with ddrescue, as you recommended, and I'm using your program to try to recover the data.
Problems I've found: it uses about 40 GB of memory while running. My solution was to increase swap to about 100 GB, on an SSD.
I have read most of the issues here, and the MFT seems to be very fragmented, so there are a lot of partitions available to scan. I tried a Windows program on the disk, and its scan identified most of the files as being on the first partitions found on the disk.
How could I limit the processing to only the first 10 partitions of the disk?
I have no Python-fu, so any help would be greatly appreciated.
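For illustration only (the names and data shapes below are assumptions, not taken from RecuperaBit's main.py), limiting the work to the first partitions boils down to slicing the scan results by starting offset before the expensive rebuild step:

```python
MAX_PARTITIONS = 10  # keep only the first 10 partitions by disk offset

def limit_partitions(partitions: dict) -> dict:
    """Keep the MAX_PARTITIONS entries with the lowest starting offsets.

    Assumes a {start_offset: partition} mapping from the scan step;
    this shape is an assumption, not RecuperaBit's actual API.
    """
    keep = sorted(partitions)[:MAX_PARTITIONS]
    return {offset: partitions[offset] for offset in keep}

# Hypothetical usage, somewhere between the scan and the rebuild:
found = {4096 * i: f"partition {i}" for i in range(54)}  # fake scan output
found = limit_partitions(found)
print(len(found))  # -> 10
```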