subdir for temp files #959
I'm running turbo-geth on a 1TB drive; with 197 GiB in temp files it got filled to the brim and got stuck. It would be great if temp / disk usage took the available space into account.
Regardless, it would be better if temp files got their own subdir (within the data folder), so it's easier to mount it on a different disk, considering the large size.
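A minimal sketch of the requested free-space check, in Go since that is the project's language. The datadir path, the 50 GiB threshold, and the freeBytes helper are illustrative assumptions for this issue, not anything turbo-geth actually does:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// freeBytes returns the space available to unprivileged users on the
// filesystem containing path (Linux; wraps statfs(2)).
func freeBytes(path string) (uint64, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	// Hypothetical datadir and threshold, for illustration only.
	const minFree = 50 << 30 // stop spilling temp files below 50 GiB free
	free, err := freeBytes("/data/turbo-geth")
	if err != nil {
		panic(err)
	}
	if free < minFree {
		fmt.Printf("low disk space: %d GiB free, refusing to write temp files\n", free>>30)
		return
	}
	fmt.Printf("ok: %d GiB free\n", free>>30)
}
```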
That's a bit weird, I'm running a node on a 1TB drive too and have enough space left (about 200 GB). But I agree with your point about temp files.
My main server has 656GB of data + 197GB of temp files (+ OS / swap) = a full drive. Both managed to get to a full sync and maintain it, though for some reason the production server went disk-full. (The other one is less powerful yet has more disk space.) BTW, any reason the data is one giant file rather than the typical arsenal of smaller files? This makes it incompatible with COW filesystems (BTRFS, ZFS, etc.).
The reason for the gigantic db file is transactions. But I believe we can split it into at least 2: blockchain (blocks, 120GB) and everything else. For sure it will not happen in the near future; we need to be super careful with such a change.
About BTRFS, it's an interesting but hard question. LMDB is a B+ tree; BTRFS is a B tree. A tree on top of a tree must be either redundant or weird. But I don't believe that BTRFS and ZFS don't support 1TB files. Or do you mean their cool features become not cool?
Temporary files are 128MB.
Temp files aren't persisted though. They are created during some stages of sync (their size depends on how much data you are syncing) and then they should be removed. Temp files are used in a couple of stages and during db migrations, not only for indexes, but the idea is the same.
But sure, it definitely makes sense to make a subdir for temp files; I'll take a look at that.
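A minimal sketch of what that could look like. The etl-temp directory name and the newTempFile helper are assumptions for illustration, not turbo-geth's actual layout or API:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// newTempFile creates a scratch file inside a dedicated subdir of the data
// folder instead of next to the database, so the subdir can be symlinked or
// mounted on a different disk and wiped safely while the node is stopped.
func newTempFile(datadir string) (*os.File, error) {
	tmpdir := filepath.Join(datadir, "etl-temp") // assumed name
	if err := os.MkdirAll(tmpdir, 0o755); err != nil {
		return nil, fmt.Errorf("create temp dir: %w", err)
	}
	// os.CreateTemp picks a unique name matching the pattern inside tmpdir.
	return os.CreateTemp(tmpdir, "sync-*.tmp")
}

func main() {
	f, err := newTempFile("/data/turbo-geth") // hypothetical datadir
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println("temp file at:", f.Name())
}
```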
@AskAlexSharov 1TB files on a COW filesystem like BTRFS would surely be brutal for performance, seeing they'd get many small writes? My main reason to want to explore this avenue is snapshotting and send/receive thereof, so that in case of issues I can easily revert back. @mandrigin currently the amount of temp files is still growing. I had successfully completed a full sync before, so it's only doing catch-up since. (Till it got stuck on a full disk, which I've since managed to resume.) My chaindata is 657GiB.
Thanks for your report! The old temp files are not cleaned automatically, so at the moment they need to be removed manually. If you stop turbo-geth and remove the files, it will not have any adverse effect.
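A hedged sketch of automating that manual cleanup, again assuming the illustrative etl-temp subdir and *.tmp naming from the sketch above; it is meant to run only while the node is stopped, as described:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// cleanupTempFiles removes leftover temp files from the (assumed) temp
// subdir. Safe only while the node is not running.
func cleanupTempFiles(datadir string) {
	pattern := filepath.Join(datadir, "etl-temp", "*.tmp")
	matches, err := filepath.Glob(pattern)
	if err != nil {
		log.Printf("glob %s: %v", pattern, err)
		return
	}
	for _, f := range matches {
		if err := os.Remove(f); err != nil {
			log.Printf("remove %s: %v", f, err)
		}
	}
}

func main() {
	cleanupTempFiles("/data/turbo-geth") // hypothetical datadir
}
```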
I'm sure that 1TB of usual files and 1TB of mmap'ed files are not the same terabyte, because only the OS reads/writes them, not the app, and I'm sure the OS has integration with BTRFS for this case. But yes, we definitely need to verify that snapshotting works well.
What I really mean: BTRFS needs not just small files but knowledge of "what changed" for incremental snapshotting, and the OS has information about which pages in an mmap'ed file are new and when they were updated.