Support mounting remote storage #3
Comments
Agree, this is a good idea. I think the fastest solution would be to do something like what Emacs TRAMP does: you provide the path to the db starting with ssh (and I assume you know you can't use this on a running chain, only a snapshot, which is why you mentioned the replication service, i.e. you can temporarily stop it while reading the db), so something like the sketch below.
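A minimal sketch of that TRAMP-style convention (the helper, host, user, and path here are purely illustrative, not the project's actual interface):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSSHPath is an illustrative helper (not part of the project): a db path
// written TRAMP-style, e.g. "ssh://user@host:/path/to/chaindata", is split into
// the remote host and the path on that host; any other path is treated as local.
func splitSSHPath(p string) (host, path string, remote bool) {
	const prefix = "ssh://"
	if !strings.HasPrefix(p, prefix) {
		return "", p, false
	}
	rest := strings.TrimPrefix(p, prefix)
	i := strings.Index(rest, ":")
	if i < 0 {
		return rest, "", true
	}
	return rest[:i], rest[i+1:], true
}

func main() {
	// Hypothetical host and path, just to show the shape of the convention.
	host, path, remote := splitSSHPath("ssh://user@thelio-archive:/data/geth/chaindata")
	fmt.Println(host, path, remote) // user@thelio-archive /data/geth/chaindata true
}
```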
Also, are you mainly using pebble-backed or leveldb-backed machines? I assume probably leveldb?
Actually, I just looked at this again: https://github.com/libfuse/sshfs - did you try it? Maybe it already works as is.
@sambacha Okay, so I tried this, and while it does work, it's painfully slow. You'll be waiting over 15 minutes just for the initial directory load (and this was for a snap-synced dir, so I imagine an archive one will be much worse), since there's a directory scan that happens and sshfs copies each file into memory. Chaindata itself has many hundreds of files, so it's just not feasible.
What are you using for remote FS, FUSE?
Yes, sshfs, and this was on the local network too, so I imagine remote across a different network will be even worse. Here sshfs is running on my MacBook and thelio-archive is my local-network Linux machine.
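For reference, since the exact command didn't survive here, a mount of that shape would look roughly like this small wrapper around the sshfs CLI; the remote path, mount point, and user are placeholders, and only the host name thelio-archive comes from the comment above:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Placeholders: the real remote chaindata path, local mount point, and
	// SSH user were not given in the thread; only the host name appears there.
	remote := "user@thelio-archive:/data/geth/chaindata"
	mountPoint := "/tmp/remote-chaindata"

	// Mount the remote directory locally over SSH. "-o reconnect" is a
	// standard sshfs option that re-establishes dropped connections.
	if err := exec.Command("sshfs", remote, mountPoint, "-o", "reconnect").Run(); err != nil {
		log.Fatalf("sshfs mount failed: %v", err)
	}
	log.Printf("remote chaindata mounted at %s", mountPoint)
}
```

Whatever reads the db can then be pointed at the mount point as if it were a local directory, which is also why the per-file copying sshfs does during the initial scan hurts so much for a directory with hundreds of files.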
OK OK, how about something similar to this?
Interesting, I didn't know about that; I'll look it up.
There are several ways to do this; the benefit (for me at least) is that I can access geth archive state from our production replication service. Geth requires a beacon client to sync to mainnet, which makes it more cumbersome, at least for the mainnet use case.
I am referring to SSHFS, etc., not S3.