More on Global Locks #290

Closed
alexcrow opened this issue Jun 3, 2015 · 3 comments

Comments


alexcrow commented Jun 3, 2015

Now that MooseFS seems to have global locks (https://moosefs.com/products/overview.html) and given their code is GPL, should we expect to see this soon in LizardFS? Global locks would be great for running CTDB+Samba to avoid having to use GlusterFS or NFS for the CTDB lock area.
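For reference, CTDB's recovery lock only needs a file on a filesystem with working cluster-wide fcntl locking; here is a minimal sketch, assuming a LizardFS mount at /mnt/lizardfs and a 2015-era Debian-style CTDB package (there the setting lives in /etc/default/ctdb; newer CTDB releases use a "recovery lock" option in the [cluster] section of ctdb.conf instead):

```
# /etc/default/ctdb -- sketch only; mount point and paths are placeholders.
# All CTDB nodes must point at the same file: the recovery master is the
# node that wins the fcntl lock on it, which is why cluster-wide locks matter.
CTDB_RECOVERY_LOCK="/mnt/lizardfs/ctdb/.reclock"
CTDB_NODES=/etc/ctdb/nodes
```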

Also, can the masters still be managed by Corosync and Pacemaker until the new HA functionality is finalised? Is there an easy recipe for this anywhere? I've looked across the repo, and the information on master HA is confusing and often contradictory.

One more thing - a wishlist item - metadata sharding ;-) I know that metadata is really hard to do right (see CephFS!), so it may just be a pipe dream, but it would open the opportunity to store billions or trillions of files/chunks without paying for an 8-socket Xeon box...

Thanks

Alex

psarna (Member) commented Jun 3, 2015

  1. Locks will probably be implemented soon.
  2. You can still use Corosync/Pacemaker for HA, especially if you know how to configure it properly. If you want an easy recipe, try UCARP or keepalived for basic HA (a keepalived sketch follows this list).
  3. We are already considering various ways of reducing the metadata memory overhead; you can expect more information in the near future.
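
Not an official recipe, but a minimal sketch of the UCARP/keepalived approach from point 2: a floating IP (VIP) that clients use to reach the metadata master, claimed by whichever node keepalived considers healthy. The interface name, VIP, priorities and shared secret below are placeholders, and the promotion hook is only indicated as a comment because the exact shadow-to-master promotion step depends on your LizardFS version:

```
# /etc/keepalived/keepalived.conf on the primary master (illustrative values)
vrrp_instance LIZARDFS_MASTER {
    state MASTER            # use "state BACKUP" and a lower priority on the standby
    interface eth0          # placeholder: the NIC carrying master traffic
    virtual_router_id 51
    priority 150            # standby node gets e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder shared secret
    }
    virtual_ipaddress {
        192.168.10.10/24    # placeholder VIP that clients/mounts point at
    }
    # notify_master /usr/local/bin/promote-lizardfs-master.sh
    #   ^ hypothetical hook: would switch the local metadata server from
    #     shadow to master personality when this node takes over the VIP
}
```

The standby node runs the same file with state BACKUP and a lower priority. Corosync/Pacemaker gives you the same failover plus resource ordering and fencing, at the cost of considerably more configuration.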


borkd commented Jun 20, 2015

If various ideas for reducing the metadata footprint and improving scalability are still being considered, BeeGFS (formerly FhGFS) does a pretty good job in this respect.
"FhGFS distributes metadata on a per-directory basis. This means each directory in your filesystem can be managed by a different metadata server. When a new directory is created, the system automatically picks one of the available metadata servers to be responsible for the files inside this directory. New subdirectories of this directory can be assigned to other metadata servers, so that load is balanced across all available metadata servers."
(from http://www.beegfs.com/wiki/FAQ#distributed_metadata)
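
To make the quoted design concrete, here is a small illustrative model (not BeeGFS or LizardFS code) of per-directory metadata distribution: each newly created directory is assigned an owning metadata server, and entries inside that directory are handled by the owner, so metadata load spreads across servers instead of sitting on one box. The server names and the random assignment policy are assumptions of the sketch:

```cpp
// Illustrative model of per-directory metadata distribution, as described
// in the quoted FAQ. Not taken from any real filesystem's source code.
#include <cstddef>
#include <iostream>
#include <random>
#include <string>
#include <unordered_map>
#include <vector>

struct MetaCluster {
    std::vector<std::string> servers;                     // available metadata servers
    std::unordered_map<std::string, std::string> owner;   // directory path -> owning server
    std::mt19937 rng{std::random_device{}()};

    // Pick an owner when a directory is created; subdirectories may land elsewhere.
    const std::string& createDirectory(const std::string& path) {
        std::uniform_int_distribution<std::size_t> pick(0, servers.size() - 1);
        return owner[path] = servers[pick(rng)];
    }

    // Entries directly inside a directory are handled by that directory's owner.
    const std::string& serverForDirectory(const std::string& dirPath) const {
        return owner.at(dirPath);
    }
};

int main() {
    MetaCluster cluster{{"meta01", "meta02", "meta03"}};
    cluster.createDirectory("/projects");
    cluster.createDirectory("/projects/build");  // may be owned by a different server
    std::cout << cluster.serverForDirectory("/projects") << "\n";
    std::cout << cluster.serverForDirectory("/projects/build") << "\n";
}
```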

onlyjob (Member) commented Jun 20, 2015

@borkd, BeeGFS is proprietary, so they can claim pretty much anything they want and you won't be able to verify their claims. Let's not waste time on proprietary systems, shall we?

@psarna psarna mentioned this issue Jul 31, 2015
@psarna psarna closed this as completed in 86e0d16 Oct 22, 2015