
comparison with Ceph is incorrect #120

Closed
chrislusf opened this issue Apr 16, 2015 · 4 comments

Comments

@chrislusf
Collaborator

Reported by dieterplaetinck, Sep 3, 2013
Ceph can be set up in 3 ways:

  1. an object store (key maps to blobs/files) called RADOS
  2. HA block device which relies on RADOS.
  3. a POSIX compliant filesystem, which relies on RADOS for the file objects and metadata objects.

So (1) is relatively similar to weed-fs, although Ceph uses an algorithm to compute locations on a per-key basis. A lot of the interfacing is done with client libraries, so the client library computes the location of objects and then talks to the storage nodes directly, although there is also an HTTP gateway similar to your interface.
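
To make the contrast concrete, here is a minimal Go sketch of the idea behind client-side placement. It uses rendezvous (highest-random-weight) hashing as a much simpler stand-in for Ceph's actual CRUSH algorithm, and the node addresses are made up:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// nodes stands in for a storage cluster; in Ceph the client would feed
// the key and the cluster map into CRUSH instead of using a fixed list.
var nodes = []string{"osd-1:6800", "osd-2:6800", "osd-3:6800"}

// locate picks a node with rendezvous (highest-random-weight) hashing,
// a far simpler placement function than CRUSH, used here only to show
// the idea: every client computes the same answer from the key alone,
// so no central lookup is needed.
func locate(key string) string {
	var best string
	var bestScore uint64
	for _, n := range nodes {
		h := fnv.New64a()
		h.Write([]byte(key + "|" + n))
		if s := h.Sum64(); s >= bestScore {
			bestScore, best = s, n
		}
	}
	return best
}

func main() {
	// Any client, anywhere, maps the same key to the same node.
	fmt.Println(locate("my-object"))
}
```

The point is that every client derives the storage node from the key alone, whereas in weed-fs the client first asks the master for a volume location and then writes to that volume server.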

Ceph tries to be a "do-all" system. It has a bunch of additional features (such as the ability to run code on objects on the storage nodes; they are also working on erasure coding), but it is also a complicated code base.

It's written in C++ and not really optimized for multi-core processors.

I think weed-fs can differentiate itself from Ceph by being simpler and more elegant (and yet more powerful in some ways), partly by not trying to implement every single feature, and partly because using Go gets you a long way.

Keep in mind that Ceph is gaining a large ecosystem around it; there are companies (Inktank, but also third parties) that offer support, etc.

@chrislusf
Collaborator Author

Comment #1 by project member chris.lu:
Thanks for the explanation! Having a good understanding of Ceph will definitely help weed-fs in the long run. I will make the change and put a link to this page.

@chrislusf
Collaborator Author

This issue was copied from https://code.google.com/p/weed-fs/issues/detail?id=44 since Google Code is shutting down.

@vitalif

vitalif commented Jul 25, 2020

Hi, sorry for necroposting, but I have a question regarding this topic.
One of the major advantages of Ceph is consistency: it stays consistent through any failure. Of course, you lose data when more disks die than your chosen redundancy scheme allows, but the cluster always stays consistent, i.e. it always knows which data is clean and which is not :) All writes go through the primary OSD for a given PG and are not acknowledged until they are journaled on all replicas (see the sketch below).
Can you please describe SeaweedFS's algorithms regarding consistency somewhere?
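
For illustration, here is a rough Go sketch of that acknowledgment rule: the write only succeeds once every replica has journaled it. The function and node names are hypothetical, not Ceph's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// replicate sketches the write path described above: the primary fans a
// write out to every replica and acknowledges the client only after all
// of them have journaled it. journal is a stand-in for a real OSD
// journal write.
func replicate(data []byte, replicas []string, journal func(node string, data []byte) error) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(replicas))
	for _, node := range replicas {
		wg.Add(1)
		go func(node string) {
			defer wg.Done()
			if err := journal(node, data); err != nil {
				errs <- err
			}
		}(node)
	}
	wg.Wait()
	close(errs)
	// If any replica failed, the client never gets an ack, so it can
	// never observe a write that only some replicas hold.
	return <-errs
}

func main() {
	err := replicate([]byte("hello"), []string{"primary", "replica-1", "replica-2"},
		func(node string, data []byte) error {
			fmt.Printf("journaled %d bytes on %s\n", len(data), node)
			return nil
		})
	fmt.Println("acknowledged:", err == nil)
}
```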

@chrislusf
Collaborator Author

chrislusf commented Jul 25, 2020 via email
