Description
The last commit on GitHub was in JUNE/2018! It's been 7 months without ANY bug fixes or new development. Even the main source code repo (http://cr.skytechnology.pl:8081/#/q/status:open) has been stale since Oct/2018 (the only update in November was to the UPGRADE text file).
Is this project still being developed at all?
Due to all the frustration I've had over the last 3 years with LizardFS, and the lack of development in the past 7 months, I started testing the newer version of MooseFS... and WOW!! Just WOW!
MooseFS seems miles ahead of LizardFS today: waaay faster read/write (same hardware as LizardFS), and much better speeds when reading small files, which finally makes it usable for home folders in our studio.
NO TIMEOUTS!! Transferring files (read/write) just works!! Same hardware, same network as LizardFS, and there are NO "timeout" or "unknown error" messages in the logs of any of the machines, chunkserver or master/shadow... You start writing a 300GB file and it transfers CONSISTENTLY at 60MB/sec from start to end! LizardFS just halts, drops down to 5MB/sec, jumps to 10-20... I was NEVER able to write 300GB at more than 20MB/sec with LizardFS. MooseFS gives 60MB/sec OUT OF THE BOX... no hassle, no setup, no adjustments in sysctl... nothing!
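(If anyone wants to reproduce this, below is a minimal Python sketch of the kind of test I'm describing: it writes a big file in fixed-size blocks and prints per-interval throughput, so a steady stream is easy to tell apart from one that stalls. The mount path and sizes are placeholders, not anything LizardFS-specific.)

```python
#!/usr/bin/env python3
# Minimal sketch: write a large file in fixed-size blocks and print
# per-interval throughput, so a steady 60MB/sec stream is easy to tell
# apart from one that collapses to 5-20MB/sec. The test file path and
# sizes are placeholders -- point them at your own mount.
import os
import time

TEST_FILE = "/mnt/lizardfs/throughput_test.bin"  # hypothetical mount path
BLOCK = 4 * 1024 * 1024                          # 4 MiB per write
TOTAL = 10 * 1024 * 1024 * 1024                  # 10 GiB (scale up to 300GB)
REPORT_EVERY = 256                               # report every 1 GiB written

buf = os.urandom(BLOCK)
written = 0
t_start = t_last = time.monotonic()

with open(TEST_FILE, "wb", buffering=0) as f:
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
        if (written // BLOCK) % REPORT_EVERY == 0:
            now = time.monotonic()
            interval_mb = BLOCK * REPORT_EVERY / 2**20
            print(f"{written / 2**30:5.1f} GiB  "
                  f"interval {interval_mb / (now - t_last):6.1f} MB/s  "
                  f"average {written / 2**20 / (now - t_start):6.1f} MB/s")
            t_last = now
    os.fsync(f.fileno())
```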
The WebUI is so much better, with a chunkserver "maintenance mode" switch and a detailed description of storage classes (their approach to file goals) that tells you beforehand whether a class can be fulfilled or not, based on the chunkserver setup you have... not to mention that the storage class system itself is just AMAZING!!!
Unfortunately, the open source version of MooseFS doesn't have EC(D,P) support, so I can't comment on that performance... But we are seriously considering not only switching to MooseFS, but actually paying for the "Pro" version to get access to EC(D,P), since we're losing faith in LizardFS at this point.
Apart from the low read/write performance with LizardFS, we're also finding a lot of wasted disk space: chunks upon chunks simply "forgotten" on the disks. The WebUI says everything is fine, no undergoal/overgoal files... nothing to be deleted!
By removing a chunkserver, formatting its drives and adding it back (essentially forcing LizardFS to re-replicate that server's chunks), we ended up with 25% more free disk space on that chunkserver after replication finished!! 25%!!!!!
We're basically doing that on ALL our chunkservers now... kind of a forced "scrub", and we just keep seeing more and more free space show up! (And also a few missing chunks showing up for ec(2,1) and goal-2 files, when LFS was reporting everything as "safe"... not good!!!)
So we're actually seeing an extra 15TB of free disk space appear out of the 60TB of storage we have on LizardFS... just by removing and re-adding chunkservers.
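(For anyone who wants to sanity-check their own chunkservers for this kind of leftover data, here's a rough Python sketch that walks a chunkserver data directory and flags chunk ids that have more than one version file on disk. The data path and the chunk_<id>_<version>.mfs naming are assumptions based on the usual LizardFS/MooseFS on-disk layout, so verify both against your install before trusting the output.)

```python
#!/usr/bin/env python3
# Rough sketch of a sanity check for "forgotten" chunks: walk a chunkserver
# data directory, parse chunk_<id>_<version>.mfs file names, and flag chunk
# ids that have more than one version on disk -- the older versions are
# candidates for wasted space. The data path and the name pattern are
# assumptions; check them against your own chunkserver before relying on this.
import os
import re
from collections import defaultdict

DATA_DIR = "/mnt/hdd1"  # hypothetical chunkserver hdd path (see mfshdd.cfg)
CHUNK_RE = re.compile(r"chunk_([0-9A-Fa-f]{16})_([0-9A-Fa-f]{8})\.mfs$")

versions = defaultdict(list)  # chunk id -> [(version, path, size), ...]
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        m = CHUNK_RE.match(name)
        if m:
            path = os.path.join(root, name)
            versions[m.group(1)].append(
                (int(m.group(2), 16), path, os.path.getsize(path)))

wasted = 0
for chunk_id, copies in sorted(versions.items()):
    if len(copies) > 1:
        copies.sort(reverse=True)            # newest version first
        for ver, path, size in copies[1:]:   # everything older is suspect
            print(f"stale? chunk {chunk_id} version {ver:08X}: {path}")
            wasted += size

print(f"{len(versions)} chunk ids scanned, "
      f"~{wasted / 2**30:.1f} GiB in older duplicate versions")
```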
Because of all that, for the past 2 years we have been using LizardFS JUST for backup storage. It's IMPOSSIBLE to use in production at the studio, and now I can't trust it for backup either, since I have to explain to the boss that we lost a few ec(2,1)/goal-2 files that were silently corrupted by LFS.
And NO public development on LizardFS since June!
Not to mention the infamous "LTO tape server" support, which I even tried contacting your company to learn more about, and got NO information whatsoever. Funny thing: the alleged tape support was about 40% of the reason we bet on LizardFS over MooseFS 2 years ago... so sad!
Anyhow... I would really like to hear from you guys about the future of LizardFS... if there is one at all!
MooseFS is way ahead of you guys, no question about it! So there's a lot of "catch-up" you guys need to do before even thinking about new and better features.
PS: sorry for rambling... but it's been a really frustrating experience so far with LFS... the whole "timeout" thing between servers in LFS is just ridiculous, and even more ridiculous is the fact that you guys keep blaming it on hardware or the network instead of acknowledging the issue (just read the answers to this issue here on GitHub... it's all documented).
What a surprise I had with MooseFS when I couldn't find ANY timeout messages or performance drops whatsoever, no matter what I did, USING THE SAME HARDWARE/NETWORK as LFS!!! Consistent transfers from start to end, all the time, with multiple clients... It's actually more consistent than NFS! (Still 25% slower than NFS, but definitely more consistent speeds.)
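(To put a number on "consistent": here's a tiny Python sketch that computes mean throughput and coefficient of variation over per-interval samples, e.g. from a test like the one above. The sample values are made up, just to show the two patterns.)

```python
#!/usr/bin/env python3
# Hedged sketch: the mean tells you how fast a transfer is, the coefficient
# of variation (stdev / mean) tells you how steady it is. Sample numbers
# below are invented for illustration only.
import statistics

def summarize(name, samples_mb_s):
    mean = statistics.mean(samples_mb_s)
    cv = statistics.stdev(samples_mb_s) / mean  # lower = more consistent
    print(f"{name:>10}  mean {mean:6.1f} MB/s  CV {cv:4.2f}")

summarize("steady", [58, 61, 60, 59, 62, 60])    # consistent stream
summarize("stalling", [60, 20, 5, 12, 18, 7])    # drops and stalls
```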
That more than proves that the problem is not on our side (hardware/network) but in fact in the LFS code itself. Sorry to say!