Server crashed while uploading a large number of files #273
Original report:

Hi, I have been testing SeaweedFS for storing large numbers of images, as mentioned in #271. I have run into a situation where the weed server with filer crashes and I am unable to start it again.

I made a simple application that reproduces this behaviour consistently: https://github.com/mosic/seaweedfs-repro

It starts a pool of 50 Erlang processes which, in parallel, upload an image to a path that includes a timestamp, incrementing the time by one second, 10 million times in total. The SeaweedFS server crashed on multiple runs, sometimes after ~100k requests and once after ~900k.
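For illustration, here is a minimal Go sketch of that upload pattern (the actual repro linked above is Elixir/Erlang, not this code). It assumes a filer listening on http://localhost:8888, which accepts multipart POSTs to a path, and uses a hypothetical /images/<timestamp>/image.jpg path layout; the payload is a stand-in, not a real image.

```go
// Sketch only: 50 workers POSTing a payload to a SeaweedFS filer,
// each to a path derived from an incrementing timestamp.
package main

import (
	"bytes"
	"fmt"
	"log"
	"mime/multipart"
	"net/http"
	"sync"
	"time"
)

const (
	filerURL = "http://localhost:8888" // assumed filer address
	workers  = 50
	total    = 10_000_000
)

// upload POSTs data as a multipart form file to filerURL+path.
func upload(client *http.Client, path string, data []byte) error {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	part, err := w.CreateFormFile("file", "image.jpg")
	if err != nil {
		return err
	}
	if _, err := part.Write(data); err != nil {
		return err
	}
	w.Close() // finalize the multipart body before sending

	resp, err := client.Post(filerURL+path, w.FormDataContentType(), &buf)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	img := bytes.Repeat([]byte{0xFF}, 64*1024) // stand-in payload
	client := &http.Client{Timeout: 30 * time.Second}
	jobs := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				// advance the timestamp by one second per upload
				t := time.Unix(int64(n), 0).UTC()
				path := fmt.Sprintf("/images/%s/image.jpg",
					t.Format("2006/01/02/15/04/05"))
				if err := upload(client, path, img); err != nil {
					log.Println("upload failed:", err)
				}
			}
		}()
	}

	for n := 0; n < total; n++ {
		jobs <- n
	}
	close(jobs)
	wg.Wait()
}
```

With 50 concurrent writers this generates sustained small-file write load against the filer, which is the pattern that triggered the crashes described here.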
This is an excerpt from the logged output:

You can see more at: https://gist.github.com/mosic/902eb9a34ae1ae87c0e967cee45fa9bc

After this crash, when I try to start weed again I get this:

Running `weed fix` for each volume doesn't fix this. Any suggestions?

Comments

chrislusf: Thanks for the detailed information! This should be fixed now. Please re-run your test case.

Original reporter: @chrislusf Thanks for the quick update, that indeed fixed the crashes! I wasn't able to make the server crash again. But I still get the same filer error if I stop the server and try to start it again. This doesn't seem to happen on every run, only after a large number of files have been uploaded. Here is the output I got upon stopping the server after 1 million files finished uploading: