file handle read 404 Not Found, missing file chunk after upload via filer.copy #2162
To confirm:
btw: https://github.com/chrislusf/seaweedfs/releases/dev has the latest dev version. You can use the |
If you mean that another application is uploading files to seaweed, then yes. There is an application that is pushing files via S3 every other second.
Highly variable, ranging from a few KB to 300 MB, but most are between 1 and 6 MB.
Yes, I checked them for any new entries at the time of the upload, but there was nothing to be found there. That's why I didn't include them. And I do mean nothing. Running |
|
It would also be nice to know what files are broken, so I can try and see if they can be repaired. |
Use the 2.56 version, and run this to see which file has missing chunks:
|
Ran the | On top of that, the other application that is uploading data to seaweed via S3 has timestamps in the file names. The earliest file has a timestamp | Something doesn't add up. |
I've run |
The files are readable once again. |
Describe the bug
Continuation of #2154
So, this time I had uploaded 11k files via filer.copy, and once again a few files error out.
weed -v 4 -logdir /tmp/wlogs filer.copy * http://localhost:8888/buckets/client_files/ >> /tmp/wlogs/somelog.log
From log:
From filer.copy:
From mount:
I did not use other tools; I uploaded everything in one go. So something is not right. If this is happening now, I wonder if it also happens with S3 and other parts. But I will only be able to check on the next version.
System Setup
version 8000GB 2.55 05af54a linux amd64
filer.toml
weed mount -filer=localhost:8888 -dir=/mnt/weed -cacheDir=/snapraid/storage_c2/test/seaweed-cache -cacheCapacityMB=0
Expected behavior
Files should have no errors when uploaded via filer.copy. If needed, double-check that the uploaded chunk was really uploaded. Sure, that will slow down the whole process, but data loss is kinda bad. Though I will only be able to confirm this in the next release.
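The "double check that the chunk was really uploaded" idea above can be sketched as a small read-back verifier. This is a hypothetical sketch, not part of SeaweedFS: it assumes the filer listens on localhost:8888 and serves the uploaded files back over plain HTTP GET at the same paths used by filer.copy; the function names are mine.

```python
# Sketch: verify an upload by reading the file back from the filer and
# comparing byte counts. Reading the full body matters here, because a
# missing chunk typically surfaces as a 404 or a failed/truncated read
# partway through the stream.
import http.client
import os
import urllib.error
import urllib.request

def verify_upload(local_path: str, remote_url: str) -> bool:
    """Return True if remote_url streams back exactly as many bytes
    as the local file contains."""
    try:
        with urllib.request.urlopen(remote_url) as resp:
            received = 0
            while chunk := resp.read(1 << 20):  # read 1 MB at a time
                received += len(chunk)
    except (urllib.error.URLError, http.client.HTTPException, OSError):
        return False  # e.g. 404 Not Found, or a read aborted mid-stream
    return received == os.path.getsize(local_path)

def find_broken(local_dir: str, base_url: str):
    """Yield names of files that do not read back intact from the filer."""
    for name in sorted(os.listdir(local_dir)):
        path = os.path.join(local_dir, name)
        if os.path.isfile(path) and not verify_upload(path, base_url + name):
            yield name
```

Running something like `list(find_broken("/data/upload", "http://localhost:8888/buckets/client_files/"))` right after a filer.copy run would surface broken files immediately, so they can be re-copied while the source data is still at hand. It is slower than trusting the upload, but cheap compared to discovering the loss later.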