Add command to completely delete file from seafile #861
Comments
Maybe you can fix it by first unsyncing the library, removing the big files from the local folder, and then syncing the library with the existing local folder again.
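The workaround above can be sketched with `seaf-cli` (the paths, server URL, and library ID below are example placeholders, not values from this thread):

```shell
# 1. Stop syncing the local folder (server-side data is untouched)
seaf-cli desync -d ~/Seafile/MyLibrary

# 2. Remove the big files from the local copy
rm ~/Seafile/MyLibrary/huge-file.bin

# 3. Re-sync the library into the existing folder
seaf-cli sync -l <library-id> -s https://seafile.example.com \
              -d ~/Seafile/MyLibrary -u user@example.com
```

Note that this only changes the library's current state; as later comments explain, the removed files still live on in the library's history until GC and history settings allow their blocks to be reclaimed.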
We'll consider adding this feature.
@shoeper
+1
The most intuitive way would be to add an additional link in the trash: "delete completely". If the file was shared between multiple users, it should be deleted completely only once all users have pressed "delete completely" (link counting). I don't know exactly how Seafile works internally, but I think that for any given file you can get a list of its blocks. To delete a file completely, run GC only on those blocks (this probably requires GC tweaking).
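The block-level idea in this comment can be illustrated with a small sketch. This is not Seafile's real storage API; it is a hypothetical reference-counting model showing why GC must only reclaim blocks that no other file still references:

```python
# Illustrative sketch (not Seafile's actual data model): each block keeps a
# count of how many files reference it; "delete completely" frees only the
# blocks that become unreferenced.
from collections import Counter


class BlockStore:
    def __init__(self):
        self.files = {}            # file path -> list of block ids
        self.refcount = Counter()  # block id -> number of referencing files

    def add_file(self, path, blocks):
        self.files[path] = list(blocks)
        self.refcount.update(blocks)

    def delete_completely(self, path):
        """Drop the file and return the blocks that may be physically removed."""
        freed = []
        for b in self.files.pop(path):
            self.refcount[b] -= 1
            if self.refcount[b] == 0:
                del self.refcount[b]
                freed.append(b)  # no other file uses this block any more
        return freed


store = BlockStore()
store.add_file("a.bin", ["b1", "b2"])
store.add_file("b.bin", ["b2", "b3"])  # b2 is shared (deduplicated)
print(store.delete_completely("a.bin"))  # only the unshared block: ['b1']
```

In real deduplicating storage like Seafile's, the complication is that history commits also reference blocks, which is why the maintainers say history and trash cannot be cleaned independently.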
I think this won't happen, because the pro version already has live GC (you don't have to stop Seafile to run GC).
The design of Seafile's data model binds history and trash together. There is no way to clean a deleted file without removing other files' history.
I was about to create a new issue when I found this one. I believe it is time to redesign the data model to allow this. Here is the text of the new issue I would have created; if you feel that I should create a new issue with this text, let me know.

Title: Seafile - Completely delete a file/directory with no traces left in the library.

New feature request: to be able to completely delete a file or a directory, whether it is still in service or already in the trash. Currently, when a file/directory is deleted, it is moved to the trash, where it resides forever. This is not enough. Reasons to implement this feature:
Reducing the overall history retention time is not an acceptable solution, as it affects all files/directories. Users need to be able to preserve the full history of the other files in the library. This could be implemented as follows:
IMPLEMENTATION SUGGESTION: One way of implementing it is to define a retention time for each file/directory individually. By default, newly created files/directories would inherit the parent directory's history retention time. The user could then set the history retention time of individual files or directories to any number, including zero. Setting the retention time to zero would effectively delete the file/directory; a button named "Permanently delete" or "Nuke" could perform this action. For directories, the retention time would also propagate to their children.
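The inheritance rule proposed above can be sketched in a few lines. This is a hypothetical model, not Seafile code; the class and field names, and the default of 90 days, are invented for illustration:

```python
# Hypothetical per-path retention model: a node with no explicit retention
# inherits its parent's value; retention 0 marks the path for complete removal.
class Node:
    def __init__(self, name, parent=None, retention=None):
        self.name = name
        self.parent = parent
        self._retention = retention  # None means "inherit from parent"

    @property
    def retention(self):
        if self._retention is not None:
            return self._retention
        # Walk up to the parent; the library root supplies the default (90
        # days here, an arbitrary example value).
        return self.parent.retention if self.parent else 90


root = Node("library", retention=90)
docs = Node("docs", parent=root)                 # inherits 90 days
vm = Node("vm.img", parent=root, retention=0)    # "Nuke": eligible for full GC
print(docs.retention, vm.retention)  # 90 0
```

A GC pass under this model would treat any path whose effective retention is 0 as deletable together with its history, while leaving sibling paths' history intact.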
As you can see in the previous answer by @killing, this is much more complicated than you think. But I know the problem. Setting the history length for specific files would be really great, even if blocks are only removed on a Seafile GC run. Or even a setting like "remove files in trash after x days".
I understand how complicated it is. This is meant for a 5.0 or 6.0 release. A file's history should be decoupled from that of other files. Sooner or later, Seafile needs to address this, as repositories will grow in size with no serious means of reducing them without sacrificing the history of a few important files. In my case, I accidentally copied a directory containing gigabytes of binary data into my library. Now I am stuck with that.
Why was this issue closed? |
It's a shame this issue was closed, because this is a very good explanation of the problem, and as of 2024 it is still needed. I understand it might be difficult to implement, but the issue should remain open until it is.
I have a "University" library. My teacher gave me a set of virtual machines (50 GiB) for network classes, and I accidentally put them into my library. The next day I saw that the Seafile client was stuck uploading (0%) and my library had grown to 65 GiB. I moved the files out of the library, ran the garbage collector (on the server side), and cleared the `blocks` directory; nothing helped. Moreover, uploading those virtual machines blocks Seafile from uploading other changes in my "University" library. I know that I can change the history period to 0 days and then run the garbage collector, but I don't want to lose my history.
I would like to have a command (probably on the server side) that could remove specified files or directories completely.

PS: How can I fix this situation for now?
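For reference, the history-truncating cleanup this commenter mentions (and wants to avoid) looks roughly like this on a self-hosted server; the library ID is a placeholder, and the history setting is changed per library in the web UI:

```shell
# Set the library's "history keep period" to 0 days in the web UI first,
# then run GC for that library from the seafile-server directory.
# WARNING: this discards the whole library's history, not just one file's.
./seaf-gc.sh <library-id>
```

This is exactly the all-or-nothing trade-off the feature request above is trying to eliminate.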