Terminate after last (large) files finished #1522

Open
orschiro opened this issue Jul 11, 2017 · 19 comments

@orschiro

I want to put the following out for discussion.

I run my backups to Backblaze B2. Backing up my entire system to B2 takes several days, and I don't want to keep my laptop running until Rclone is finished.

Thus, at some point I will want to terminate the Rclone process and continue at a later stage.

However, if I terminate in the middle of a large file upload, next time Rclone will start with this file from scratch.

I would need a way to automatically terminate after the last (large) file has finished uploading, to avoid having to restart unfinished files from scratch the next time I run Rclone.

@diamondsw

One way to help with this (especially if you have limited bandwidth) is to limit the simultaneous transfers. I run --transfers=1 at home so that it will finish individual files as quickly as possible and limit this problem to at most a single large file. However, if you have a lot of bandwidth at your disposal, then a single connection will typically be rate-limited on the cloud end. Just a thought.
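For example, a minimal invocation of that workaround (the remote name and paths below are placeholders, not from the thread):

# Keep only one file in flight at a time, so an interrupted run loses
# at most one partially uploaded file:
$ rclone sync /home b2:my-backup --transfers=1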

@orschiro
Author

@diamondsw thanks for the idea. Will give it a try. Any other thoughts?

@ncw
Member

ncw commented Jul 13, 2017

I think there is an issue about doing a graceful shutdown for rclone (can't find it at the moment!)

This would cause rclone to finish transferring the files it was transferring before it exited.

Does that sound useful?

@orschiro
Author

This would cause rclone to finish transferring the files it was transferring before it exited.

And this would work with large files, too?

@diamondsw

Graceful shutdown would be very interesting, with a few caveats:

  1. The user should get an immediate printout/estimate of the remaining time, so they can judge whether they want to wait for completion. If I need to grab my laptop and leave, I might be willing to wait a minute or two, but not an hour.
  2. A non-graceful shutdown should still be offered, so that if the user needs to leave right now (or there are other forceful needs: OS reboot, emergency sleep due to loss of power, etc.) that remains possible as it is today. Sometimes you just can't do things nicely. ;)

@ncw
Member

ncw commented Jul 30, 2017

I've seen programs which do the graceful shutdown on the first CTRL-C and quit permanently on the second.

Another option would be to implement this with a different signal, though that would stop it working on Windows.

This behavior could (or maybe should) be controlled by a flag, say --graceful-stop or something like that.

Thoughts?
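As a generic illustration of that two-stage pattern (plain shell, not rclone code - the loop and sleep below just stand in for a transfer queue):

# First Ctrl-C requests a graceful stop; second Ctrl-C aborts immediately.
stop_requested=0
on_int() {
    if [ "$stop_requested" -eq 0 ]; then
        stop_requested=1
        echo "Finishing the current item - Ctrl-C again to abort."
    else
        echo "Aborting now."
        exit 130
    fi
}
trap on_int INT

for item in 1 2 3 4 5; do
    echo "Transferring item $item ..."
    sleep 2   # stands in for one file transfer
    if [ "$stop_requested" -eq 1 ]; then
        echo "Stopped gracefully after item $item."
        break
    fi
done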

@orschiro
Author

I've seen programs which do the graceful shutdown on the first CTRL-C and quit permanently on the second.

Nice, like this idea!

This behavior could or maybe should be controlled by a flag, say --graceful-stop or something like that.

You have my vote. :-)

@ncw ncw added this to the Help Wanted milestone Aug 30, 2018
@Tr4il

Tr4il commented Oct 1, 2019

Was there ever any news/updates to this? It's something I'd actually use a lot! :)

@kageurufu
Contributor

kageurufu commented Jan 9, 2020

I'd like to see this implemented using POSIX signals; maybe SIGTERM (requesting termination) or SIGHUP (hangup) could be semantically sane.

Then adding a --graceful-stop as @ncw suggested could attach that to SIGINT (Ctrl+C sends this) as well.

Maybe do a graceful shutdown on the first SIGINT, then abort on subsequent SIGINTs? (EDIT: Just saw this was already suggested.)

Then I could run killall -TERM rclone from a script to safely stop current background transfers during system shutdown or the like.
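A sketch of that shutdown hook, assuming rclone treated SIGTERM as "finish in-flight transfers, then exit" (the behavior proposed here, not something the thread confirms rclone does today):

# Ask any running rclone processes to stop gracefully, then wait for them to exit.
killall -TERM rclone
while pgrep -x rclone > /dev/null; do
    sleep 1
done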

@sjpotter

I was about to file an issue for this. I think it should be part of the remote control interface.

rclone rc graceful-shutdown - it doesn't queue anything new for the future.

If one wants to add it via other signals, that can come later, but this seems like it should be part of the remote control interface first, as that's the easiest to get right.

@ncw
Member

ncw commented Jan 22, 2020

I was about to file an issue for this. I think it should be part of the remote control interface.

rclone rc graceful-shutdown - it doesn't queue anything new for the future.

If one wants to add it via other signals, that can come later, but this seems like it should be part of the remote control interface first, as that's the easiest to get right.

That sounds like a good idea - thanks. I'm re-working some of that infrastructure at the moment so I'll see if I can fit it in there.

@scottmlew

I wanted to chime in to add that this feature would be extremely helpful.

The rc idea sounds promising. Another idea might be to let the user put a specially named file in a directory, and this signals to rclone to stop processing that directory. However, from looking at the code (my first time looking at Go, so please forgive me!) it looks like the upload list is compiled before any uploads start, so this wouldn't work.

(as an aside, the idea of having a directory ignored by including a special file in it would be a very convenient way of specifying excludes for many use cases...I don't see that this is possible now)
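For the aside about ignoring a directory via a special file: rclone's filtering does include an --exclude-if-present flag that skips any directory containing a named marker file. A minimal sketch (the marker name and paths here are chosen purely for illustration):

# Skip every directory that contains a file called .rclone-ignore:
$ rclone sync /home b2:my-backup --exclude-if-present .rclone-ignore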

@ncw
Member

ncw commented Apr 10, 2020

A graceful stop rc feature is a lot easier to implement than a file in a directory, alas. It wouldn't be too difficult to implement, provided you don't mind sticking your fingers into the concurrent flying chainsaws that are rclone's sync code ;-)

@pesaventofilippo

Any news on this? AFAIK it's still not implemented

@ncw
Member

ncw commented May 30, 2023

I think if you were to use the rc to set this flag to something small

  --max-transfer SizeSuffix      Maximum size of data to transfer (default off)

Then rclone would stop after finishing its current transfers.

So first you'd want to set a graceful cutoff

$ rclone rc options/set --json '{"main": {"CutoffMode": "soft"}}'

Then set the cutoff to something small and rclone will stop after the current transfers are done.

$ rclone rc options/set --json '{"main": {"MaxTransfer": 1}}'

@pesaventofilippo - do you want to give that a try?
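For context, the rc calls above only work if the running rclone was started with the remote control enabled. A minimal end-to-end sketch of this workaround (the remote name, paths and backgrounding are placeholders, not from the thread):

# Start the long transfer with the remote control API enabled:
$ rclone copy /home b2:my-backup --rc &

# Later, from another shell, switch to a soft cutoff and make the limit tiny,
# so rclone stops once the in-flight transfers are done.
# (Depending on your rc setup you may also need --rc-user/--rc-pass or --rc-no-auth.)
$ rclone rc options/set --json '{"main": {"CutoffMode": "soft"}}'
$ rclone rc options/set --json '{"main": {"MaxTransfer": 1}}'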

@pesaventofilippo

pesaventofilippo commented Jun 1, 2023

I tried it - it kinda worked.
It stopped uploading after the current transfer was finished, throwing an error (as expected, it reached the limit).

Instead of quitting when finished, it just hung indefinitely doing nothing, but all current transfers were done so I just stopped it with Ctrl-C.

(still, in the future it would be nice to have an option to "formally" stop after current transfers and gracefully exit without throwing an error)

@ncw
Member

ncw commented Jun 1, 2023

@pesaventofilippo what rclone version did you try that with? If not the latest, can you retry with v1.62.2? There have been bugs with rclone hanging on reaching this limit in old versions but I think (hope!) they are fixed in the latest version.

@pesaventofilippo

Yep, I tried with the latest version, v1.62.2.
Maybe I'll do more tests when I get the chance, to see if it happens every time.

@ncw
Member

ncw commented Jun 1, 2023

@pesaventofilippo if you can get a log with -vv of it hanging then please open a new issue on GitHub about it, as it isn't supposed to hang! Thanks
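For completeness, one way to capture such a log (the source, remote and log path below are placeholders):

# Re-run the transfer with debug logging written to a file:
$ rclone copy /home b2:my-backup -vv --log-file=rclone-debug.log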
