
Total upload/time limits! [Feature Request] #985

Closed
doonze opened this issue Jan 4, 2017 · 14 comments


doonze commented Jan 4, 2017

Loving rclone so far, but I would like to see a total session upload limit option. Pretty straightforward: a flag you can set to stop uploading at, say, 100GB. rclone would finish its current uploads once that target is reached, but would not take on any more work, and would then close the session. Since it tracks total upload anyway, I think it would be a really easy thing to do. If no one else is interested, I can fork and write it for testing, but this should be a simple feature for someone already familiar with the code. I have a 1TB per month (soft) limit with my ISP, and I would rather keep them out of my business, so being able to control the amount uploaded would be nice.

Also, while we're at it, a time-based limit as well; I've seen that request before. If you could set something to only upload for 3 hours, for example, then with crontab (or the like) you could control the exact times you're uploading. That should also be easy, since total time is already tracked.

The biggest advantage: currently, if I want to cut it off, I have to kill rclone once my target time/size is reached. So I lose the files it's currently uploading, and risk leaving things in a funky state. I'm confident rclone will fix that on the next copy/sync, but it's a hacky way of imposing limits; I'd rather just be able to set them when I run it. With total limits on top of the bandwidth limits that are already in place, I could, for example, set my big directories to upload 750GB at a low bandwidth limit, then just let it run all month until it hits the target. Then do it again next month, and never have to worry about blowing through my ISP's limits.
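Something like this hypothetical invocation is what I'm picturing (these flag names are made up for illustration, not real rclone flags):

```
rclone sync /data remote:backup --bwlimit 1M --max-upload 750G --max-time 3h
```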

Thoughts?

ncw added the enhancement label Jan 4, 2017

ncw commented Jan 4, 2017

Nice idea. Fancy having a go?


doonze commented Jan 5, 2017

I can pull and fork it when I get a chance. It will likely take me a while to dig through the code, which is why I was hoping someone already familiar with it wanted to take a swing at it. Lol. But yeah, I'll take a look.


doonze commented Jan 6, 2017

Hmmm... Crash course in Go. Still not sure of the logic flow you're using, but I can at least read it now...


ncw commented Jan 6, 2017

Ok here is an outline of what you need to do...

  • in fs/config.go make a new flag (or flags) and stuff them into the Config struct
  • in fs/sync.go in the run method, you need to break out of the "Do the transfers" loop when your limits are reached. Current transfers will finish neatly to completion if you do that, and there may be some objects in the pipeline, but that is the neatest way to do it.

Then write a test or two, do the docs and you are done!
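A standalone, hypothetical sketch of the shape that outline suggests (not rclone's actual types or loop; the names here are made up):

```go
// Hypothetical sketch: a MaxUpload field on the config plus a check at
// the top of the transfer loop, so in-flight transfers finish cleanly
// and no new work is handed out once the limit is hit.
package main

import (
	"fmt"
	"sync/atomic"
)

// Config stands in for the struct in fs/config.go that would grow a new field.
type Config struct {
	MaxUpload int64 // bytes; 0 means unlimited
}

var (
	config        = Config{MaxUpload: 3} // tiny limit so this demo trips it
	bytesUploaded int64                  // rclone already accounts transferred bytes
)

func transfer(name string, size int64) {
	atomic.AddInt64(&bytesUploaded, size)
	fmt.Println("uploaded", name)
}

func main() {
	queue := []struct {
		name string
		size int64
	}{{"a", 1}, {"b", 1}, {"c", 1}, {"d", 1}}

	// Stand-in for the "Do the transfers" loop in fs/sync.go's run method.
	for _, obj := range queue {
		if config.MaxUpload > 0 && atomic.LoadInt64(&bytesUploaded) >= config.MaxUpload {
			fmt.Println("upload limit reached; not starting new transfers")
			break
		}
		transfer(obj.name, obj.size)
	}
}
```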


doonze commented Jan 7, 2017

Ah ha!! sync was the package I was not getting! I hadn't looked in there much because I figured it was like copy and the others, just a package for that command. Light bulb comes on, bbl.....


doonze commented Jan 9, 2017

Another side benefit is that you will be able to set a size AND a time limit. So you can say: upload for 24 hours or 100 GB, whichever happens first! I can see that being handy!

ncw: Do you think checking these two values in the run loop would cause any perceivable slowdown? I'm new to Go, but I don't think it would. I'm wondering, though, whether it's better to have the checks only on updates, or on every loop? Of course, if someone set their updates to, say, every 24 hours, then it would only check every 24 hours... I could see that being an issue...

UPDATE:
Got the config variables created and put into the config structure for both the size and time limits.

TODO:

  1. Add the run loop exit. My plan is to just make the loop think the directory is empty once the target is reached. I haven't dug into it yet, but that seems the easiest way; I'll update when I get into that part. (A rough sketch of the combined checks is below.)

  2. Figure out how to adapt the Bytes/MB/GB conversions already in place for the bandwidth limits to the total size limit. I'm not sure whether it's best to copy the existing code as-is and create a whole new process for my variables, or to add logic to the existing code to handle both at once. Still thinking on the best way to handle this.
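For what it's worth, a standalone sketch of the combined size-and-time check, whichever budget runs out first (every name here is hypothetical, not rclone's code):

```go
// Hypothetical combined size-and-time limit check for the run loop.
package main

import (
	"fmt"
	"time"
)

var (
	maxUpload   int64         = 100 << 30      // e.g. 100 GiB; 0 = unlimited
	maxDuration time.Duration = 24 * time.Hour // 0 = unlimited
	startTime                 = time.Now()
	uploaded    int64         // updated by the transfer accounting
)

// limitsReached reports whether the size or the time budget is spent.
// It is cheap (one comparison and one clock read), so calling it on
// every pass of the run loop costs effectively nothing.
func limitsReached() bool {
	if maxUpload > 0 && uploaded >= maxUpload {
		return true
	}
	if maxDuration > 0 && time.Since(startTime) >= maxDuration {
		return true
	}
	return false
}

func main() {
	fmt.Println("stop now?", limitsReached())
}
```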


ncw commented Jan 9, 2017

Do you think checking these two values in the run loop would cause any perceivable slow down?

No. rclone is limited by network speed and disk speed, not by CPU.

Figuring out how to adapt the Bytes, MB, GB conversions that are in place already for the bandwidth limits to the total size limit. I'm not sure if it's best to copy the existing as is, and create a whole new process for my variables, or add logic to the existing code to handle both at once. Still thinking on the best way to handle this.

You shouldn't need to make anything special here - look at a SizeSuffix config variable - that does MB/GB parsing for you into an int64.
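To illustrate the idea, the parsing a SizeSuffix-style value does amounts to roughly this (a standalone sketch, not rclone's implementation, which is richer):

```go
// Sketch of SizeSuffix-style parsing: "100M", "1.5G" -> a byte count.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseSizeSuffix(s string) (int64, error) {
	multiplier := int64(1)
	switch {
	case strings.HasSuffix(s, "k"), strings.HasSuffix(s, "K"):
		multiplier, s = 1<<10, s[:len(s)-1]
	case strings.HasSuffix(s, "M"):
		multiplier, s = 1<<20, s[:len(s)-1]
	case strings.HasSuffix(s, "G"):
		multiplier, s = 1<<30, s[:len(s)-1]
	}
	value, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("bad size %q: %w", s, err)
	}
	return int64(value * float64(multiplier)), nil
}

func main() {
	n, _ := parseSizeSuffix("1.5G")
	fmt.Println(n) // 1610612736
}
```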


doonze commented Jan 10, 2017

OK, I wondered if that was global or a built-in function of bandwidth. I looked at that code before I went through the Go tutorials; I'll look more closely at how it's implemented. I'll have to copy the format for the time limit: got to convert minutes, hours, and days to seconds. I'll likely just adapt the bytes conversion to base 60 for time, unless it's already in a package I haven't looked at yet. Go is still a little weird to me logic-flow-wise, and learning someone else's logic is always the challenge.

Haven't had any time to code last few days, plan on working on it later this week.


ncw commented Jan 11, 2017

I'll have to copy the format for the time limit, got to convert minutes, hours, and days to seconds. I'll likely just adapt the bytes conversion to a base 60 for time. Unless it's already in a package

You can use a time.Duration for this, eg

cmd/mount/mount.go:	commandDefintion.Flags().DurationVarP(&dirCacheTime, "dir-cache-time", "", dirCacheTime, "Time to cache directory entries for.")
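With just the standard library the same idea looks like this (a sketch; rclone itself registers flags through pflag's DurationVarP as in the line quoted above). Note that time.ParseDuration accepts units from "ns" up to "h" but not days, so a day would be written "24h":

```go
// Sketch: a duration flag using only the standard library's flag package;
// time.ParseDuration handles inputs like "90s", "30m", "3h" for free.
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	maxDuration := flag.Duration("max-duration", 0, "Stop starting new transfers after this long (0 = unlimited).")
	flag.Parse()

	deadline := time.Now().Add(*maxDuration)
	fmt.Println("no new transfers after:", deadline)
}
```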


doonze commented Jan 11, 2017 via email


doonze commented Jan 19, 2017

Have had zero time to work on this :( but I've got a pretty good idea of how I'm going to code it.

@vertigo235

We need this now to help with the Google upload bans.


ghost commented Aug 14, 2017

@vertigo235 --bwlimit can be used to limit the speed to one that wouldn't go over the 24h limit -- this isn't needed immediately
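For example, if the cap is roughly 750 GB per 24 hours, that works out to about 8.3 MiB/s, so something like:

```
rclone sync /data remote:backup --bwlimit 8M
```

keeps a continuously running sync just under it.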

@vertigo235

Yes, that's a workaround, but I am using a preemptible GCE instance to sync some stuff, and it would be very nice to run it for an hour or so each day instead of having to keep it open all the time (or until Google kills it).

ncw added this to the Help Wanted milestone Dec 14, 2017
ncw pushed a commit to boosh/rclone that referenced this issue Jan 24, 2020
…sfer session

This gives you more control over how long rclone will run for, making
it easier to script backups, e.g. via cron. Once the `--max-duration`
time limit is reached, no new transfers will be initiated, but those
already in-flight will be allowed to complete.

Fixes rclone#985
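For example, a cron-driven backup can now cap each run like this:

```
rclone sync /data remote:backup --max-duration 3h
```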
ncw closed this as completed in 0d7573d Jan 24, 2020