Cloudplow has 3 main functions:

- Automatic uploader to Rclone remote: files are moved off local storage. With support for multiple uploaders (i.e. remote/folder pairings).
- UnionFS Cleaner functionality: deletion of UnionFS-Fuse whiteout files (`*_HIDDEN~`) and their corresponding "whited-out" files on Rclone remotes. With support for multiple remotes (useful if you have multiple Rclone remotes mounted).
- Automatic remote syncer: sync between two different Rclone remotes using 3rd party VM instances. With support for multiple syncers (i.e. remote/remote pairings).
Requirements

- Ubuntu/Debian OS.
- Python 3.5 or higher (`sudo apt install python3 python3-pip`).
- Required Python modules (see below).
Installation

- Clone the cloudplow repo: `sudo git clone https://github.com/l3uddz/cloudplow /opt/cloudplow`
- Fix permissions of the cloudplow folder (replace `user`/`group` with your info; run `id` to check): `sudo chown -R user:group /opt/cloudplow`
- Go into the cloudplow folder: `cd /opt/cloudplow`
- Install the required python modules: `sudo python3 -m pip install -r requirements.txt`
- Create a shortcut for cloudplow: `sudo ln -s /opt/cloudplow/cloudplow.py /usr/local/bin/cloudplow`
- Generate a basic `config.json` file: `cloudplow run`
- Configure the `config.json` file: `nano config.json`
Sample Configuration

```json
{
  "core": {
    "dry_run": false,
    "rclone_binary_path": "/usr/bin/rclone",
    "rclone_config_path": "/home/seed/.config/rclone/rclone.conf"
  },
  "hidden": {
    "/mnt/local/.unionfs-fuse": {
      "hidden_remotes": [
        "google"
      ]
    }
  },
  "notifications": {
    "Pushover": {
      "app_token": "",
      "service": "pushover",
      "user_token": "",
      "priority": "0"
    },
    "Slack": {
      "webhook_url": "",
      "sender_name": "cloudplow",
      "sender_icon": ":heavy_exclamation_mark:",
      "channel": "",
      "service": "slack"
    }
  },
  "nzbget": {
    "enabled": false,
    "url": "https://user:pass@nzbget.domain.com"
  },
  "plex": {
    "enabled": true,
    "max_streams_before_throttle": 1,
    "poll_interval": 60,
    "verbose_notifications": false,
    "rclone": {
      "throttle_speeds": {
        "0": "100M",
        "1": "50M",
        "2": "40M",
        "3": "30M",
        "4": "20M",
        "5": "10M"
      },
      "url": "http://localhost:7949"
    },
    "token": "",
    "url": "https://plex.cloudbox.media"
  },
  "remotes": {
    "google": {
      "hidden_remote": "google:",
      "rclone_excludes": [
        "**partial~",
        "**_HIDDEN~",
        ".unionfs/**",
        ".unionfs-fuse/**"
      ],
      "rclone_extras": {
        "--checkers": 16,
        "--drive-chunk-size": "64M",
        "--stats": "60s",
        "--transfers": 8,
        "--verbose": 1,
        "--skip-links": null
      },
      "rclone_sleeps": {
        "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
          "count": 5,
          "sleep": 25,
          "timeout": 3600
        }
      },
      "rclone_command": "move",
      "remove_empty_dir_depth": 2,
      "sync_remote": "google:/Backups",
      "upload_folder": "/mnt/local/Media",
      "upload_remote": "google:/Media"
    },
    "google_downloads": {
      "hidden_remote": "",
      "rclone_excludes": [
        "**partial~",
        "**_HIDDEN~",
        ".unionfs/**",
        ".unionfs-fuse/**"
      ],
      "rclone_extras": {
        "--checkers": 32,
        "--stats": "60s",
        "--transfers": 16,
        "--verbose": 1,
        "--skip-links": null
      },
      "rclone_sleeps": {},
      "rclone_command": "copy",
      "remove_empty_dir_depth": 2,
      "sync_remote": "",
      "upload_folder": "/mnt/local/Downloads",
      "upload_remote": "google:/Downloads"
    },
    "box": {
      "hidden_remote": "box:",
      "rclone_excludes": [
        "**partial~",
        "**_HIDDEN~",
        ".unionfs/**",
        ".unionfs-fuse/**"
      ],
      "rclone_extras": {
        "--checkers": 32,
        "--stats": "60s",
        "--transfers": 16,
        "--verbose": 1,
        "--skip-links": null
      },
      "rclone_sleeps": {
        "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
          "count": 5,
          "sleep": 25,
          "timeout": 300
        }
      },
      "rclone_command": "move",
      "remove_empty_dir_depth": 2,
      "sync_remote": "box:/Backups",
      "upload_folder": "/mnt/local/Media",
      "upload_remote": "box:/Media"
    }
  },
  "syncer": {
    "google2box": {
      "rclone_extras": {
        "--bwlimit": "80M",
        "--checkers": 32,
        "--drive-chunk-size": "64M",
        "--stats": "60s",
        "--transfers": 16,
        "--verbose": 1
      },
      "service": "scaleway",
      "sync_from": "google",
      "sync_interval": 24,
      "sync_to": "box",
      "tool_path": "/home/seed/go/bin/scw",
      "use_copy": true,
      "instance_destroy": false
    }
  },
  "uploader": {
    "google": {
      "check_interval": 30,
      "exclude_open_files": true,
      "max_size_gb": 400,
      "opened_excludes": [
        "/downloads/"
      ],
      "schedule": {
        "allowed_from": "04:00",
        "allowed_until": "08:00",
        "enabled": false
      },
      "size_excludes": [
        "downloads/*"
      ]
    },
    "google_downloads": {
      "check_interval": 30,
      "exclude_open_files": true,
      "max_size_gb": 400,
      "opened_excludes": [
        "/downloads/"
      ],
      "schedule": {},
      "size_excludes": [
        "downloads/*"
      ]
    }
  }
}
```
"core": {
"dry_run": false,
"rclone_binary_path": "/usr/bin/rclone",
"rclone_config_path": "/home/seed/.config/rclone/rclone.conf"
},
"dry_run": true
- prevent any files being uploaded or deleted - use this to test out your config.
rclone_binary_path
- full path to rclone binary file.
rclone_config_path
- full path to rclone config file.
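If you are unsure of either path, both can be looked up from the shell (standard rclone commands; your paths may differ from the example above):

```bash
# locate the rclone binary on your PATH
command -v rclone

# print the config file rclone is currently using
rclone config file
```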
Hidden
UnionFS Hidden File Cleaner: Deletion of UnionFS whiteout files and their corresponding files on rclone remotes.
"hidden": {
"/mnt/local/.unionfs-fuse": {
"hidden_remotes": [
"google"
]
}
},
This is where you specify the location of the unionfs `_HIDDEN~` files (i.e. whiteout files) and the rclone remotes where the corresponding files will need to be deleted from. You may specify more than one remote here. The specific remote path where those corresponding files live is set via each remote's `hidden_remote` in the `remotes` section.
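For example (illustrative paths): a whiteout file at `/mnt/local/.unionfs-fuse/Media/Movies/movie.mkv_HIDDEN~` would cause cloudplow to remove `Media/Movies/movie.mkv` from the `hidden_remote` of each listed remote (here, `google:`).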
Notifications

Notification alerts during tasks. Currently, only Pushover and Slack are supported, but more will be added later.
"notifications": {
"Pushover": {
"app_token": "",
"service": "pushover",
"user_token": "",
"priority": 0
}
},
Retrieve the `app_token` and `user_token` from Pushover.net and fill them in. You can specify a priority for the messages sent via Pushover using the `priority` key. It can be any valid Pushover priority value (https://pushover.net/api#priority).

Note: the key name can be anything (e.g. `"Pushover"`), however, the `"service"` must be `"pushover"`.
"notifications": {
"Slack": {
"webhook_url": "",
"sender_name": "cloudplow",
"sender_icon": ":heavy_exclamation_mark:",
"channel": "",
"service": "slack"
}
},
Retrieve the `webhook_url` when registering your webhook with Slack (via https://my.slack.com/services/new/incoming-webhook/). You can use `sender_name`, `sender_icon`, and `channel` to specify settings for your webhook; you can, however, leave these out and use the defaults.

Note: the key name can be anything (e.g. `"Slack"`), however, the `"service"` must be `"slack"`.
Nzbget

Cloudplow can pause the Nzbget download queue when an upload starts, and resume it once the upload finishes.
"nzbget": {
"enabled": false,
"url": "https://user:pass@nzbget.domain.com"
},
- `enabled` - `true` to enable.
- `url` - your Nzbget URL. Can be either `http://user:pass@localhost:6789` or `https://user:pass@nzbget.domain.com`.
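If you want to sanity-check the URL and credentials first, Nzbget exposes queue pause/resume through its JSON-RPC API via the `pausedownload`/`resumedownload` methods. A quick manual test (illustrative host/credentials; not something cloudplow requires you to run):

```bash
# pause the Nzbget download queue
curl -s 'http://user:pass@localhost:6789/jsonrpc' -d '{"method": "pausedownload"}'

# resume it
curl -s 'http://user:pass@localhost:6789/jsonrpc' -d '{"method": "resumedownload"}'
```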
Plex

Cloudplow can throttle Rclone uploads during active, playing Plex streams (paused streams are ignored).
"plex": {
"enabled": true,
"max_streams_before_throttle": 1,
"poll_interval": 60,
"verbose_notifications": false,
"rclone": {
"throttle_speeds": {
"0": "1000M",
"1": "50M",
"2": "40M",
"3": "30M",
"4": "20M",
"5": "10M"
},
"url": "http://localhost:7949"
},
"token": "",
"url": "https://plex.domain.com"
},
- `enabled` - `true` to enable.
- `url` - your Plex URL. Can be either `http://localhost:32400` or `https://plex.domain.com`.
- `token` - your Plex Access Token.
- `poll_interval` - how often (in seconds) Plex is checked for active streams.
- `max_streams_before_throttle` - how many playing streams are allowed before throttling kicks in.
- `verbose_notifications` - send notifications when the rate limit is adjusted due to more/fewer streams.
- `rclone`
  - `url` - leave as default.
  - `throttle_speeds` - a map of upload speed limits for various stream counts (where `"5"` represents 5 streams or more; stream count `"0"` represents the speed when no active stream is playing). `M` is MB/s.
    - Format: `"STREAM COUNT": "THROTTLED UPLOAD SPEED",`
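For example, with the speeds above: while throttling is active, 1 playing stream caps uploads at 50 MB/s, 3 streams at 30 MB/s, and 5 or more streams at 10 MB/s; once no stream is playing, the `"0"` speed applies again.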
Remotes

This is the heart of the configuration; most of the config references this section one way or another (e.g. hidden path references). You can specify more than one remote here.
"remotes": {
"google": {
Under "remote"
, you have the name of the remote as the key (in the example above, it is "google"
). The remote name can be anything (e.g. google1, google2, google3, dropbox1, etc).
Hidden Cleaner

```json
"remotes": {
  "google": {
    "hidden_remote": "google:",
```

`"hidden_remote"` is the remote path that the unionfs hidden cleaner will remove files from (if the remote is listed under the `hidden` section).
"rclone_excludes": [
"**partial~",
"**_HIDDEN~",
".unionfs/**",
".unionfs-fuse/**"
],
These are the excludes to be used when uploading to this remote.
"rclone_extras": {
"--checkers": 16,
"--drive-chunk-size": "64M",
"--stats": "60s",
"--transfers": 8,
"--verbose": 1
},
These are rclone parameters that will be used when uploading to this remote. You may add other rclone parameters.
Note: a value of null will mean --no-traverse
instead of --no-traverse=null
.
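To illustrate (a hypothetical rendering; the exact argument order is up to cloudplow), the extras above, together with the remote's excludes, would land on the rclone command line roughly as:

```bash
rclone move "/mnt/local/Media" "google:/Media" \
  --checkers=16 --drive-chunk-size=64M --stats=60s --transfers=8 --verbose=1 \
  --exclude "**partial~" --exclude "**_HIDDEN~"   # ...plus the remaining excludes
```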
Rclone Sleeps

Format:

```json
"rclone_sleeps": {
  "keyword or phrase to be monitored": {
    "count": 5,
    "sleep": 25,
    "timeout": 300
  }
},
```

Example:

```json
"rclone_sleeps": {
  "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
    "count": 5,
    "sleep": 25,
    "timeout": 300
  }
},
```
"rclone_sleeps"
are keywords or phrases that are monitored during rclone tasks that will cause this remote's upload task to abort and go into a sleep for a specified amount of time. When a remote is asleep, it will not do it's regularly scheduled uploads (as definted in check_intervals
).
You may list multiple keywords or phrases here.
In the example above, the phrase "Failed to copy: googleapi: Error 403: User rate limit exceeded"
is being monitored.
"count"
: How many times this keyword/phrase has to occur within a specific time period (i.e. timeout
), from the very first occurrence, to cause the remote to go to sleep.
"timeout"
: The time period (in seconds) during which the the phrase is counted in after its first occurance.
-
On it's first occurrence, the time is logged and if
count
is reached within thistimeout
period, the upload task will abort and the remote will go intosleep
. -
If the
timeout
period expires without reaching thecount
, thecount
will reset back to0
. -
The
timeout
period will restart again after the first new occurance of the monitored phrase.
"sleep"
: How many hours the remote goes to sleep for, when the monitored phrase is count
-ed during the timeout
period.
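Worked example, using the values above (`count: 5`, `timeout: 300`, `sleep: 25`): if the 403 phrase first appears at 12:00:00 and four more occurrences follow before 12:05:00, the upload task aborts and the remote sleeps for 25 hours. If fewer than 5 occur in that window, the counter resets to 0 and the next occurrence starts a fresh 300-second window.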
"rclone_command": "move",
This is the desired command to be used when running any rclone uploads. Options are move
or copy
. Default is move
.
"remove_empty_dir_depth": 2,
This is the depth to min-depth to delete empty folders from relative to upload_folder
(1 = /Media/
; 2 = /Media/Movies/
; 3 = /Media/Movies/Movies-Kids/
)
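As a rough shell analogy (hypothetical; cloudplow uses its own depth accounting, so treat the `-mindepth` value as illustrative), depth 2 under `/mnt/local/Media` is similar to:

```bash
# delete empty directories from the /mnt/local/Media/Movies level downward,
# leaving /mnt/local/Media itself intact
find /mnt/local/Media -mindepth 1 -type d -empty -delete
```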
"upload_folder": "/mnt/local/Media/",
"upload_remote": "google:/Media/"
"upload_folder"
: is the local path that is uploaded by the uploader
task, once it reaches the size threshold as specified in max_size_gb
.
"upload_remote"
: is the remote path that uploader
task will uploaded to.
Uploader

Each entry in `uploader` references a remote inside `remotes` (i.e. the names have to match). Each remote can only be referenced ONCE. If another folder needs to be uploaded, even to the same remote, then another uploader/remote combo must be created. The example at the top of this page shows 2 uploader/remote configs.
"uploader": {
"google": {
"check_interval": 30,
"exclude_open_files": true,
"max_size_gb": 500,
"opened_excludes": [
"/downloads/"
],
"schedule": {
"allowed_from": "04:00",
"allowed_until": "08:00",
"enabled": false
},
"size_excludes": [
"downloads/*"
]
}
}
In the example above, the uploader references `"google"` from the `remotes` section.
"check_interval"
: how often (in minutes) to check the size of this remotes upload_folder
. Once it reaches the size threshold as specified in max_size_gb
, the uploader will start.
"exclude_open_files"
: when set to true
, open files will be excluded from the rclone transfer (i.e. transfer will occur without them).
"max_size_gb"
: maximum size (in gigabytes) before uploading can commence
"opened_excludes"
: Paths the open file checker will check for when searching for open files. In the example above, any open files with /downloads/
in it's path, would be ignored.
"schedule"
: This section allows you to specify a time period, in 24H (HH:MM) format, for when uploads are allowed to start. Uploads in progress will not stop when allowed_until
is reached. This setting will not affect manual uploads, only the automatic uploader in run
mode.
"size_excludes"
: Paths that will not be counted in the total size calculation for max_size_gb
.
To have Cloudplow run automatically, do the following:

- `sudo cp /opt/cloudplow/systemd/cloudplow.service /etc/systemd/system/`
- `sudo systemctl daemon-reload`
- `sudo systemctl enable cloudplow.service`
- `sudo systemctl start cloudplow.service`
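You can then verify the service is active and watch its output:

```bash
# check the service state
sudo systemctl status cloudplow.service

# follow the live log (cloudplow also writes to its own logfile,
# /opt/cloudplow/cloudplow.log by default)
sudo journalctl -u cloudplow.service -f
```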
Usage

Command: `cloudplow`

```
usage: cloudplow [-h] [--config [CONFIG]] [--logfile [LOGFILE]]
                 [--loglevel {WARN,INFO,DEBUG}]
                 {clean,upload,sync,run}

Script to assist cloud mount users. Can remove hidden files from rclone
remotes, upload local content to remotes, as well as keep remotes in sync
with the assistance of Scaleway.

positional arguments:
  {clean,upload,sync,run}
                        "clean": clean HIDDEN files from configured unionfs mounts and rclone remotes
                        "upload": perform clean and upload local content to configured unionfs rclone remotes
                        "sync": perform sync of configured remotes
                        "run": starts the application

optional arguments:
  -h, --help            show this help message and exit
  --config [CONFIG]     Config file location (default: /opt/cloudplow/config.json)
  --logfile [LOGFILE]   Log file location (default: /opt/cloudplow/cloudplow.log)
  --loglevel {WARN,INFO,DEBUG}
                        Log level (default: INFO)
```
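For example:

```bash
# start the scheduler (automatic uploads, cleaning, and syncing)
cloudplow run

# one-off tasks, using the flags shown in the help above
cloudplow clean
cloudplow upload --loglevel DEBUG
cloudplow sync --config /opt/cloudplow/config.json
```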
If you find this project helpful, feel free to make a small donation via Monzo (Credit Cards, Apple Pay, Google Pay, and others; no fees), Paypal (l3uddz@gmail.com), and Bitcoin (3CiHME1HZQsNNcDL6BArG7PbZLa8zUUgjL).
Hey dude! Help me out for a couple of 🍻!