
Need basic help with installation - need more details #34

Open
bgmess opened this issue Jan 2, 2018 · 155 comments

@bgmess

bgmess commented Jan 2, 2018

I am fairly new to Linux and Docker. I need more instructions than those given in the quick start:

Launch the CrashPlan PRO docker container with the following command:

docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /docker/appdata/crashplan-pro:/config:rw \
    -v $HOME:/storage:ro \
    jlesage/crashplan-pro

Where do I enter the above commands?

Is this all I need to do?

Any tips would be greatly appreciated. Not sure how to begin. I have installed Docker on the Synology NAS, but that's as far as I got.

Thanks,
Brian

@excalibr18

excalibr18 commented Jan 2, 2018

Hello Brian,
I'm a Synology user as well and was using Patters' CrashPlan package before migrating to the Docker image. Since you're new to Linux, I'll assume you haven't used an SSH client like PuTTY. You'll need to download that before you get started. You can get it here: http://www.putty.org/

Here's how I got jlesage's crashplan-pro docker image up and running on my Synology and adopted the existing backup set created when I was using Patters' CrashPlan package (EDIT: This reflects my personal setup [I have a Synology DS1515+ running DSM 6.1.5 and Docker package version 17.05.0-0367; I have a single volume and it's set up with Synology's SHR-2 RAID configuration] and is intended to be a step-by-step example of how I deployed the docker image on my system. Your individual Synology setup may differ from mine, so please understand that your deployment of the docker image may differ as a result.):

From Synology DSM:

  1. Open Package Center.
  2. Stop Patters' CrashPlan package if you are migrating from it.
  3. Install the Docker package from the Package Center interface (this you already did).
  4. Close Package Center.
  5. Open Docker.
  6. Click 'Registry' on the list to the left (below 'DSM').
  7. In the search bar at the top of the Docker window, enter jlesage and click the 'Search' button. You should see jlesage/crashplan-pro at the top of the list. Highlight it and click the 'Download' button. Leave the tag as latest in the 'Choose Tag' window that pops up and click the 'Select' button.
  8. You should now see a '1' pop up to the right of 'Image' below 'Registry'. Let your Synology complete the download (it's a 499 MB download).
  9. Once the download is complete, open Control Panel from the DSM interface.
  10. Click on 'Terminal & SNMP' from Control Panel.
  11. Click the box next to 'Enable SSH service'.
  12. Open PuTTY.

From PuTTY:

  1. In the 'PuTTY Configuration' window, enter the local IP address (192.168.xxx.xxx) for your Synology where it says 'Host Name (or IP address)'. Then click the 'Open' button, which will open up a Terminal Session.
  2. Enter the username and password at the prompt for a user with Admin privileges.
  3. Enter the following command to get 'root' access:
    sudo -i
    Press Enter.
    Enter your password again and press Enter.
  4. Quick note about PuTTY: if you highlight the code blocks on this site and copy with Ctrl+C, you'll paste in PuTTY with the Right-Click Mouse Button instead of Ctrl+V.
  5. You will need to create the /docker/appdata/crashplan-pro configuration folder (seen as /volume1/docker/appdata/crashplan-pro from the command line) manually by entering the following command:
    mkdir -p /volume1/docker/appdata/crashplan-pro
    Press Enter.
  6. Create the Docker container image by entering the following command at a minimum (there are other environment variables you can introduce if you see fit or necessary - just check jlesage's documentation on those specifics) as a single continuous string (no line break) and let it do its thing:
    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro
    The reason we are doing this through SSH/PuTTY is that the Synology Docker program won't let you map Volume1.
  7. At this point your Docker container has been created. You can now close your PuTTY session window.
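
(Optional) Before closing PuTTY, you can confirm the container is up. A quick check, assuming the container name used above:

    docker ps --filter name=crashplan-pro

If the container is listed with a status of 'Up', you're good to go.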

From Synology DSM:

  1. Uncheck the box next to 'Enable SSH service' in the 'Terminal & SNMP' window in the Synology DSM interface.
  2. Open Docker.
  3. Click 'Container' from the list on the left. You should now see a 'crashplan-pro' entry with it showing as 'Running'.

From your Web Browser:

  1. Log into the CrashPlan PRO container GUI by pointing your browser to the IP address of your Synology at port 5800 (http://192.168.XXX.XXX:5800)
  2. Sign into your CrashPlan account through the web GUI you just accessed.
  3. The following assumes you were using Patters' CrashPlan Package initially and you need to 'adopt' the backup.
  4. Click the 'Replace Existing' button to start the wizard.
  5. Skip 'Step 2 - File Transfer'.
  6. Once done with the wizard, go to your 'Device Details' and click 'Manage Files'. Since we mapped the Synology's 'Volume1' to the Docker Container's 'Volume1', the Docker CrashPlan-Pro Container should automatically see and recognize the files/folders previously backed up by Patters' CrashPlan Package (assuming you were using it).
  7. Perform a backup and the system will scan all the files and verify that everything matches. If done correctly you shouldn't need to re-upload everything to CrashPlan's servers.
  8. You can close the browser window and you’re done! As long as the Docker Container is up and running, you’ll be backing up to CrashPlan. Any changes to your backup set like adding new folders to your backup will be done through the web GUI.

I hope this helps you get up and running!

@jlesage
Owner

jlesage commented Jan 3, 2018

You can also look at Synology's documentation:
https://www.synology.com/en-global/knowledgebase/DSM/help/Docker/docker_container

@beagleboy1010

Thank you for the step-by-step instructions. I am new to Docker too. Everything installed and all is working fine :) cheers

But I have a problem: when I do a restore to the original folder, nothing is restored. Is this something to do with folder structures?
Please help?

many thanks

@excalibr18

I haven't yet had to restore from within the Docker Container, but I'm fairly certain the reason the restore doesn't work is because Volume1 is being mapped as Read Only: -v /volume1/:/volume1:ro

If you change the -v /volume1/:/volume1:ro portion of the docker run command to -v /volume1/:/volume1:rw then it should allow the restores.

@jlesage: is that correct? would the user need to remove the docker container and re-create it with the read/write tag?

@jlesage
Owner

jlesage commented Jan 17, 2018

Yes that's correct @excalibr18. Re-creating the container with the R/W permission for the volume would allow the restore to work properly.
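
For reference, a rough sketch of what that looks like, assuming the container name and paths from excalibr18's walkthrough above (adjust to your own setup):

    docker stop crashplan-pro
    docker rm crashplan-pro
    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:rw jlesage/crashplan-pro

The /config folder lives on the host, so CrashPlan's settings and sign-in survive the re-creation.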

@beagleboy1010

Perfect! Thanks guys
All working properly now

@Slartybart

Is there a way to map other external drives, as most of my CrashPlan files are on drives other than the Volume 1 internal disks? Thanks in advance for any help.

@jedinite13

I am having problems connecting with the web service. After running the command and going to the webpage I get "Code42 cannot connect to its background service. Retry".

If I create the container via the UI it works, but I can't create the storage volume to map to /volume1, where all my data is contained in different shares. Any guidance would be appreciated.

Also, as a note, I used the default instructions and it moved all my files on CrashPlan to Deleted. So be careful when setting $HOME:/storage as your location if you already have a backup set. I am not sure if I am going to have to just upload everything again (2.1 TB) or if it will de-dup and mark the files as not deleted.

@jlesage
Owner

jlesage commented Jan 18, 2018

@Slartybart, yes it's possible. You just need to map additional folders to the container (using the -v argument).
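
For example, a sketch building on the walkthrough above (the extra USB mapping is illustrative; adjust names and paths to your setup):

    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volumeUSB1/usbshare/:/usbshare:ro jlesage/crashplan-pro

Each additional -v host_path:container_path:ro pair makes another host folder visible inside the container.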

@jlesage
Owner

jlesage commented Jan 18, 2018

@jedinite13, did you follow the instructions at https://github.com/jlesage/docker-crashplan-pro#taking-over-existing-backup?
Basically, if the path to your files (as seen by the container) is different between your old and new installation, you need to re-select your files using the new paths. The old paths will be marked as "missing", but that's not a problem. Once you perform the backup, nothing will be re-uploaded, thanks to deduplication.

@excalibr18

@Slartybart The Synology maps USB external drives as volumeUSB# where # corresponds to a separate physical USB external drive. If you only have one USB external drive connected, then it'll be mapped as volumeUSB1 by default.

If you're looking to mount the entire USB external drive (in a similar way as Volume1 internal disks are mapped), then you can use -v /volumeUSB1/usbshare/:/usbshare:ro when setting up the container.

@excalibr18

@jedinite13 The Synology UI won't allow you to map Volume1, which is why you need to do it via the command line.

As for not being able to connect to the background service, try doing this:

  1. Remove the crashplan-pro docker container. You can do this from the Docker GUI interface on the Synology.
  2. Delete all the contents of /docker/appdata/crashplan-pro/ (including all the sub folders). You can do this from Synology's FileStation if you're logged in with an admin-level account, otherwise you'll need to do it from the command line logged in as root. Make sure you don't actually remove the crashplan-pro/ folder. If you did you'll need to re-create it.
  3. Create the Docker container again starting from step 6 under "From PuTTY".

That has seemed to work for other users (see https://github.com/jlesage/docker-crashplan-pro/issues/14).
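
If you'd rather do steps 1 and 2 from the command line, here's a sketch (double-check the path before running it; the find command empties the folder, including hidden files, without removing the folder itself — if your find lacks -delete, remove the folder's contents manually instead):

    docker stop crashplan-pro && docker rm crashplan-pro
    find /volume1/docker/appdata/crashplan-pro -mindepth 1 -delete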

@excalibr18

One other thing to be aware of: if on the Synology you are experiencing the iNotify Max Watch Limit issue, please refer to this solution: https://github.com/jlesage/docker-crashplan-pro/issues/23

Also be aware that you'll have to repeat setting the max watch limit in /etc.defaults/sysctl.conf each time DSM downloads and applies an update.
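
For reference, the setting involved is the fs.inotify.max_user_watches sysctl. A sketch of raising it (the value below is only an example; see issue #23 for the specifics):

    echo "fs.inotify.max_user_watches=1048576" >> /etc.defaults/sysctl.conf
    sysctl -w fs.inotify.max_user_watches=1048576

The first line makes the change persist across reboots (until a DSM update rewrites the file, as noted above); the second applies it immediately.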

@aagrawala

@excalibr18 & @jlesage Totally AWESOME, folks! Thank you so much! Just followed the instructions instead of fussing with the client app on my Mac.

I have the following questions... Kind of dumb ones I think ;-)

  1. We don't need the Mac or local client anymore, correct?

  2. Can we delete the Java JRE packages from the 'public' folder that we used to download per Patters' procedure?

  3. Can we uninstall the CrashPlan Home (Green) package from Synology DSM?

  4. With this Docker Package solution, looks like we don't need to install the CrashPlan Pro package from the Package Center in Synology DSM, correct?

Once again, thank you!
Anil

@excalibr18

  1. We don't need the Mac or local client anymore, correct?

Correct. I had the local client installed on my Win10 machine and after going with jlesage's docker solution, I no longer need to use that machine. I just need any machine on my local network with a web browser to access the docker container's web GUI.

  2. Can we delete the Java JRE packages from the 'public' folder that we used to download per Patters' procedure?

I would assume this is correct; however, when I used Patters' package I always opted to use the system Java, not Patters' internal one.

  3. Can we uninstall the CrashPlan Home (Green) package from Synology DSM?

Assuming you upgraded your CrashPlan subscription to the Pro/Small Business plan, and jlesage's docker solution is confirmed to be working for you (i.e. you have adopted your backup set and it is working correctly), then I don't see a need to keep the CrashPlan Home (Green) package.

  4. With this Docker Package solution, looks like we don't need to install the CrashPlan Pro package from the Package Center in Synology DSM, correct?

Correct. With this Docker solution, you don't need the packages from Patters.

@aagrawala

@excalibr18 Thanks for responding so quickly to my questions.

Another question: my migration worked fine per the above instructions and, looking at the web GUI, the backup is running now. However, I cannot seem to browse the files and/or folders that have been backed up using "Manage Files". I can see that volume1 is listed, but browsing under it I can't seem to find my file structure that's set up for backup. Any tips?

Thank you in advance!
Anil

@excalibr18

excalibr18 commented Feb 5, 2018

To make sure I understand you correctly, are you saying that when you click "Manage Files" in the GUI, there's nothing listed under volume1? Or are there folders listed, but you just can't find the folders you are backing up?

Did you map volume1 using -v /volume1/:/volume1:ro or -v /volume1/:/storage:ro?

@aagrawala

Folders are listed, but I can't find my folder structure from my Synology home directory that I have selected to be backed up. See the attached snapshot showing what's under volume1:
[screenshot: screen shot 2018-02-05 at 12 27 39 am]

@excalibr18

excalibr18 commented Feb 5, 2018

What were the folder paths for your backup when you were using Patters' package?

If you scroll down in that window, do you see any folders with a check mark next to them?

@aagrawala

Something like this, as shown in the attached screenshot from the old client app on my computer, saved a few years ago:
[screenshot: screen shot 2018-02-05 at 12 46 49 am]

@aagrawala

The web GUI shows that the backup is running, and the files that are new and hadn't been backed up for the last 25 days (yes, I hadn't done this upgrade for so long after the backup had stopped) are being backed up. I can't see the files that are being backed up themselves, just the size of the backup remaining.

@excalibr18

excalibr18 commented Feb 5, 2018

OK, this helps. In this case, based on the screenshot from the old client app, when you scroll down in the "Manage Files" window you should see a "photo" folder in the list. If you click on that "photo" folder, then you should see the various Year folders you had selected in the old client to back up. Each of the Year folders should have a check mark next to it.

For example, I selected my entire "Archives" and "Documents" shared folders that reside in Volume1. When I scroll down in "Manage Files" this is what it looks like:
[screenshot: 'Manage Files' window showing the Archives and Documents shared folders selected]

@aagrawala

Ah, LoL, I didn't notice the scroll bar on the right side of the GUI and thought those were the only folders, which is what I sent you :-) I do see all the other folders when I scroll down.

Thanks a million for your help. And, sorry for bugging you for this silly thing and taking up your time.

Best,
Anil

@excalibr18

No worries. Glad it's all sorted out and working for you! The new GUI for CrashPlan Version 6 is less intuitive than the one before it unfortunately.

@lenny81

lenny81 commented Feb 8, 2018

Thanks for the installation steps. I've just transitioned to the docker from Patter's solution. Everything went well and the container is running CP pro, however I can't access the web gui. I've followed this step:
"Log into the CrashPlan PRO container GUI by pointing your browser to your IP address of the synology at port 5800 (http://192.168.XXX.XXX:5800)" and replacing the XXX.XXX with my synology server address.
I get a 'Can't connect to a server' message. Am I missing something obvious?

Any suggestions would be appreciated.

@jlesage
Owner

jlesage commented Feb 8, 2018

Which parameters did you give to the docker run command? Did you map the port (-p 5800:5800)?
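
You can check the port mappings of an existing container with:

    docker port crashplan-pro

(replace crashplan-pro with your container's name; docker ps shows the names).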

@excalibr18

@lenny81: Are you accessing the web GUI of the Docker on the same LOCAL network as the Synology?

@lenny81

lenny81 commented Feb 9, 2018

I'm trying to access it on the same network as the Synology.

@jlesage: No idea. I'm pretty clueless with all this. I can't remember which guide I used to set it up or how to check if I mapped the port... how do I check this?

@PNMarkW2

Any thoughts on why my local install doesn't show any files, even though it does show my folders?

@gatorheel

I had this when my new container arguments were wrong. It looked like I could see my folders, but it was really just showing me the structure from my prior backup. I would double-check your -v paths to make sure you have them correct.

@jlesage
Owner

jlesage commented Sep 13, 2018

Likely a permission issue. Did you set the USER_ID and GROUP_ID to the same value as your original container?

@PNMarkW2

Okay, I think between the two of you that you've helped highlight where I went wrong. I'm going to have to delete the container and recreate it to update some of the variables. I'll let you know the results.

@PNMarkW2

On the plus side, I can see the full directory structure and the files on my network now. The downside seems to be that it's acting like it's starting over instead of resuming where it left off. I say this because it claims to have backed up only 13 GB of a multi-TB dataset. Now maybe it's only reporting what it's done since I restarted it, but it reads as if we're back at square one.

@PNMarkW2

I tried again to delete the container and recreate it, only this time I read somewhere to remove any files from the system related to the old container. So yes, for good or for bad I did that. This time it started up much like when I created it the very first time by asking me to log in, but now it's stuck there. It just sits on that screen and says "Signing in..." and it's been that way for 7 hours now.

@jlesage
Owner

jlesage commented Sep 17, 2018

Try to restart the container.
Also can you post the command line you used to create the container?

@PNMarkW2

The Restart seemed to do some good, I had to log in again but I'm able to navigate around, it's not just stuck on "Signing in". However, it still looks like it's started over telling me that it's only 13% complete, that should be more like 50-60%.

As requested here is my startup script.
docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

@jlesage
Owner

jlesage commented Sep 19, 2018

Looking at the progress percentage doesn't tell you whether your data is actually being uploaded or deduplicated. You can look at the history (Tools->History) for more details.

@rekuhs

rekuhs commented Oct 10, 2018

I'm having trouble mapping additional volumes, the first one is fine and works perfectly but I have volumes 2, 3, 4 and 5 that I need to map.

This works and gets Vol1 mapped:
docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro

But adding to that to map the additional volumes doesn't:
docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro -v /volume2/:/volume2:ro jlesage/crashplan-pro -v /volume3/:/volume3:ro jlesage/crashplan-pro -v /volume4/:/volume4:ro jlesage/crashplan-pro -v /volume5/:/volume5:ro jlesage/crashplan-pro

Using that returns the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: "-v": executable file not found in $PATH".

The Container is created but can't be started.

I will admit that I don't really know what I'm doing

Can anyone help?

@jlesage
Owner

jlesage commented Oct 10, 2018

You have too many occurrences of jlesage/crashplan-pro in your command line. The image name should be the last argument:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro -v /volume3/:/volume3:ro -v /volume4/:/volume4:ro -v /volume5/:/volume5:ro jlesage/crashplan-pro

@rekuhs

rekuhs commented Oct 10, 2018

You have too many occurrences of jlesage/crashplan-pro in your command line. The image name should be the last argument:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro -v /volume3/:/volume3:ro -v /volume4/:/volume4:ro -v /volume5/:/volume5:ro jlesage/crashplan-pro

Thank you :)

@THX-101

THX-101 commented Feb 8, 2019

I have a question about permissions: should I use -e USER_ID=0 -e GROUP_ID=0, or is this a security risk? I am running this on a Synology. Also, am I supposed to run the "docker run -d ..." command as sudo or not?

@jlesage
Owner

jlesage commented Feb 8, 2019

You are running as root when using USER_ID=0 and GROUP_ID=0, which is not considered good practice. So if you are able, you should use a user/group that has permission to access the files you want to back up.

To create the container, you have the choice: either you manually run the docker run command, or you use the Synology UI.
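
For example, a sketch of finding a user's IDs and passing them in (the user name and the uid/gid values shown are hypothetical; run id against your own user):

    id backupuser
    # prints something like: uid=1026(backupuser) gid=100(users) ...

then use those numbers in the run command:

    docker run -d --name=crashplan-pro -e USER_ID=1026 -e GROUP_ID=100 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro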

@THX-101

THX-101 commented Feb 8, 2019

When using PuTTY to build my containers, I log in with my Synology admin user, but I'm always required to run the docker run command with sudo. That is the way it is supposed to be, right?

And concerning not running as root, what would be best practice? Make a 'docker' group and a 'docker' user that has read/write access to both the /volume1/docker folder and /volume1/share, and nothing else?

@jlesage
Owner

jlesage commented Feb 8, 2019

Correct, the docker run command needs to be run as root, via sudo.

And yes, creating an additional user with restricted permissions is a solution.

@THX-101

THX-101 commented Feb 8, 2019

And one other question: I just ran the container with -e USER_ID=0 -e GROUP_ID=0 for the very first time (switched from the Windows client). I suppose it is best to let CrashPlan finish synchronizing block information before I tear down the container, right?

@jlesage
Owner

jlesage commented Feb 8, 2019

I think it should not be a problem to stop the container before it finishes the synchronization. Once you restart the container, CrashPlan should just continue where it left off.

@THX-101

THX-101 commented Feb 8, 2019

Thanks jlesage, you are a one-woman triple-A customer support. Very much appreciated! 🥇

edit: So sorry for being so sexist. I assumed you were a guy.

@PNMarkW2

I recently had to reset my Synology from the ground up, which of course meant having to reset CrashPlan. So after I got the Synology running I added Docker, followed closely by CrashPlan. Having had to reset CrashPlan once before, I made sure to keep the settings I used.

docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

But having done this, all CrashPlan will do is constantly scan files. It goes for a while, then restarts, over and over. Today is 27 days since it performed any sort of backup. I've tried to delete it to reinstall, but when I try that I'm told that there are containers dependent on CrashPlan and it won't let me.

Any thoughts or suggestion would be welcome.

Thank you

Mark

@jlesage
Owner

jlesage commented Jul 12, 2019

I would try to look at the history (View -> History) and at /volume1/docker/appdata/crashplan/log/service.log to see if there is anything obvious.
Also, I guess you are running the latest docker image?
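
For example, from an SSH session (paths and container name taken from your run command above):

    tail -n 200 /volume1/docker/appdata/crashplan/log/service.log
    docker logs CrashPlan

(docker logs shows the container's own output, which can also reveal startup problems.)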

@PNMarkW2

Like taking a car to a mechanic, it spent a month scanning for files and seems to be "happy" now.

Under Tools -> History, all it would appear to complain about is failing to upgrade to a newer version. As I said, I had to re-do my entire Synology; that involved downloading and installing Docker again, so I would assume it is the latest and greatest version available. I also had to get the CrashPlan package again, and again I would have assumed that to be the latest as well, but maybe not, since it's trying to update so soon.

Right now it looks like it's backing up files. What still seems off, though, is that it doesn't seem to think anything was backed up previously, as if it didn't sync with my previous backup after the reinstall. That doesn't quite match what I see when I log in to view my archive online, where it shows a good chunk of data with my last activity within the past 24 hours.

I'm still a bit confused, but it seems to be running, maybe. :-)

@jlesage
Owner

jlesage commented Jul 14, 2019

For your information, I just published a new docker image containing the latest version of CP.

@PNMarkW2

PNMarkW2 commented Jul 4, 2020

My CrashPlan installation on my Synology NAS has been running fine for months, actually nearly a year since my last post. Now I'm getting emails regularly that there hasn't been any backup; the email today said nothing has been backed up for 8 days.

So my first question is why would something that has been running fine for nearly a year just stop? And if it stopped because it's missing some update, why can't it tell me that instead of just stopping?

Second, when I try to bring up the local web interface I have to log in, which I do, and then again and again: it sits there saying it's scanning files, goes to a black screen, and then asks me to log in again. This happens so quickly that I'm not actually able to do anything once logged in, but I did get a look at the log, which says there is an upgrade I need.

My guess is I have to nuke the whole thing and reinstall CrashPlan, but I wanted to check here first because that's not a process I enjoy.

Thank you for your help.

@jlesage
Owner

jlesage commented Jul 4, 2020

CrashPlan will not perform any backup if its version is too old. So if you didn't upgrade your container recently, then you need to do it to re-enable backups.

If you don't want to bother with manually upgrading your container, you could try Watchtower (https://github.com/containrrr/watchtower), a container that seamlessly upgrades your other containers.
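
A minimal sketch of running it, per the Watchtower README (by default it watches all running containers; see its documentation for filtering and scheduling options):

    docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower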

@PNMarkW2

PNMarkW2 commented Jul 5, 2020

Thank you for taking the time on a holiday to respond. I truly appreciate that, and I'll look into what you suggested since my efforts to follow the directions to update always seem to result in grief.

I will say again: if the reason for the program to stop working is known, such as the version being too old, then perhaps that should be explicitly reported in the email saying that the backup hasn't happened.

Thank you again.

Mark

@jlesage
Owner

jlesage commented Jul 7, 2020

I will say again: if the reason for the program to stop working is known, such as the version being too old, then perhaps that should be explicitly reported in the email saying that the backup hasn't happened.

You should forward this feedback to CrashPlan, since there is nothing I can do for that...

@SJLBoulder

PNMarkW2 - I have a suggestion for your notification problem. You can join GitHub and "watch" jlesage's project for updates, e.g. a new Docker container with the latest CrashPlan version. Go here, and click on "watch" at the top of the page: https://github.com/jlesage/docker-crashplan-pro
