
Need basic help with installation - need more details #34

bgmess opened this issue Jan 2, 2018 · 150 comments

@bgmess commented Jan 2, 2018

I am fairly new to Linux and Docker. I need more instructions than those given in the quick start:

Launch the CrashPlan PRO docker container with the following command:

docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /docker/appdata/crashplan-pro:/config:rw \
    -v $HOME:/storage:ro \
    jlesage/crashplan-pro

Where do I enter the above commands?

Is this all I need to do?

Any tips would be greatly appreciated. Not sure how to begin. I have installed Docker on the Synology NAS, but that's as far as I got.

Thanks,
Brian

@excalibr18 commented Jan 2, 2018

Hello Brian,
I'm a Synology user as well and was using Patters' CrashPlan package before migrating to the Docker image. Since you're new to Linux, I'll assume you haven't used an SSH client like PuTTY. You'll need to download that before you get started. You can get it here: http://www.putty.org/

Here's how I got jlesage's CrashPlan PRO docker image up and running on my Synology and adopted the existing backup set created when I was using Patters' CrashPlan package (EDIT: This reflects my personal setup [I have a Synology DS1515+ running DSM 6.1.5 and Docker package version 17.05.0-0367; I have a single volume and it's set up with Synology's SHR-2 RAID configuration] and is intended to be a step-by-step example of how I deployed the docker image on my system. Your individual Synology setup may differ from mine, so please understand that your deployment of the docker image may differ as a result.):

From Synology DSM:

  1. Open Package Center.
  2. Stop Patters' CrashPlan package if you are migrating from it.
  3. Install the Docker package from the Package Center interface (this you already did).
  4. Close Package Center.
  5. Open Docker.
  6. Click 'Registry' on the list to the left (below 'DSM').
  7. In the search bar at the top of the Docker window, enter jlesage and click the 'Search' button. You should see jlesage/crashplan-pro at the top of the list. Highlight it and click the 'Download' button. Leave the tag as latest in the 'Choose Tag' window that pops up and click the 'Select' button.
  8. You should now see a '1' pop up to the right of 'Image' below 'Registry'. Let your Synology complete the download (it's a 499 MB download).
  9. Once the download is complete, open Control Panel from the DSM interface.
  10. Click on 'Terminal & SNMP' from Control Panel.
  11. Click the box next to 'Enable SSH service'.
  12. Open PuTTY.

From PuTTY:

  1. In the 'PuTTY Configuration' window, enter the local IP address (192.168.xxx.xxx) for your Synology where it says 'Host Name (or IP address)'. Then click the 'Open' button, which will open up a Terminal Session.
  2. Enter the username and password at the prompt for a user with Admin privileges.
  3. Enter the following command to get 'root' access:
    sudo -i
    Press Enter.
    Enter your password again and press Enter.
  4. Quick note about PuTTY: if you highlight the code blocks on this site and copy with Ctrl+C, you'll paste in PuTTY with the Right-Click Mouse Button instead of Ctrl+V.
  5. You will need to create the /docker/appdata/crashplan-pro configuration folder manually by entering the following command:
    mkdir -p /volume1/docker/appdata/crashplan-pro
    Press Enter.
  6. Create the Docker container image by entering the following command at a minimum (there are other environment variables you can introduce if you see fit or necessary - just check jlesage's documentation on those specifics) as a single continuous string (no line break) and let it do its thing:
    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro
    The reason we are doing this through SSH/PuTTY is that the Synology Docker UI won't let you map volume1.
  7. At this point your Docker container has been created. You can verify it with the quick check below, then close your PuTTY session window.
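
A quick way to confirm the container is actually running before you close the session (assuming you kept the container name crashplan-pro from step 6):

    docker ps --filter name=crashplan-pro

The STATUS column should read "Up ...". If the container isn't listed, docker logs crashplan-pro will usually show why it failed to start.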

From Synology DSM:

  1. Uncheck the box next to 'Enable SSH service' in the 'Terminal & SNMP' window in the Synology DSM interface.
  2. Open Docker.
  3. Click 'Container' from the list on the left. You should now see a 'crashplan-pro' entry with it showing as 'Running'.

From your Web Browser:

  1. Log into the CrashPlan PRO container GUI by pointing your browser to the IP address of your Synology at port 5800 (http://192.168.XXX.XXX:5800).
  2. Sign into your CrashPlan account through the web GUI you just accessed.
  3. The following assumes you were using Patters' CrashPlan Package initially and you need to 'adopt' the backup.
  4. Click the 'Replace Existing' button to start the wizard.
  5. Skip 'Step 2 - File Transfer'.
  6. Once done with the wizard, go to your 'Device Details' and click 'Manage Files'. Since we mapped the Synology's 'Volume1' to the Docker Container's 'Volume1', the Docker CrashPlan-Pro Container should automatically see and recognize the files/folders previously backed up by Patters' CrashPlan Package (assuming you were using it).
  7. Perform a backup, and the system will verify it is the same by scanning all the files and making sure everything matches. If done correctly, you shouldn't need to re-upload everything to CrashPlan's servers.
  8. You can close the browser window and you’re done! As long as the Docker Container is up and running, you’ll be backing up to CrashPlan. Any changes to your backup set like adding new folders to your backup will be done through the web GUI.

I hope this helps you get up and running!

@jlesage (Owner) commented Jan 3, 2018

You can also look at Synology's documentation:
https://www.synology.com/en-global/knowledgebase/DSM/help/Docker/docker_container

@beagleboy1010 commented Jan 16, 2018

Thank you for the step-by-step instructions. I am new to docker too. Everything installed and all is working fine :) cheers

But I have a problem: when I do a restore to the original folder, nothing is restored. Is this something to do with folder structures?
Please help?

many thanks

@excalibr18 commented Jan 17, 2018

I haven't yet had to restore from within the Docker Container, but I'm fairly certain the reason the restore doesn't work is that volume1 is being mapped as read-only: -v /volume1/:/volume1:ro

If you change the -v /volume1/:/volume1:ro portion of the docker run command to -v /volume1/:/volume1:rw then it should allow the restores.

@jlesage: is that correct? would the user need to remove the docker container and re-create it with the read/write tag?

@jlesage (Owner) commented Jan 17, 2018

Yes that's correct @excalibr18. Re-creating the container with the R/W permission for the volume would allow the restore to work properly.
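
As a sketch, assuming the container was created with the command from earlier in this thread, the recreation would look like this (the only change is :ro becoming :rw on the /volume1 mapping):

    docker stop crashplan-pro
    docker rm crashplan-pro
    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:rw jlesage/crashplan-pro

Since the /config data lives on the host, the recreated container picks up the existing CrashPlan configuration.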

@beagleboy1010 commented Jan 17, 2018

Perfect! Thanks guys
All working properly now

@Slartybart commented Jan 17, 2018

Is there a way to map other external drives, as most of my CrashPlan files are on drives other than the Volume 1 internal disks? Thanks in advance for any help.

@jedinite13 commented Jan 18, 2018

I am having problems connecting with the web service. After running the command and going to the webpage, I get "Code42 cannot connect to its background service. Retry".

If I create the container via the UI it works, but I can't create the storage volume to map to /volume1, where all my data is contained in different shares. Any guidance would be appreciated.

Also, as a note, I used the default instructions and it moved all my files on CrashPlan to Deleted, so be careful when setting $HOME:/storage as your location if you already have a backup set. I am not sure if I am going to have to upload all 2.1 TB again or if it will de-dup and mark the files as not deleted.

@jlesage (Owner) commented Jan 18, 2018

@Slartybart, yes it's possible. You just need to map additional folders to the container (using the -v argument).
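
For example, a second share could be added alongside the existing mappings like this (the /volume2 path is just an illustration; use whatever path your files actually live under):

    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro jlesage/crashplan-pro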

@jlesage (Owner) commented Jan 18, 2018

@jedinite13, did you follow the instructions at https://github.com/jlesage/docker-crashplan-pro#taking-over-existing-backup?
Basically, if the path to your files (as seen by the container) is different between your old and new installation, you need to re-select your files using the new paths. Old paths will be marked as "missing", but that's not a problem. Once you perform the backup, nothing will be re-uploaded, because of deduplication.

@excalibr18 commented Jan 18, 2018

@Slartybart The Synology maps USB external drives as volumeUSB# where # corresponds to a separate physical USB external drive. If you only have one USB external drive connected, then it'll be mapped as volumeUSB1 by default.

If you're looking to mount the entire USB external drive (in a similar way as Volume1 internal disks are mapped), then you can use -v /volumeUSB1/usbshare/:/usbshare:ro when setting up the container.
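
Put together with the docker run command from earlier in this thread, that would look something like this (assuming a single USB drive with the default usbshare share name):

    docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volumeUSB1/usbshare/:/usbshare:ro jlesage/crashplan-pro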

@excalibr18 commented Jan 18, 2018

@jedinite13 The Synology UI won't allow you to map Volume1, which is why you need to do it via the command line.

As for not being able to connect to the background service, try doing this:

  1. Remove the crashplan-pro docker container. You can do this from the Docker GUI interface on the Synology.
  2. Delete all the contents of /docker/appdata/crashplan-pro/ (including all the subfolders). You can do this from Synology's File Station if you're logged in with an admin-level account; otherwise you'll need to do it from the command line logged in as root. Make sure you don't actually remove the crashplan-pro/ folder itself. If you do, you'll need to re-create it.
  3. Create the Docker container again, starting from step 6 under "From PuTTY".

That seems to have worked for other users (see https://github.com/jlesage/docker-crashplan-pro/issues/14). A command-line version of the cleanup is sketched below.
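
For reference, a rough command-line equivalent of those steps (run as root, and assuming the config folder lives under /volume1/docker/appdata/crashplan-pro):

    docker stop crashplan-pro
    docker rm crashplan-pro
    rm -rf /volume1/docker/appdata/crashplan-pro/*   # removes the contents but keeps the folder itself

Then re-run the docker run command from step 6 under "From PuTTY".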

@excalibr18 commented Feb 4, 2018

One other thing to be aware of: if you are experiencing the inotify max watch limit issue on the Synology, please refer to this solution: https://github.com/jlesage/docker-crashplan-pro/issues/23

Also be aware that you'll have to repeat setting the max watch limit in /etc.defaults/sysctl.conf each time the DSM downloads and applies an update.
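
The fix discussed in that issue boils down to raising the limit; a commonly used value is 1048576. As root:

    sysctl -w fs.inotify.max_user_watches=1048576                            # applies immediately
    echo "fs.inotify.max_user_watches=1048576" >> /etc.defaults/sysctl.conf  # survives reboots (but, as noted, not DSM updates)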

@aagrawala commented Feb 5, 2018

@excalibr18 & @jlesage Totally AWESOME, folks! Thank you so much! Just followed the instructions instead of fussing with the client app on my Mac.

I have the following questions... Kind of dumb ones I think ;-)

  1. We don't need the Mac or local client anymore, correct?

  2. Can we delete the Java JRE packages from the 'public' folder that we used to download per Patters' procedure?

  3. Can we uninstall the CrashPlan Home (Green) package from Synology DSM?

  4. With this Docker Package solution, looks like we don't need to install the CrashPlan Pro package from the Package Center in Synology DSM, correct?

Once again, thank you!
Anil

@excalibr18 commented Feb 5, 2018

  1. We don't need the Mac or local client anymore, correct?

Correct. I had the local client installed on my Win10 machine and after going with jlesage's docker solution, I no longer need to use that machine. I just need any machine on my local network with a web browser to access the docker container's web GUI.

  2. Can we delete the Java JRE packages from the 'public' folder that we used to download per Patters' procedure?

I would assume this is correct; however, when I used Patters' package I always opted to use the system Java, not Patters' bundled one.

  3. Can we uninstall the CrashPlan Home (Green) package from Synology DSM?

Assuming you upgraded your CrashPlan subscription to the Pro/Small Business plan, and jlesage's docker solution is confirmed to be working for you (i.e. you have adopted your backup set and it is working correctly), then I don't see a need to keep the CrashPlan Home (Green) package.

  4. With this Docker Package solution, looks like we don't need to install the CrashPlan Pro package from the Package Center in Synology DSM, correct?

Correct. With this Docker solution, you don't need Patters' packages.

@aagrawala commented Feb 5, 2018

@excalibr18 Thanks for responding so quickly to my questions.

Another question: my migration worked fine per the above instructions, and looking at the web GUI the backup is running now. However, I cannot seem to browse the files and/or folders that have been backed up using "Manage Files". I can see that volume1 is listed, but browsing under it I can't seem to find my file structure that's set up for backup. Any tips?

Thank you in advance!
Anil

@excalibr18 commented Feb 5, 2018

To make sure I understand you correctly, are you saying that when you click "Manage Files" in the GUI, there's nothing listed under volume1? Or are there folders listed, but you just can't find the folders you are backing up?

Did you map volume1 using -v /volume1/:/volume1:ro or -v /volume1/:/storage:ro?

@aagrawala commented Feb 5, 2018

Folders are listed, but I can't find the folder structure from my Synology home directory that I have selected to be backed up. See the attached snapshot showing what's under volume1.
[screenshot: 'Manage Files' view of volume1 — screen shot 2018-02-05 at 12 27 39 am]

@excalibr18 commented Feb 5, 2018

What were the folder paths for your backup when you were using Patters' package?

If you scroll down in that window, do you see any folders with a check mark next to them?

@aagrawala commented Feb 5, 2018

Something like this, as shown in the attached screenshot from the old client app on my computer, saved a few years ago:
[screenshot: old CrashPlan client backup selection — screen shot 2018-02-05 at 12 46 49 am]

@aagrawala commented Feb 5, 2018

The web GUI shows that the backup is running, and the files that are new and hadn't been backed up for the last 25 days (yes, I hadn't done this upgrade for that long after the backup had stopped) are being backed up. (I can't see the files being backed up themselves, just the size of the backup remaining.)

@excalibr18 commented Feb 5, 2018

OK, this helps. In this case, going by the screenshot from the old client app, when you scroll down in the "Manage Files" window you should see a "photo" folder in the list. If you click on that "photo" folder, you should see the various year folders you had selected to back up in the old client. Each of the year folders should have a check mark next to it.

For example, I selected my entire "Archives" and "Documents" shared folders that reside in Volume1. When I scroll down in "Manage Files" this is what it looks like:
[screenshot: 'Manage Files' list with the Archives and Documents shared folders checked]

@aagrawala commented Feb 5, 2018

Ah, LoL, I didn't notice the scroll bar on the right side of the GUI and thought the folders in my screenshot were the only ones :-) I do see all the other folders when I scroll down.

Thanks a million for your help. And, sorry for bugging you for this silly thing and taking up your time.

Best,
Anil

@excalibr18 commented Feb 5, 2018

No worries. Glad it's all sorted out and working for you! The new GUI for CrashPlan version 6 is unfortunately less intuitive than the one before it.

@lenny81 commented Feb 8, 2018

Thanks for the installation steps. I've just transitioned to the docker from Patters' solution. Everything went well and the container is running CP Pro; however, I can't access the web GUI. I've followed this step:
"Log into the CrashPlan PRO container GUI by pointing your browser to the IP address of your Synology at port 5800 (http://192.168.XXX.XXX:5800)", replacing the XXX.XXX with my Synology server address.
I get a 'Can't connect to a server' message. Am I missing something obvious?

Any suggestions would be appreciated.

@jlesage (Owner) commented Feb 8, 2018

Which parameters did you give to the docker run command? Did you map the port (-p 5800:5800)?
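
One way to check, assuming the container is named crashplan-pro:

    docker port crashplan-pro

This should print something like 5800/tcp -> 0.0.0.0:5800. If nothing is listed, the container was created without the port mapping.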

@excalibr18 commented Feb 8, 2018

@lenny81: Are you accessing the web GUI of the Docker on the same LOCAL network as the Synology?

@lenny81 commented Feb 9, 2018

I'm trying to access it on the same network as the Synology.

@jlesage: No idea. I'm pretty clueless with all this. I can't remember which guide I used to set it up or how to check whether I mapped the port... how do I check this?

@PNMarkW2 commented Sep 5, 2018

Thanks. So I did that, but now I can't get to the web interface. I'm going to https://192.168.xxx.xxx:5800/ just like I've always done, but I get the message "This site can’t provide a secure connection" and "192.168.xxx.xxx sent an invalid response."

@jlesage (Owner) commented Sep 5, 2018

Is the non-secure address working (http://192.168.xx.xx:5800)?

@PNMarkW2 commented Sep 5, 2018

Yes, it is. I feel silly for not trying that; that's what I get for doing this right before I needed to head out, so I was rushing. Changing it to HTTP allowed me to see what's happening, and it now says it is synchronizing block information, which I assume is the correct state of affairs at this point. Like I said, it's been a long time since I last had to do this.

Is there a reason the https would not work when it would before? I was using a saved link to the web interface so I know it was the same link I used before I deleted the Docker container and recreated it with my new variable info.

@jlesage (Owner) commented Sep 6, 2018

If HTTPS access was working before, it means you probably forgot to set SECURE_CONNECTION to 1 when recreating the container.

@PNMarkW2 commented Sep 6, 2018

Very possible; I could not find my original script for setting up the container.

Another issue that's come up: it finished synchronizing block information, but it oddly thinks it's done and there is nothing to be backed up, which is not true; there is plenty to be backed up. Online I have the opposite: Code42 is warning me there has been no activity in over 13 days (which is what prompted my needing to change CRASHPLAN_SRV_MAX_MEM). Something does not seem to be synced between the container and online.

@jlesage (Owner) commented Sep 6, 2018

In CrashPlan, for your device, when you click Details -> Manage Files, do you see your files?

@PNMarkW2 commented Sep 6, 2018

That would be no. When I click on Manage Files I can see the folders that should contain files, but when I click into that folder there are no files visible.

@PNMarkW2 commented Sep 13, 2018

Any thoughts on why my local install doesn't show any files, even though it does show my folders?

@gatorheel commented Sep 13, 2018

I had this when my new container arguments were wrong. It looked like I could see my folders, but it was really just showing me the structure from my prior backup. I would double-check your -v paths to make sure you have them correct.

@jlesage (Owner) commented Sep 13, 2018

Likely a permission issue. Did you set the USER_ID and GROUP_ID to the same value as your original container?

@PNMarkW2 commented Sep 13, 2018

Okay, I think between the two of you that you've helped highlight where I went wrong. I'm going to have to delete the container and recreate it to update some of the variables. I'll let you know the results.

@PNMarkW2 commented Sep 15, 2018

On the plus side, I can see the full directory structure and the files on my network now. The downside seems to be that it's acting like it's starting over instead of resuming where it left off. I say this because it claims to have backed up only 13 GB of a multi-TB dataset. Now maybe it's only reporting what it's done since I restarted it, but it reads as if we're back at square one.

@PNMarkW2 commented Sep 16, 2018

I tried again to delete the container and recreate it, only this time I read somewhere to remove any files from the system related to the old container. So yes, for good or for bad I did that. This time it started up much like when I created it the very first time by asking me to log in, but now it's stuck there. It just sits on that screen and says "Signing in..." and it's been that way for 7 hours now.

@jlesage (Owner) commented Sep 17, 2018

Try to restart the container.
Also can you post the command line you used to create the container?
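
For example, from an SSH session (substitute whatever name you gave your container):

    docker restart crashplan-pro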

@PNMarkW2 commented Sep 18, 2018

The restart seemed to do some good: I had to log in again, but I'm able to navigate around; it's no longer stuck on "Signing in". However, it still looks like it has started over, telling me that it's only 13% complete when it should be more like 50-60%.

As requested here is my startup script.
docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

@jlesage (Owner) commented Sep 19, 2018

Looking at the progress percentage doesn't tell you whether your data is actually being uploaded or deduplicated. You can look at the history (Tools -> History) for more details.

@rekuhs commented Oct 10, 2018

I'm having trouble mapping additional volumes, the first one is fine and works perfectly but I have volumes 2, 3, 4 and 5 that I need to map.

This works and gets volume1 mapped:
docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro

But extending that to map the additional volumes doesn't:
docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro -v /volume2/:/volume2:ro jlesage/crashplan-pro -v /volume3/:/volume3:ro jlesage/crashplan-pro -v /volume4/:/volume4:ro jlesage/crashplan-pro -v /volume5/:/volume5:ro jlesage/crashplan-pro

Using that returns the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: "-v": executable file not found in $PATH".

The Container is created but can't be started.

I will admit that I don't really know what I'm doing.

Can anyone help?

@jlesage (Owner) commented Oct 10, 2018

You have too many occurrences of jlesage/crashplan-pro in your command line. The image name should appear only once, as the last argument:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro -v /volume3/:/volume3:ro -v /volume4/:/volume4:ro -v /volume5/:/volume5:ro jlesage/crashplan-pro
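
For long commands like this one, it can also help to split them across lines with shell continuations (a trailing backslash); this is the same command, just formatted for readability:

    docker run -d \
        --name=crashplan-pro \
        -e USER_ID=0 -e GROUP_ID=0 \
        -p 5800:5800 \
        -v /volume1/docker/appdata/crashplan-pro:/config:rw \
        -v /volume1/:/volume1:ro \
        -v /volume2/:/volume2:ro \
        -v /volume3/:/volume3:ro \
        -v /volume4/:/volume4:ro \
        -v /volume5/:/volume5:ro \
        jlesage/crashplan-pro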
@rekuhs commented Oct 10, 2018

Thank you :)

@THX-101 commented Feb 8, 2019

I have a question about permissions: should I use -e USER_ID=0 -e GROUP_ID=0, or is this a security risk? I'm running this on a Synology. Also, am I supposed to run the "docker run -d ..." command with sudo or not?

@jlesage (Owner) commented Feb 8, 2019

You are running as root when using USER_ID=0 and GROUP_ID=0, which is not considered good practice. So if you are able, you should use a user/group that has permission to access the files you want to back up.

To create the container, you have the choice: either you manually run the docker run command, or you use the Synology UI.
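
For example, to find the numeric IDs of an existing user that can read the files, run this from an SSH session (backupuser is a hypothetical account name):

    id backupuser
    # prints something like: uid=1027(backupuser) gid=100(users) groups=100(users)

Then pass those numbers to the container with -e USER_ID=1027 -e GROUP_ID=100 instead of the 0/0 values.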

@THX-101 commented Feb 8, 2019

When using PuTTY to build my containers, I log in with my Synology admin user, but I'm always required to run the docker run command with sudo. That is the way it is supposed to be, right?

And concerning not running as root, what would be best practice? Make a 'docker' group and a 'docker' user that has read/write on both the /volume1/docker folder and /volume1/share and nothing else?

@jlesage (Owner) commented Feb 8, 2019

Correct, the docker run command needs to be run as root, via sudo.

And yes, creating an additional user with restricted permissions is a solution.

@THX-101 commented Feb 8, 2019

And one other question: I just ran the container with -e USER_ID=0 -e GROUP_ID=0 for the very first time (switched from the Windows client). I suppose it is best to let CrashPlan finish synchronizing block information before I tear down the container, right?

@jlesage (Owner) commented Feb 8, 2019

I think it should not be a problem to stop the container before the synchronization finishes. Once you restart the container, CrashPlan should just continue where it left off.

@THX-101 commented Feb 8, 2019

Thanks jlesage, you are a one-woman triple-A customer support. Very much appreciated! 🥇

edit: So sorry for being so sexist. I assumed you were a guy.

@PNMarkW2 commented Jul 12, 2019

I recently had to reset my Synology from the ground up, which of course meant having to reset CrashPlan. So after I got the Synology running I added Docker, followed closely by adding CrashPlan. Having had to reset CrashPlan once before, I made sure to keep the settings I used.

docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

But having done this, all CrashPlan will do is constantly scan files. It goes for a while, then restarts, over and over. Today marks 27 days since it performed any sort of backup. I've tried to delete it to reinstall, but when I try that I'm told that there are containers dependent on CrashPlan and it won't let me.

Any thoughts or suggestion would be welcome.

Thank you

Mark

@jlesage (Owner) commented Jul 12, 2019

I would try to look at the history (View -> History) and at /volume1/docker/appdata/crashplan/log/service.log to see if there is anything obvious.
Also, I guess you are running the latest docker image?
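
For example, from an SSH session, something like this shows the most recent entries:

    tail -n 100 /volume1/docker/appdata/crashplan/log/service.log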

@PNMarkW2 commented Jul 14, 2019

Like taking a car to a mechanic, it spent a month scanning for files and seems to be "happy" now.

Under Tools -> History, all it appears to complain about is failing to upgrade to a newer version. As I said, I had to redo my entire Synology; that involved downloading and installing Docker again, so I would assume it is the latest and greatest version available. I also had to get the CrashPlan package again, and again I would have assumed that to be the latest as well, but maybe not, since it's trying to update so soon.

Right now it looks like it's backing up files. What still seems off, though, is that it doesn't seem to think anything was backed up previously, as if it didn't sync with my previous backup after the reinstall. That doesn't quite match what I see when I log in to view my archive online, where it shows a good chunk of data with my last activity within the past 24 hours.

I'm still a bit confused, but it seems to be running, maybe. :-)

@jlesage (Owner) commented Jul 14, 2019

For your information, I just published a new docker image containing the latest version of CP.
