More of a question - Upgrade to v2 #51

Closed
bdschuster opened this issue Jan 13, 2019 · 57 comments

Comments

@bdschuster
Collaborator

Hello! Long time! I have some servers I haven't updated to v2 yet. What's the best way to go about it? One of them is a primary Plex server, so I don't want to screw it up or cause rescans.

Let me know. Thanks!

@TechPerplexed
Owner

Hiya @bdschuster :)

Let me start with some bad news... although you should be able to import your Plex database, you will need to change the paths of your libraries.

Unfortunately Plex WILL rescan your content to some degree and depending on the size of your libraries, that might take a while (days).

I believe there should be a way to change the paths in the database itself, but that's not something I have tried myself.

What I'd probably do is create a backup of the Plex database, restore it on a server that charges hourly (Hetzner or Scaleway) and then install Gooby and import Plex.

Then change the paths, do your scanning, create a backup and restore that on your main server.

It'll probably cost you $0.50 for a few days of usage but it's by far the safest method :)

@bdschuster
Collaborator Author

Thanks! I think I'm going to just try using a symlink. Do you know of anything wrong with that? I mean, I'll fix it properly eventually, but for now.

@TechPerplexed
Owner

TechPerplexed commented Jan 17, 2019

I don't think there is anything wrong with using a symlink, but I'm not sure how that would solve your problem. I assume that you currently use /media/Plex (the old location for the mount), right? We'd have to fool the container into using that location (the actual mount doesn't matter).

You could try to edit the yaml file (the one located in /var/local/Gooby/Docker/components) and change the line - ${MEDIA}:/Media to - ${MEDIA}:/media/Plex

That way at least Plex won't have to rescan... or so I assume. I wouldn't attempt it on your main server without testing it first, but theoretically it should work without having to rescan anything!

@bdschuster
Collaborator Author

It actually looks like it's using /media/Google currently.

@TechPerplexed
Owner

Ahh ok - so yes, edit the 20-plex.yaml file and change that line to /media/Google. I don't envy you having to update a working (and primary) system... you better warn your users 😰
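
Roughly, something like this (adjust if your file looks different):

sudo nano /var/local/Gooby/Docker/components/20-plex.yaml
# change this line:
#   - ${MEDIA}:/Media
# to:
#   - ${MEDIA}:/media/Google
# then run a system cleanup from the menu so the Plex container is recreated with the new mount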

@bdschuster
Collaborator Author

bdschuster commented Jan 17, 2019

Last question (maybe... lol): will the old backups restore to the new location correctly? And does the old backup include the Tautulli data?

@TechPerplexed
Owner

No, the old backup only backed up Plex I'm afraid, and it will just restore it to the old Plex location. However, the system should offer to import your old Plex, Tautulli, and Sonarr/Radarr databases (= copy them to the new location)

Wishing you luck! Make sure you have 20 backups just in case, ok? I'd hate for you to lose anything in case it didn't go as planned... I mean, it should, but eh, computers have a mind of their own!

@bdschuster
Collaborator Author

Any idea what could be going on here? When I run a system cleanup:

Shutting everything down

ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

@bdschuster
Collaborator Author

And now I appear to be stuck here:
Updating and starting containers

Pulling ombi (linuxserver/ombi:latest)...
latest: Pulling from linuxserver/ombi
84ed7d2f608f: Pull complete
caf09a4b300c: Pull complete
34082dbadae0: Pull complete
3da8e4835db4: Pull complete
49d2e1fcfbf3: Pull complete
b41d4b109c3b: Pull complete
cde4c5a465c5: Pull complete
c2ee1a9950cc: Pull complete
de1e0f981741: Pull complete
172adb5bae7c: Pull complete
Digest: sha256:9276720fe902bcf7c33ad7d2f11da4aeb8f7c2f0f1fb725f68fdad69c1b06e36
ERROR: Cannot overwrite digest sha256:9276720fe902bcf7c33ad7d2f11da4aeb8f7c2f0f1fb725f68fdad69c1b06e36
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Cleaning Docker leftovers

@TechPerplexed
Owner

Hmmm I get that message too every now and then (the HTTP request taking too long). Usually a server reboot solves that... I'm using a dedi with OneProvider.
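
If it keeps happening, you could also try raising the compose timeout before running the cleanup; something like this in your shell (the value is just an example):

export COMPOSE_HTTP_TIMEOUT=300   # seconds; the default is 60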

As for the "cannot overwrite digest" - no idea! I searched Google and it says it is most likely a docker bug... yeah, not helpful!

@bdschuster
Collaborator Author

Don't hate me, another question... lol. How do you have Plex scan for new files when they're downloaded from Sonarr/Radarr?

@TechPerplexed
Owner

Hate you? Neverrrr 🤣

It's very simple really - go to Settings - Connect - add Plex. Then enter

Host: plex
Port: 32400
Username + Password for Plex
Update library: yes
Use SSL: no

That's all :)

@bdschuster
Collaborator Author

The only issue with that is that it initiates a partial scan when the download completes, but the file is still sitting and waiting to be transferred to Google Drive, so the scan doesn't find anything and it never shows up until the next scheduled scan.

@TechPerplexed
Owner

Ah I see what you mean... well if you are using the new MergerFS version, theoretically whatever is sitting in the upload folder should be scanned and added to Plex even before it's uploaded to Google. Is that not how it's behaving?

@bdschuster
Collaborator Author

I'll tell you when I figure out what's going on here... I CANNOT get remote access working. I'm not sure if this has something to do with my old data or not, but it's driving me crazy. If you have any info... let me know.

@TechPerplexed
Owner

What remote access do you mean, can you give an example please?

@bdschuster
Collaborator Author

bdschuster commented Jan 19, 2019 via email

@bdschuster
Collaborator Author

I'm also having a problem with the server locking up on reboot and then losing all Docker containers after a force reboot. I'm on 18.04 Server. I'm going to roll back to 16.04, reinstall/restore, and see what happens.

@TechPerplexed
Owner

Ok, with Plex, make sure you manually specify port 8443 in external connections. That should be enough... let me know if you already tried that and we'll see if the advanced settings match.

Sorry about your reboot/locking problem... how strange that you lose them. I ran Gooby on 16.04, 18.04 and even Debian 9 (with the still-not-tested Docker adaptation currently in beta) and I never ran into that issue. Could it be a permissions issue perhaps?

@bdschuster
Collaborator Author

It's a clean wipe of the server, but it is physical, not virtual.
After installing 16.04 Server, the 8443 works! So I'm back online with Plex... but... if I do a reboot on the server, my containers are gone; I do a system cleanup, then everything is back up. Any ideas on this?

Also, another weird thing: NZBGet was having issues unraring. I had to dig into the logs, but found out I had to chmod 777 /mnt/uploads/Downloads, then restart the container, and all is working; same with Sonarr, Radarr, etc.

Thanks again for all your help, never had these issues before, everything always just worked...lol
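
For anyone else hitting this, the interim workaround boils down to the following (the container names here are just mine; use whatever docker ps shows):

sudo chmod 777 /mnt/uploads/Downloads
docker restart nzbget sonarr radarr   # restart the download-related containers so they pick the folder up again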

@bdschuster
Collaborator Author

Just rebooted again. Same thing, no containers, did system cleanup, and had to chmod Downloads, restart downloader containers, and all was good again.

@TechPerplexed
Owner

Heh, I know right... always someone 😛 Teasing, just sorry you're having issues!

It's a longshot... but can you check if your cron contains this line?
@reboot /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1

Fingers crossed that's what it is :)
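
A quick way to check from the shell:

crontab -l | grep rclean
# it should print:
# @reboot /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1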

@bdschuster
Collaborator Author

It wasn't in /etc/crontab, but I added it... I'll reboot a little later and test. /etc/crontab does require a user field, though; not sure if it matters that it's not defined.

@TechPerplexed
Owner

I would add it to your user's crontab: try crontab -e instead

@bdschuster
Collaborator Author

dang, it's actually in crontab -e. So that's not the issue :-(

@TechPerplexed
Owner

Gah!!! I'm at a loss then... so when you reboot, you say the containers are gone. Do you mean the folder /var/local/Gooby/Docker becomes owned by another user, or root only, or disappears altogether?

@bdschuster
Collaborator Author

bdschuster commented Jan 21, 2019 via email

@TechPerplexed
Owner

Weird. Really this shouldn't be an issue, but let's try to manually set the permissions:

sudo chown -R $USER:$USER $HOME
sudo chown -R $USER:$USER /var/local/Gooby
sudo chown -R $USER:$USER /var/local/.Gooby
sudo chown -R $USER:$USER /mnt/uploads

If that's not it, you might want to check what docker version you have: docker -v and docker-compose -v - see if there is anything odd there?
Mine shows:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad0

@bdschuster
Collaborator Author

bdschuster commented Jan 21, 2019 via email

@bdschuster
Collaborator Author

Rebooted, same thing. docker ps shows nothing. Running /opt/Gooby/scripts/cron/rclean.sh manually comes back with "already running". Did a system cleanup, back to normal. I don't get it.

@TechPerplexed
Owner

That makes two of us! The fact that you get "already running" means that the script hasn't finished running after the reboot. This could indicate that your system is very, VERY slow and will eventually get there - or it could mean that it can't finish because it gets hung up on something.

The weirdest part is it only hangs after you reboot... not after a regular cleanup. Can you try to wait about 10 minutes after a reboot and see if it sorts itself after a longer time period?

I'm as stumped as you are!

@bdschuster
Collaborator Author

I can only assume it's getting hung on something; I've waited almost 30 minutes already. The system is for sure not slow, it's a physical server, and NO issues with transcoding or running Plex or unpacking or anything... lol.

@TechPerplexed
Owner

Have you tried a (gasp) rebuild? That is usually my last resort... and it then solves all problems (hopefully that will be the case for you too!)

@bdschuster
Collaborator Author

sorry, explain? lol

@TechPerplexed
Owner

LOL, I just meant wipe the server and start with a fresh installation... if that is an option at all...

@deedeefink

deedeefink commented Jan 22, 2019

Hi folks,
So, I will join the crowd with the same issues. I've been running the server without issues for months, but after installing some Ubuntu/library updates (can't remember which) I started having issues with the mounts coming down.

After a system cleanup everything works, but overnight the mounts go offline again. This has now been repeating for a week or more.

But hey, happy about reading these looong threads again (secretly missed it).

Just wanted to say that you're not the only ones having problems. So I'm cheering for a resolution!

@bdschuster
Collaborator Author

Woohoo!!! Sorry, I just love it when I'm not the only one 😊, then I don't look (as) crazy...lol

@TechPerplexed
Owner

GAH 🗡 😿 👊 Well it's lovely to hear from you @deedeefink but that's not what I wanted to hear, heh.

Just to clarify: @bdschuster has a problem with the containers going down, not the mount, but you @deedeefink mentioned the mounts coming down. Let's verify you're both experiencing the same issue here... can you describe in more detail what exactly goes down in your case, @deedeefink?

@TechPerplexed
Owner

Just a follow up: I have rebooted my server about 3 million times this last week (ok, I exaggerate), and I can't reproduce either of your problems... sorry :(

In better news: we're testing a new syncmount script which not only uploads stuff faster to Google, but it will make @bdschuster particularly happy since it addresses the future date issue you were having.

It's been field tested for a few weeks in a private setting and it seems to work fine, so stay tuned for an update in another week or so 👍 (or if you really can't wait, grab the script in the Debian branch and start testing) 😄

@bdschuster
Collaborator Author

Don't play with me @TechPerplexed, LOL! Also, to figure out where the failure is after rebooting, do you think I should kill the script and then run it manually to see where it's getting hung up? Or any other ideas for figuring it out? I could give you a temp login to the server if you want to look.

@TechPerplexed
Owner

Yeah it's puzzling... so how exactly is it behaving?

Let's see if I understood everything. You reboot, and then

  • /mnt/google comes up correctly
  • /mnt/uploads/Downloads doesn't have the correct permissions
  • /var/local/Gooby exists, but
  • none of the containers come up

However, when you run rclean from the menu, it behaves correctly and the containers come up normally, did I get that right?

Any more odd behaviour that you notice?

@bdschuster
Collaborator Author

Yes and no:

  • /mnt/google comes up
  • /var/local/Gooby exists, but
  • none of the containers are listed using docker ps
    • This can be fixed by running rclean; all containers are then there.
  • /mnt/uploads/Downloads is a strange one to me.
    • It is owned by myusername:myusername
    • It has 776 permissions, which should be fine
    • Containers cannot write to the mount (/Downloads), and it doesn't appear they are mounted to /mnt/uploads/Downloads (i.e. if something is written manually from inside a container, you can't see it in /mnt/uploads/Downloads or from other containers at /Downloads). Here's the kicker: this isn't just happening at reboot. My theory is that when the sync runs and this command comes into play, find . -type d -empty -delete, the containers lose their mounts because Downloads no longer exists. This only happens when /mnt/uploads/Downloads is empty. I see you have mkdir -p ${UPLOADS} ${UPLOADS}/Downloads after that command, but I think the containers have already lost their mount at that point, and you have to restart them (Sonarr/Radarr/NZBGet) to get them mounted again.
      I'm still testing the above theory, but I believe I've narrowed it down to that, as the permissions seem fine.
      I'm going to test it now and get back to you. As for the container issue, I'm still trying to look into that; it's just hard since I can't reboot constantly because people are watching things at times.

@bdschuster
Collaborator Author

So my theory about /mnt/uploads/Downloads was correct. I have confirmed that when the script deletes and re-creates the Downloads folder, the containers lose their mounts. A restart of the containers brings them back up until the next time the script runs (if Downloads is empty). I have corrected this by changing your above command to find . ! -path "*Downloads*" -type d -empty -delete and commenting out the mkdir, so it ignores the Downloads directory, and the issue no longer occurs.
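
For reference, roughly what that part of the script looks like with my change (the surrounding lines are from memory, so treat this as a sketch):

cd ${UPLOADS}                                        # assumption: the script prunes from the uploads root
# remove empty directories left over after upload, but never touch Downloads
find . ! -path "*Downloads*" -type d -empty -delete
# mkdir -p ${UPLOADS} ${UPLOADS}/Downloads           # commented out: not needed now that Downloads is never removed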

@TechPerplexed
Owner

Downloads: There you go, even though everything should be identical on two systems, somehow it isn't! Glad you got at least that sorted!

Containers: So really it boils down to reboot vs rclean - and while both run the EXACT same script, somehow they behave differently! Grasping at straws here, but what if you delay the script to run a minute after reboot?
Edit your cron: crontab -e and just add @reboot sleep 60 && instead of @reboot?
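
So the line would read:

@reboot sleep 60 && /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1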

@bdschuster
Collaborator Author

Ok, so I figured the containers out. I did try the sleep, I even went up to as much as 2 minutes, with the same results. After a reboot, I'm seeing that the rclean script is still running (we already knew that), and it continues to run until it's killed. Got me thinking about what it's hung up on. After deep diving into the syslogs, I noticed that even though @reboot is in crontab -e, it is still running as my username, meaning the script must be hanging while asking for a sudo password. I ran sudo visudo and added <myusername> ALL=NOPASSWD: ALL to the end of the file and saved. Now I can run sudo commands without being prompted for a password. Rebooted again (without the sleep in cron) and everything came up as it should. Rebooted 3 times, still no issues.

@bdschuster
Collaborator Author

Ok, so looking at your create user script, it does what I mentioned above, BUT if you already created a user and are not logged in as root, it does not ask to create a user and does not run that script, so that's where my problem was.

@TechPerplexed
Owner

AHHHHHHHHHH thank heavens for that!!!!!!!!!! So.... it was a permissions issue after all.... but I never considered the visudo thing (it's so natural for me to add that, and now even more so since Gooby takes care of creating my new user after each reinstall)

SO pleased you got it sorted - let me close this issue now and file your experience as a learning moment for me 👍

@TechPerplexed
Owner

I have corrected this by changing your above command to find . ! -path "*Downloads*" -type d -empty -delete and commented out the mkdir, so it ignores the Downloads directory, and issue does not persist.

Oh, one question (humble request) for you: I'd like to include this line in the script. Feel free to edit & send a pull request (if it's not too much trouble) :)

@TechPerplexed
Owner

Thank you so much!! ❤️

@bdschuster
Collaborator Author

Done for both master and Debian... Now I'm just trying to figure out where we could fix it when a user already exists, so it adds ALL=NOPASSWD: ALL if it is not already there. I think I may know how.
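
I'm thinking something along these lines; just a sketch, using a sudoers.d drop-in rather than appending to /etc/sudoers directly (the file name is arbitrary):

# add a passwordless sudo rule for the current user if one doesn't exist yet
SUDOERS_FILE="/etc/sudoers.d/gooby-${USER}"
if [ ! -f "$SUDOERS_FILE" ]; then
  echo "${USER} ALL=(ALL) NOPASSWD: ALL" | sudo tee "$SUDOERS_FILE" > /dev/null
  sudo chmod 0440 "$SUDOERS_FILE"
fi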

Also, going to update to the Debian branch and check out your new uploading...any suggestions before I do?

@TechPerplexed
Owner

Aren't you clever 😆

Debian branch should work out of the box... (name is a bit of a misnomer, it just means that the improved Docker installation should work for Ubuntu and Debian alike).

The syncmount script has some very exciting new features. One is complete statistics of what you upload for any given time period through the built-in scripts (backup & syncmount), which will require a little bit of self-installation. I'm working on the Wiki as we speak. Handy for keeping an eye on the 750G upload max Google imposes, among other uses.

The other big improvement, of course, is a fix for the future date and some significant enhancements on the uploading process. I have to thank my friend kelinger for all those, I think it's no secret that he is the real brain behind this project 👍

Can't wait to hear how it's working for you!

@bdschuster
Collaborator Author

Sounds Awesome! Should I just be able to pull it and run a system cleanup?

@TechPerplexed
Owner

Well if you run a system cleanup, it would revert right back to the master branch... so you'd have to disable that first in the script (or update the one line to sudo git clone -b debian https://github.com/TechPerplexed/Gooby /opt/.Gooby :)

@bdschuster
Collaborator Author

I knew that! I swear! HAHAHAHAHA 😄 Honestly, I knew it was in there, but forgot about it till you said something, so yeah, I would have been going crazy... lol

@TechPerplexed
Owner

LOL trust me... I found out the hard way too 😋

@TechPerplexed
Owner

The updates are now live... Debian branch will be deleted soon. Make sure you update to the Master branch :)

@bdschuster
Collaborator Author

bdschuster commented Jan 29, 2019 via email
