More of a question - Upgrade to v2 #51
Hiya @bdschuster :) Let me start with some bad news... although you should be able to import your Plex database, you will need to change the paths of your libraries. Unfortunately Plex WILL rescan your content to some degree and depending on the size of your libraries, that might take a while (days). I believe there should be a way to change the paths in the database itself, but that's not something I have tried myself. What I'd probably do is create a backup of the Plex database, restore it on a server that charges hourly (Hetzner or Scaleway) and then install Gooby and import Plex. Then change the paths, do your scanning, create a backup and restore that on your main server. It'll probably cost you $0.50 for a few days of usage, but it's by far the safest method :)
Thanks! I think I'm going to just try and use a symlink. Is there anything you know of that's wrong with that? I mean, I will fix it eventually, but for now.
I don't think there is anything wrong with using a symlink, but I'm not sure how that would solve your problem. I assume that you currently use /media/Plex (the old location for the mount), right? We'd have to fool the container into using that location (the actual mount doesn't matter). You could try to edit the yaml file (the one located in …). That way at least Plex won't have to rescan... or so I assume. I wouldn't attempt it on your main server unless you test it first, but theoretically it should work without having to rescan anything!
It actually looks like it's using /media/Google currently.
Ahh ok - so yes, edit the 20-plex.yaml file and change that line to /media/Google. I don't envy you having to update a working (and primary) system... you'd better warn your users 😰
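If you'd rather script that edit than do it by hand, something like this would swap the path. The 20-plex.yaml name comes from this thread, but its real location and the example volume line below are assumptions, so the demo works on a scratch file - point it at your actual file only once you've verified it:

```shell
# Demo on a scratch file; substitute your real 20-plex.yaml path when ready.
# The volume mapping below is a made-up example, not the actual file contents.
YAML=$(mktemp)
printf '      - /media/Plex:/media/Plex\n' > "$YAML"

# Replace every occurrence of the old path with the new one
sed -i 's|/media/Plex|/media/Google|g' "$YAML"

cat "$YAML"    # now shows the /media/Google mapping
```

Using `|` as the sed delimiter avoids having to escape the slashes in the paths.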
Last question (maybe... lol): will the old backups restore to the new location correctly? And does the old backup back up the Tautulli data?
No, the old backup only backed up Plex, I'm afraid, and it will just restore it to the old Plex location. However, the system should offer to import your old Plex, Tautulli, and Sonarr/Radarr databases (= copy them to the new location). Wishing you luck! Make sure you have 20 backups just in case, ok? I'd hate for you to lose anything in case it didn't go as planned... I mean, it should, but eh, computers have a mind of their own!
Any idea what could be going on here? When I run system cleanup, it gets to "Shutting everything down" and then: ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
And now I appear to be stuck here: Pulling ombi (linuxserver/ombi:latest)... Cleaning Docker leftovers
Hmmm I get that message too every now and then (the HTTP request taking too long). Usually a server reboot solves that... I'm using a dedi with OneProvider. As for the "cannot overwrite digest" - no idea! I searched Google and it says it is most likely a docker bug... yeah, not helpful!
Don't hate me, another question... lol. How do you have your Plex scan for new files when they are downloaded from Sonarr/Radarr?
Hate you? Neverrrr 🤣 It's very simple really - in Sonarr/Radarr, go to Settings - Connect - add Plex. Then enter Host: plex. That's all :)
The only issue with that is it initiates a partial scan when the download completes, meaning the file is still sitting and waiting to be transferred to Google Drive, so the scan doesn't find anything and it never shows up until the next scheduled scan.
Ah I see what you mean... well if you are using the new MergerFS version, theoretically whatever is sitting in the upload folder should be scanned and added to Plex even before it's uploaded to Google. Is that not how it's behaving?
I'll tell you when I figure out what's going on here... I CANNOT get remote access working. I'm not sure if this has something to do with my old data or not, but it's driving me crazy. If you have any info... let me know.
What remote access do you mean, can you give an example please?
For Plex. Can't access my server directly from the outside. Only indirectly.
And I'm also having a problem with the server locking up on reboot, then losing all Docker containers after a force reboot. I'm on 18.04 Server. I'm going to roll back to 16.04, reinstall/restore, and see what happens.
Ok, with Plex, make sure you manually specify port 8443 in external connections. That should be enough... let me know if you already tried that and we'll see if the advanced settings match. Sorry about your reboot/locking problem... how strange that you lose them. Could that be a permissions issue perhaps? I ran Gooby on 16.04, 18.04 and even Debian 9 (with the still-not-tested Docker adaptation currently in beta) and I never ran into that issue.
It's a clean wipe of the server, but it is physical, not virtual. Also, another weird thing: NZBGet was having issues unraring. I had to dig into the logs, but found out I had to chmod 777 /mnt/uploads/Downloads, then restart the container, and all is working - same with Sonarr, Radarr, etc. Thanks again for all your help; I've never had these issues before, everything always just worked... lol
Just rebooted again. Same thing: no containers. Did a system cleanup, had to chmod Downloads and restart the downloader containers, and all was good again.
Heh, I know, right... always someone 😛 Teasing, just sorry you're having issues! It's a long shot... but can you check if your cron contains this line? Fingers crossed that's what it is :)
It wasn't in /etc/crontab, but I added it... I'll reboot a little later and test. It does ask for a user in /etc/crontab now; not sure if it matters if it's not defined.
I would add it to the user: try
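For reference, the two crontab formats differ in exactly the way being discussed here. The script path is taken from later in this thread; treat the exact entry as an assumption and check your own install:

```
# In the user's crontab (crontab -e) there is no user field:
@reboot /opt/Gooby/scripts/cron/rclean.sh

# In /etc/crontab the sixth field names the user to run as, e.g.:
# @reboot youruser /opt/Gooby/scripts/cron/rclean.sh
```

Leaving the user field out of /etc/crontab makes the line invalid there, which is why the two files behave differently.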
Dang, it's actually in crontab -e already. So that's not the issue :-(
Gah!!! I'm at a loss then... so when you reboot, you say the containers are gone. Do you mean the folder /var/local/Gooby/Docker becomes owned by another user, or root only, or disappears altogether?
The containers are gone. If I run 'docker ps', nothing shows. The mounts are all there and good. But the '/mnt/uploads/Downloads' permissions are gone as well.
Weird. Really this shouldn't be an issue, but let's try to manually set the permissions:
If that's not it, you might want to check what docker version you have:
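A sketch of what manually setting the permissions could look like. On the real server you would run chown/chmod with sudo against /mnt/uploads/Downloads; the demo below uses a scratch directory so it is safe to try anywhere:

```shell
# Stand-in for /mnt/uploads/Downloads
# (on the server: sudo chown -R youruser: /mnt/uploads/Downloads first)
DL=$(mktemp -d)
chmod 775 "$DL"        # rwx for owner and group, rx for everyone else
stat -c '%a' "$DL"     # prints 775

# And to compare Docker versions between boxes (skipped if not installed):
docker --version 2>/dev/null || true
docker-compose --version 2>/dev/null || true
```

775 keeps group members (e.g. the container user) able to write without opening the folder to the world the way 777 does.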
Set permissions, going to reboot, but looks like my Docker is the same version:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad01
Rebooted, same thing. docker ps shows nothing. Running /opt/Gooby/scripts/cron/rclean.sh manually comes back "already running". Did system cleanup, back to normal. I don't get it.
That makes two of us! The fact that you get "already running" means that the script hasn't finished running after the reboot. This could indicate that your system is very, VERY slow and will eventually get there - or it could mean that it can't finish because it gets hung up on something. The weirdest part is it only hangs after you reboot... not after a regular cleanup. Can you try to wait about 10 minutes after a reboot and see if it sorts itself after a longer time period? I'm as stumped as you are!
I can only assume it's getting hung on something; I've waited almost 30 minutes already. The system is for sure not slow - it's a physical server, and NO issues with transcoding, running Plex, unpacking or anything... lol.
Have you tried a *gasp* rebuild? That is usually my last resort... and it usually solves all problems (hopefully that will be the case for you too!)
Sorry, explain? lol
LOL, I just meant wipe the server and start with a fresh installation... if that is an option at all...
Hi folks, after a system cleanup everything works, but overnight the mounts go offline again. This has now repeated for a week or more. But hey, happy about reading these looong threads again (secretly missed it). Just wanted to say that you're not the only ones having problems. So I'm cheering for a resolution!
Woohoo!!! Sorry, I just love it when I'm not the only one 😊 - then I don't look (as) crazy... lol
GAH 🗡 😿 👊 Well it's lovely to hear from you @deedeefink but that's not what I wanted to hear, heh. Just to clarify: @bdschuster has a problem with the containers going down, not the mount, but you @deedeefink mentioned the mounts coming down. Let's verify you're both experiencing the same issue here... can you describe in more detail what exactly goes down in your case, @deedeefink?
Just a follow up: I have rebooted my server about 3 million times this last week (ok, I exaggerate), and I can't reproduce either of your problems... sorry :( In better news: we're testing a new syncmount script which not only uploads stuff faster to Google, but it will make @bdschuster particularly happy since it addresses the future date issue you were having. It's been field tested for a few weeks in a private setting and it seems to work fine, so stay tuned for an update in another week or so 👍 (or if you really can't wait, grab the script in the Debian branch and start testing) 😄
Don't play with me @TechPerplexed LOL! Also, to figure out where the failure is after rebooting, do you think I should kill the script, then run it manually to see where it gets hung up? Or any other ideas for figuring it out? I could give you a temp login to the server if you wanted to look.
Yeah it's puzzling... so how exactly is it behaving? Let's see if I understood everything. You reboot, and then …
However, when you run rclean from the menu, it behaves correctly and the containers come up normally - did I get that right? Any more odd behaviour that you notice?
Yes and no:
So my theory about /mnt/uploads/Downloads was correct. I have confirmed that when the script deletes and re-creates the Downloads folder, the containers lose their mounts. A restart of the containers brings them back up until the next time the script runs (if Downloads is empty). I have corrected this by changing your above command to
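For anyone hitting the same thing: deleting and re-creating Downloads gives the folder a new inode, which is why running containers silently lose the bind mount. Emptying it in place avoids that. The exact replacement command isn't shown in this thread, but a minimal sketch of the idea (using a scratch directory as a stand-in for /mnt/uploads/Downloads) would be:

```shell
DL=$(mktemp -d)              # stand-in for /mnt/uploads/Downloads
touch "$DL/leftover.rar"     # pretend an old download is sitting in there

# Clear the contents but keep the directory itself (and its inode) alive,
# so containers bind-mounting it don't lose the mount:
find "$DL" -mindepth 1 -delete

ls -A "$DL"                  # prints nothing: folder survives, now empty
```

The key difference from `rm -rf "$DL" && mkdir "$DL"` is that the directory is never removed, only its contents.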
Downloads: There you go - even though everything should be identical on two systems, somehow it isn't! Glad you got at least that sorted!
Containers: So really it boils down to reboot vs rclean - and while both run the EXACT same script, somehow they behave differently! Grasping at straws here, but what if you delay the script to run a minute after reboot?
OK, so I figured the containers out. I did try the sleep - I even went up to as much as 2 minutes, with the same results. After reboot, I'm seeing that the rclean script is still running (we already knew that), and it continues to run until it's killed. That got me thinking about what it's hung up on. After deep diving into the syslogs, I noticed that even though @reboot is in crontab -e, it is still running as my username, meaning the script must be hanging asking for a sudo password. I ran
OK, so looking at your create user script, it does what I mentioned above, BUT if you already created a user and are not logged in as root, it does not ask to create a user and does not run that script - so that's where my problem was.
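The fix being described boils down to a passwordless sudo entry for the cron user, added only if it isn't already present. A hedged sketch of an idempotent version (the grep guard is the "only add it if it does not already exist" part); it writes to a scratch file here, whereas on a real system you'd edit via visudo or drop a file into /etc/sudoers.d:

```shell
SUDOERS=$(mktemp)                          # stand-in for /etc/sudoers.d/gooby
LINE='someuser ALL=(ALL) NOPASSWD: ALL'    # 'someuser' is a placeholder name

# Append only if the exact line isn't there yet - safe to run repeatedly:
grep -qxF "$LINE" "$SUDOERS" || echo "$LINE" >> "$SUDOERS"
grep -qxF "$LINE" "$SUDOERS" || echo "$LINE" >> "$SUDOERS"   # second run: no-op

grep -c 'NOPASSWD' "$SUDOERS"              # prints 1, not 2
```

With that entry in place, the @reboot script no longer blocks waiting for a sudo password it can never receive.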
AHHHHHHHHHH thank heavens for that!!!!!!!!!! So.... it was a permissions issue after all.... but I never considered the visudo thing (it's so natural for me to add that, and now even more so since Gooby takes care of creating my new user after each reinstall) SO pleased you got it sorted - let me close this issue now and file your experience as a learning moment for me 👍
Oh, one question (humble request) for you: I'd like to include this line in the script - feel free to edit & send a pull request (if it's not too much trouble) :)
Thank you so much!! ❤️
Done for both master and Debian... Now I'm just trying to figure out where we could fix it if you already have a username created, so it adds ALL=NOPASSWD: ALL if it does not already exist. I think I may know how. Also, going to update to the Debian branch and check out your new uploading... any suggestions before I do?
Aren't you clever 😆 The Debian branch should work out of the box... (the name is a bit of a misnomer; it just means that the improved Docker installation should work for Ubuntu and Debian alike). The syncmount script has some very exciting new features. One is complete statistics of what you upload for any given time period through the built-in scripts (backup & syncmount) - which will require a little bit of self installation. I'm working on the Wiki right as we speak. Handy to keep an eye on the 750G upload max Google imposes, among other uses. The other big improvement, of course, is a fix for the future date issue and some significant enhancements to the uploading process. I have to thank my friend kelinger for all those; I think it's no secret that he is the real brain behind this project 👍 Can't wait to hear how it's working for you!
Sounds awesome! Should I just be able to pull it and run a system cleanup?
Well, if you run a system cleanup, it would revert right back to the master branch... so you'd have to disable that first in the script (or update the one line to …)
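Switching a local clone over is just a git checkout. Assuming Gooby lives in /opt/Gooby and the branch is literally named Debian (both assumptions - check your own install), the demo below uses a scratch repo so it runs anywhere:

```shell
REPO=$(mktemp -d)                      # scratch repo standing in for /opt/Gooby
git -C "$REPO" init -q
git -C "$REPO" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'

# On the real clone you'd run something like:
#   git -C /opt/Gooby fetch && git -C /opt/Gooby checkout Debian
git -C "$REPO" checkout -q -b Debian
git -C "$REPO" branch --show-current   # prints Debian
```

The caveat from the comment above still applies: anything that re-runs the updater will check master back out unless you disable that first.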
I knew that! I swear! HAHAHAHAHA 😄 Honestly, I knew it was in there, but forgot about it till you said something, so yeah, I would have been going crazy... lol
LOL trust me... I found out the hard way too 😋
The updates are now live... the Debian branch will be deleted soon. Make sure you update to the master branch :)
Yay! Been working well for me so far!
Hello! Long time! I have some servers I have not updated to v2 yet. What is the best way to go about it? One of them is a primary Plex server, so I don't want to screw it up or cause rescans.
Let me know. Thanks!