sshd segfault in ReaR recovery system (also segfaults in SLES15-SP1 system) #2240
Comments
@shaunsJM
I believe the "Could not load host key" message is not the real issue and is a distraction. If local.conf is configured with an SSH_ROOT_PASSWORD, then a public/private key pair is not needed.
This issue only happens on SLES 15, not on SLES 12 with a very similar local.conf configuration that also has SSH_ROOT_PASSWORD defined.
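For context, a minimal local.conf along these lines matches the setup described in this thread (CIFS backup target, SSH_ROOT_PASSWORD set, no key pair); the values are hypothetical, not the reporter's actual file:

```shell
# /etc/rear/local.conf -- hypothetical sketch, values are illustrative.
OUTPUT=ISO
BACKUP=NETFS
# CIFS mount to a Windows server, as noted in the issue template below:
BACKUP_URL="cifs://fileserver/rear-backups"
# With SSH_ROOT_PASSWORD set, a public/private key pair is not needed
# to log in to the ReaR recovery system:
SSH_ROOT_PASSWORD="SomeSecret"
```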
On my ReaR test systems I always use … After "rear mkrescue/mkbackup" you can check things inside the … I am out of the office for some weeks and will be for some more weeks.
@shaunsJM can you verify your openssh installation on your ORIGINAL system by running …? V.
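The exact command did not survive the page capture; given the "rpm verify output" in the reply below, it was presumably something like rpm's verify mode. A sketch, assuming an RPM-based system:

```shell
#!/bin/sh
# Hypothetical sketch: verify the openssh package files on the ORIGINAL system.
# 'rpm -V' prints nothing when every installed file matches the RPM database.
if command -v rpm >/dev/null 2>&1; then
    rpm -V openssh && echo "openssh: all files verify cleanly"
    rpm -q openssh        # also report the installed version
else
    echo "rpm not available on this system"
fi
```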
@gozora Thanks for the help. Here is the rpm verify output:
@jsmeix I'm thinking something about sshd changed between SLES 12 and SLES 15. During my testing yesterday, I stumbled upon a scenario where, after running a ReaR backup, any new ssh connection into the server would produce an sshd segfault in the messages log. I will spend some time trying to reproduce it. If I can, I will log a call with SUSE to have them look into why.
@shaunsJM hmm, looks good at first sight. I just tested an ssh connection to the ReaR recovery system of my SUSE Linux Enterprise Server 15 with openssh-7.6p1-7.8.x86_64, and all works fine... Btw, my sshd in the ReaR recovery system also suffers from messages like the ones mentioned earlier:
But this is most probably caused by a ReaR security feature described in default.conf. For your reference, here is how my sshd is linked:
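The linkage listing itself was lost from the capture; it was presumably produced with ldd, roughly:

```shell
#!/bin/sh
# Hypothetical sketch: show the shared libraries sshd is linked against,
# as the lost listing above presumably did.
sshd_bin=/usr/sbin/sshd
if [ -x "$sshd_bin" ]; then
    ldd "$sshd_bin"
else
    echo "no sshd binary at $sshd_bin"
fi
```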
V.
@gozora Any chance you can upgrade to SLES 15 SP1? My openssh version is openssh-7.9p1-4.7.x86_64 on SLES 15 SP1.
@jsmeix and @gozora … This stops the segfault and allows me to log in using ssh. Big thanks to @jsmeix and @gozora for looking at this with me. I will create a case with SUSE to see why the segfault is happening. Thanks!
@shaunsJM the following configuration allowed ReaR to copy the host keys into the ReaR recovery image:
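The configuration itself did not survive the page capture; judging by the follow-up comment, it most likely relied on SSH_UNPROTECTED_PRIVATE_KEYS. A sketch, not gozora's verbatim snippet:

```shell
# /etc/rear/local.conf additions -- hypothetical reconstruction.
# Allow ReaR to copy private host keys that have no passphrase into the
# recovery image. This deliberately opts out of a ReaR security safeguard,
# which is why it is discussed in the next comments:
SSH_UNPROTECTED_PRIVATE_KEYS="yes"
```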
With this setup you should be able to connect to the ReaR recovery system without your workaround (which is fully valid, by the way ;-)). V.
@gozora I debated using SSH_UNPROTECTED_PRIVATE_KEYS="yes" but had some concerns about security, so I was leaving that as my last option. But thanks for pointing it out! :)
According to … the root cause is not in ReaR, so I am closing this issue accordingly.
A quick update: I logged the issue with SUSE and they are releasing a patch to fix the sshd segfault.
@shaunsJM
This should be my last update for this... SUSE has created a PTF (Program Temporary Fix) for sshd. I've been able to test it and confirm that sshd no longer segfaults when the host keys are missing. SUSE didn't give me an ETA on when this fix will be included in the next sshd update.
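Since the crash was triggered by missing host keys, a quick way to see whether a system would hit that code path is to check which standard keys exist. A generic sketch, not part of the PTF:

```shell
#!/bin/sh
# Hypothetical sketch: report which standard sshd host keys are present.
# The unpatched SLES 15 SP1 sshd segfaulted when these were missing.
missing=0
for type in rsa ecdsa ed25519; do
    key="/etc/ssh/ssh_host_${type}_key"
    if [ -r "$key" ]; then
        echo "present: $key"
    else
        echo "missing: $key"
        missing=$((missing + 1))
    fi
done
echo "$missing standard host key(s) missing"
```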
@shaunsJM FYI: A PTF is fully supported, so you do not need to wait for a maintenance update. Usually some time after you get a PTF you will get a matching maintenance update. For more details see, for example, …
Relax-and-Recover (ReaR) Issue Template
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.5 / 2019-05-10
OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
local.conf:
Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR): VMware guest
System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
x86_64
Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
BIOS and GRUB2
Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
CIFS mount to a Windows server
Description of the issue (ideally so that others can reproduce it):
While booted into the ReaR rescue ISO, sshd segfaults when you try to ssh into the server to run "rear recover".
This is my first use of ReaR with SUSE SLES 15. Other SLES versions do not have this issue.
Workaround, if any:
None
Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
I have console screen printouts since I could not capture the journalctl log by using ssh.
rear-rescue-sshd-segfault.docx