
Test failure due to kadmin port already in use #281

Closed
flo-renaud opened this issue Jun 19, 2019 · 3 comments
Labels
difficulty:easy groomed Issues already discussed by the dev team prio:high type:bug

Comments

@flo-renaud
Contributor

Issue:
Sometimes the tests fail during IPA server installation because port 749 (required by kadmin) is already in use. For instance, see this PRCI run and the kadmind log.

A first attempt was made in PR #242 to fix the problem, but it added the parameter in ansible/roles/runner/tasks/setup.yml. This means that only the runner is configured to keep port 749 free. We should instead modify one of the playbooks in ansible/roles/machine/, as those are applied to the VMs launched on the runner (i.e. the IPA server, replica, or client) rather than to the runner itself.
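A minimal sketch of what such a machine playbook could apply on the VM (the drop-in file name is hypothetical; the value 750 and the nfs-utils prerequisite come from the discussion further down in this thread):

```shell
# Persist the setting on the VM itself, not on the runner.
# nfs-utils must be installed first so the sunrpc sysctl keys exist.
dnf install -y nfs-utils
echo 'sunrpc.min_resvport = 750' > /etc/sysctl.d/70-kadmin-port.conf
# Reload: NFS client source ports now start at 750, keeping 749 (kadmin) free.
sysctl --system
```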

@netoarmando netoarmando added difficulty:easy groomed Issues already discussed by the dev team prio:high type:bug labels Jun 24, 2019
netoarmando added a commit that referenced this issue Aug 23, 2019
Change initially applied in 68f5214.

The default value of sunrpc.min_resvport is 665. An NFS mount can block
the kadmin service port and cause IPA installation to fail.

The setting was applied on the runner, but the box used to run
the tests was not updated, as mentioned in #281.

Package 'nfs-utils' must be installed before changing the sysctl setting.

Signed-off-by: Armando Neto <abiagion@redhat.com>
@netoarmando
Member

I logged into a random runner and brought up a VM running the box freeipa/ci-master-f30 (libvirt, 0.0.4). I could see that the setting was in fact applied on both machines:

[root@big-runner-3 ~]# sysctl -p
sunrpc.min_resvport = 750
[root@ci-master-f30-vm ~]# sysctl -p
sunrpc.min_resvport = 750

Further investigation is needed to check whether NFS is ignoring that setting or whether some other application is binding to port 749.
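One quick way to test the second hypothesis is to grep a saved socket table for the port. A sketch against a canned capture (sockets.txt is a hypothetical file name; on a live host it would come from the ss invocation shown below):

```shell
# Canned sample standing in for:
#   ss --all --tcp --udp --numeric --processes > sockets.txt
cat > sockets.txt <<'EOF'
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:*
EOF
# Match any socket on kadmin's port 749; fall through if none is found.
grep -E ':749([^0-9]|$)' sockets.txt || echo "nothing bound to port 749"
```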

@netoarmando
Member

I tried to bring up and destroy virtual machines defined by:

Vagrant.configure("2") do |config|
  config.vm.box = "freeipa/ci-master-f30"
  config.vm.synced_folder "./", "/vagrant",
    type: "nfs",
    nfs_udp: false,
    nfs_version: 4
   # linux__nfs_options: ["rw,no_subtree_check,all_squash,noresvport"],
   # mount_options: ["noresvport"]

  config.vm.provision "shell",
    inline: "ss --all --tcp --udp --numeric --processes"
end

In the end I could see that NFS does not respect the sunrpc.min_resvport setting. The only report I found describing a similar scenario is this one: https://www.linuxquestions.org/questions/linux-server-73/nfs-issue-rpc-using-reserved-port-756204/

I tried to set up NFS to not use reserved ports (noresvport), but I only got 'access denied' errors.
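The 'access denied' errors are consistent with the NFS server's default of rejecting requests from non-reserved source ports (the `secure` export option): for noresvport to work, the export itself must also allow it. A hypothetical /etc/exports line on the host (path and network are placeholders, not taken from the PRCI setup):

```shell
# "insecure" lets the server accept client source ports >= 1024,
# which a noresvport mount will use.
/path/to/vagrant 192.168.121.0/24(rw,no_subtree_check,all_squash,insecure)
```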

@netoarmando
Member

This was fixed by #334, which is already deployed: NFS was replaced by sshfs as the Vagrant synced-folder mechanism, so NFS no longer binds to port 749:

2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] Netid  State      Recv-Q  Send-Q     Local Address:Port      Peer Address:Port                                                                                  
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] udp    UNCONN     0       0                0.0.0.0:68             0.0.0.0:*      users:(("dhclient",pid=1605,fd=7))                                             
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] udp    UNCONN     0       0              127.0.0.1:323            0.0.0.0:*      users:(("chronyd",pid=505,fd=5))                                               
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] udp    UNCONN     0       0                  [::1]:323               [::]:*      users:(("chronyd",pid=505,fd=6))                                               
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    LISTEN     0       128              0.0.0.0:22             0.0.0.0:*      users:(("sshd",pid=544,fd=3))                                                  
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:59316    52.219.73.138:80                                                                                    
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:53154    128.172.15.65:80                                                                                    
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:53156    128.172.15.65:80                                                                                    
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    ESTAB      0       0        192.168.121.230:22       192.168.121.1:59938  users:(("sshd",pid=2281,fd=5),("sshd",pid=2263,fd=5))                          
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:49054      18.7.29.125:80                                                                                    
2020-01-09 23:30:31,728    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:53152    128.172.15.65:80                                                                                    
2020-01-09 23:30:35,956    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    ESTAB      0       0        192.168.121.230:22       192.168.121.1:59940  users:(("sshd",pid=2458,fd=5),("sshd",pid=2456,fd=5))                          PASSED [  8%]
2020-01-09 23:30:35,956    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    ESTAB      0       0        192.168.121.230:22      192.168.121.53:42152  users:(("sshd",pid=17175,fd=5),("sshd",pid=17173,fd=5))                        
2020-01-09 23:30:35,956    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:49052      18.7.29.125:80                                                                                    
2020-01-09 23:30:35,956    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:49062      18.7.29.125:80                                                                                    
2020-01-09 23:30:35,957    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:59314    52.219.73.138:80                                                                                    
2020-01-09 23:30:35,957    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    TIME-WAIT  0       0        192.168.121.230:59318    52.219.73.138:80                                                                                    
2020-01-09 23:30:35,957    DEBUG  [ipatests.pytest_ipa.integration.host.Host.master.cmd6] tcp    LISTEN     0       128                 [::]:22                [::]:*      users:(("sshd",pid=544,fd=4))  
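For reference, an sshfs-based synced folder in the Vagrantfile shown earlier would look roughly like this (a sketch assuming the vagrant-sshfs plugin is installed; not the exact change from #334):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "freeipa/ci-master-f30"
  # sshfs tunnels file access over the existing SSH connection, so no
  # RPC client ever binds a reserved port such as 749 on the guest.
  config.vm.synced_folder "./", "/vagrant", type: "sshfs"
end
```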
