mounting with ipv6 hostname leads to failure #3329
Comments
chen1195585098 pushed a commit to chen1195585098/glusterfs that referenced this issue on Mar 22, 2022:
The defective code is in the function `af_inet_client_get_remote_sockaddr`:

```c
if (inet_pton(AF_INET6, remote_host, &serveraddr_ipv6)) {
    sockaddr->sa_family = AF_INET6;
}
```

The problem is that when an IPv6 hostname reaches this point, it fails the condition, so its `sa_family` cannot be correctly marked as AF_INET6 and is left as AF_INET. This can be verified with the following steps:

1. Create a volume in IPv6 (IP or hostname).
2. Mount the volume using the hostname.
3. The mount fails, since the hostname is incorrectly set to AF_INET, as shown in the log file.

Fixes: gluster#3329
Change-Id: Ibd1b9e71f2d48a205eef1ce3cebd7397ce26e94f
Signed-off-by: ChenJinhao <jinhaochen.cloud@gmail.com>
xhernandez pushed a commit that referenced this issue on Mar 23, 2022:
Note that, as of the last time I checked (around 10.3), there were other problems with Gluster on IPv6-only networks; e.g. probing an IPv6-only host fails. I'm not sure whether those have been fixed. I tried to open a conversation about fixing them, but I couldn't work out what @mohit84 would find acceptable, other than that IPv6-only is not supported and should be special-cased. See:
Description of problem:
Mounting a volume with an IPv6 hostname leads to failure, since the hostname is incorrectly marked with the AF_INET address family.
The exact command to reproduce the issue:
```
gluster volume create mount_test <ipv6_addr>:/gfs/rep_mount_test force   # create a volume in an IPv6 environment
gluster volume start mount_test
glusterfs --volfile-id=mount_test --volfile-server=node1 <mount_point>; echo $?   # output: 107
```
The full output of the command that failed:
The log file shows:

```
[2022-03-21 01:54:06.518102] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host node1
The message "E [MSGID: 101075] [common-utils.c:497:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=2}, {ret=Name or service not known}]" repeated 2 times between [2022-03-21 01:54:00.515081] and [2022-03-21 01:54:06.518098]
```
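The `{family=2}` in the message is AF_INET, confirming that the wrong address family was passed down to `getaddrinfo()`. The failure mode can be reproduced outside glusterfs; in this sketch `::1` stands in for an IPv6-only host such as `node1` (an assumption, since the real addresses are redacted above):

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Resolve `host` restricted to one address family, much as the resolver
 * path does with the family it is handed. Returns the getaddrinfo() status. */
static int resolve(const char *host, int family) {
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = family;
    int ret = getaddrinfo(host, NULL, &hints, &res);
    if (ret == 0)
        freeaddrinfo(res);
    return ret;
}

int main(void) {
    /* family=2 (AF_INET) cannot resolve an IPv6-only name/address... */
    printf("AF_INET  (2):  %s\n", resolve("::1", AF_INET)  ? "fails" : "ok");
    /* ...while family=10 (AF_INET6) resolves it fine. */
    printf("AF_INET6 (10): %s\n", resolve("::1", AF_INET6) ? "fails" : "ok");
    return 0;
}
```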
Expected results:
The volume mounts successfully.
**Mandatory info:**
**- The output of the `gluster volume info` command**:
```
Volume Name: mount_test
Type: Distribute
Volume ID: 85b49fb0-0591-43f4-bf00-315839a74fcf
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/rep_mount_test
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet6
nfs.disable: on
```
**- The output of the `gluster volume status` command**:
```
Status of volume: mount_test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/gfs/rep_mount_test             49152     0          Y       28167

Task Status of Volume mount_test
------------------------------------------------------------------------------
There are no active volume tasks
```
**- The output of the `gluster volume heal` command -**:
```
Launching heal operation to perform index self heal on volume mount_test has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume mount_test
```
**- Provide logs present on following locations of client and server nodes -**
/var/log/glusterfs/
mount log:

```
[2022-03-21 01:54:06.518102] E [name.c:265:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host node1
The message "E [MSGID: 101075] [common-utils.c:497:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=2}, {ret=Name or service not known}]" repeated 2 times between [2022-03-21 01:54:00.515081] and [2022-03-21 01:54:06.518098]
```
**- Is there any crash ? Provide the backtrace and coredump -** no
**Additional info:**
```
# ping6 node1
PING node1(node1 ()) 56 data bytes
64 bytes from node1 (): icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from node1 (): icmp_seq=2 ttl=64 time=0.023 ms
64 bytes from node1 (): icmp_seq=3 ttl=64 time=0.020 ms
```
```
# cat /etc/glusterfs/glusterd.vol | grep inet6
option transport.address-family inet6
```
- The operating system / glusterfs version:
Centos 7, glusterfs 8.2
Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration