IPv6 and IPv4 not working together on Keepalived 1.3.2 #497
Earlier, when I was using keepalived version 1.2.13, the server went into a hung state when configured with multiple IPv4 and IPv6 addresses; with a single IPv4 and a single IPv6 address keepalived worked perfectly. We need to configure multiple IPv4 and IPv6 addresses on keepalived, and that is the reason we switched over to keepalived 1.3.2.
It is not possible to configure both IPv4 and IPv6 addresses as virtual_ipaddresses in a single vrrp_instance; the reason is that the VRRP protocol doesn't support it. If you need to associate both IPv4 and IPv6 addresses with a single vrrp_instance, then configure the addresses of one family in the virtual_ipaddress block and the addresses of the other family in a virtual_ipaddress_excluded block.
Although earlier versions of keepalived didn't complain if IPv4 and IPv6 addresses were both configured, such a configuration didn't work properly.
Probably a better solution than using a virtual_ipaddress_excluded block is to configure separate vrrp instances, one for the IPv4 addresses and one for the IPv6 addresses.
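For illustration, the virtual_ipaddress_excluded layout described above could look like the following (all addresses, instance names and VRIDs here are placeholders, not taken from the attached configs):

```
vrrp_instance VI_4 {
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.10.100/24        # the family carried in the VRRP adverts
    }
    virtual_ipaddress_excluded {
        2001:db8::100/64         # addresses of the other family, not
                                 # carried in the adverts
    }
}
```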
If this doesn't resolve your issue, could you please post a copy of your configuration so that specific suggestions can be made?
Thanks a lot @pqarmitage for the quick and accurate reply.
Hello @pqarmitage. We have implemented the required changes and it was working fine, but now whenever we start or stop keepalived the server goes into a hung state. Attached are the keepalived config files of the two servers for your reference.
First of all, I am not clear what you mean by the server going into a "hung state".
I note that the files you have attached are DOS format files, i.e. each line ends with a carriage return (CR) before the line feed. Can you confirm whether the actual files (the config files and the check_nginx script) contain the CR character at the end of each line or not.
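DOS line endings matter because a CR on the shebang line makes the kernel look for an interpreter literally named "/bin/sh\r". A quick way to detect and fix this (the filename here is illustrative) is:

```shell
#!/bin/sh
# Check a script for DOS (CRLF) line endings and, if any are
# found, strip the trailing carriage returns in place.
f=check_nginx.sh
if grep -q "$(printf '\r')" "$f" 2>/dev/null; then
    echo "$f has DOS line endings; converting"
    sed -i 's/\r$//' "$f"     # CRLF -> LF (GNU sed)
fi
```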
Looking at your configurations, there are a few changes that could be made:
There may be other errors, but these are the ones I have noticed.
Since you have some invalid keywords configured, it would appear that you are not checking the keepalived logs to see what errors it is reporting. You need to check and resolve all the reported configuration errors.
The check_nginx script is interesting:
First of all, I would modify it as follows:
The main substance of the change is that it returns a specific exit code. keepalived uses the exit code to determine whether the script has succeeded or failed, and so an explicit exit code should be returned. The other changes are stylistic.
The next issue with the script: if the first value of $counter is 0 (which means nginx is not running), the script attempts to stop the nginx service, but since nginx is not running it is presumably already stopped. It then checks again whether nginx is running; since it wasn't running before, and all that has been attempted is stopping the (non-running) service, the second check of $counter will always be 0, and hence keepalived will always be stopped if nginx is not running.
What I think would be more conventional is, if nginx is not running, to return a non-zero exit code so the script fails; keepalived will then reduce the priority of the vrrp instances by 5, due to the weight of -5 configured on the track script.
The check_nginx script would then become:
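The attached script itself is not reproduced in this thread; a minimal sketch of the revised logic, assuming pgrep is available (the process name is illustrative), could be:

```shell
#!/bin/sh
# check_running NAME: exit status 0 only if a process with exactly
# that name exists.  keepalived treats a non-zero exit status from
# a track script as failure and applies the configured weight.
check_running() {
    pgrep -x "$1" > /dev/null 2>&1
}

# As a keepalived track script, the file would simply end with:
#     check_running nginx
# and that command's status becomes the script's exit status.
```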
This way, keepalived remains running on both systems, but if nginx fails on 192.168.10.90, 192.168.10.91 will take over as master of V41, and 192.168.10.90 will become backup.
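The 5-point priority reduction comes from the track-script weight; a vrrp_script definition consistent with that behaviour (the script path and VIP are illustrative) might be:

```
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"   # path illustrative
    interval 1        # run once a second
    weight -5         # subtract 5 from the priority while failing
}

vrrp_instance V41 {
    interface eth0
    virtual_router_id 41
    priority 100
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        192.168.10.200/24    # placeholder VIP
    }
}
```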
Now, with our new implementation, that hopefully looks successful.
After achieving this, we have also made the changes you recommended, which you can find in the attached files. To be clear, the earlier changes that were shared were on the upgraded keepalived version 1.3.2, and these changes are on the older keepalived version, i.e. 1.2.13.
You still have invalid keywords in your config, which will be reported in the log file, but it appears that you are not checking that. You've added
Since you haven't described what it is you are wanting to achieve, nor your environment, I cannot comment on the suitability of your configurations. From your configurations, it appears that under normal circumstances each system will be master for one of the vrrp instances, and backup for the other 3 instances, but if nginx is not running on a system, then that system will be backup for all vrrp instances. This will work fine unless nginx is not running on any of the systems, in which case the masters will be the same as if nginx is running on all the systems.
I'm not clear why you need 4 IPv4 addresses and 4 IPv6 addresses per vrrp instance, but that is presumably due to your local requirements.
It might be that you would be better off using the IPVS functionality of keepalived for what you are trying to achieve, but without knowing what you are trying to achieve overall I cannot say.
I hope the above helps.
Hi pqarmitage ...
In file keepalived.conf.bkp.txt at line 2, the keyword should be
The check_nginx.sh script appears to be checking if nginx has failed, and if so stops the nginx service. The problem is that thereafter, once a second the check_nginx.sh script is again run, and it will every time attempt again to stop the already stopped nginx service.
Are you finding that the nginx processes are unreliable and that you need this type of checking for the service having failed?
Do you have a separate process/procedure for restarting the nginx service after it has failed? Otherwise you'll eventually end up with all the keepalived instances in fault state.
In keepalived 1.2.13, we just use IPv4 and the config is ipv4_keepalivbed.txt.
In order to configure multiple IPv4 and IPv6 addresses, we use keepalived 1.3.5. I also want to achieve the same function: the same smooth failover as with IPv4 only.
Could you please help to check the new conf(ipv4_6_keepalivbed.txt)?
ENV: Red Hat, kernel 3.10.0-693.17.1.el7.x86_64
I think it would be better to use separate vrrp instances for the IPv6 addresses, rather than the virtual_ipaddress_excluded blocks, so for example VI_5 could be split into VI_5 for IPv4 and VI6_5 for IPv6.
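A sketch of that split (addresses and VRIDs are placeholders, not taken from the attached files):

```
vrrp_instance VI_5 {              # IPv4 instance
    interface eth0
    virtual_router_id 55
    virtual_ipaddress {
        192.168.10.100/24
    }
}

vrrp_instance VI6_5 {             # separate IPv6 instance
    interface eth0
    virtual_router_id 55          # the IPv4 and IPv6 VRID spaces
                                  # are independent of each other
    virtual_ipaddress {
        2001:db8::100/64
    }
}
```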
keepalived will track the interface that the vrrp_instance is configured on, so the track_interface blocks are unnecessary.
Since the IPv6 addresses are being added on eth0, would it be better for the IPv6 vrrp instances to use eth0? Also note that IPv6 instances use VRRP version 3, which doesn't support authentication.
Do you want the vrrp instances for IP address 126.96.36.199 and fe80::f816:3eff:fede:4c1a to be synchronised? If so, a sync group can be used, as I have done in ipv4_6_keepalivbed_sg.txt.
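A sync group tying an IPv4 instance to its IPv6 counterpart could look like this (instance names follow the VI_5/VI6_5 naming; group name is illustrative):

```
vrrp_sync_group VG_5 {
    group {
        VI_5        # IPv4 instance
        VI6_5       # matching IPv6 instance
    }
}
```

With the sync group in place, if either instance transitions to backup or fault, the other instance follows it, so both address families fail over together.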
The maximum authentication password length is 8 characters, so I have truncated them. Also, do you really want/need authentication? VRRP version 3 removed it, since it wasn't seen to be of any benefit.
Labels do not work with IPv6 addresses, so I have removed them.
I have attached 3 updated versions of your configuration.
ipv4_6_keepalivbed_1.txt creates separate IPv6 vrrp instances for the IPv6 addresses.
ipv4_6_keepalivbed_sg.txt adds sync groups to synchronise the vrrp instances VI_5 and VI6_5 etc.
ipv4_6_keepalivbed_sg1.txt uses a single sync group for all 6 vrrp instances, since they are all tracking the same items (eth0, eth1, and the 4 track scripts).
If you were using the code from the beta branch, the specifications of the track scripts could be moved into the vrrp_sync_group definition to avoid specifying them against each vrrp instance.
I hope that helps.
Thank you very much for your detailed and patient feedback.
Copyright(C) 2001-2017 Alexandre Cassen, firstname.lastname@example.org
Build options: PIPE2 LIBNL3 RTA_ENCAP RTA_EXPIRES FRA_OIFNAME FRA_TUN_ID RTAX_CC_ALGO RTAX_QUICKACK LIBIPTC LIBIPSET_DYNAMIC LVS LIBIPVS_NETLINK VRRP VRRP_AUTH VRRP_VMAC SOCK_NONBLOCK SOCK_CLOEXEC FIB_ROUTING INET6_ADDR_GEN_MODE SNMP_V3_FOR_V2 SNMP SNMP_KEEPALIVED SNMP_CHECKER SNMP_RFC SNMP_RFCV2 SNMP_RFCV3 SO_MARK
@deshui123 You have asked a number of questions; answers are below:
How to hide the output “Build option”?
1. Since VRRP IPv4 and IPv6 instances are completely independent of each other, the virtual_router_id of an IPv4 and an IPv6 instance can be the same or different, and it doesn't matter, right?
That is, for example, there could be an IPv4 vrrp instance using VRID 1 and also an IPv6 vrrp instance using VRID 1, both running on the same interface.
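A sketch of two instances sharing a VRID (addresses and names are placeholders):

```
vrrp_instance V4_1 {
    interface eth0
    virtual_router_id 1
    virtual_ipaddress {
        192.168.10.50/24
    }
}

vrrp_instance V6_1 {
    interface eth0
    virtual_router_id 1           # same VRID, different address family
    virtual_ipaddress {
        2001:db8::50/64
    }
}
```

This is possible because IPv4 and IPv6 VRRP use different multicast destinations, so the two instances never see each other's adverts.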
2. Use of track_interface
This means that the vrrp instance will run over eth0 (the interface specified by the interface keyword).
This means that the vrrp instance will use
Could you please share some knowledge about interface, track_interface and "dev interface" of virtual_ipaddress?
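For reference, the three keywords being discussed appear in a configuration as follows (interfaces and the address here are placeholders):

```
vrrp_instance VI_1 {
    interface eth0                # VRRP adverts are sent and received on eth0
    track_interface {
        eth1                      # instance goes to fault state if eth1 goes down
    }
    virtual_ipaddress {
        10.0.0.1/24 dev eth2      # the VIP itself is added on eth2
    }
}
```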
Comparing ipv4_6_keepalivbed_sg.txt and ipv4_6_keepalivbed_sg1.txt, the difference is that there is only a single sync group in ipv4_6_keepalivbed_sg1.txt.
If Router 2 is a backup for Router 1 and the routers are responsible for routing between Network 1 and Network 2, then if Router 1 loses its connection to Network 1 it can no longer forward traffic between the two networks; Router 1 will cease being master on Network 1 if eth1 goes down, but it must also stop being master on Network 2 so that Router 2 can take over both VIPs.
Configuration could be:
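The original configuration block is not reproduced in this thread; a sketch consistent with the two-network description (interface names, VRIDs and addresses are illustrative) would be:

```
vrrp_sync_group GW {
    group {
        VI_NET1
        VI_NET2
    }
}

vrrp_instance VI_NET1 {
    interface eth1                # faces Network 1
    virtual_router_id 10
    priority 150
    virtual_ipaddress {
        10.1.0.1/24
    }
}

vrrp_instance VI_NET2 {
    interface eth2                # faces Network 2
    virtual_router_id 11
    priority 150
    virtual_ipaddress {
        10.2.0.1/24
    }
}
```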
But with all VIPs in the same vrrp_sync_group, if one instance goes into fault state, the other instances will be forced into fault state as well, right? I want to achieve multi-master mode. Maybe vrrp_sync_group is not needed in my case.
Can I prepare separate config files for each instance and include these configs in keepalived.conf?
I've attached an example configuration file that could be used on both systems.
This may not do what you want to achieve, but it should give you some ideas. To get a feeling for what the configuration generates, you could run keepalived with the
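Regarding splitting the configuration into separate files: keepalived's configuration parser supports an include directive, which also accepts glob patterns, so the main file can be reduced to something like (paths illustrative):

```
! /etc/keepalived/keepalived.conf
global_defs {
    router_id my_router
}

! pull in one file per vrrp instance
include /etc/keepalived/conf.d/*.conf
```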