
Keepalived vip across subnets fail on ttl #941

Closed
orenye opened this issue Jul 2, 2018 · 4 comments

Comments

orenye commented Jul 2, 2018

Hi,

I’m trying to create a virtual IP shared between two VMs on different subnets, using dockerized keepalived (osixia/keepalived:1.4.5) with unicast VRRP messages.
Unfortunately the VRRP adverts are dropped because of their TTL value, since the packets are routed in from a remote subnet (log snippet from VM1):

Mon Jul  2 12:41:56 2018: (VI_1): invalid ttl. 252 and expect 255
Mon Jul  2 12:41:56 2018: bogus VRRP packet received on ens160 !!!
Mon Jul  2 12:41:56 2018: VRRP_Instance(VI_1) Dropping received VRRP packet...

If I understand this changelog entry correctly, the issue should have been handled since release 1.2.10 (source: http://www.keepalived.org/changelog.html):

vrrp: disable TTL sanity check for unicast use-case. In order to protect against any packet injection, VRRP provides sanity check over IP header TTL. This TTL MUST be equal to 255 and means both sender and receiver are attached on the same ethernet segment. Now with unicast extension this protection MUST be disabled since VRRP adverts will mostly traverse different network segments. !!! WARNING !!! When using VRRP in unicast use-case in order to protect against any packet injection the best practice is to use IPSEC-AH auth method otherwise you are exposed to potential attackers !

Should this work out of the box, or do I need to configure the TTL sanity check somehow?

My setup:
VM1 ip: 172.31.245.109
VM2 ip: 172.31.205.162
Virtual ip: 172.31.245.10

VM1 keepalived.conf:

vrrp_script chk_live {  
        script "curl 172.31.245.109:8080"
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface ens160
        state MASTER
        virtual_router_id 51
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            172.31.245.10
        }
        track_script {
            chk_live
        }
        unicast_peer {
          172.31.205.162
        }
}

VM2 keepalived.conf:

vrrp_script chk_live {
        script "curl 172.31.205.162:8080"
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            172.31.245.10
        }
        track_script {
            chk_live
        }
        unicast_peer {
          172.31.245.109
        }
}

Thanks!

@pqarmitage
Collaborator

The code in vrrp_in_chk() in vrrp.c is

                /* MUST verify that the IP TTL is 255 */
                if (LIST_ISEMPTY(vrrp->unicast_peer) && ip->ttl != VRRP_IP_TTL) {
                        log_message(LOG_INFO, "(%s): invalid ttl. %d and expect %d",
                                vrrp->iname, ip->ttl, VRRP_IP_TTL);

From this, if unicast peers are configured, then you shouldn't get the invalid ttl message at all.

With respect to your configurations, you shouldn't configure state MASTER on the lower-priority VRRP instance (you don't want it to default to being master). I'm also not sure how this will work: if VM2 becomes master, then unless something else changes somewhere, packets to 172.31.245.10 are not going to be routed to the network VM2 is on (i.e. 172.31.205.0/24).
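As a sketch of the first point, based on your configs above, only the state line of VM2's instance needs to change (priority 100 is already lower than VM1's 101):

vrrp_instance VI_1 {
        interface eth0
        state BACKUP                    # don't default to master on the lower-priority node
        virtual_router_id 51
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            172.31.245.10
        }
        track_script {
            chk_live
        }
        unicast_peer {
          172.31.245.109
        }
}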


orenye commented Jul 3, 2018

Hmmm... When I set VM2 to BACKUP and restarted keepalived on both VMs, it stopped complaining about the TTL. Then, when I set both VMs back to MASTER, it still worked fine: no TTL error, and the instances transitioned to the right states. So I assume I messed up the setup the first time, and now it is configured and working as expected.
Sorry for the bother, and thanks for the quick reply!
Now for the other part: indeed, as you indicated, packets can’t reach VM2 since the virtual IP is in another subnet. So how can I have two VMs in different subnets serving the same virtual IP?

@pqarmitage
Collaborator

So how can I have 2 VMs in different subnets that are serving the same virtual ip?

This isn't the real question to be asking. With the configuration you have, you do have 2 VMs in different subnets serving the same virtual ip. What I think you need to be asking is how can you route to the second VM when it becomes master. And I'm afraid that my answer is I don't know. I think you need to take a step back and set out what it is you are trying to achieve, and then it might be possible to work out a solution.


orenye commented Jul 3, 2018

Got it. Thanks!

@orenye orenye closed this as completed Jul 3, 2018