
Configurable IKE rekeying delay #292

Open
kouli opened this issue Mar 28, 2021 · 3 comments

Comments


kouli commented Mar 28, 2021

Hello,

I have a new feature request. Short description: I would like strongSwan to delay the IKE SA negotiation attempt for a configurable time after an IKE SA is restarted (i.e. after it fails, e.g. due to a DPD timeout or an initial negotiation timeout).

Long motivation: I am trying to configure an IPsec tunnel over a backup LTE line (IPv4 with central NAT in the provider's network); the remote IPsec "server" has a normal public IP. The tunnel should transfer a minimum of "control" data (the LTE line is billed by the amount of data transferred), yet it should be available most of the time.

Instead of NAT keep-alive packets, I am using DPD to keep the NAT mapping open: it is more reliable, since it survives NAT or LTE restarts (which sometimes change the source UDP port of the NAT-T payload: the server ignores a port change in keep-alive packets, but obeys it for any IPsec packets, including DPD). The DPD interval is 50 seconds (the UDP NAT on this particular LTE line has a 60 s timeout). I have configured the IKEv2 retransmission parameters to send 3 packets (at 0, 0.5 and 2.5 seconds) and give up after 10 seconds if no reply is received. Together this works very well with DPD: it not only keeps the NAT mapping open, it also detects link/server failures within 60 seconds! It "costs" only 16 MB of data per month.
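For reference, a minimal sketch of how these timings can be configured (the connection name and exact retransmission values are illustrative; my real configuration may differ slightly):

```
# strongswan.conf -- one plausible retransmission tuning that sends at roughly
# 0, 0.5 and 2.5 s and gives up about 8 s after the last packet (0.5 * 4^2 = 8)
charon {
    retransmit_tries = 2
    retransmit_timeout = 0.5
    retransmit_base = 4.0
}
```

```
# swanctl.conf -- the 50 s DPD interval on the connection (only the relevant
# option is shown here)
connections {
    lte-backup {
        dpd_delay = 50s
    }
}
```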

However, if the server fails (is unavailable, or has moved and been forgotten), the strongSwan client starts to send much more data: it tries to negotiate the IKE SA, gives up after 10 seconds (see the retransmission settings above) and immediately tries again. That amounts to 223 MB of data per month (!).
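A rough back-of-envelope using the numbers above (assuming failed attempts repeat back-to-back roughly every 10 s over a 30-day month):

$$
\frac{30 \times 86400\ \mathrm{s}}{10\ \mathrm{s}} \approx 259\,200\ \text{attempts/month}, \qquad \frac{223\ \mathrm{MB}}{259\,200} \approx 860\ \text{bytes per failed attempt}
$$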

Note that it is not convenient to use {start,dpd,close}_action = trap to delay IKE SA renegotiation, as it is less reliable (contrary to the documentation!) than {start,dpd,close}_action = start: with "trap", the client must send a packet to restart the IKE SA, but what if it is the server that wants to communicate first? Remember the NAT?

All together: I wish I could use {start,dpd,close}_action = start for clients behind NAT, but on slower lines billed per megabyte transferred it may introduce a large (×10) cost increase without the "keyingtries" delay requested here as a new feature.
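To make the request concrete, a minimal sketch of the intended client setup; the `retry_delay` option below is hypothetical and does not exist in strongSwan, it only illustrates the knob requested here:

```
# swanctl.conf -- desired setup for a client behind NAT (illustrative names)
connections {
    lte-backup {
        # retry (re)negotiation forever
        keyingtries = 0
        # HYPOTHETICAL option requested by this issue: wait this long between
        # failed IKE SA negotiation attempts instead of retrying immediately
        # retry_delay = 60s
        children {
            net {
                start_action = start
                close_action = start
                # depending on the strongSwan version this value may be
                # spelled "restart" rather than "start"
                dpd_action = start
            }
        }
    }
}
```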


Thermi commented Mar 29, 2021

> Note that it is not convenient to use {start,dpd,close}_action = trap to delay IKE SA renegotiation, as it is less reliable (contrary to the documentation!) than {start,dpd,close}_action

That is wrong. Using "start" does not renegotiate the CHILD_SA if it is explicitly closed by either of the peers and neither peer negotiates a new one.


kouli commented Mar 29, 2021

> That is wrong. Using "start" does not renegotiate the CHILD_SA if it is explicitly closed by either of the peers and neither peer negotiates a new one.

Fortunately not. I tested it quite extensively before creating this feature request (strongSwan 5.9.1). The combination of {start,dpd,close}_action = start, or probably at least close_action = start, ensures immediate IKE SA and CHILD_SA renegotiation if the other peer closes the CHILD_SA or even the IKE SA cleanly (e.g. swanctl -t -{c|i} ... or a clean strongSwan shutdown). And with keyingtries = 0, it repeats the renegotiation until it succeeds. If this were not true, I would have created a more important feature request :-)

I had been using {start,dpd,close}_action = trap exclusively for a long time, trusting the note in the documentation. But if you have a device behind NAT and only want to make it remotely accessible via an IPsec tunnel, I believe {start,dpd,close}_action = start is the only solution: such a device never sends a packet into the tunnel on its own behalf (which is what would trigger the trap policy).
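For contrast, a minimal sketch of the trap-based variant I moved away from (same illustrative names as above); with trap, renegotiation is only triggered when the local side sends traffic matching the trap policy, which never happens for such a passive device:

```
# swanctl.conf -- trap variant: only re-negotiates on locally generated traffic
connections {
    lte-backup {
        children {
            net {
                start_action = trap
                close_action = trap
                dpd_action = trap
            }
        }
    }
}
```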


Thermi commented Oct 11, 2021

Okay, sure, we can implement that. But it costs much more money to implement it than to just eat the carrier charges, or to optimize the existing setup first.
