
core: do not return connection if port does not match #2307

Closed

Conversation

dsciarra

Pre-Submission Checklist

  • Commit message has the format required by CONTRIBUTING guide
  • Commits are split per component (core, individual modules, libs, utils, ...)
  • Each component has a single commit (if not, squash them into one commit)
  • No commits to README files for modules (changes must be done to docbook files
    in doc/ subfolder, the README file is autogenerated)

Type Of Change

  • Small bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would change existing functionality)

Checklist:

  • PR should be backported to stable branches
  • Tested changes locally
  • Related to issue #XXXX (replace XXXX with an open issue number)

Description

If Kamailio is behind a NAT, only return a connection if the port matches. We observed this issue on an instance after roughly 10 hours of operation.

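As a rough illustration of the proposed behaviour (a hypothetical sketch, not the actual kamailio core code; the struct and function names are stand-ins), the connection lookup would skip an entry whose ID matches but whose remote port does not:

#include <stddef.h>

/* Hypothetical, simplified connection entry; the real kamailio structures
   differ and carry much more state. */
struct conn_entry {
    int id;                  /* unique connection id          */
    unsigned short port;     /* remote (source) port          */
    struct conn_entry *next; /* next entry in the lookup list */
};

/* Look up a connection by its unique id. When check_port is non-zero, an
   entry is returned only if the remote port also matches -- the behaviour
   proposed here for deployments behind NAT. */
struct conn_entry *conn_find(struct conn_entry *head, int id,
                             unsigned short port, int check_port)
{
    struct conn_entry *c;
    for (c = head; c != NULL; c = c->next) {
        if (c->id != id)
            continue;
        if (check_port && c->port != port)
            continue; /* same id, different port: do not return it */
        return c;
    }
    return NULL;
}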
miconda (Member) commented Apr 28, 2020

That part of the code gets the connection by its ID, which is unique, so matching the address or port should not need to be considered there.

Can you give an example where you got issues with the existing behaviour?

dsciarra (Author) commented May 5, 2020

When an INVITE comes in, we instruct Kamailio to query an HTTP service to find out which clients should receive the INVITE. If several of these "receiver" clients run on the same machine (same IP address, but different source ports), we see Kamailio forwarding multiple INVITEs to a single client, even though the clients are listening on different ports. Strangely, the problem happens only when we run Kamailio in AWS (Fargate) behind a load balancer; we have not observed the issue with vanilla Docker containers.

I am not 100% sure the code in this PR fixes the issue in the right place, but with the patch above everything works fine.
The ~10h mentioned above was an estimation error on our side; the issue is always reproducible.

miconda (Member) commented May 5, 2020

What global parameters did you set for kamailio?

If the issue is reproducible, can you list the tcp connections with:

kamctl rpc core.tcp_list

and see if what is listed there matches the connection expected to be used?

Also, I understand this happens when forwarding requests, not for routing back replies?

Given that the TCP connection ID is unique, this would mean that the connection is not found when the search matches on ID but not on port, so a new connection is created?

dsciarra (Author) commented May 5, 2020

Among others, we have:

enable_tls=yes
tcp_accept_haproxy=yes
tcp_connection_match=1
tcp_accept_unique=0

The issue only happens when forwarding requests. Running kamctl in AWS Fargate is not a straightforward task, but I will give it a try.

miconda (Member) commented May 7, 2020

I do not have access to AWS Fargate, so testing is not going to be easy.

If you say it helps, we can extend tcp_connection_match to allow different layers of matching by treating its value as a set of flags.

So tcp_connection_match=1 behaves as it does now (the first flag, 1<<0); to also match the port along with the connection ID, the second flag (1<<1) has to be set, i.e. tcp_connection_match=3.

The idea is to be able to select the behaviour via parameters, so that what fixes this case does not break things for others not using the same infrastructure.
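A minimal sketch of how such flag-based matching could look (the macro and function names below are illustrative stand-ins, not existing kamailio symbols):

#define CONN_MATCH_ID    (1 << 0)  /* current behaviour: match on the connection id */
#define CONN_MATCH_PORT  (1 << 1)  /* additionally require the remote port to match */

/* tcp_connection_match=1 -> match by id only; tcp_connection_match=3 -> id and port */
static int conn_matches(int match_flags, int conn_id, int conn_port,
                        int want_id, int want_port)
{
    if ((match_flags & CONN_MATCH_ID) && conn_id != want_id)
        return 0;
    if ((match_flags & CONN_MATCH_PORT) && conn_port != want_port)
        return 0;
    return 1;
}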

dsciarra (Author) commented

We have run some additional tests over the last few weeks with the latest version from master and, fortunately (or unfortunately?), we are no longer able to reproduce the issue. I would propose skipping this fix for the moment. Thanks anyhow for the support!

miconda closed this Jun 5, 2020