NetworkPolicy ipBlock cannot set the real client IP address #1199
Hmm... I'm not sure what you mean by that. You probably need to include more information in your bug report about what you were expecting to happen and what actually happened, along with logs and the options you run kube-router with. All of the fields in our issue template are important; in order to actually be able to resolve issues we need all of them, and the information you provided only covers about 20% of them. Keep in mind that with DSR you almost always want to combine it with a local traffic policy on the service.
Sorry, my English is poor.
My test svc is defined as follows:
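(The manifest itself was not captured when this thread was archived. Below is a minimal sketch of what such a Service might have looked like, assuming port 80, the run=whoami label used later in the thread, and the externalIP 192.168.120.37 mentioned in the BGP discussion.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    kube-router.io/service.dsr: tunnel   # ask kube-router to do DSR in IPIP tunnel mode
spec:
  selector:
    run: whoami                          # label taken from "kubectl get pods -l run=whoami" later in the thread
  ports:
    - port: 80                           # assumed port; not stated in the thread
      targetPort: 80
  externalIPs:
    - 192.168.120.37                     # the VIP advertised over BGP, per later comments
```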
Then a client accesses it, and the client IP in the pod log is the real address of my client (which is great).
But my client cannot access it properly.
Only when I configure it like this (with the k8s node addresses in ipBlock, as sketched below) can the client access it properly:
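(The working policy was also not captured. A sketch of the variant that did work, i.e. with the k8s node addresses rather than the real client address in ipBlock; the name and CIDR below are placeholders.)

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: whoami-allow-nodes          # hypothetical name
spec:
  podSelector:
    matchLabels:
      run: whoami
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.110.0/24  # placeholder range covering the node address
                                    # 192.168.110.15 observed in tcpdump below
```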
I observed the dropped packets by running tcpdump -i nflog:100 -nnnn and found two problems; please help me check whether this is normal.
Q1: Whether or not I use DSR (tunnel) mode, the source IP in tcpdump is 192.168.110.15. Is this why NetworkPolicy.spec.ingress.from.ipBlock must be set to the k8s node address rather than the real client address?
My best guess at this point is that your service isn't declared as a local service. If you aren't using a local service, there is a chance that your request will ingress on one node and be proxied to another node that contains the service pod. When this happens the L3 header is rewritten and the new source IP would be seen as a Kubernetes node. Can you try ensuring that your service is a local service via:
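(The suggested snippet was not captured in this thread. Based on the follow-up comments, the suggestion at this point was internalTrafficPolicy: Local, which is corrected to externalTrafficPolicy: Local further down. A sketch of how that might be applied:)

```sh
# Hypothetical command reconstructing the suggestion; per the later comments this
# set the internal policy, which turned out to be the wrong knob for BGP VIPs.
kubectl patch svc whoami -p '{"spec": {"internalTrafficPolicy": "Local"}}'
```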
Yes, I tried to set internalTrafficPolicy: Local for the svc, but still got the same result. I see this VIP 192.168.120.37 advertised by all my worker nodes to the upstream BGP peer, and IPVS entries for this VIP on all my worker nodes. When external requests are load-balanced to nodes that are not running the pod, the source IP is seen as that node. I think that when internalTrafficPolicy: Local is set, not all nodes should advertise the VIP to their upstream BGP peers; only nodes running the pod should advertise the VIP and have IPVS entries, just like MetalLB does.
Yes, this is the way that kube-router should be functioning. If you have that set, I'm having a hard time understanding what could be going wrong here. I know that multiple users have made heavy use of this feature across the last 2-3 years of kube-router versions, and there has never been an issue with the BGP announcement functionality that I'm aware of. So I'm wondering what could be going wrong. Can you show the following?
- service definition: kubectl get svc whoami -o yaml
- pods selected: kubectl get pods -l run=whoami -o wide
- kube-router pods on the k8s nodes: kubectl -n kube-system get pods -o wide | grep kube-route
- node zdns-yxz-k8s-18 running gobgp (see the sketch after this list):
- nodes zdns-yxz-k8s-15/zdns-yxz-k8s-17 running gobgp:
- the RIB on the upstream router:
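(The gobgp output itself was not captured. A sketch of the kind of invocation that could produce it, assuming gobgp is run from inside the kube-router pod on each node; the pod name is a placeholder:)

```sh
# Print the local BGP RIB on a given node, to see whether that node is
# originating the VIP 192.168.120.37/32.
kubectl -n kube-system exec -it <kube-router-pod-on-node> -- gobgp global rib
```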
Can you change that to externalTrafficPolicy: Local? kube-router only pays attention to the external policy when deciding how to advertise BGP VIPs.
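(Again, the exact snippet was not captured; a sketch of the change being asked for, assuming the whoami Service used throughout this thread:)

```sh
# Switch from internalTrafficPolicy to externalTrafficPolicy: Local, the field
# kube-router consults when deciding which nodes advertise the BGP VIP.
kubectl patch svc whoami -p '{"spec": {"internalTrafficPolicy": "Cluster", "externalTrafficPolicy": "Local"}}'
```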
Thanks, and apologies for my carelessness.
I have one more question: if externalTrafficPolicy is not set and only kube-router.io/service.dsr: tunnel is set, must NetworkPolicy.spec.ingress.from.ipBlock be set to the k8s node address rather than the real client address?
No worries! I sent you down that path a couple of comments back when I accidentally mixed up the internal and external policy. Sorry for my carelessness as well. 😅 So, just double-checking: did this resolve the issue you were experiencing with network policy and obtaining the source IP from inside the pod?
Setting externalTrafficPolicy: Local has resolved the issue.
While this will allow traffic for services marked for DSR, unfortunately I cannot see it as a solution that can be recommended. If you want to enforce network policies based on the real client IP address, your best bet is to use services that are marked externalTrafficPolicy: Local. This is the issue with how DSR is implemented in kube-router: traffic is tunneled into the pod, so we miss the opportunity to perform proper network policy enforcement, because when it is done on the node it is done on the encapsulated packet, which carries a different IP address. I will document this limitation of the current DSR implementation and potentially look for a solution.
@murali-reddy |
Great! |
Problem: setting the real client IP address in a NetworkPolicy ipBlock does not work.
I deployed the whoami service and set externalIPs for it.
Accessing the externalIP from outside the k8s cluster, the pod log appears as follows:
With kube-router.io/service.dsr: tunnel set, it appears as follows:
Here is my NetworkPolicy:
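(The policy itself was not captured. A minimal sketch of what it likely looked like, with the real client address in ipBlock; the name, client CIDR, and port are placeholders:)

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: whoami-allow-client       # hypothetical name
spec:
  podSelector:
    matchLabels:
      run: whoami
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.10/32 # placeholder for the real client address; with DSR the
                                  # policy actually sees a node address such as 192.168.110.15
      ports:
        - protocol: TCP
          port: 80                # assumed service port
```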
**System Information:**
- kube-router --version: v1.3.2
- kubectl version: v1.22.2