Description
The Amazon network policy PolicyEndpoint contains both an IPv4 and an IPv6 CIDR even though the cluster, instances, and pods are all IPv4-only. This is a day-1 issue and needs to be fixed: the code should only add an IPv6 entry when IPv6 is enabled on the cluster; if there is no IPv6 CIDR, the network policy should not include ::/0 and take over port entries. VPC CNI 1.18.x has a 24-open-port limitation, and if both the v4 and v6 catch-all CIDRs are added, each is left with only 12 ports. A rough sketch of the suggested check is included after the example policy below.
Example policy:
apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  creationTimestamp: "2024-10-03T14:48:49Z"
  generateName: policy_name
  generation: 19
  name: <policy_name>-vl8w4
  namespace: default
  ownerReferences:
  - apiVersion: networking.k8s.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: NetworkPolicy
    name: policy_name
    uid: 7f6a8d5b-5f5b-491f-8761-945f26094d8f
  resourceVersion: "221306648"
  uid: 5685a8f2-1e18-44db-8eab-87ad61198aa5
spec:
  egress:
  - cidr: 0.0.0.0/0
    ports:
    - port: 49
      protocol: TCP
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
    - port: 443
      protocol: TCP
    - port: 465
      protocol: TCP
    - port: 514
      protocol: TCP
    - port: 3306
      protocol: TCP
    - port: 6379
      protocol: TCP
  - cidr: ::/0
    ports:
    - port: 49
      protocol: TCP
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
    - port: 443
      protocol: TCP
    - port: 465
      protocol: TCP
    - port: 514
      protocol: TCP
    - port: 3306
      protocol: TCP
    - port: 6379
      protocol: TCP
  - cidr:
  - cidr:
  - cidr:
  ingress:
  - cidr:
  - cidr: 0.0.0.0/0
    ports:
    - port: 7000
      protocol: TCP
    - port: 8080
      protocol: TCP
  - cidr: ::/0
    ports:
    - port: 7000
      protocol: TCP
    - port: 8080
      protocol: TCP
  - cidr:
  - cidr:
  podIsolation:
  - Ingress
  - Egress
  podSelector:
    matchLabels:
      orch: name
  podSelectorEndpoints:
  - hostIP:
    name:
    namespace: default
    podIP: <pod_ip>
  - hostIP: <host_ip>
    name: <pod_name>
    namespace: default
    podIP: <pod_ip>
  policyRef:
    name: <policy_name>
    namespace: default
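The ask, roughly, is to gate the ::/0 entry on the cluster's IP family before writing it into the PolicyEndpoint. A minimal Go sketch of that check (illustrative only; the type and function names below are assumptions for discussion, not the actual VPC CNI or network-policy-agent code):

package main

import "fmt"

// Illustrative sketch only: these types and the ipv6Enabled flag are
// assumptions, not the actual VPC CNI / network-policy-agent code.

type Port struct {
	Protocol string
	Port     int32
}

type EndpointRule struct {
	CIDR  string
	Ports []Port
}

// buildCatchAllRules builds the "allow anywhere" entries for a PolicyEndpoint.
// Suggested behavior: emit the ::/0 entry only when the cluster actually has
// an IPv6 CIDR, so the limited per-policy port entries (24 in VPC CNI 1.18.x)
// are not split 12/12 across address families on an IPv4-only cluster.
func buildCatchAllRules(ports []Port, ipv6Enabled bool) []EndpointRule {
	rules := []EndpointRule{{CIDR: "0.0.0.0/0", Ports: ports}}
	if ipv6Enabled {
		rules = append(rules, EndpointRule{CIDR: "::/0", Ports: ports})
	}
	return rules
}

func main() {
	ports := []Port{{Protocol: "TCP", Port: 443}, {Protocol: "UDP", Port: 53}}
	// IPv4-only cluster: only the 0.0.0.0/0 rule is emitted.
	fmt.Printf("%+v\n", buildCatchAllRules(ports, false))
}

The same check would apply to the ingress entries, which duplicate the port list for ::/0 in the same way in the example above.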
Thanks,
Vignesh