What happened:
We upgraded from 1.7.1 to 1.7.3, and on a single node the aws-node entrypoint log showed the following:
Copying CNI plugin binaries ...
+ PLUGIN_BINS='loopback portmap bandwidth aws-cni-support.sh'
+ for b in '$PLUGIN_BINS'
+ '[' '!' -f loopback ']'
+ for b in '$PLUGIN_BINS'
+ '[' '!' -f portmap ']'
+ for b in '$PLUGIN_BINS'
+ '[' '!' -f bandwidth ']'
+ for b in '$PLUGIN_BINS'
+ '[' '!' -f aws-cni-support.sh ']'
+ HOST_CNI_BIN_PATH=/host/opt/cni/bin
+ echo 'Copying CNI plugin binaries ... '
+ for b in '$PLUGIN_BINS'
+ install loopback /host/opt/cni/bin
+ for b in '$PLUGIN_BINS'
+ install portmap /host/opt/cni/bin
+ for b in '$PLUGIN_BINS'
+ install bandwidth /host/opt/cni/bin
+ for b in '$PLUGIN_BINS'
+ install aws-cni-support.sh /host/opt/cni/bin
+ echo 'Configure rp_filter loose... '
Configure rp_filter loose...
++ curl -X PUT http://169.254.169.254/latest/api/token -H 'X-aws-ec2-metadata-token-ttl-seconds: 60'
+ TOKEN=xxxxx
++ curl -H 'X-aws-ec2-metadata-token: xxxx' http://169.254.169.254/latest/meta-data/local-ipv4
+ HOST_IP=10.63.7.25
++ ip -4 -o a
++ grep 10.63.7.25
++ awk '{print $2}'
+ PRIMARY_IF='eth0
eth1
eth2'
+ sysctl -w 'net.ipv4.conf.eth0
eth1
eth2.rp_filter=2'
sysctl: cannot stat /proc/sys/net/ipv4/conf/eth0
eth1
eth2/rp_filter: No such file or directory
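A plausible cause, judging from the trace above, is that the entrypoint pipes `ip -4 -o a` through an unanchored `grep $HOST_IP`: if secondary ENI addresses happen to contain the host IP as a prefix (e.g. 10.63.7.250 contains 10.63.7.25), multiple lines match and `PRIMARY_IF` becomes a multi-line string, which then breaks the `sysctl` call. The sketch below reconstructs this with made-up sample output (the interface addresses are illustrative assumptions, not the real node's) and shows an exact-match variant that would only select the primary interface:

```shell
#!/bin/sh
# Hypothetical sample of `ip -4 -o a` output: secondary ENI addresses
# share the host IP as a string prefix (addresses invented for illustration).
ip_out='2: eth0    inet 10.63.7.25/19 scope global eth0
3: eth1    inet 10.63.7.250/19 scope global eth1
4: eth2    inet 10.63.7.251/19 scope global eth2'

HOST_IP=10.63.7.25

# Unanchored grep, as in the entrypoint trace: matches all three lines,
# so PRIMARY_IF ends up containing "eth0\neth1\neth2".
broken=$(printf '%s\n' "$ip_out" | grep "$HOST_IP" | awk '{print $2}')
echo "broken: $broken"

# Exact comparison against the address field (prefix length stripped)
# selects only the interface that actually carries HOST_IP.
fixed=$(printf '%s\n' "$ip_out" | awk -v ip="$HOST_IP" \
    '{ split($4, a, "/"); if (a[1] == ip) print $2 }')
echo "fixed: $fixed"
```

With the exact match, `sysctl -w "net.ipv4.conf.${fixed}.rp_filter=2"` would receive a single interface name instead of a newline-joined list.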
Out of around 50 nodes, only a single node had this issue.
System Info:
Machine ID: ec2808bea1300938b8f094dc685471a3
System UUID: EC2808BE-A130-0938-B8F0-94DC685471A3
Boot ID: 5fef8bf0-fedb-49a0-be74-d88fa9de552a
Kernel Version: 4.14.193-149.317.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.6
Kubelet Version: v1.17.9-eks-4c6976
Kube-Proxy Version: v1.17.9-eks-4c6976
ProviderID: aws:///us-east-1f/i-0115cbe4679fb31d2
machine is: m5.xlarge
What you expected to happen:
I would expect all nodes to behave similarly; it seems some case was not handled. When checking the ENI interfaces, there were 3 attached to the node, similar to the other nodes as well.
How to reproduce it (as minimally and precisely as possible):
Not sure how to reproduce it, since it only happened on 1 of around 50 nodes.
@s4mur4i - the 1.7.5 release - https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.7.5 - went out yesterday and includes the fix for this issue. Kindly try it out. Closing this issue for now.