Partition "aws" is not valid for resource "arn:aws:ec2:*:*:network-interface/*" in AWS China regions #6763

Closed
pahud opened this issue Apr 10, 2019 · 2 comments

pahud commented Apr 10, 2019

1. What kops version are you running? The command kops version will display
this information.

Version 1.12.0-beta.1 (git-70edcc97a)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

cluster not running

3. What cloud provider are you using?

aws

4. What commands did you run? What is the simplest way to reproduce this issue?

kops update cluster

5. What happened after the commands executed?

Got a lot of errors:

W0410 06:12:21.394641    6729 executor.go:130] error running task "IAMRolePolicy/masters.cluster.zhy.k8s.local" (9m50s remaining to succeed): error creating/updating IAMRolePolicy: MalformedPolicyDocument: Partition "aws" is not valid for resource "arn:aws:ec2:*:*:network-interface/*".
        status code: 400, request id: 948aafc6-5b57-11e9-ae43-9fcad8522885
W0410 06:12:21.394677    6729 executor.go:130] error running task "IAMRolePolicy/nodes.cluster.zhy.k8s.local" (9m50s remaining to succeed): error creating/updating IAMRolePolicy: MalformedPolicyDocument: Partition "aws" is not valid for resource "arn:aws:ec2:*:*:network-interface/*".

6. What did you expect to happen?

No errors.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

$ kops get --name cluster.zhy.k8s.local -o yaml
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2019-04-10T06:07:49Z
  name: cluster.zhy.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  assets:
    containerRegistry: 937788672844.dkr.ecr.cn-north-1.amazonaws.com.cn
    fileRepository: https://s3.cn-north-1.amazonaws.com.cn/kops-bjs/fileRepository/
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://pahud-kops-state-store-zhy/cluster.zhy.k8s.local
  docker:
    logDriver: ""
    registryMirrors:
    - https://registry.docker-cn.com
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-cn-northwest-1a
      name: a
    - instanceGroup: master-cn-northwest-1b
      name: b
    - instanceGroup: master-cn-northwest-1c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-cn-northwest-1a
      name: a
    - instanceGroup: master-cn-northwest-1b
      name: b
    - instanceGroup: master-cn-northwest-1c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: https://s3.cn-north-1.amazonaws.com.cn/kubernetes-release/release/v1.11.9
  masterInternalName: api.internal.cluster.zhy.k8s.local
  masterPublicName: api.cluster.zhy.k8s.local
  networkCIDR: 10.0.0.0/16
  networkID: vpc-bb3e99d2
  networking:
    amazonvpc: {}
  nonMasqueradeCIDR: 10.0.0.0/16
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 10.0.32.0/19
    name: cn-northwest-1a
    type: Public
    zone: cn-northwest-1a
  - cidr: 10.0.64.0/19
    name: cn-northwest-1b
    type: Public
    zone: cn-northwest-1b
  - cidr: 10.0.96.0/19
    name: cn-northwest-1c
    type: Public
    zone: cn-northwest-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-04-10T06:07:49Z
  labels:
    kops.k8s.io/cluster: cluster.zhy.k8s.local
  name: master-cn-northwest-1a
spec:
  image: ami-0773341917796083a
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-cn-northwest-1a
  role: Master
  subnets:
  - cn-northwest-1a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-04-10T06:07:49Z
  labels:
    kops.k8s.io/cluster: cluster.zhy.k8s.local
  name: master-cn-northwest-1b
spec:
  image: ami-0773341917796083a
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-cn-northwest-1b
  role: Master
  subnets:
  - cn-northwest-1b

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-04-10T06:07:49Z
  labels:
    kops.k8s.io/cluster: cluster.zhy.k8s.local
  name: master-cn-northwest-1c
spec:
  image: ami-0773341917796083a
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-cn-northwest-1c
  role: Master
  subnets:
  - cn-northwest-1c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-04-10T06:07:50Z
  labels:
    kops.k8s.io/cluster: cluster.zhy.k8s.local
  name: nodes
spec:
  image: ami-0773341917796083a
  machineType: c5.large
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - cn-northwest-1a
  - cn-northwest-1b
  - cn-northwest-1c

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?

Resource: stringorslice.Slice([]string{"arn:aws:ec2:*:*:network-interface/*"}),

Please check whether the current AWS_REGION starts with cn-; if it does, the ARN prefix should be arn:aws-cn:, otherwise there will be a partition error.
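
This is not the actual kops implementation, just a minimal Go sketch of the idea with hypothetical names, showing how the partition could be derived from the region prefix before the resource ARN is built:

package main

import (
	"fmt"
	"strings"
)

// arnPartition picks the ARN partition from the region name.
// Regions prefixed with "cn-" belong to the "aws-cn" partition,
// GovCloud regions ("us-gov-") to "aws-us-gov", everything else to "aws".
func arnPartition(region string) string {
	switch {
	case strings.HasPrefix(region, "cn-"):
		return "aws-cn"
	case strings.HasPrefix(region, "us-gov-"):
		return "aws-us-gov"
	default:
		return "aws"
	}
}

func main() {
	resource := fmt.Sprintf("arn:%s:ec2:*:*:network-interface/*", arnPartition("cn-northwest-1"))
	fmt.Println(resource) // prints arn:aws-cn:ec2:*:*:network-interface/*
}

With a helper along these lines, the IAMRolePolicy tasks would emit arn:aws-cn:... resources in cn-north-1 and cn-northwest-1 while keeping arn:aws:... everywhere else.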

@bksteiny
Contributor

@pahud - I have a PR for this here: #6762

Also relates to: #6754

@mikesplain
Contributor

Thanks @bksteiny. @pahud, this has been fixed in master and the 1.12 branch. It should be fixed in the next release, so I'm going to close this.

Thanks!
