Segmentation violation when creating cluster in GCE #3375

Closed
mateuszkwiatkowski opened this issue Sep 13, 2017 · 7 comments
Assignees: justinsb
Labels: area/gce, lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)

Comments

@mateuszkwiatkowski

Hello,

I'm working on creating a cluster with kops on Google Cloud. It works fine with public networking, but when I try to create a cluster with private networking it fails with a segmentation violation (log below). I'm aware this is probably not a supported configuration, but I thought I should report it anyway because it's not documented (that is reported here: #2081).

Thank you for your hard work on Kops!

export KOPS_STATE_STORE="gs://k8s-state-store"
export ZONES="us-east1-b,us-east1-c,us-east1-d"
export KOPS_FEATURE_FLAGS=AlphaAllowGCE

$ kops create cluster gce.example.com \
  --zones $ZONES \
  --master-zones $ZONES \
  --node-count 3 \
  --project tenfold-backend-services \
  --image "ubuntu-os-cloud/ubuntu-1604-xenial-v20170202" \
  --topology private \
  --bastion="true" \
  --networking calico \
  --yes
I0912 15:14:21.766158   17853 create_cluster.go:659] Inferred --cloud=gce from zone "us-east1-b"
I0912 15:14:21.766256   17853 create_cluster.go:845] Using SSH public key: /home/kwiat/.ssh/id_rsa.pub
I0912 15:14:21.766335   17853 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east1-b
I0912 15:14:21.766348   17853 subnets.go:183] Assigned CIDR 172.20.64.0/19 to subnet us-east1-c
I0912 15:14:21.766357   17853 subnets.go:183] Assigned CIDR 172.20.96.0/19 to subnet us-east1-d
I0912 15:14:21.766367   17853 subnets.go:197] Assigned CIDR 172.20.0.0/22 to subnet utility-us-east1-b
I0912 15:14:21.766377   17853 subnets.go:197] Assigned CIDR 172.20.4.0/22 to subnet utility-us-east1-c
I0912 15:14:21.766387   17853 subnets.go:197] Assigned CIDR 172.20.8.0/22 to subnet utility-us-east1-d
I0912 15:14:21.767650   17853 clouddns.go:86] Using DefaultTokenSource &oauth2.reuseTokenSource{new:(*oauth2.tokenRefresher)(0xc420825c20), mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc420073f80)}
I0912 15:14:21.767680   17853 clouddns.go:100] Successfully got DNS service: &{0xc420825d10 https://www.googleapis.com/dns/v1/projects/  0xc42000d490 0xc42000d498 0xc42000d4a0 0xc42000d4a8}
I0912 15:14:29.098520   17853 clouddns.go:86] Using DefaultTokenSource &oauth2.reuseTokenSource{new:(*oauth2.tokenRefresher)(0xc4208244e0), mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc42041c060)}
I0912 15:14:29.098571   17853 clouddns.go:100] Successfully got DNS service: &{0xc4208245a0 https://www.googleapis.com/dns/v1/projects/  0xc42000c7a8 0xc42000c7b0 0xc42000c7b8 0xc42000c7c0}
I0912 15:14:31.285743   17853 clouddns.go:86] Using DefaultTokenSource &oauth2.reuseTokenSource{new:(*oauth2.tokenRefresher)(0xc4208250b0), mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(0xc42041c3c0)}
I0912 15:14:31.285782   17853 clouddns.go:100] Successfully got DNS service: &{0xc4208252c0 https://www.googleapis.com/dns/v1/projects/  0xc42000c868 0xc42000c870 0xc42000c878 0xc42000c880}
W0912 15:14:31.693034   17853 external_access.go:36] TODO: Harmonize gcemodel ExternalAccessModelBuilder with awsmodel
W0912 15:14:31.693051   17853 firewall.go:35] TODO: Harmonize gcemodel with awsmodel for firewall - GCE model is way too open
W0912 15:14:31.693063   17853 firewall.go:63] Adding overlay network for X -> node rule - HACK
W0912 15:14:31.693067   17853 firewall.go:64] We should probably use subnets?
W0912 15:14:31.693082   17853 firewall.go:118] Adding overlay network for X -> master rule - HACK
I0912 15:14:35.270196   17853 executor.go:91] Tasks: 0 done / 57 total; 34 can run
I0912 15:14:35.659471   17853 address.go:121] GCE creating address: "api-gce-example-com"
I0912 15:14:36.359846   17853 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0912 15:14:36.410310   17853 vfs_castore.go:422] Issuing new certificate: "kops"
I0912 15:14:36.412440   17853 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager"
I0912 15:14:36.532438   17853 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0912 15:14:36.651078   17853 vfs_castore.go:422] Issuing new certificate: "kube-scheduler"
I0912 15:14:36.701281   17853 vfs_castore.go:422] Issuing new certificate: "kube-proxy"
I0912 15:14:41.107651   17853 executor.go:91] Tasks: 34 done / 57 total; 14 can run
I0912 15:14:41.680775   17853 instancetemplate.go:203] We should be using NVME for GCE
I0912 15:14:41.993158   17853 instancetemplate.go:203] We should be using NVME for GCE
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x119b5e2]
goroutine 285 [running]:
k8s.io/kops/upup/pkg/fi.(*ResourceHolder).AsString(0x0, 0xc421215650, 0xc42120d560, 0x0, 0x1)
    /go/src/k8s.io/kops/upup/pkg/fi/resources.go:230 +0x22
k8s.io/kops/upup/pkg/fi/cloudup/gcetasks.(*InstanceTemplate).mapToGCE(0xc42041fe60, 0xc42020e8a0, 0x18, 0x0, 0xc420188e60, 0x0)
    /go/src/k8s.io/kops/upup/pkg/fi/cloudup/gcetasks/instancetemplate.go:258 +0xa3c
k8s.io/kops/upup/pkg/fi/cloudup/gcetasks.(*InstanceTemplate).Find(0xc42041fe60, 0xc42122c000, 0x0, 0x0, 0x0)
    /go/src/k8s.io/kops/upup/pkg/fi/cloudup/gcetasks/instancetemplate.go:80 +0x2c4
reflect.Value.call(0x32a9680, 0xc42041fe60, 0xa13, 0x340362d, 0x4, 0xc4200802a0, 0x1, 0x1, 0xc42000c038, 0x13, ...)
    /usr/local/go/src/reflect/value.go:434 +0x91f
reflect.Value.Call(0x32a9680, 0xc42041fe60, 0xa13, 0xc4200802a0, 0x1, 0x1, 0xc42041fe60, 0xa13, 0xc42076a000)
    /usr/local/go/src/reflect/value.go:302 +0xa4
k8s.io/kops/upup/pkg/fi/utils.InvokeMethod(0x32a9680, 0xc42041fe60, 0x34036ad, 0x4, 0xc421220df8, 0x1, 0x1, 0x0, 0x0, 0x469cf2, ...)
    /go/src/k8s.io/kops/upup/pkg/fi/utils/reflect.go:75 +0x40e
k8s.io/kops/upup/pkg/fi.invokeFind(0x4d32400, 0xc42041fe60, 0xc42122c000, 0x0, 0x0, 0xc42005e000, 0x8)
    /go/src/k8s.io/kops/upup/pkg/fi/default_methods.go:115 +0xc1
k8s.io/kops/upup/pkg/fi.DefaultDeltaRunMethod(0x4d32400, 0xc42041fe60, 0xc42122c000, 0xc420119c08, 0xc4203b69e0)
    /go/src/k8s.io/kops/upup/pkg/fi/default_methods.go:45 +0x7a4
k8s.io/kops/upup/pkg/fi/cloudup/gcetasks.(*InstanceTemplate).Run(0xc42041fe60, 0xc42122c000, 0x16, 0xc420887778)
    /go/src/k8s.io/kops/upup/pkg/fi/cloudup/gcetasks/instancetemplate.go:171 +0x41
k8s.io/kops/upup/pkg/fi.(*executor).forkJoin.func1(0xc420b220e0, 0xe, 0xe, 0xc420424440, 0xc42000c078, 0xc420119c00, 0x2)
    /go/src/k8s.io/kops/upup/pkg/fi/executor.go:158 +0x1ea
created by k8s.io/kops/upup/pkg/fi.(*executor).forkJoin
    /go/src/k8s.io/kops/upup/pkg/fi/executor.go:159 +0x123
@chrislovecnm
Contributor

Yes, this configuration is not supported, and we should fix the code to not allow private topology ;)

@chrislovecnm
Contributor

/area gce

@justinsb self-assigned this on Oct 10, 2017
@shyamram

Is there a workaround or any other option for using private topology with kops while this feature is enabled?

@chrislovecnm
Contributor

@shyamram private subnets are not supported in GCE AFAIK

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 8, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
