iSCSI segfault when provisioning volume #15

Closed
eons44 opened this issue Aug 31, 2019 · 2 comments


eons44 commented Aug 31, 2019

Here's the error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x1021af0]

goroutine 8 [running]:
_/opt/edgefs/src/csi/edgefs-csi/csi/edgefs.(*EdgeFS).ListServices(0xc4204fa180, 0x0, 0x0, 0x0, 0x144c7e0, 0xc4200ef340, 0xc4200be070, 0x1141920, 0xc420505b60)
	/opt/edgefs/src/csi/edgefs-csi/csi/edgefs/edgefs.go:650 +0x4f0
_/opt/edgefs/src/csi/edgefs-csi/csi/edgefs.(*EdgeFS).GetClusterData(0xc4204fa180, 0x0, 0x0, 0x0, 0x4, 0xc420044570, 0xb, 0x1, 0x144bc00)
	/opt/edgefs/src/csi/edgefs-csi/csi/edgefs/edgefs.go:714 +0xa3
_/opt/edgefs/src/csi/edgefs-csi/csi/edgefs.(*EdgeFS).CreateIscsiVolume(0xc4204fa180, 0xc420110040, 0x40, 0x0, 0x0, 0x140000000, 0xc420338540, 0xc4204fa180, 0x0, 0x0, ...)
	/opt/edgefs/src/csi/edgefs-csi/csi/edgefs/edgefs.go:319 +0x37e
_/opt/edgefs/src/csi/edgefs-csi/csi/drivers/iscsi.(*ControllerServer).CreateVolume(0xc4200be630, 0x146dfe0, 0xc420338390, 0xc4203a8150, 0xc4200be630, 0xc4203a8150, 0x0)
	/opt/edgefs/src/csi/edgefs-csi/csi/drivers/iscsi/controllerserver.go:149 +0xb88
github.com/container-storage-interface/spec/lib/go/csi._Controller_CreateVolume_Handler.func1(0x146dfe0, 0xc420338390, 0x12dfe80, 0xc4203a8150, 0x20, 0x18, 0x1203140, 0x0)
	/opt/edgefs/src/csi/edgefs-csi/src/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:4947 +0x86
_/opt/edgefs/src/csi/edgefs-csi/csi.(*Driver).grpcErrorHandler(0xc420292cc0, 0x146dfe0, 0xc420338390, 0x12dfe80, 0xc4203a8150, 0xc42000a460, 0xc42000a480, 0x20, 0x1203140, 0xc420083a01, ...)
	/opt/edgefs/src/csi/edgefs-csi/csi/driver.go:203 +0x61
_/opt/edgefs/src/csi/edgefs-csi/csi.(*Driver).(_/opt/edgefs/src/csi/edgefs-csi/csi.grpcErrorHandler)-fm(0x146dfe0, 0xc420338390, 0x12dfe80, 0xc4203a8150, 0xc42000a460, 0xc42000a480, 0x11ca680, 0x1bc5958, 0x12fc320, 0xc4203e6000)
	/opt/edgefs/src/csi/edgefs-csi/csi/driver.go:138 +0x73
github.com/container-storage-interface/spec/lib/go/csi._Controller_CreateVolume_Handler(0x12b5100, 0xc4200be630, 0x146dfe0, 0xc420338390, 0xc4201283c0, 0xc420291e20, 0x0, 0x0, 0xc420090070, 0x70)
	/opt/edgefs/src/csi/edgefs-csi/src/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go:4949 +0x167
google.golang.org/grpc.(*Server).processUnaryRPC(0xc420259980, 0x147a540, 0xc4202faa80, 0xc4203e6000, 0xc4202ada10, 0x1b991a0, 0x0, 0x0, 0x0)
	/opt/edgefs/src/csi/edgefs-csi/src/google.golang.org/grpc/server.go:998 +0x4bc
google.golang.org/grpc.(*Server).handleStream(0xc420259980, 0x147a540, 0xc4202faa80, 0xc4203e6000, 0x0)
	/opt/edgefs/src/csi/edgefs-csi/src/google.golang.org/grpc/server.go:1278 +0xe02
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42035a040, 0xc420259980, 0x147a540, 0xc4202faa80, 0xc4203e6000)
	/opt/edgefs/src/csi/edgefs-csi/src/google.golang.org/grpc/server.go:717 +0x9f
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/opt/edgefs/src/csi/edgefs-csi/src/google.golang.org/grpc/server.go:715 +0xa1

Full log available at:
logs-from-driver-in-edgefs-iscsi-csi-controller-0.txt
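
If it helps triage: the panic is a nil pointer dereference inside (*EdgeFS).ListServices (edgefs.go:650), reached through CreateIscsiVolume -> GetClusterData, so it looks like something in the service-listing path gets dereferenced without a nil check. Below is a minimal sketch of the kind of guard that would turn this class of crash into a returned error; the Service/Provider types and the provider call are hypothetical stand-ins, not the actual edgefs-csi code:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the real edgefs-csi types.
type Service struct{ Name string }

type Provider interface {
	ListServices() ([]Service, error)
}

type EdgeFS struct {
	provider Provider // may be nil if cluster discovery failed
}

// ListServices checks every pointer before dereferencing it, so a missing
// or misconfigured backend service comes back as an error instead of a
// SIGSEGV that kills the whole controller pod.
func (e *EdgeFS) ListServices() ([]Service, error) {
	if e == nil || e.provider == nil {
		return nil, errors.New("edgefs: provider not initialized")
	}
	services, err := e.provider.ListServices()
	if err != nil {
		return nil, fmt.Errorf("edgefs: list services: %v", err)
	}
	if len(services) == 0 {
		return nil, errors.New("edgefs: no services configured")
	}
	return services, nil
}

func main() {
	var e *EdgeFS // simulate an uninitialized driver
	if _, err := e.ListServices(); err != nil {
		fmt.Println("graceful error instead of panic:", err)
	}
}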

Here's the applied yaml:
edgefs.zip

Here are the commands I ran in the mgr pod toolbox:

efscli system init
efscli cluster create cltest
efscli tenant create cltest/test
efscli bucket create cltest/test/files-iscsi -s 512K -r 2 -R 1 -t 1
efscli service create iscsi files-iscsi
efscli service serve files-iscsi cltest/test/files-iscsi/lun1

I've applied Weave Net and the Kubernetes dashboard, but besides that this is a fresh cluster (it has been rebooted, though).

Node status:

ServerID 149537A1FFF0F80253CC54C38B65E390 blackmarsh:blackmarsh-0 ONLINE
ServerID D970BBC047EE482D80CA0DC93E609145 hammerfell:hammerfell-0 ONLINE
ServerID 16E1E95A33A4EAC97CEBDEF9853F0930 elsweyr:elsweyr-0 ONLINE

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

This is a bare-metal cluster with 3 identical nodes, each running the following:

Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.2 LTS
Release:	18.04
Codename:	bionic

Linux $HOSTNAME 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Node hardware: https://eons.dev/project/cloud-storage-over-wan/#hardware

sabbot (Collaborator) commented Sep 1, 2019

Greetings. It looks like you have an abandoned iscsi01 Kubernetes service. Could you remove it from the cluster? I can see its definition in 06_edgefs-iscsi.yaml. How did you create the files-iscsi Kubernetes service? Via an additional YAML?

apiVersion: edgefs.rook.io/v1beta1
kind: ISCSI
metadata:
  name: iscsi01
  namespace: rook-edgefs

After removing it, try again and watch the logs. Thanks.


eons44 commented Sep 2, 2019

Thanks, sabbot!

It works!

I followed your advice and removed that portion from the YAML. I was wondering why that was there, but didn't question it any further.
Besides that, I tore down my EdgeFS cluster (but not the k8s cluster) and kept the same config, with the exception of explicitly stating my nodes and devices (I think I wasn't able to do that before because my config wasn't indented properly).

Anyway, thanks again!

eons44 closed this as completed Sep 2, 2019