
initiator reported error (19 - encountered non-retryable iSCSI login failure) #55678

Closed
f7r opened this Issue Nov 14, 2017 · 4 comments


f7r commented Nov 14, 2017

I want to use iSCSI on Kubernetes 1.8.2, but the kubelet log shows the following error:

Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.553260    1188 iscsi_util.go:233] iscsi: failed to rescan session with error: iscsiadm: No session found.
Nov 14 16:41:09 server-222 kubelet: (exit status 21)
Nov 14 16:41:09 server-222 kernel: scsi host18: iSCSI Initiator over TCP/IP
Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.632195    1188 iscsi_util.go:293] iscsi: failed to get any path for iscsi disk, last err seen:
Nov 14 16:41:09 server-222 kubelet: iscsi: failed to attach disk: Error: iscsiadm: Could not login to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260].
Nov 14 16:41:09 server-222 kubelet: iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Nov 14 16:41:09 server-222 kubelet: iscsiadm: Could not log into all portals
Nov 14 16:41:09 server-222 kubelet: Logging in to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260] (multiple)
Nov 14 16:41:09 server-222 kubelet: (exit status 19)
Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.632435    1188 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/iscsi/172.22.117.221:3260:iqn.2017-11.cn.falseuser:storage.target00:0\"" failed. No retries permitted until 2017-11-14 16:41:41.632354441 +0800 CST (durationBeforeRetry 32s). Error: MountVolume.WaitForAttach failed for volume "is" (UniqueName: "kubernetes.io/iscsi/172.22.117.221:3260:iqn.2017-11.cn.falseuser:storage.target00:0") pod "pod-volume" (UID: "336d1e55-c917-11e7-8221-0e67df33d01b") : failed to get any path for iscsi disk, last err seen:
Nov 14 16:41:09 server-222 kubelet: iscsi: failed to attach disk: Error: iscsiadm: Could not login to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260].
Nov 14 16:41:09 server-222 kubelet: iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Nov 14 16:41:09 server-222 kubelet: iscsiadm: Could not log into all portals
Nov 14 16:41:09 server-222 kubelet: Logging in to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260] (multiple)
Nov 14 16:41:09 server-222 kubelet: (exit status 19)

My iSCSI volume config YAML is:

  - name: is
    iscsi:
      targetPortal: 172.22.117.221:3260
      iqn: iqn.2017-11.cn.falseuser:storage.target00
      lun: 0
      fsType: ext4
      readOnly: false
      chapAuthDiscovery: true
      chapAuthSession: true
      secretRef:
        name: chap-secret
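
For completeness, the chap-secret referenced by secretRef is defined roughly as below. The type and key names follow the Kubernetes iSCSI CHAP convention; the credential values here are placeholders:

  apiVersion: v1
  kind: Secret
  metadata:
    name: chap-secret
  type: "kubernetes.io/iscsi-chap"
  stringData:
    discovery.sendtargets.auth.username: <discovery-user>
    discovery.sendtargets.auth.password: <discovery-password>
    node.session.auth.username: <session-user>
    node.session.auth.password: <session-password>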

Connecting to the iSCSI target manually on the node works (a typical test sequence is sketched after this paragraph), but through Kubernetes it fails.
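
A manual login test of this kind, using the same portal and IQN as the volume config above, looks roughly like this; the CHAP credentials are placeholders:

  # Discover targets advertised by the portal
  iscsiadm -m discovery -t sendtargets -p 172.22.117.221:3260
  # Set CHAP parameters for the session (values must match the target's configuration)
  iscsiadm -m node -T iqn.2017-11.cn.falseuser:storage.target00 -p 172.22.117.221:3260 --op update -n node.session.auth.authmethod -v CHAP
  iscsiadm -m node -T iqn.2017-11.cn.falseuser:storage.target00 -p 172.22.117.221:3260 --op update -n node.session.auth.username -v <chap-user>
  iscsiadm -m node -T iqn.2017-11.cn.falseuser:storage.target00 -p 172.22.117.221:3260 --op update -n node.session.auth.password -v <chap-password>
  # Log in to the target
  iscsiadm -m node -T iqn.2017-11.cn.falseuser:storage.target00 -p 172.22.117.221:3260 --login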

f7r commented Nov 14, 2017

/sig storage

fejta-bot commented Feb 12, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented Mar 14, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented Apr 13, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
