
TrueNAS Scale 21.08 - Could not log into all portals #112

Closed
jamesagarside opened this issue Sep 3, 2021 · 11 comments

@jamesagarside

Hello there,

Firstly thank you for making the driver API only, can sleep better without having a root SSH key floating around.

I'm testing democratic-csi v1.3.0 - zfs-api-iscsi on TrueNAS Scale 21.08; however, I'm getting the error:
{"code":19,"stdout":"Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260] (multiple)\\n","stderr":"iscsiadm: Could not login to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260].\\niscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)\\niscsiadm: Could not log into all portals\\n"}'

Cluster design is:
TrueNAS Scale 21.08 - 2xNICs
Rancher K3s running on 4 x Raspberry Pi 4 (3 manager, 1 worker)

My configuration is the following:

csiDriver:
  name: "org.democratic-csi.iscsi"
storageClasses:
- name: freenas-iscsi-csi
  defaultClass: true
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: xfs
  mountOptions: []
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:
driver:
  config: 
    driver: freenas-api-iscsi
    instance_id: aquila
    httpConnection:
      protocol: https
      host: 192.168.50.10
      port: 443
      apiKey: <key>
      allowInsecure: true
      apiVersion: 2
    zfs:
      datasetParentName: cold/k8s/iscsi/v
      detachedSnapshotsDatasetParentName: cold/k8s/iscsi/s
      zvolCompression:
      zvolDedup:
      zvolEnableReservation: false
      zvolBlocksize:
    iscsi:
      targetPortal:  "10.80.0.2:3260"
      interface: eth0
      namePrefix: csi-
      nameSuffix: "-cluster"
      targetGroups:
        - targetGroupPortalGroup: 1
          targetGroupInitiatorGroup: 3
          targetGroupAuthType: None
          targetGroupAuthGroup: null
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      extentBlocksize: 4096
      extentRpm: "7200"
      extentAvailThreshold: 0

Everything on the TrueNAS side seems to be provisioning fine; it's just on the Kubernetes node side that the error appears.
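
To double-check the TrueNAS side independently of the driver, the iSCSI objects it created can be listed via the v2.0 REST API (a sketch, assuming the host and API key from the config above; -k mirrors allowInsecure):

# list the targets and extents the driver has created (replace <key> with the API key)
curl -k -H "Authorization: Bearer <key>" https://192.168.50.10/api/v2.0/iscsi/target
curl -k -H "Authorization: Bearer <key>" https://192.168.50.10/api/v2.0/iscsi/extent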

@travisghansen
Member

travisghansen commented Sep 3, 2021

There are some known bugs (we didn’t get the fixes slipped into 21.08 but already have fixes for them in nightlies). Is it the first target in the system?

EDIT: said differently, did the system boot without any targets/luns?

For reference I think you're hitting this: https://jira.ixsystems.com/browse/NAS-111864

Others that didn't make it into 21.08: https://jira.ixsystems.com/browse/NAS-111870
Everything that did make it into 21.08: https://jira.ixsystems.com/browse/NAS-110637

@travisghansen
Member

travisghansen commented Sep 3, 2021

Note, it's an issue if the system boots without any targets/luns (unlikely with ongoing usage). If you find yourself in this situation, however, the following is a work-around (run directly on the SCALE CLI) and must be done after a target/lun has been added: systemctl restart scst
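
A minimal sketch of the workaround as run on the SCALE shell (the status check is just for confirmation):

# run after at least one target/lun exists
systemctl restart scst
# confirm the service came back up cleanly
systemctl status scst --no-pager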

@jamesagarside
Author

Thank you for the advice.

I have just done the following:

  1. Created a Target by creating a PVC in Kubernetes (a sketch of such a PVC is below)
  2. Ran systemctl restart scst
    This resulted in the pod being able to mount the Target.
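
For reference, a PVC along these lines is enough to trigger Target creation (a sketch; the name is illustrative, the storage class is the one from the config above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-iscsi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: freenas-iscsi-csi
  resources:
    requests:
      storage: 1Gi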

Does this bug mean that, for any newly created Targets (even if the system booted with one or more Targets already created), the service will need restarting until this is patched?

@travisghansen
Member

No, as long as the system has at least 1 target/lun at boot time you should be fine from here on out.

@jamesagarside
Author

Brilliant.
The next thing I've just experienced:

When deleting that newly created Target, leaving a total of 0 on TrueNAS, any Targets created after that point also require the service to be restarted. So it appears it's not only at boot time.

@travisghansen
Member

That seems really odd. I guess you could create a zvol/target/etc manually as a placeholder :(
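
If going that route, the zvol half of the placeholder could be created from the SCALE shell (a sketch; the path "cold/placeholder" is made up and kept outside the CSI-managed parent dataset), with the matching target/extent then added once in the UI and simply left in place:

# hypothetical sparse 1G placeholder zvol, outside datasetParentName
zfs create -s -V 1G cold/placeholder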

@jamesagarside
Author

Will give that a go. Thank you for all your help. Happy for this to be closed unless you want to link it to the TrueNAS update which fixes it?

@travisghansen
Member

I’ve already linked them up and added a known-issues section to the README. I think we’re all set at this point.

@travisghansen
Member

Any feedback you could share on the API-only drivers at this point?

@jamesagarside
Author

I have noticed no unexpected behaviour! It really has made management a lot easier. I still need to try storing my API key in a secret, but I can't imagine that messing with any functionality.

@travisghansen
Member

Great! The API key should already be in a secret (the whole config is one giant secret), at least if you used the Helm chart.
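
A quick way to confirm (a sketch; the namespace and Secret name depend on how the Helm release was installed):

# list the secrets created by the release, then inspect the one holding the driver config
kubectl -n democratic-csi get secrets
kubectl -n democratic-csi get secret <driver-config-secret> -o yaml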
