TrueNAS Scale 21.08 - Could not log into all portals #112
Comments
There are some known bugs (we didn't get the fixes slipped into 21.08, but already have fixes for them in nightlies). Is it the first target in the system? EDIT: said differently, did the system boot without any targets/luns? For reference, I think you're hitting this: https://jira.ixsystems.com/browse/NAS-111864 Others that didn't make it into 21.08: https://jira.ixsystems.com/browse/NAS-111870
Note, it's only an issue if the system boots without any targets/luns (unlikely with ongoing usage). If you find yourself in this situation, however, the following is a work-around (to run directly on the SCALE CLI) and must be done after a target/lun has been added:
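The exact work-around command did not survive the page copy. A hypothetical reconstruction, assuming (based on the later discussion about restarting the service) that the work-around is restarting the iSCSI target service through the TrueNAS middleware client:

```shell
# Hypothetical sketch -- the original command was lost in the copy.
# Once at least one target/LUN exists, restarting the iSCSI service forces
# the kernel target to pick up the configuration. "iscsitarget" is the
# SCALE service name; confirm it with `midclt call service.query`.
midclt call service.restart iscsitarget
```

This is appliance-specific and only meaningful on a TrueNAS SCALE host with at least one target/LUN already configured.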
Thank you for the advice. I have just done the following:
Does this bug mean that for any newly created Targets (even if the system booted with 1 or more targets created) the service will need restarting until patched?
No, as long as the system has at least 1 target/lun at boot time you should be fine from here on out.
Brilliant. When deleting that newly created Target, leaving a total of 0 on TrueNAS, any created after that point also require the service to be restarted. It appears it's not only at boot time.
That seems really odd. I guess create zvol/target/etc manually as a placeholder :(
Will give that a go. Thank you for all your help. Happy for this to be closed, unless you want to link it to the TrueNAS update which fixes it?
I've already linked them up and added a known-issues note to the README. I think we're all set at this point.
Any feedback you could share on the API-only drivers to this point?
I have noticed no unexpected behaviour! It really has made management a lot easier. I need to try storing my API key in a secret, but I can't imagine that messing with any functionality.
Great! The API key should already be in a secret (the whole config is one giant secret), at least if you used the Helm chart.
Hello there,
Firstly, thank you for making the driver API-only; I can sleep better without a root SSH key floating around.
I'm testing democratic-csi v1.3.0 (zfs-api-iscsi) on TrueNAS Scale 21.08, however I'm getting this error:
{"code":19,"stdout":"Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260] (multiple)\\n","stderr":"iscsiadm: Could not login to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260].\\niscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)\\niscsiadm: Could not log into all portals\\n"}
Cluster design is:
TrueNAS Scale 21.08 - 2xNICs
Rancher K3s running on 4 x Raspberry Pi 4 (3 manager, 1 worker)
My configuration is the following:
Everything on the TrueNAS side seems to be provisioning fine, but it's just the Kubernetes node side of things where the error seems to be.
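For reference (not part of the original report): exit code 19 from iscsiadm is a non-retryable login failure, which usually means the portal answered but is not actually exporting the requested target. The failing login can be reproduced by hand from an affected node; the IQN and portal below are taken from the error message above:

```shell
# Ask the portal what targets it actually exports
# (the csi-pvc-... IQN should appear in the output):
iscsiadm -m discovery -t sendtargets -p 10.80.0.2:3260

# Retry the login manually to see the raw failure:
iscsiadm -m node \
  -T iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster \
  -p 10.80.0.2:3260 --login
```

If discovery does not list the target, the problem is on the TrueNAS side (consistent with the NAS-111864 bug discussed above), not on the initiator nodes.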