iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure) / Could not log into all portals #140
Comments
Yeah, get the scst logs from the server to see what's going on server side.
Edit -- Also, two days ago (when I first started seeing these errors), I got a different reason for the error (cannot allocate memory). A reboot seems to have resolved that problem, though.
Use journalctl to get full logs. Probably send over the scst.conf file and the output of lsmod as well.
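A minimal sketch of collecting that data on SCALE, assuming the `scst` unit name (which the restart command later in this thread suggests) and the default `/etc/scst.conf` config path:

```sh
# Full scst service log for the current boot
journalctl -u scst --no-pager -b > scst-journal.log

# The generated SCST configuration currently in effect
cat /etc/scst.conf > scst.conf.txt

# Confirm the scst kernel modules are loaded
lsmod > lsmod.txt
```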
Requested data attached
Things generally look sane with the exception of the scst.conf file. It appears the extents have disappeared (for the nextcloud volumes). If you look at the SCALE admin UI, how many extents do you see in the list? Essentially the targets are pointing to non-existent extents in the config file, which I'm guessing would explain the failures. If the extents do show up in the admin UI, then there's some breakdown in the config file generation process; if they do not show up in the admin UI, then it begs the question how/why they got deleted.
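For illustration, a hypothetical scst.conf fragment showing the kind of mismatch described above: the `TARGET` block references a device name with no matching `DEVICE` entry, so logins against that target fail. All names here are made up.

```sh
# HANDLER section holds the extent (device) definitions. The nextcloud
# extent is missing here, which is the breakdown described above.
HANDLER vdisk_blockio {
        DEVICE other-extent {
                filename /dev/zvol/tank/other-volume
        }
}

TARGET_DRIVER iscsi {
        enabled 1

        # This target still references the vanished extent, so any
        # initiator login against it fails.
        TARGET iqn.2005-10.org.freenas.ctl:nextcloud {
                LUN 0 nextcloud-extent
                enabled 1
        }
}
```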
Can you send over a screenshot of the associated targets tab as well? What exactly do you mean by extra?
Targets / Associated Targets: (screenshots attached)

I had a leftover extent `flux-vaultwarden-test-config-vaultwarden` where the target zfs volume (and associated pv and pvc) were deleted. After removing this "leftover" extent and restarting scst, the errors went away.

I'm unsure why the extent was left behind after the targets were removed. Perhaps because the iscsi storageClass and volumeSnapshotClass are set to 'retain', so even if I delete the pvc, the pv is not removed.
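One way to cross-check for stale mappings like this from the SCALE shell, using the middleware client; the `jq` field selection is illustrative and assumes the usual field names returned by these API calls:

```sh
# List all extents and target-to-extent mappings as the middleware sees them
midclt call iscsi.extent.query | jq '.[] | {id, name}'
midclt call iscsi.targetextent.query | jq '.[] | {target, extent}'

# Any targetextent entry whose "extent" id has no matching extent in the
# first list is a stale mapping like the leftover one described here.
```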
If it was provisioned by this project then the extent should be deleted, and for sure everything should get torn down (assuming a delete policy on the pv). The 2nd issue seems to be that the TrueNAS middleware should handle that scenario more gracefully when generating the config file: ignore invalid entries but continue with valid entries.
With a 'retain' policy, what is the appropriate way to remove?
Just kubectl delete the pv. Retain doesn't really do anything special other than prevent the pv from being deleted when a bound pvc is deleted.
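A minimal sketch of that cleanup, with placeholder resource names:

```sh
# Delete the bound claim first; with a Retain policy the pv survives
kubectl delete pvc my-claim -n my-namespace

# The pv is now in the "Released" state; remove it explicitly as
# described above (pv name is a placeholder)
kubectl get pv
kubectl delete pv pvc-0123abcd
```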
I am suddenly getting this new error message, which seems similar to #112. I had not changed my democratic-csi config since mid-October and got this error starting a few days ago. I'm running democratic-csi chart 0.8.3 with the `freenas-api-iscsi` driver.

Per #112, I have tried running `systemctl restart scst` on SCALE, although I already had several targets available at the time when I started receiving the error. I also tried restarting SCALE, updating SCALE from TrueNAS-SCALE-22.02-RC.1 to TrueNAS-SCALE-22.02-RC.2, and restarting nodes, to no avail.

LMK if I can hunt down other logs or provide additional config files that might be of use.
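For anyone hitting the same message, the failure in the issue title can be reproduced by hand from a worker node with open-iscsi installed; the portal address and IQN below are placeholders:

```sh
# Discover the targets advertised by the portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

# Attempt a manual login to one target; a broken server-side config
# surfaces here as "non-retryable iSCSI login failure"
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:my-volume -p 192.168.1.100:3260 --login
```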