linstor-csi is not compatible with --enable-controller-attach-detach=false
#4
It seems the problem is somewhere in node_authorizer.go
I found the exact problem: That's crazy, because
You're saying you can't modify that RBAC at all? What have you tried? What where the error messages you got from those attempts?
@haySwim hi,
This problem shouldn't and can't be solved by RBAC, because:
I have a temporary solution: creating a RoleBinding rule for each of my nodes.
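The per-node workaround described above might look roughly like the sketch below. This is an assumption about what was done, not the commenter's actual manifests: the role name, node name, and the exact resources/verbs granted are all placeholders and would depend on which authorization check the kubelet was failing.

```yaml
# Hypothetical sketch of a per-node RBAC workaround.
# Names (linstor-attach-detach, node-1) are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: linstor-attach-detach
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: linstor-attach-detach-node-1   # one binding per node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: linstor-attach-detach
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:node:node-1           # the kubelet's node identity
```

As the commenter notes later, this kind of RBAC patch does not actually solve the problem, because the node authorizer restricts node credentials independently of RBAC.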
From the kubelet's logs directly:
Look, here are two possible ways: If you have
But if you have
What is that, is it an upstream bug? Tell me if so, and I'll prepare a PR to fix that in node_authorizer.go.
I have two questions for you right now:
I can answer that myself.
No, this works very badly; e.g. detaching does not work at all. About using flexvolume and CSI together: I updated the flexvolume driver to not use
Unfortunately I can't create a PR, because the current repo is archived. All the changes are listed here:
`--enable-controller-attach-detach=false`
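For context, the flag above is a kubelet setting, and its default is `true`: by default, the attach/detach controller in kube-controller-manager performs attach operations rather than the kubelet itself, which is the mode this CSI plugin expects. A rough sketch (not the reporter's actual configuration):

```shell
# Default kubelet behaviour, which linstor-csi expects:
kubelet --enable-controller-attach-detach=true ...

# Or, equivalently, in a KubeletConfiguration file:
#   enableControllerAttachDetach: true
```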
Easy interop between this plugin and the previous two is a non-goal of this project. If you have a workaround, great. If not, you need to migrate your data over to one driver or the other.
Sorry, that should have been "What were..."
I'm not actually sure how that is supposed to interact with Kubernetes's CSI implementation i.e., not this plugin, or the CSI spec, but the actual Kubernetes CSI components. For what it's worth, our plugin is tested and intended to be used with
No. The default,
You'll have to create new volumes with CSI and migrate (or recreate) your data manually. The exact method that you use to do this is up to you, you know your data best. These drivers, although they both talk to LINSTOR, have different internal representations of volumes and different capacities and there's no obviously right way to "register" the old volumes with CSI. I think this issue should be noted in our documentation. I'm sure there's a very sleek and clever way to do a migration in a semi-automatic way, but it would fundamentally be a data migration and this feature would not be perennially useful as more and more people move away from the old driver.
Correct.
Please practice a bit of patience, we are providing this software and our assistance to you for free, after all. You should not expect, and will not receive, super high priority response times unless you actually find a serious bug. Also, I feel that the title change is a bit misleading. So I have edited it to reflect my own understanding of this issue, as is my right as a maintainer of this project.
Again, yes, use
This is a general Kubernetes issue as far as I can tell: different volume plugins might require conflicting global settings.
This was the intent behind archiving the repo.
You're welcome to maintain and develop your fork as much as you wish, but it's totally your responsibility. There are currently no plans on our side to put forth any more time and energy towards that project.
Please don't think that I'm reproaching you for anything; I understand, and you guys are doing a nice project. Thank you for that, and I'm really glad to help you with it. It seems my phrasing came across a bit rude; sorry if so, but believe me, I didn't want to offend you at all. I just reported that I had already tested it and can say for sure that the driver does not work with this option. Some information is better than no information, isn't it?
Agreed, it's up to you.
I'm sure that
Yes,
BTW, I wrote a small script to convert old flexvolumes to CSI:
Maybe it will be useful for someone.
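The script itself isn't reproduced in this thread. The core of such a conversion could be sketched as follows; note that this is a guess at the idea, not the commenter's script: the CSI driver name `linstor.csi.linbit.com`, the flexvolume driver name, and the `resource` option key are all assumptions, and a real migration would still have to handle the data itself, as the maintainer points out above.

```python
# Sketch only: rewrite a flexvolume PersistentVolume manifest (as a dict,
# e.g. loaded from YAML) into a CSI-based one. Driver names and the
# "resource" option key are assumptions, not taken from the real script.
def flexvolume_to_csi(pv: dict) -> dict:
    spec = pv["spec"]
    # Drop the old flexVolume source...
    flex = spec.pop("flexVolume")
    # ...and carry the LINSTOR resource name over as the CSI volume handle.
    resource = flex.get("options", {}).get("resource", pv["metadata"]["name"])
    spec["csi"] = {
        "driver": "linstor.csi.linbit.com",  # assumed CSI driver name
        "volumeHandle": resource,
    }
    return pv

old_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "pvc-123"},
    "spec": {
        "capacity": {"storage": "1Gi"},
        "flexVolume": {
            "driver": "linbit/linstor-flexvolume",  # assumed driver name
            "options": {"resource": "pvc-123"},
        },
    },
}
new_pv = flexvolume_to_csi(old_pv)
```

In practice the converted manifests would also need matching StorageClass and claim updates, and the underlying LINSTOR resources would have to be re-registered or the data migrated, per the maintainer's comments above.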
Hi, my pods are stuck on Init because attaching volumes is not working; here is the kubelet log:
It seems something is wrong with RBAC:
kubectl version