"Config" in the CRD name is redundant IMO. How about a name without "Config"? #8
@Madhu-1 @nb-ohad @BlaineEXE To follow up from the design doc that was merged, I'd like to continue discussing the naming and structure of the CRDs.

CephCSIOperatorConfig --> CephCSIDriverDefaults

99% of the settings in this CRD appear to be for the driver defaults. Why not name it as such? The only setting besides the driver defaults appears to be the operator log level, which could be split off elsewhere. Then it is obvious to the user that they add defaults to this CR, while they can override those defaults in CephCSICephCluster and CephCSIConfig.

CephCSICephCluster and CephCSIConfig --> CephCSICluster and CephCSIConnection

This CRD needs to be split into more than one CRD. One CRD should hold the connection info, and another should hold the settings for the drivers that manage that Ceph cluster. This way, Rook can generate the connection info and it won't interfere with the settings that the user wants to apply to the cluster. This is why I think we should include different consumer scenarios (e.g. Rook) in the CephCSI operator design doc, so we can recognize what may be generated vs. what should be set by users.

The settings in this CRD also seem to have a very similar purpose to the settings in CephCSIConfig, since in both CRDs they are for a single CephCluster. What is the difference between them from the user's perspective? Maybe there is some implementation difference under the covers about which secrets or configmaps are generated, but the user doesn't care about that. It seems we need the connection info in a separate CRD from the user settings for a cluster, so we might end up with CRDs such as:
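For illustration, the split might look roughly like the following sketch. The kinds CephCSIConnection and CephCSICluster come from the suggestion above; the API group, version, and field names here are placeholders, not an agreed design.

```yaml
# Illustrative sketch only: connection info that a tool like Rook could
# generate on the admin's behalf...
apiVersion: csi.ceph.io/v1alpha1
kind: CephCSIConnection
metadata:
  name: my-ceph-cluster
  namespace: ceph-csi
spec:
  monitors:
    - "10.0.0.1:6789"
    - "10.0.0.2:6789"
---
# ...kept separate from the driver settings a user manages for that cluster.
apiVersion: csi.ceph.io/v1alpha1
kind: CephCSICluster
metadata:
  name: my-ceph-cluster
  namespace: ceph-csi
spec:
  connectionRef: my-ceph-cluster  # points at the generated connection CR
  mountOptions:
    - discard
```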
My initial suggestion was going to be CephCSIConfigMapping --> CephCSIClusterMapping. Based on the suggestion to refactor the previous CRDs, this CRD could be named …
This is again a place where I feel like well-described user stories will be critical for having a productive conversation about this work. I think it will help us all to have the different scenarios described, who the actors are, and what they can do. With that information, we can better envision and vet whether the designs are meeting the goals and workflows we have in mind, or whether there are gaps. @Madhu-1 and @nb-ohad are the ones who have already worked on this design, and I get a sense that they know what the user stories are and who the actors are, but those of us outside need these things to be able to understand the design in the context of its intent. Otherwise, we are much in the dark, and I don't think we can provide quality feedback.

Scenario 1: Upstream Rook

In this scenario, the administrator deploys ceph-csi-operator via default manifests that are present in rook/rook. The manifests set up ceph-csi-operator in single-namespace mode. The admin is actor 1. Rook is actor 2. Rook configures X, Y, and Z resources automatically for the admin. If the admin wants to override configs, they create X new CR to apply overrides.

Scenario 2: Consumer mode

In this scenario, an admin deploys ceph-csi-operator via OLM. ceph-csi-operator is set up in X mode. The admin is actor 1. ceph-csi-operator is a client for an external Ceph cluster. The admin has also created a client operator to help automate things. The client operator is actor 2. The client operator gets connection details for the Ceph cluster via means that are outside ceph-csi-operator's scope. The operator configures X, Y, and Z ceph-csi-operator CRs to connect to the external controller. If the admin wants to override configs, they take X, Y, or Z action.

Scenario 3: A user creates ceph-csi-operator and manages it themselves

This is probably the simplest case to describe. The user configures X, Y, and Z resources to connect to their Ceph cluster. The user does not have to consider overrides or defaults, because they are the only actor in the system.

Scenario 4: whatever other modes are necessary

Etc., etc. I particularly want to see a scenario description in which multi-namespace mode is involved. I think I can extrapolate multi-namespace to all-namespaces by assuming that the user has a fully separate k8s cluster just for storage, to keep things segregated for security.
Based on comments and discussion with @Madhu-1 in rook/rook#14260, here are some follow-ups on CRDs and naming to consider...

CephCSICephCluster CRD
CephCSIConfig CRD
CephCSIOperatorConfig CRD
CephCSIConfigMapping
CephCSIConfig CRD

The purpose of this CRD is not to define a name, but to decouple the configuration needed to consume Ceph-based storage from the storage class itself. This was done for a couple of reasons:
The linkage between a storage class and this configuration is indeed by name/ID, but this link does not define the essence of the information residing in this CRD. This is why we chose CephCSIConfig.

CephCSIOperatorConfig

Changing the name of this CRD to …

CephCSIConfigMapping

This CR enables you to create an "alias" for a config within the context of a single block pool. It is incorrect to assume a purpose based on multi-cluster environments for this CR. There are other use cases that we considered that are internal to a single cluster. Take, for example, the need to rename a config for a specific storage class (we anticipate a need for this in the future, especially around cluster migrations).
@Madhu-1 had mentioned that the mount options would be moved to the CephCSICephCluster CRD, so it seems that all that is left in this CRD is to associate the rados namespace or the subvolume group with some name, which can then be added to the storage class. Agreed, the name still doesn't make a lot of sense. IMO the naming would be clearer if this were split into two CRDs; then they could be named …
What about …

What about …
I am not sure we should do that. The fact that the only config we have today is these two pieces of information does not mean it will stay that way for long. We should design our API based on the different aspects of CSI and not completely based on the currently available information. In this specific case, we identified the need for a CR that represents a central configuration attached to a storage class, allowing CSI to understand how that storage class is related to the Ceph cluster. This CR includes any fields that Ceph CSI needs to utilize Ceph correctly. In an ideal world, that information would have been embedded in the storage class params, but unfortunately, due to the limitations I described in a previous comment, it was decided to move it into the CSI config map entry, which we are formalizing with this CR.
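To make that linkage concrete: today a ceph-csi storage class carries only a name/ID reference (the clusterID parameter), and the actual connection/configuration details live behind that reference in a CSI config map entry, which is what the CR discussed here formalizes. A rough sketch, with illustrative names:

```yaml
# The storage class holds only the by-name link; the referenced config
# (today a CSI config map entry, tomorrow the CR) holds the details.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: my-ceph-config   # the name/ID linkage discussed above
  pool: replicapool
reclaimPolicy: Delete
```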
The concern of this CR is not defaults configuration; it is the operator configuration, similar to an operator config map (which is a given when you deal with OLM operators). The fact that we consider default driver configuration as part of the operator configuration is just one aspect of this CR. This CR mainly describes the way the operator should behave.
Right now the only concern inside the CR is pool mapping, but if you look carefully, the mapping is not the top-level key. This was done because we identified that in the future there are going to be other aspects of mapping between CSI configurations. So pool mapping is just one aspect of this API, and for that reason I will argue that …
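A sketch of what "the mapping is not the top-level key" could look like in practice; the field names below are assumptions for illustration, not the actual schema:

```yaml
# Hypothetical shape: the pool mapping sits under a nested key so other
# mapping aspects can be added later without restructuring the API.
apiVersion: csi.ceph.io/v1alpha1
kind: CephCSIConfigMapping
metadata:
  name: site-a-to-site-b
spec:
  blockPoolMapping:
    - local: replicapool
      remote: replicapool-mirror
  # future mapping aspects could be added as siblings of blockPoolMapping
```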
What about the CRD name …?
If this is intended to be based on the naming of the CephCSIConfig CRD (which above is suggested to name it …)
I agree that operator is a very generic name. What I think we should do is follow the convention set by the operator framework, where an OperatorConfig config map is automatically generated (which is of course untyped data). This will bring us back to the OperatorConfig name for the CRD (with a fully qualified name of operatorconfigs.csi.ceph.io).
Are you saying CephResource shouldn't be used because multiple resources could be configured with a single CR? The difference between singular and plural seems unimportant. CRs can be queried with either their singular or plural names, interchangeably. So CephResource and CephResources would be the same CRD.
Ok, I could go with operatorconfigs.csi.ceph.io. What still seems awkward to me is that defaults can be specified in this CRD that can then be overridden by a different CRD. I wonder if there is a different convention for the defaults that would be more natural. For example, perhaps create an instance of …
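One possible reading of that idea (hypothetical, since the original suggestion is truncated above): keep the defaults in a single well-known instance, e.g. an OperatorConfig CR named "default", which more specific CRs may override.

```yaml
# Hypothetical sketch of a "defaults live in a well-known instance" convention:
# a singleton operatorconfigs.csi.ceph.io CR holding operator behavior and
# driver defaults; field names are placeholders.
apiVersion: csi.ceph.io/v1alpha1
kind: OperatorConfig
metadata:
  name: default
  namespace: ceph-csi-operator
spec:
  logLevel: 1
  driverDefaults:
    enableMetadata: true
```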
Given all the separate naming discussions in this issue, the discussion in #10, and Slack discussions, here is my summary of the naming proposals and open questions. The listed names are based on the current design doc names:
Summarizing agreements on names from our recent offline discussion:
In addition, we agreed these name changes will be applied to the repo only after the merging of the following PRs:
@Madhu-1 @travisn @BlaineEXE
This is done in #62.
Originally posted by @travisn in #1 (comment)