Conversation
pkg/services/syncer_custom.go
Name:      s.Name,
Namespace: s.Namespace,
OwnerReferences: []v1.OwnerReference{
	*v1.NewControllerRef(&s, provv1.SchemeGroupVersion.WithKind("ManagedOSVersionChannel")),
In my experience on #20, the created reference blocks deletion by default, which is why I manually assigned `ref.BlockOwnerDeletion = false` in #20. To me it is a far-from-intuitive default, but it is also documented in the k8s docs 🤷🏽♂️:

> Kubernetes automatically sets this field to true if a controller (for example, the Deployment controller) sets the value of the metadata.ownerReferences field. You can also set the value of the blockOwnerDeletion field manually to control which dependents block garbage collection.
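For context, `metav1.NewControllerRef` in apimachinery sets both `Controller` and `BlockOwnerDeletion` to true on the reference it builds. The override done in #20 can be sketched stand-alone like this, using local stand-in types rather than the real `k8s.io/apimachinery` ones:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes types under discussion (illustrative only;
// the real ones live in k8s.io/apimachinery/pkg/apis/meta/v1).
type OwnerReference struct {
	Kind               string
	Name               string
	Controller         *bool
	BlockOwnerDeletion *bool
}

func boolPtr(b bool) *bool { return &b }

// newControllerRef mirrors the behavior of v1.NewControllerRef: it sets
// BlockOwnerDeletion to true by default, which is what blocks owner deletion.
func newControllerRef(kind, name string) *OwnerReference {
	return &OwnerReference{
		Kind:               kind,
		Name:               name,
		Controller:         boolPtr(true),
		BlockOwnerDeletion: boolPtr(true), // the surprising default
	}
}

func main() {
	ref := newControllerRef("ManagedOSVersionChannel", "example")
	// As done in #20: override the default so dependents do not block GC of the owner.
	ref.BlockOwnerDeletion = boolPtr(false)
	fmt.Println(*ref.BlockOwnerDeletion) // false
}
```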
I think in this case we want it set to true, no? We don't want the owner to disappear due to GC while we are running a pod sync.
At the moment it collects JSON output from logs to create API data, and it's not a real controller. This will be fixed in a follow-up. Besides, relying on logging as we do now, even with the initContainer workaround, is not correct, as we have no guarantee that kubelet will store logs for us; this is good only for small amounts of data. Signed-off-by: Ettore Di Giacinto <edigiacinto@suse.com>
Force-pushed from 857068c to ad7158b
Also refactor out syncer code Signed-off-by: Ettore Di Giacinto <edigiacinto@suse.com>
@davidcassany @Itxaka that should be OK for a first round. I plan to have a closer look at this in a follow-up and move out to a specific controller that pokes the sync service when a new versionchannel is created, but I'd prefer doing that in another batch of changes; this already crosses with CLI changes. Besides, I wanted to add error reporting as well #16
Along these lines, we could have the available versions for os2 hosted directly in a git repo and parsed from there. Seems a simple approach for us to test on the os2 openSUSE counterpart.
Looks good. I just have a couple of doubts or blurry areas that I do not completely follow.
if mountDir == "" {
	mountDir = "/data"
}
if outFile == "" {
I am not sure I understand the purpose of this... The output file needs to be under the mount point, right? I am wondering if we aren't missing a validator for that.
Is this output file being read somewhere?
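A validator along these lines could be a simple path check. This is a hypothetical sketch (the function name and error text are made up, not something present in the PR):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// validateOutFile is a hypothetical check for the concern raised above: the
// output file must live under the mount point, otherwise the runner container
// cannot see it through the shared volume.
func validateOutFile(mountDir, outFile string) error {
	rel, err := filepath.Rel(mountDir, outFile)
	if err != nil || rel == ".." || strings.HasPrefix(rel, "../") {
		return fmt.Errorf("outFile %q is not under mountDir %q", outFile, mountDir)
	}
	return nil
}

func main() {
	fmt.Println(validateOutFile("/data", "/data/output")) // <nil>
	fmt.Println(validateOutFile("/data", "/tmp/output"))  // error: not under mountDir
}
```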
The approach of the operator is the following:

- It creates a pod:
  - with a segregated (no SA, etc.) initContainer, which is the one specified in the `ManagedOSVersionChannel`. Instead of outputting the JSON on stdout, the container needs to push it into a file; you can override the default mount location by specifying `mountDir` and `outFile` via options.
  - When the initContainer ends, the JSON is expected to be in that file, which then gets printed to stdout by a container that keeps running until the operator deletes it, once the data is captured.

This is because we can't rely on kubelet to retain logs of completed containers. My first approach was getting logs from a container that terminates, but that was flaky, as I couldn't always access its log.
Oh I see, this second reading pod escaped me. Now it all makes sense 👍
type Config struct {
	Requeuer      chan interface{}
	Clients       *clients.Clients
	OperatorImage string
Am I understanding correctly that this is rancheros-operator-wide? I would have expected the image to be tied to a specific channel CRD, so you can have multiple custom syncers (or a single one parsing multiple pods' output) defined with different images. I guess this could be one of the motivations to move to a controller that spawns the image based on new channels, so there could potentially be an image per channel, defined there.
Do you have something like that in mind for the controller?
So this is maybe confusing and indeed needs an inline comment: this operatorImage is the actual image of the operator running in the cluster. The operator will use it to re-start itself in the pod that fetches the JSON data.
The image used to spawn the container is defined in another place; that config is "generic" for all the syncers.
"command":    []string{"/bin/bash", "-c", "--"},
"mountPath":  "/output",      // This defaults to /data
"outputFile": "/output/data", // This defaults to /data/output
"args":       []string{fmt.Sprintf("echo '%s' > /output/data", string(b))},
It is unclear to me how this ends up in the pod logs...
The container here pushes the JSON into a file, which is picked up later by a container running in that pod. The file is passed to the main container of the pod, which will sit and display the file content in the logs until the operator reaps it.
I plan to move away from passing the JSON structure via logs, but for the time being this minimally satisfies the logic so we can keep going; we'll revisit this in its specific issue #24
	MountPath: mount,
}},
Name:  "runner",
Image: j.Image,
Here is the image supplied by the user when creating the `ManagedOSVersionChannel` resource.
This PR introduces a new `ManagedOSVersionChannel` type (`custom`) which can be used to populate `ManagedOSVersion` with custom logic, outside of the operator code, by specifying a container which returns a JSON list of `ManagedOSVersion`:

The operator will create a pod with that container, take the output file indicated in the options, and treat it as a JSON file. It is expected to contain a list of `ManagedOSVersion` in JSON form, which will then be populated as resources.

This is just a first sketch: relying on logs to pass JSON doesn't seem that good. I'll try to find another solution, but this covers a minimum working setup for the time being. Besides, I'd like to switch to a controller reconciler logic to poke the syncer, but I'll probably leave that for a follow-up.
Note this is also introducing a light CLI framework (with no config sugar on top), as the operator args are mainly CLI ones, so nothing complex like viper/cobra felt necessary.