Next round of updates: operator details #22
Comments
ping @zekemorton - let's chat when you have some time about what you'd like to jump on (or if you don't have bandwidth, no issue either)
As an additional todo item, what do you think about adding the option for an imagePullSecret to the MiniCluster API? Essentially this could default to not having one, and if specified it would be the name of an already created secret to use as the imagePullSecret for non-public images.
@vsoch The delete callback sounds like an important item on the list - maybe I'll start looking into that and let you know if I run into any issues?
Yep if we want support for private pulls, that's definitely something to add! It would be a simple check for the attribute, and if it's defined, add it to the jobspec pod. It's not a huge priority right now so I wouldn't do it first (unless you have a private image you want to test!) but let's add to the TODO.
Absolutely, it's yours! Take a look in events.go - that should have examples for different events (that aren't currently used). We would basically want to clean everything up for a CRD given that request.
@vsoch I may have misunderstood what the goal was for the delete callback - it seems to already do what I thought needed implementing. My thoughts were that we needed a way to delete a MiniCluster and have that clean up all the pods, jobs, configmaps, etc. It looks to already do this? Was there something else that you had in mind for this todo item?
I think the original need was based around the "we need to cleanup" because the jobs don't complete, but if they complete (and that works) then we are good! I'll check it off. |
I am pretty sure that even after a job is completed, all the pods stick around. But yes, I think we can check this one off!
Are there any others in the TODOs that you might want to try?
I took a look. For size 0, would we have some kind of message or error occur? Same question if you were to use a string instead of an integer. I'd also be happy to try the imagePullSecret one as well - looks like it shouldn't be too involved!
I haven't tested this, but I'd guess if the wrong data type is provided it's either going to show an error or convert it automatically.
Sounds good! Do we have any secret containers? If you have an example we can add to the folder here: https://github.com/flux-framework/flux-operator/tree/main/docker. If you use uptodate to generate the matrix (https://vsoch.github.io/uptodate/) it will be discovered and built automatically. Either we could reproduce that same example (with some private tag in the name) or make a new one, and then have the GitHub package private to test it out. Let me know if you have any questions about that - I developed uptodate for RADIUSS builds last year.
w.r.t. the imagePullSecret, I was able to implement that functionality and test it against a private copy of the same image/command we have been testing with. I'll open a PR now for some feedback and to address some specific questions I have on it!
I don't think that they would unless they look at the logs for the operator. It would be nice to have an error message pop up right after the kubectl apply that says it's an invalid argument and fail the creation. Right now it will create the MiniCluster object, but it won't create any of the sub-resources. It's probably best not to create anything at all, right? I can look into it a bit more to see how best to achieve this!
Is there somewhere else to look? Oh, you mean an error directly in the console after running kubectl? Yeah that's a great idea - I'm not sure how to do that!
Sounds good!
I think we are good to close here - we can't really support any kind of live update if flux doesn't support that, so we can come back to that when the time comes. |
--no-shell
(we probably don't need this anymore given the change in entrypoint)