
Proposal: Extract configuration generation to init-container #477

Open
lentzi90 opened this issue Feb 22, 2024 · 20 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
triage/accepted: Indicates an issue is ready to be actively worked on.

Comments

@lentzi90 (Member)

I think I have come up with a more generic way to solve #468 that also improves the user experience of the ironic-image.

The idea is to move all the scripts for generating configuration into their own container. This would run as an init-container, just like ironic-ipa-downloader. It would generate all configuration files, set up TLS and auth, and of course run wait_for_interface_or_ip.

Doing it this way has multiple benefits:

  1. It is easy to opt out, since it is a separate container. Just like we can choose whether to run keepalived, we could choose to use this init-container for convenience, or provide configuration in some other way if we want.
  2. It becomes more obvious to the user when initialization is done. If the init-container gets stuck on wait_for_interface_or_ip, it is immediately clear that the issue is during initialization, compared to the current "pod not ready" symptoms.
  3. When the configuration is done before starting the actual container/component, it is possible to slim down each of these much more than we do today. The httpd and dnsmasq containers do not need python or ironic installed, and ironic does not need bash, etc.

I realize this may not be easy to achieve, but I do think it would be a good way to enable advanced use cases and provide greater flexibility in how we deploy ironic. What do you think?
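For illustration, here is a rough sketch of what the pod layout could look like, assuming a shared emptyDir volume. The configure-ironic image name, env variable and mount paths are hypothetical, not existing artifacts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ironic
spec:
  initContainers:
    # Runs once: waits for the interface/IP, renders ironic.conf,
    # httpd.conf, dnsmasq.conf, TLS and auth material into the shared
    # volume, then exits.
    - name: configure-ironic
      image: example.org/configure-ironic:latest  # hypothetical image
      env:
        - name: PROVISIONING_INTERFACE
          value: ironicendpoint
      volumeMounts:
        - name: ironic-config
          mountPath: /shared/config
  containers:
    # The service containers only read the pre-rendered config, so they
    # no longer need python or the configuration scripts themselves.
    - name: ironic
      image: quay.io/metal3-io/ironic
      volumeMounts:
        - name: ironic-config
          mountPath: /etc/ironic
          readOnly: true
  volumes:
    - name: ironic-config
      emptyDir: {}
```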

@metal3-io-bot added the needs-triage label Feb 22, 2024
@mquhuy (Member) commented Feb 22, 2024

Thank you Lennart. That's a great idea.
I'm wondering, though, if you intend to run this configuration generation as (1) one init container for each of the current containers, or (2) one giant configuration init container for the whole system?

If it's (1), then I think there's no way this wouldn't work. For (2) I am not sure; it may get a bit messy for the multi-ironic scenario.

I'm still not sure, though, how that could help solve #468. Or do you mean that we could do that by bypassing this initContainer and implementing some manual configuration on our own? (If that's the case, I have no idea. If I understand @dtantsur's comment correctly, we would still need the pod IP somehow to generate the configuration if JSON_RPC is in use.)

@lentzi90 (Member, Author) commented Feb 22, 2024

I mean option 2. Most of the configuration is currently done in "common" scripts anyway (tls-common.sh, auth-common.sh, ironic-common.sh and configure-ironic.sh). Each component then has its own run* script that sources the common scripts and then does some final component-specific things.

My hope and understanding is that there isn't anything preventing us from doing ALL the configuration in one common container as long as we have shared volumes for getting the config from that init-container.

I'm still not sure, though, how that could help solve #468. Or do you mean that we could do that by bypassing this initContainer and implementing some manual configuration on our own? (If that's the case, I have no idea. If I understand @dtantsur's comment correctly, we would still need the pod IP somehow to generate the configuration if JSON_RPC is in use.)

Yes, I mean opting out of the init-container. Then it is possible to specify all configuration directly using configMaps and secrets, and that solves #468 since every option is directly "exposed", i.e. I can set whatever IP I want in every field without any extra variables or changes in the scripts/image.
For JSON_RPC I guess we would need to generate some part of the configuration dynamically, but this is also possible without the init-container. One way to do it is to put a little script in a configMap, mount it into the pod and run it as the entrypoint (see the sketch below). Another option is to have a custom init-container (instead of the "standard" init-container). Either way, decoupling the configuration from the actual container image would make it far more flexible.
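To illustrate the script-in-a-configMap idea, a minimal sketch. All names, paths and the sed-based templating are made up for the example; the only real requirement is that the dynamic value (the pod IP) comes from the downward API:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ironic-entrypoint  # hypothetical name
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -eu
    # Render the one dynamic value (the pod IP, needed for JSON RPC)
    # into an otherwise static config, then start the service.
    # /templates/ironic.conf would come from a second, static
    # configMap (omitted here for brevity).
    sed "s/__POD_IP__/${POD_IP}/" /templates/ironic.conf > /etc/ironic/ironic.conf
    exec /usr/bin/ironic
---
# Relevant part of the pod spec: expose the pod IP via the downward
# API, mount the script, and use it as the command.
apiVersion: v1
kind: Pod
metadata:
  name: ironic
spec:
  containers:
    - name: ironic
      image: quay.io/metal3-io/ironic
      command: ["/bin/bash", "/scripts/entrypoint.sh"]
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      volumeMounts:
        - name: entrypoint
          mountPath: /scripts
  volumes:
    - name: entrypoint
      configMap:
        name: ironic-entrypoint
```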

In theory it is already possible to override and skip the configuration by changing the command that we run. I just think it would be way cleaner to get rid of those scripts in the container image altogether and put them in a purpose-built "configure-ironic" container.

@mquhuy (Member) commented Feb 22, 2024

Either way, decoupling the configuration from the actual container image would make it far more flexible.

Thank you. As I said, I think decoupling the config generation from the main containers is a great idea.

Then it is possible to specify all configuration directly using configMaps and secrets, and that solves #468 since every option is directly "exposed".

I'm wondering, though, if this could become the standard? I mean, could we use configMaps in the general case as well? Then the issue would just be how to configure those maps.

@lentzi90 (Member, Author)

I'm wondering, though, if this could become the standard? I mean, could we use configMaps in the general case as well? Then the issue would just be how to configure those maps.

Yes, it could become the standard, but we need to move carefully in order not to break everything relying on the current behavior. That is why I don't propose to just delete all the scripts 😂

@mquhuy (Member) commented Feb 22, 2024

Yes, it could become the standard, but we need to move carefully in order not to break everything relying on the current behavior. That is why I don't propose to just delete all the scripts 😂

Yeah, that makes sense, though I guess one could argue that using the initContainer is a pretty big move too 😂 Well, I guess this could be tried out, so that we at least allow using configMaps; then, if configMaps turn out to be a good option, we can move on to make them the standard.

@mquhuy (Member) commented Feb 22, 2024

I just realized that having one giant initContainer will also mean that we're stuck with the current single-pod setup (IMO at some point we need to think about splitting it into smaller pods, especially since that is required to scale up ironic).

It can, however, easily be solved by just having one initContainer for every pod.

@lentzi90 (Member, Author)

I don't see how this is related. Of course there will be one init-container per pod, and this does not affect the current one-replica deployment in any way (it neither improves nor worsens it); it is just a different way of initializing.
However, having the configuration separated could make it easier to configure the image in different ways, e.g. for multi-replica scenarios. That is not the goal here, though.

@mquhuy (Member) commented Feb 22, 2024

I think it's related in the sense that if we decide to bundle everything into one initContainer, it will work for our current setup, where everything is inside one pod. If we split the deployment later (e.g. one pod for ironic, another for mariadb, etc.), we will either have to split that init container up or accept some config duplication. That is different from the current setup, in which each container is responsible for its own config generation, so no matter how we might split them, the config handling stays the same.

Anyway, I'm just getting too far ahead. Having config duplication is not too bad though, and we are not about to change our deployment soon.

@elfosardo (Member)

@lentzi90 I really like the idea, and honestly I've already done some tests in the past with init containers, precisely to move the configuration away from the ironic-image, so thumbs up from my side as a long-term goal.
It's true that this would help in some way with bypassing #468, but I think it's too big of a change just for that.
If we want to go down this road, we should probably start discussing it during one of the next metal3 meetings, and maybe consider it as a topic for the next meetup.

@lentzi90 (Member, Author)

I think it's related in the sense that if we decide to bundle everything into one initContainer, it will work for our current setup, where everything is inside one pod. If we split the deployment later (e.g. one pod for ironic, another for mariadb, etc.), we will either have to split that init container up or accept some config duplication. That is different from the current setup, in which each container is responsible for its own config generation, so no matter how we might split them, the config handling stays the same.

The containers may do their own config as it is now, but they are still extremely coupled: they rely on a shared volume and common scripts to do that configuration. The config duplication that you talk about is just what would end up in each container, not something the user would have to store or provide. The init-container would, for example, generate dnsmasq config even when it is not needed, but I don't see how that is an issue. As it is today, each container has a bunch of stuff that it does not really need. For example, they all have python and inspector installed, together with scripts to run and configure them, even though we may not even use them. That is far worse than having an extra config file, in my opinion.

@mquhuy (Member) commented Feb 23, 2024

The containers may do their own config as it is now, but they are still extremely coupled: they rely on a shared volume and common scripts to do that configuration.

Yes, this is the exact issue I wanted to point out here. Thank you for putting it in the right words!

The config duplication that you talk about is just what would end up in each container, not something the user would have to store or provide. The init-container would, for example, generate dnsmasq config even when it is not needed, but I don't see how that is an issue.

Sure, I don't mean that generating excessive config is a bad thing; the issue is that if we have, e.g., two pods with two initContainers, it could happen that the two inits generate different configs, and the two pods won't work well together because of that. This is exactly the issue of coupled config that you mentioned above. It would not be very visible, though, if we still use one pod for all of the containers.

As it is today, each container has a bunch of stuff that it does not really need. For example, they all have python and inspector installed, together with scripts to run and configure them, even though we may not even use them. That is far worse than having an extra config file, in my opinion.

Yes, no argument about that, but I don't think it's very related to what we're discussing here. IMO it's very easy to prune the container images down to only contain what is necessary, regardless of the initContainer. This issue is just due to a design choice to use the same image for almost everything.

@lentzi90 (Member, Author)

the issue is that if we have, e.g., two pods with two initContainers, it could happen that the two inits generate different configs, and the two pods won't work well together because of that.

I don't see how anything changes in this regard with my proposal. The output of the scripts does not change just because they run in an init-container.

This proposal is not about how to split up the deployment and separate the containers. It is of course valid to ask whether that would be harder to do with this proposal implemented, and my point is that it would actually be easier. If the config generation is separated from the individual containers, it will be possible to run them separately (see the sketch after this list) either by

  1. not using the init-container (you can manually write separate config for each container if you want), or
  2. using the init-container (you can give it the same or different input for each separate pod and disregard the config you do not need).
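As a concrete sketch of option 1, a dnsmasq container could simply mount a hand-written, fully static config; all names and values below are illustrative, not taken from the actual manifests:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dnsmasq-config  # hypothetical name
data:
  dnsmasq.conf: |
    # Hand-written static config; no init-container involved.
    port=0
    interface=ironicendpoint
    dhcp-range=172.22.0.10,172.22.0.100
    enable-tftp
    tftp-root=/shared/tftpboot
---
# Relevant part of the pod spec: mount the file where dnsmasq reads it.
apiVersion: v1
kind: Pod
metadata:
  name: dnsmasq
spec:
  containers:
    - name: dnsmasq
      image: quay.io/metal3-io/ironic
      command: ["dnsmasq", "-d", "-C", "/etc/dnsmasq.conf"]
      volumeMounts:
        - name: dnsmasq-config
          mountPath: /etc/dnsmasq.conf
          subPath: dnsmasq.conf
  volumes:
    - name: dnsmasq-config
      configMap:
        name: dnsmasq-config
```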

Yes, no argument about that, but I don't think it's very related to what we're discussing here. IMO it's very easy to prune the container images down to only contain what is necessary, regardless of the initContainer. This issue is just due to a design choice to use the same image for almost everything.

I disagree. This is more relevant than the discussion about splitting them into separate pods. Please explain how you "prune the container images down to only contain what is necessary" when they all use python to render their own config. As far as I'm concerned, the config generation must be separated first, and that is exactly what this proposal is about.

@mquhuy (Member) commented Feb 23, 2024

I don't see how anything changes in this regard with my proposal. The output of the scripts does not change just because they run in an init-container.

This proposal is not about how to split up the deployment and separate the containers. It is of course valid to ask whether that would be harder to do with this proposal implemented, and my point is that it would actually be easier.

Sorry, I didn't mean to criticize or disagree with your proposal. As I said, I think it's a great idea; having it would for sure be a great improvement to our current design, and anything we do afterwards would be easier thanks to it.

What I'm trying to express here is just something we should consider if we decide to go with this proposal, or when we implement it.

If the config generation is separated from the individual containers, it will be possible to run them separately either by

  1. not using the init-container (you can manually write separate config for each container if you want), or
  2. using the init-container (you can give it the same or different input for each separate pod and disregard the config you do not need).

Fully agree. I'm just expressing a concern: since this separation into an initContainer would be a big task, we may as well keep in mind that the config generation process might differ between keeping the current 1-pod manifest and splitting the pod as well. Again, I'm not saying that you were wrong in any of your logic; I'm just trying to help identify the bigger picture. IMO if we decide to go with this proposal, we should also redesign the structure to break out of the 1-pod manifest, but that's just my two cents.

I disagree. This is more relevant than the discussion about splitting them into separate pods. Please explain how you "prune the container images down to only contain what is necessary" when they all use python to render their own config. As far as I'm concerned, the config generation must be separated first, and that is exactly what this proposal is about.

By "what is necessary", I mean it could include python or bash or whatever it is that is needed in config generation right now. I fully agree that having config generation separated would allow us to prune the images better, but I'm just showing an observation: even without the config generation separated, we still can prune the images down, but we just do not do it because we want to use the same image for everything. (That may be the topic we need to discuss first if we want to discuss image prune?).

@lentzi90 (Member, Author)

No worries, it is good to scrutinize it before we start! 🙂

IMO if we decide to go with this proposal, we should also redesign the structure to break out of the 1-pod manifest, but that's just my two cents.

I agree that we should split it up, and I think this proposal makes it slightly easier to do so, but more than anything I think it is a separate topic and will need much larger changes also for users, e.g. they may need to switch to the ironic-standalone-operator instead of using kustomize. This proposal, on the other hand, could be done without impacting them.

By "what is necessary", I mean it could include python or bash or whatever it is that is needed in config generation right now.

Python and bash are big and bad, both from a security perspective and from an image size perspective. If we just remove some specific packages but keep the tool chain, we do not gain much. But yes, you are right that we need separate images for separate containers. I think this proposal is a good place to start on that work, because it will enable us to get much greater benefits out of it, especially considering that it can be done without breaking or changing the current deployment method.

@mquhuy (Member) commented Feb 23, 2024

I agree that we should split it up, and I think this proposal makes it slightly easier to do so, but more than anything I think it is a separate topic and will need much larger changes also for users, e.g. they may need to switch to the ironic-standalone-operator instead of using kustomize. This proposal, on the other hand, could be done without impacting them.

Well, the way I see it is that if we commit to a change this big, we may as well do it "right". In any case, if we decide to split the big ironic pod after the init config container is in place, the init container would likely have to change too, so it's a good thing to keep in mind. (And IMO breaking up the big ironic pod will need to happen sooner or later, as only the ironic container, but not inspector or mariadb, can scale.)

Anyway, since you mentioned the ironic-standalone-operator: I agree that there are too many uncertainties that may affect our decisions in the future, so maybe it's a good idea to just implement the improvements that we can agree on and know for sure won't break anything. I think this proposal fits that description.

Just as a side note, I have actually split the ironic pod up in my multi-ironic setup, and that didn't require any change in user input or the standalone-operator, so IMO it's totally doable right now too.

Python and bash are big and bad, both from a security perspective and from an image size perspective. If we just remove some specific packages but keep the tool chain, we do not gain much.

We wouldn't gain as much as if we didn't have them at all, but IMO there would be a big difference between pure python and python plus all the oslo packages plus binaries built from ironic source code. Of course, this is not the topic of discussion here, but I think it's worth mentioning, since it may suggest that there are other reasons why we have not used separate images and pruned them down yet.

But yes, you are right that we need separate images for separate containers. I think this proposal is a good place to start on that work, because it will enable us to get much greater benefits out of it, especially considering that it can be done without breaking or changing the current deployment method.

Yeah, I agree. As I said, I'm just not sure if the decision to use the same image for everything was simply a matter of convenience, or if there was a reason behind it.

@Rozzii (Member) commented Mar 6, 2024

/triage accepted
/kind feature

@metal3-io-bot added the triage/accepted and kind/feature labels and removed the needs-triage label Mar 6, 2024
@Rozzii added the priority/important-longterm label Mar 6, 2024
@dtantsur (Member)

I very much like this proposal. As you say, it can pave the way for splitting the one giant ironic-image into separate images for independent components.

The next step, I believe, should be to come up with a rough execution plan. I'd hate to see this change introduced as one giant PR. It's possible that my effort to reduce the number of large options in ironic-image (e.g. #487, or the inspector removal in #483) is step zero, since it radically reduces the number of branches in our bash scripts. We need to make sure that no options have conditional defaults (e.g. JSON RPC is on by default in runironic-conductor, but not in runironic).

Step one can be to do the split inside ironic-image without adding an init container just yet. This way we can gradually isolate the configuration step, verifying the functionality at each stage.

@dtantsur (Member)

Ah, and when ironic-standalone-operator is complete, we can move some configuration logic to it. Right now we're limited by trying to make ironic-image usable as it is.

elfosardo pushed a commit to elfosardo/ironic-image that referenced this issue Apr 29, 2024
OCPBUGS-32386: Use unix sockets by default for reverse proxy communication
@metal3-io-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues will close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@metal3-io-bot added the lifecycle/stale label Jun 16, 2024
@Rozzii (Member) commented Jun 17, 2024

/remove-lifecycle stale
/lifecycle frozen

@metal3-io-bot added the lifecycle/frozen label and removed the lifecycle/stale label Jun 17, 2024
Projects
Status: Ironic-image on hold / blocked