AGENT-59: Break assisted-service pod into separate systemd services #13
Conversation
This script has not been adding anything useful since we started templating the configmap to insert the host IP directly in 0024c85.
@@ -9,6 +10,7 @@ ExecStart=/usr/local/bin/create-cluster-and-infra-env.sh
 KillMode=none
 Type=oneshot
 RemainAfterExit=true
Given that this is listed as PartOf, won't that trigger re-creation if the assisted-service pod restarts? Will RemainAfterExit prevent this? If not, we'll probably need to put the logic for checking whether it has already run into create-cluster-and-infra-env.sh (which is not a bad idea anyway).
Being `PartOf` has no real effect unless `RemainAfterExit` is also true (i.e. no restart is triggered by dependencies if a unit has already completed). Setting both ensures that this script gets re-run if the assisted-service pod restarts, which is necessary because a pod restart also clears the database.
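A minimal sketch of the unit options under discussion (the script path is from the diff above; the pod unit name is an assumption):

```ini
[Unit]
# Restart propagation from the pod service (unit name assumed):
PartOf=assisted-service-pod.service
After=assisted-service-pod.service

[Service]
Type=oneshot
# Keep the unit "active" after the script exits, so that PartOf
# restart propagation actually re-triggers ExecStart:
RemainAfterExit=true
ExecStart=/usr/local/bin/create-cluster-and-infra-env.sh
```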
Looks good. Just a minor suggestion.
Using "podman kube play" doesn't play that well with systemd, so split the config up and run each container and the pod as a separate systemd service.
The `podman ps` command just lists container names, not the pod they belong to, so for clarity use less-generic names.
Create a separate environment file for each service that needs it.
Ensure that if the assisted-service pod is restarted, we will re-run the create-cluster-and-infra-env script.
We don't actually need this for anything in the automated flow.
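The split described in the commits above might look roughly like this for one container; every unit name, image, and path below is an assumption for illustration, not taken from this PR:

```ini
# assisted-db.service (hypothetical name)
[Unit]
PartOf=assisted-service-pod.service
After=assisted-service-pod.service

[Service]
# Each service gets its own environment file:
EnvironmentFile=/etc/assisted-service/db.env
# Run the container inside the already-running pod, with a
# non-generic name so it is identifiable in `podman ps`:
ExecStart=/usr/bin/podman run --replace --name=assisted-db \
    --pod=assisted-service-pod quay.io/example/db-image
```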
@@ -28,7 +28,7 @@ Run the tool using `go run cmd/main.go`.
 The output ISO is written to `output/fleeting.iso`.
-Boot the ISO in a VM with at least 4096MiB of RAM. No storage is required.
+Boot the ISO in a VM with at least 8192MiB of RAM. No storage is required.
Now that we are able to boot up the agents with the ISO and we have this HW configuration https://github.com/openshift-agent-team/fleeting/pull/13/files#diff-514469c0981d360617b844887d7bed1da07f0dd12b8198001b83367b17319d4dR11, please update this information.
Yeah, the whole README needs to be updated, but there's probably no point until we have a documented way for people to reproduce the environment.
IPV6_SUPPORT=true
NTP_DEFAULT_SERVER=
PUBLIC_CONTAINER_REGISTRIES=quay.io
RELEASE_IMAGES=[{"openshift_version":"4.10","cpu_architecture":"x86_64","url":"quay.io/openshift-release-dev/ocp-release:4.10.0-rc.1-x86_64","version":"4.10.0-rc.1"}]
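For reference, the hard-coded RELEASE_IMAGES value is a one-element JSON array; a quick sketch (plain Python, just illustrating the structure that an installer integration would eventually fill in dynamically):

```python
import json

# RELEASE_IMAGES from the environment file above, copied verbatim.
raw = ('[{"openshift_version":"4.10","cpu_architecture":"x86_64",'
       '"url":"quay.io/openshift-release-dev/ocp-release:4.10.0-rc.1-x86_64",'
       '"version":"4.10.0-rc.1"}]')

images = json.loads(raw)
for img in images:
    print(f'{img["openshift_version"]}: {img["url"]}')
```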
Do we have any planned task to remove the hard-coded release image version?
It'll be part of integrating with the installer, since the installer knows what version it is.
lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pawanpinjarkar, zaneb. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
The `podman kube play` command has many limitations, especially when it comes to integration with systemd. Split the pod and its individual containers into separate systemd services instead. Among other things, this allows us to capture logs from all of these containers in the journal and associate them with separate services.