jangel97 changed the title from "Podman run options command settable from job_template view" to "Command podman-run options customisable from job_template view" on Jun 7, 2022.
Hi,
At the moment it is only possible to customize the podman run command parameters through a few global settings. It would be great if they could also be modified from the job template view. Parameters such as environment variables and the volumes mounted into the EEs (among others) would be very useful to set at the job template level.
For example, in Kubernetes the workloads (pods) that run across the execution plane depend on some sort of definition (in AAP the workloads are EEs running Ansible playbooks). Take a Kubernetes Deployment object, which is the state definition of a given workload: it has .spec.volumeMounts, .spec.securityContext, .spec.nodeSelector, and countless other options. The definition stated in the Deployment object is translated into the workloads (pods), which is why some pods mount certain volumes while others do not need to mount anything. This means that pods living in the same namespace can have different volume mounts, security contexts, and so on, because they are managed by different Deployment objects.
In the case of AAP, I think the definition of the workloads is the Job Template, and it would be great if you could specify the behaviour of the podman run command from the Job Template view.
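To make the idea concrete, here is a hypothetical sketch of how per-template options could be layered on top of the global defaults. None of the names below are an existing AAP/AWX interface; the variable names, host path, and image name are purely illustrative:

```shell
# Hypothetical sketch only: the variable names, host path, and image below
# are illustrative, not an existing AAP/AWX interface.

# Today, options like these come from a single controller-wide setting and
# apply to every job:
global_opts="--network slirp4netns:enable_ipv6=true"

# What a per-job-template field could contribute, only for the templates
# that actually need it:
template_opts="-e KRB5CCNAME=FILE:/tmp/file-tgt -v /etc/tgt/file-tgt:/tmp/file-tgt:ro"

# The invocation an execution node would then build for this template only:
echo "podman run --rm $global_opts $template_opts my-ee:latest ansible-playbook rhv.yml"
```

The point of the split is that templates with no override would keep running with the global defaults, exactly as they do today.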
An interesting use case for this feature request:
In my case, we authenticate against the RHV hypervisor via Kerberos (setting `KRB5CCNAME=FILE:/tmp/file-tgt` as an environment variable for the `ovirt_auth` module). To make this work in AAP I need to make the `file-tgt` file available inside my execution environment (and I can't do it any other way because of security requirements: my playbooks have to authenticate using this TGT file, which expires every X hours). So my only option is to have this file on all of the execution nodes so that I can mount it into the EEs:
![image](https://user-images.githubusercontent.com/38217290/172393218-52dd9a98-a1c5-44b3-8425-b835be72d638.png)
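For reference, the effective container invocation I end up with today looks roughly like this. The image name and playbook are illustrative; only the `KRB5CCNAME` path matches the `ovirt_auth` setup described above:

```shell
# Rough sketch of today's effective invocation (image and playbook names are
# illustrative). Because the mount comes from a global setting, the TGT file
# has to exist on every execution node and is exposed to every EE:
tgt=/tmp/file-tgt
cmd="podman run --rm -v $tgt:$tgt:ro -e KRB5CCNAME=FILE:$tgt quay.io/example/rhv-ee:latest ansible-playbook rhv.yml"
echo "$cmd"
```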
Honestly, I do not like mounting this file for all of my workloads (EEs) running across the execution plane; it does not make sense for this file to be mounted inside all the EEs for every job template. As I see it, it would be really useful to be able to specify podman run options from the job_template view, so the file only gets mounted when I run my RHV job template.
Let me know if this feature request makes sense.
Kind regards and thank you!