Argo ksonnet package #178

Merged · 6 commits merged into kubeflow:master on Feb 1, 2018

Conversation

@jlewi (Contributor, Author) commented Jan 31, 2018

/cc @swiftdiaries @jessesuen

}, // service account

// TODO(jlewi): Do we really need cluster admin privileges? Why?
// is this just because workflow controller is trying to create the CRD?


In v2.0.0-beta1, the controller no longer attempts to create the CRD; that is left to the installer now.

cluster-admin would be needed if the workflow-controller needs to manage workflows across namespaces. If the controller is configured to operate in only a single namespace (this can be set in the controller's configmap), then I think admin should be sufficient, but I haven't tried this yet.

We identified the minimum privileges required by the controller here:
https://github.com/argoproj/argo/blob/master/cmd/argo/commands/const.go#L20

Note that the UI and controller have different sets of privileges.

In 2.0.0-beta1, the installer will create a cluster-admin using these privileges. But if you would like to hand-roll the deployment/RBAC manifests yourself, argo 2.0.0-beta1 now has a --dry-run flag to print the manifests without installing, and you can tweak them from there.
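For reference, here is a rough sketch of what a scoped-down ClusterRole could look like in the ksonnet package. The field name workflowControllerClusterRole and the name "argo" are placeholders of my own choosing, and the resource/verb lists below are only illustrative; the list in const.go above is the source of truth.

// Sketch only: resources and verbs are illustrative, not the authoritative
// list from cmd/argo/commands/const.go.
{
  workflowControllerClusterRole:: {
    apiVersion: "rbac.authorization.k8s.io/v1beta1",
    kind: "ClusterRole",
    metadata: {
      name: "argo",
    },
    rules: [
      {
        // Core API: the controller creates and tracks workflow pods.
        apiGroups: [""],
        resources: ["pods", "pods/exec", "configmaps"],
        verbs: ["create", "get", "list", "watch", "update", "patch"],
      },
      {
        // Workflow CRD instances owned by the controller.
        apiGroups: ["argoproj.io"],
        resources: ["workflows"],
        verbs: ["get", "list", "watch", "update", "patch"],
      },
    ],
  },
}

Binding something like this (or the namespaced Role equivalent, if the controller is restricted to a single namespace) to the controller's service account avoids handing out cluster-admin.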

@jlewi (Contributor, Author):

Very helpful, thanks. I updated to the beta1 version and locked down the permissions. PTAL.

schedulerName: "default-scheduler",
securityContext: {},
serviceAccount: "argo",
serviceAccountName: "argo",


Oh, I forgot to mention another change: there are now two service accounts, one for the controller and one for the UI, with different permissions (e.g. the UI needs access to logs whereas the controller does not). I notice that in this change the UI is using the same service account (argo). I think you will want to create an argo-ui service account mapped to these permissions:

https://github.com/argoproj/argo/blob/master/cmd/argo/commands/const.go#L44
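A rough sketch of what the separate UI service account and its role could look like; the names argo-ui, uiServiceAccount, and uiClusterRole are placeholders, and the rules below are only illustrative, with the UI permission list in const.go#L44 above being authoritative.

// Sketch only: names and rules are illustrative; use the UI permission
// list from cmd/argo/commands/const.go#L44 as the source of truth.
{
  uiServiceAccount:: {
    apiVersion: "v1",
    kind: "ServiceAccount",
    metadata: {
      name: "argo-ui",
    },
  },
  uiClusterRole:: {
    apiVersion: "rbac.authorization.k8s.io/v1beta1",
    kind: "ClusterRole",
    metadata: {
      name: "argo-ui",
    },
    rules: [
      {
        // The UI reads pods and their logs; the controller does not need logs.
        apiGroups: [""],
        resources: ["pods", "pods/exec", "pods/log"],
        verbs: ["get", "list", "watch"],
      },
      {
        apiGroups: ["argoproj.io"],
        resources: ["workflows"],
        verbs: ["get", "list", "watch"],
      },
    ],
  },
}

The UI deployment would then set serviceAccountName: "argo-ui" instead of reusing the controller's argo account.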

@jlewi (Contributor, Author):

I think you did mention that. PTAL.

apiGroups: [""],
resources: [
"pods",
"pods-exec",


Should this be pods/exec instead of pods-exec?

apiGroups: [""],
resources: [
"pods",
"pods-exec",


Same here. I've only ever seen this as pods/exec.


Yes, the docs say to "use a slash to delimit the resource and subresource"
https://kubernetes.io/docs/admin/authorization/rbac/#referring-to-resources
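With the slash delimiter, the rule quoted above becomes:

apiGroups: [""],
resources: [
  "pods",
  "pods/exec",
],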

@jlewi (Contributor, Author):

Thanks for catching this; it was a bad copy-and-paste error on my part.

@jessesuen left a comment

The deployment manifests look good to me; I just think pods-exec needs to be changed to pods/exec.

@jessesuen

BTW, I will be working with ksonnet to generate an "official" argo workflow libsonnet for the workflow spec parts. This should be coming soon after I generate the OpenAPI specs for workflows.

@jlewi (Contributor, Author) commented Jan 31, 2018

Thanks.

@jlewi merged commit d2eb2fc into kubeflow:master on Feb 1, 2018