Question/Request: Using one Argo controller to manage workflows in multiple clusters #1802
Comments
Hmmm, I believe the config that you pointed out only allows one workflow controller to dispatch workflows to a single other K8s cluster. So if this were used as a solution to your problem, you would still need one controller per cluster you want to deploy to. Don't take this for granted, but I believe that supporting the multi-context behavior you described would be a pretty large-scale change, and one we don't currently have on our roadmap, unfortunately.
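To make the one-controller-per-cluster workaround concrete, here is a minimal sketch of what each controller's mounted kubeconfig might look like. This is standard Kubernetes kubeconfig structure, not Argo-specific; the cluster name, server URL, and credentials are placeholders, not values from this thread:

```yaml
# Hypothetical kubeconfig mounted into ONE workflow controller.
# Each controller instance would get its own copy of a file like this,
# pointing at the single external cluster it is responsible for.
apiVersion: v1
kind: Config
clusters:
  - name: external-cluster-a              # placeholder name
    cluster:
      server: https://cluster-a.example.com:6443   # placeholder URL
      certificate-authority-data: <base64-ca>      # placeholder
contexts:
  - name: external-cluster-a
    context:
      cluster: external-cluster-a
      user: controller-sa
current-context: external-cluster-a       # this controller only ever uses this context
users:
  - name: controller-sa
    user:
      token: <service-account-token>      # placeholder
```

With one such file per controller, the "multi-context" decision effectively moves out of Argo and into which controller a workflow is routed to.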
Thanks for the info @simster7, it's understandable that the multi-context behavior is a ways out. As a work-around, do you think it'd be feasible for me to do something like:
I think that setup would still address the main pain points I mentioned in the issue description. I'll experiment with it on my own, but if you can see any immediate problems I'd appreciate the heads-up 😄
I don't see any issues with that from the get-go, but I have not tried something like that myself, so let me know how it turns out! Also, we are currently working on a refactor of the UI, and one of the ideas floating around is to allow the user to specify a
Closing this, feel free to reopen if necessary
@simster7 we have a similar requirement: we want to be able to deploy workflows to multiple clusters and also view all the workflows from a single Argo UI instance. Is the UI refactor still in the works?
similar issue: #3523 |
@danxmoran Hi, were you able to figure something out? I tried your idea, but I only ended up with the kubeconfig set up on the init and wait containers; the pod still deploys on the core cluster instead of the external cluster tied to my second workflow controller. I even ran the workflow with the instance ID, and I can see it registered in that controller's logs.
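For reference, the instance-ID scoping mentioned above is configured in two places: the controller's configmap and a label on the workflow. Below is a minimal sketch with placeholder names; note that the instance ID only controls *which* controller picks up a workflow, not *where* its pods run, which matches the behavior described:

```yaml
# workflow-controller-configmap for the second controller (names are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  # Only workflows labeled with this instance ID are managed by this controller.
  instanceID: external-cluster
---
# A workflow targeted at that controller via the instance-ID label.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-
  labels:
    workflows.argoproj.io/controller-instanceid: external-cluster
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, hello]
```

Equivalently, `argo submit --instanceid external-cluster` applies the label at submission time. Either way, the selected controller still creates the pods in whatever cluster its own client is pointed at.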
**Is this a BUG REPORT or FEATURE REQUEST?**: Question

**What happened**:
Depending on the project, my team needs to run different workflows in different k8s clusters. Our original plan was to run one instance of Argo in each cluster and update our client-side code to pick the appropriate cluster to use for each submission. This isn't ideal because:

- N Argo UIs to check instead of 1

I just found this config, which makes it look like the Argo controller can orchestrate calls to external k8s clusters. How granular is that functionality? Is there a way to register multiple cluster contexts in the mounted `kubeconfig`, and specify the context on a per-submission basis?

**What you expected to happen**:

It would be ideal for our use-case if `Workflow` objects accepted an optional `contextName` field specifying the external cluster where its pods should run.