Programmatically set the default providers #2059
Currently there is not a mechanism to set the default provider. The closest available option today is to put all resources inside a component (and pass `providers` to it). We definitely want to improve this; I can imagine a couple of ways we might be able to do it:
These would both require being careful about timing, as they modify global state, and subtle changes to module loading order could cause this switch to happen at a different time. |
Just to capture what else is going on in the discussion, and why I think it's a big deal: this is something that bites pretty much 100% of consequential Kubernetes users at one point or another. And when it does, it can be very confusing. For example, the other day, Joe was asking why his |
Another option here might be to allow some way to get an "instance" of the package that is bound to a particular provider:

```typescript
import * as kubernetes from "@pulumi/kubernetes";

let aksKubernetes = new kubernetes.Provider("myprovider", {...});
let deployment = new aksKubernetes.apps.v1.Deployment("...", { ... });
```
|
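For reference, Pulumi does already support binding an individual resource to an explicit provider via the `provider` resource option; what this issue asks for is making that binding ambient. A minimal sketch of the per-resource form (resource names and the `myKubeconfig` value are illustrative, not from this thread):

```typescript
import * as kubernetes from "@pulumi/kubernetes";

// Hypothetical: an explicit provider built from some cluster's kubeconfig.
const aksProvider = new kubernetes.Provider("aks", { kubeconfig: myKubeconfig });

// This resource talks to aksProvider instead of the ambient default provider.
const deployment = new kubernetes.apps.v1.Deployment("app",
    { /* spec elided */ },
    { provider: aksProvider });
```

Every resource that should target that cluster needs this option (or a parent that supplies it), which is exactly the boilerplate a settable default would remove.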
In my case, I want to explicitly define and configure a provider based on outputs from a `StackReference`. Ideally, I'd be able to set the default/ambient provider with explicitly provided arguments and have all related resources use it, i.e. Luke's second option at #2059 (comment). |
This doesn't actually come from the default provider: it comes from the corresponding Node/Python/etc. package performing a |
@pgavlin @lukehoban Just so we have inputs for making sensible prioritization decisions: one of our more advanced users ran into this problem and spent days figuring it out:
I know this is marked for M21, but we've bumped this since M19, and my personal opinion is that this is a very strong candidate to not get bumped this time. :) I don't want to beat a dead horse, so let me know if this is getting annoying, but: every significant user of a managed Kubernetes platform will get bitten by this issue eventually. And when they do, it will usually really hurt. |
Yep - this is important. |
I would argue that a best-practice is to declare all resources in a ComponentResource, which already has this feature. |
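For readers unfamiliar with that feature: a component can be handed explicit providers through `ComponentResourceOptions`, and children that set the component as their `parent` inherit them. A hedged sketch (the component type, class, and resource names below are made up for illustration):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";

// Illustrative component; children inherit whatever providers the caller passes.
class AppStack extends pulumi.ComponentResource {
    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("example:app:AppStack", name, {}, opts);

        // Because parent is set, this Namespace resolves its kubernetes
        // provider from the component's providers, not the global default.
        new kubernetes.core.v1.Namespace("app-ns", {}, { parent: this });
    }
}

const k8sProvider = new kubernetes.Provider("cluster", { kubeconfig: "..." });
new AppStack("app", { providers: { kubernetes: k8sProvider } });
```

The inheritance only applies to resources that set `parent`, which is why forgetting `parent: this` inside a component silently falls back to the default provider.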
@almson is there any convention for the naming of the provider resource?

```python
import pulumi
import pulumi_gcp as gcp

gcp_provider = gcp.Provider(resource_name="default_5_23_0",
                            opts=pulumi.ResourceOptions(version="5.23.0"),
                            project="XXX-XXXX")

all_resource = pulumi.ComponentResource(t="gcp:component:all_resource",
                                        name="all_resource",
                                        opts=pulumi.ResourceOptions(provider=gcp_provider))

airflow_XXXX = gcp.composer.Environment(
    ...
    opts=pulumi.ResourceOptions(parent=all_resource),
    ....
```
|
@raphaelauv `ComponentResource` is an abstract base class. You have to extend it and then set the parent of child resources. See here: https://pulumi.awsworkshop.io/45_componens/10_create_component.html

Edit: this example doesn't set default providers, but you can do that in the `ComponentResourceOptions` that are passed to `super()`. |
We have merged #3383. This doesn't provide a way to programmatically set default providers, but it does prevent accidentally referencing the default providers. |
This seems to disable inheritance of providers for resources inside a `ComponentResource`. Being able to disable the default provider and explicitly specify a "default" provider to be inherited by child resources would be excellent. |
Latest; in our other stacks we just set the kubeconfig to trash, but I was experimenting with the new disable support in this one.

package.json

```json
{
    "name": "performance",
    "main": "src/index.ts",
    "devDependencies": {
        "@google-cloud/container": "^2.6.0",
        "@google-cloud/resource-manager": "^3.0.0",
        "@kubernetes/client-node": "^0.16.1",
        "@types/chai": "^4.0.0",
        "@types/mocha": "^9.0.0",
        "@types/node": "^16",
        "chai": "^4.0.0",
        "mocha": "^9.0.0",
        "node-fetch": "^2.6.0"
    },
    "dependencies": {
        "@js-joda/core": "^5.0.0",
        "@pulumi/gcp": "^6.0.0",
        "@pulumi/kubernetes": "^3.0.0",
        "@pulumi/pulumi": "^3.0.0",
        "@pulumi/random": "^4.0.0",
        "@types/promise-poller": "^1.7.1",
        "promise-poller": "^1.9.1",
        "typescript": "~3.8.0"
    },
    "scripts": {
        "lint": "eslint --fix **/*.ts",
        "check": "eslint **/*.ts",
        "test": "mocha -r ts-node/register tests/**/*.ts"
    }
}
```

package lock

stack config YAML

```yaml
config:
  gcp:impersonateServiceAccount: <OMITTED>
  gcp:region: europe-west1
  pulumi:disable-default-providers:
    - kubernetes
```

index.ts

```typescript
const k8sProvider = k8slib.gkeProvider(...); // tested and working in our other stacks

new PerformancePipelines(
    "performance-pipelines",
    {
        executionNamespace: "ci-epiu9r6l",
    },
    {
        providers: {
            kubernetes: k8sProvider,
        },
    }
);
```

PerformancePipelines.ts

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

export interface PerformancePipelinesArgs {
    executionNamespace: pulumi.Input<string>;
}

export class PerformancePipelines extends pulumi.ComponentResource {
    private readonly name: string;

    constructor(name: string, args: PerformancePipelinesArgs, opts?: pulumi.ComponentResourceOptions) {
        super("core_gcp:performance:PerformancePipelines", name, args, opts);
        this.name = name;

        const ns = k8s.core.v1.Namespace.get("execution-namespace", args.executionNamespace);

        const msg = pulumi.interpolate`ns: ${ns.metadata}`;
        msg.apply((m) => pulumi.log.info(m));

        // TODO: Create pipeline
        // TODO: Create TriggerTemplate
        // TODO: Create TriggerBinding
        // TODO: Create EventListener

        this.registerOutputs({});
    }
}
```
|
I think you still need `{ parent: this }` on that `Namespace.get` call.
|
Jeeeeez, the one thing I forgot to copy 🤦. Alexa, how do you go back in time and take your foot out of your mouth? Cheers! |
This seems like a very common requirement for people who are migrating to micro-stacks, as recommended by the docs here: https://www.pulumi.com/docs/using-pulumi/organizing-projects-stacks/

Having resources fail because you didn't set the `kubeconfig` is easy to run into. Having said that, this seems to be limited to kubernetes clusters deployed by an infra-like microstack, and wanting to reference the kubeconfig from app-like microstacks.

The best workaround I can think of for now is to use a temporary kubeconfig file; each app-like microstack must ensure the file exists and is populated before it can deploy:

```shell
echo "$(pulumi stack output kubeconfig --cwd infra)" > .kubeconfig
KUBECONFIG=".kubeconfig" pulumi up --cwd app
```

If you have multiple stacks for each infra environment, make sure to get the kubeconfig output from the relevant stack. |
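A file-free variant of the workaround above is to read the infra stack's output in-program with a `StackReference` and feed it to an explicit `Provider`. This is a sketch under the assumption that an infra stack named `myorg/infra/prod` exports an output called `kubeconfig` (both names are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";

// Assumed names: an "infra" stack that exports a `kubeconfig` output.
const infra = new pulumi.StackReference("myorg/infra/prod");

const k8sProvider = new kubernetes.Provider("from-infra", {
    kubeconfig: infra.requireOutput("kubeconfig").apply(String),
});

// With pulumi:disable-default-providers set for kubernetes, forgetting the
// explicit provider fails fast instead of silently using the wrong cluster.
new kubernetes.core.v1.Namespace("app", {}, { provider: k8sProvider });
```

This avoids writing credentials to disk, but each resource (or a parent component) still has to reference the provider explicitly, which is the gap this issue tracks.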
Adding another use case in the hope of this issue being prioritized 🤞: you might have an infra microstack that deploys a Kafka cluster, and then deploy individual Kafka topics in app microstacks. |
From @geekflyer on Slack: