Dynamic stack linking #109
This change is mostly just a rename of Moniker to URN. It also prefixes resource URNs with a standard URN namespace; in other words, `urn:coconut:<name>`, where `<name>` is the same as the prior Moniker. This is a minor step that helps to prepare us for #109.
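A minimal sketch of the naming scheme described above. The helper names here are illustrative assumptions, not the project's actual API; only the `urn:coconut:` prefix comes from the comment itself.

```typescript
// Sketch of the URN scheme: resource URNs live under a standard
// "urn:coconut:" namespace, and the remainder is the prior Moniker.
// toUrn/toMoniker are hypothetical helpers for illustration.
const URN_PREFIX = "urn:coconut:";

function toUrn(moniker: string): string {
    return URN_PREFIX + moniker;
}

function toMoniker(urn: string): string {
    if (!urn.startsWith(URN_PREFIX)) {
        throw new Error(`not a coconut URN: ${urn}`);
    }
    return urn.slice(URN_PREFIX.length);
}

console.log(toUrn("prod/web/instance"));
// urn:coconut:prod/web/instance
```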
This could also be a premium feature.
This change implements the `get` function for resources. Per #83, this allows Lumi scripts to actually read from the target environment. For example, we can now look up a SecurityGroup from its ARN:

```typescript
let group = aws.ec2.SecurityGroup.get(
    "arn:aws:ec2:us-west-2:153052954103:security-group:sg-02150d79");
```

The returned object is a fully functional resource object, so we can then link it up with an EC2 instance, for example, in the usual ways:

```typescript
let instance = new aws.ec2.Instance(..., {
    securityGroups: [ group ],
});
```

This didn't require any changes to the RPC or provider model, since we already implement the Get function.

There are a few loose ends; two are short term:

1) URNs are not rehydrated.
2) Query is not yet implemented.

One is mid-term:

3) We probably want a URN-based lookup function. But we will likely wait until we tackle #109 before adding this.

And one is long term (and subtle):

4) These amount to I/O and are not repeatable! A change in the target environment may cause a script to generate a different plan intermittently. Most likely we want to apply a different kind of deployment "policy" for such scripts. These are inching towards the scripting model of #121, which is an entirely different beast than the repeatable, immutable infrastructure deployments.

Finally, it is worth noting that with this, we have some of the fundamental underpinnings required to finally tackle "inference" (#142).
This won't block our customer conversions this sprint, since we can just use strings and other workarounds. However, we should think about the design here, and if something obvious arises, we should do it. Either way, let's try to end the sprint with an idea of what to do in 0.9.
One idea I had here is to use JavaScript exports to define what is available across stacks. For example, let's say that I have a stack that exports the values it wants to share.
Now in the consuming stack, I can say something like this:
This may be an "abuse" of the module system, but I think I like it. If you prefer static linking, go ahead and statically link; if you prefer dynamic linking, you can do that as well.
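The original code samples for this comment did not survive extraction. As a rough sketch of the idea under stated assumptions (the stack name, export shape, and `linkStack` helper are all hypothetical, and the two "stacks" are simulated in one file rather than being real separate programs):

```typescript
// Sketch: the producing stack exposes its cross-stack surface via
// exports, and the consuming stack "links" against those exports at
// deployment time instead of baking the resources into its own topology.

// --- producing stack (e.g. "infra"); in reality a separate program ---
const infraExports = {
    vpcId: "vpc-1234",
    subnetIds: ["subnet-a", "subnet-b"],
};

// --- consuming stack: a hypothetical dynamic-link step that resolves a
// stack name to that stack's exported values ---
function linkStack(name: string): typeof infraExports {
    // A real implementation would resolve `name` against deployed stacks.
    if (name === "infra") return infraExports;
    throw new Error(`unknown stack: ${name}`);
}

const infra = linkStack("infra");
console.log(infra.vpcId);
```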
If you are referring to it by the `urnName`, an alternative might be just:
We found a workaround for this, and so I'm pushing this to 0.9.
Isn't this a bit of an orchestration problem? Given the example of a non-compatible database migration, wouldn't it be possible to do something along the lines of creating a separate Pulumi program that handles provisioning the new infrastructure, then setting up a task (AWS Lambda, Azure Serverless, etc.) to handle the migration, and then making the relevant changes in the primary stack?

I'm not sure if this helps, but I'm dealing with something that I think is similar to what you've described @joeduffy: http://michaeljswart.com/2012/04/modifying-tables-online-part-1-migration-strategy/
How about making some kind of authorization system for stacks to read the checkpoints of other stacks? In that case, there would be no need to export things from the source stack, only to authorize access. In the destination stack you would then reference the source stack whose checkpoint you want to read, then the name of the resource, then the output you need.
These changes add a new API to the Pulumi SDK, `service.getStack`, that returns the outputs of a given stack. The Pulumi account performing the deployment that calls this API must have access to the indicated stack or the call will fail. The API is implemented as an invoke handled by a builtin provider managed by the engine; this provider will be used for any custom resources and invokes inside the `pulumi:pulumi` module. Currently this provider's API is exactly the `pulumi:pulumi:getStack` invoke. This is the short-term fix for #109.
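A toy model of the behavior described above, to make the access rule concrete. The in-memory registry, the account names, and the error text are all assumptions for illustration; the real `service.getStack` is an invoke served by the engine's builtin provider, not a local function.

```typescript
// Toy model of service.getStack: return a stack's outputs only if the
// calling account has access to that stack; otherwise the call fails.
type Outputs = Record<string, unknown>;

const stacks = new Map<string, { outputs: Outputs; readers: Set<string> }>([
    ["myorg/base-vpc", {
        outputs: { network: "vpc-1234" },
        readers: new Set(["alice"]),
    }],
]);

function getStack(account: string, name: string): Outputs {
    const stack = stacks.get(name);
    if (!stack || !stack.readers.has(account)) {
        throw new Error(`access denied or unknown stack: ${name}`);
    }
    return stack.outputs;
}

console.log(getStack("alice", "myorg/base-vpc"));
```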
Here are the proposed approaches:

**Short term**

We will introduce a new custom resource type, `pulumi.service.StackReference`. We will also update the Pulumi console and Pulumi CLI to display stack references in a richer manner than normal resources. Below is an example of three stacks that are layered using stack references.

**base-vpc stack**

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as awsinfra from "@pulumi/aws-infra";

// Create and export a VPC.
const vpc = new awsinfra.Network("network");
export const network = vpc;
```

**eks-cluster stack**

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

// Import the base VPC from our base stack.
const baseVpc = new pulumi.service.StackReference("myorg/base-vpc").output("network");

// Create an EKS cluster.
const cluster = new eks.Cluster("cluster", {
    vpcId: baseVpc.apply(vpc => vpc.vpcId),
    subnetIds: baseVpc.apply(vpc => vpc.subnetIds),
});

// Export the cluster's kubeconfig.
export const kubeconfig = cluster.kubeconfig;
```

**app-hackmd stack**

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Import the EKS cluster stack's kubeconfig.
const kubeconfig = new pulumi.service.StackReference("myorg/eks").output("kubeconfig");

// Create a k8s provider that targets the EKS cluster.
const k8sProvider = new k8s.Provider("eks", { kubeconfig: kubeconfig.apply(JSON.stringify) });

// Deploy the HackMD Helm chart.
const hackmd = new k8s.helm.v2.Chart("hackmd", {
    repo: "stable",
    chart: "hackmd",
    version: "0.1.1",
    values: {
        service: {
            type: "LoadBalancer",
            port: 80,
        },
    },
}, { providers: { kubernetes: k8sProvider } });
```

**Medium term**

We will flesh out the `StackReference` design.
Our current model acts more like static linking, in that the resources for dependencies become part of the resource topology in the consuming library. This is okay in some cases -- like when you're building a larger topology out of other smaller 1st party components -- but not in other cases -- like when you are stitching together a complex topology built out of other separately deployed pieces.
As a concrete example, imagine you've factored your overall stack into three key pieces: 1) at the very bottom, one for infrastructure; 2) in the middle, another one for persistent state (presumably parameterized so that restoration from backup state is possible); and 3) at the top, services and applications. Each is revved and deployed independently -- with increasing frequency from bottom-to-top -- ideally in a way that isolates impacts to the other layers as much as possible.
So we will definitely want the equivalent to dynamic linking. But this is pretty fundamental.
As step 1, we should model resource dependencies using URIs. This affords some flexibility, at least conceptually, in resolving them to physical resources. It also gives us guidance around existing schemes for versioning (make a new URI), redirects, and so on. This presumably means we need some way of resolving inter-stack dependencies, however, e.g. a Pulumi "DNS" server.
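One way to picture step 1 is a registry that maps dependency URIs to physical resources, analogous to the "Pulumi DNS" server mentioned above. The URI scheme, the registry contents, and the `resolve` helper are all illustrative assumptions; versioning mints a new URI, and a moved dependency becomes a redirect entry.

```typescript
// Sketch: model inter-stack dependencies as URIs resolved through a
// registry. Entries may point at a physical resource identifier or
// redirect to another URI in the registry.
const registry = new Map<string, string>([
    ["stack://myorg/infra/v1", "arn:aws:ec2:us-west-2:000000000000:vpc/vpc-1234"],
    // A redirect: v2 points at the v1 entry rather than a physical resource.
    ["stack://myorg/infra/v2", "stack://myorg/infra/v1"],
]);

function resolve(uri: string): string {
    const seen = new Set<string>();
    let current = uri;
    // Follow redirects until we reach a value outside the registry.
    while (registry.has(current)) {
        if (seen.has(current)) throw new Error(`redirect cycle at ${current}`);
        seen.add(current);
        current = registry.get(current)!;
    }
    if (current === uri) throw new Error(`unresolved URI: ${uri}`);
    return current;
}

console.log(resolve("stack://myorg/infra/v2"));
```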
Even after that, many complexities remain; for example, infrastructure is not always perfectly versionable without a cascading impact to dependencies. A change to the infrastructure that necessitates rebuilding and redeploying the database and/or application tier, for instance, needs to carry that knowledge in a way that at least notifies the operator, if not actually doing something automatically. Even changes that merely alter output values that might have been depended upon, versus destroying and recreating resources, will have similar cascading impacts.
I'm putting this in S10 for consideration. I'm not yet certain when to bite this off, but my inclination is "soon" since it will have some fairly fundamental architectural impacts that we want to front-load.