
Unable to create regional cluster #124

Closed · rosskevin opened this issue Apr 3, 2019 · 15 comments

@rosskevin

@pulumi/gcp@0.18.2

import * as gcp from '@pulumi/gcp'

export const k8sCluster = new gcp.container.Cluster('gke-cluster', {
  initialNodeCount: 1,
  nodeVersion: 'latest',
  minMasterVersion: 'latest',
  nodeConfig: {
    machineType: 'n1-standard-1',
    oauthScopes: [
      'https://www.googleapis.com/auth/compute',
      'https://www.googleapis.com/auth/devstorage.read_only',
      'https://www.googleapis.com/auth/logging.write',
      'https://www.googleapis.com/auth/monitoring',
    ],
  },

  // regional cluster
  location: 'us-central1',
  nodeLocations: ['us-central1-f', 'us-central1-b'],
})
  gcp:container:Cluster (gke-cluster):
    error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: location
    error: gcp:container/cluster:Cluster resource 'gke-cluster' has a problem: : invalid or unknown key: node_locations

This should be possible based on #119

@rosskevin (Author) commented Apr 3, 2019

I'm also using location in the node pool, and that errors on location as well:

// excerpt from a component class; name, project, location, nodeLocations,
// initialNodeCount, and config come from the enclosing scope
new gcp.container.NodePool(
  name,
  {
    cluster: this.instance.name,
    project,

    location,

    nodeCount: (1 + nodeLocations.length) * initialNodeCount,
    nodeConfig: {
      machineType: config.get('machineType') || 'n1-standard-1', // prod: n1-standard-2
      oauthScopes: [
        'https://www.googleapis.com/auth/devstorage.read_only',
        'https://www.googleapis.com/auth/logging.write',
        'https://www.googleapis.com/auth/monitoring.write',
        'https://www.googleapis.com/auth/pubsub',
        'https://www.googleapis.com/auth/service.management.readonly',
        'https://www.googleapis.com/auth/servicecontrol',
        'https://www.googleapis.com/auth/trace.append',
        'https://www.googleapis.com/auth/monitoring',
        'https://www.googleapis.com/auth/cloud-platform',
        'https://www.googleapis.com/auth/devstorage.full_control',
        'https://www.googleapis.com/auth/sqlservice.admin',
        'https://www.googleapis.com/auth/userinfo.email',
      ],
    },
    management: {
      autoRepair: true,
      autoUpgrade: true,
    },
  },
  { parent: this, dependsOn: [this.instance] },
)
  gcp:container:NodePool (production):
    error: gcp:container/nodePool:NodePool resource 'production' has a problem: : invalid or unknown key: location

@jen20 (Contributor) commented Apr 3, 2019

I can't reproduce either of these using the latest plugins. I suspect what is happening is that you have both the latest and an older version of the plugin installed, and the older one is getting selected. This manifests as a runtime error because the TypeScript SDK is compiled against the newer schema, while the older plugin does not recognize the new keys. In other words, it would be resolved by pulumi/pulumi#2389.

To work around this in the short term, I think all components throughout the entire stack need to be using the latest version of the GCP provider SDK.
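
For illustration, a hypothetical package.json excerpt (the exact version is an assumption; the point is that every package in the stack should resolve to the same, current @pulumi/gcp release):

"dependencies": {
  "@pulumi/gcp": "0.18.2",
  "@pulumi/pulumi": "^0.17.4"
}

yarn why @pulumi/gcp (or npm ls @pulumi/gcp) shows which versions actually end up in node_modules.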

@rosskevin (Author) commented Apr 3, 2019

Confirmed the cause is pulumi/pulumi#2389, thank you.

From Slack:

  • check plugins: pulumi plugin ls (many old versions present)
  • rm -rf node_modules
  • pulumi plugin rm --all
  • check plugins again with pulumi plugin ls; it should now show only the latest

@rosskevin (Author) commented Apr 3, 2019

Reopening because I am unable to work around the plugin versioning issue.

After a rebuild with a fresh yarn install, the plugins are confirmed good:

~/p/advisorIntake ❯❯❯ pulumi plugin ls
NAME        KIND      VERSION  SIZE   INSTALLED       LAST USED
gcp         resource  0.18.2   64 MB  54 seconds ago  now
kubernetes  resource  0.22.0   45 MB  54 seconds ago  now
random      resource  0.5.1    27 MB  54 seconds ago  now

Then I ran pulumi up on a stack with the following dependencies:

"@pulumi/gcp": "^0.18.2",
"@pulumi/kubernetes": "^0.22.0",
"@pulumi/pulumi": "^0.17.4",
"@pulumi/random": "^0.5.1"

I then noticed that pulumi up installed the older gcp@0.18.0 plugin:

pulumi plugin ls
NAME        KIND      VERSION  SIZE   INSTALLED       LAST USED
gcp         resource  0.18.2   64 MB  1 minute ago    now
gcp         resource  0.18.0   64 MB  13 seconds ago  7 seconds ago
kubernetes  resource  0.22.0   45 MB  1 minute ago    now
random      resource  0.5.1    27 MB  1 minute ago    now

With the result:

error: gcp:container/cluster:Cluster resource 'production' has a problem: : invalid or unknown key: location

What is bizarre is that this originally worked for a few hours, then stopped. Now I cannot use the regional cluster code even after clearing plugins and rebuilding node_modules.

@lukehoban (Member) commented

@rosskevin Could you share what is in the manifest header of the results of pulumi stack export? Does it include references to 0.18.0 of the GCP provider? If so, I believe this is caused by pulumi/pulumi#2576.

@rosskevin (Author) commented

Yes, it does. The stack was up previously; I've just been removing and re-adding the cluster.

        "manifest": {
            "time": "2019-04-03T15:23:13.414846-05:00",
            "magic": "4c9e3af950abf0bb06246576f7dddc0b09f84e5b925a0be3d28d7b943f3cfe41",
            "version": "v0.17.4",
            "plugins": [
                {
                    "name": "nodejs",
                    "path": "/usr/local/bin/pulumi-language-nodejs",
                    "type": "language",
                    "version": "0.17.4"
                },
                {
                    "name": "gcp",
                    "path": "/Users/kross/.pulumi/plugins/resource-gcp-v0.18.0/pulumi-resource-gcp",
                    "type": "resource",
                    "version": "0.18.0"
                }
            ]
        },

I'll try a full destroy and see if that allows me to work around it for the moment.

@lukehoban (Member) commented

Okay - sounds like this is directly caused by pulumi/pulumi#2576 then.

@rosskevin (Author) commented

I hand-edited the manifest and imported it; I'll see if that works.

@rosskevin (Author) commented Apr 3, 2019

Hand edit + import, then pulumi up made the plugin come back. I'll delete the stack and start over with 0.18.2.

@swgillespie (Contributor) commented

> Hand edit + import, then pulumi up made the plugin come back. I'll delete the stack and start over with 0.18.2.

Can you provide some more details on this? In particular, if you run pulumi preview --logtostderr -v 7 2> err.txt, err.txt should contain logs that indicate exactly why the engine needed a particular plugin. If you could do that and provide the logs, that would be super helpful for understanding why the plugin comes back.

@rosskevin (Author) commented

I already deleted the stack. It is clear that hand editing the manifest + importing had no impact: the next pulumi up had the manifest back at the older 0.18.0.

@swgillespie (Contributor) commented

> It is clear that hand editing the manifest + importing had no impact: the next pulumi up had the manifest back at the older 0.18.0.

Right - the log gathering is to understand why that happened. I'll try to work on a repro, but what is supposed to happen is that the plugins required by the language host (i.e. the ones in your package.json) override everything else for the purposes of new resources.

@casey-robertson commented

I think I'm having the same or a similar issue as @rosskevin. I was getting messages about region being deprecated, so I went through our cluster code and updated references from region to the new location. Running preview against 3 different stacks gives me 3 different outcomes: one shows no changes; another shows a 'missing key' error like @rosskevin's; yet another wants to replace a node pool and shows the change as deleting the region attribute and adding location.
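
For reference, the region-to-location rename described above looks roughly like this (a minimal sketch; the resource name and values are reused from elsewhere in this thread, not taken from these stacks):

import * as gcp from '@pulumi/gcp'

// Older provider versions: the cluster's home was set via the (now deprecated)
// region/zone properties.
// const cluster = new gcp.container.Cluster('alpha', {
//   region: 'us-central1',
//   initialNodeCount: 1,
// })

// Newer provider versions: a single location property takes either a region
// (regional cluster) or a zone (zonal cluster).
const cluster = new gcp.container.Cluster('alpha', {
  location: 'us-central1',
  initialNodeCount: 1,
})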

@casey-robertson commented

Current state of my stack:

 pulumi preview
Previewing update (MINDBODY-Platform/alpha):

     Type                             Name            Plan     Info
     pulumi:pulumi:Stack              viserion-alpha
 >   ├─ pulumi:pulumi:StackReference  identityStack   read
     └─ gcp:container:Cluster         alpha                    1 error

Diagnostics:
  gcp:container:Cluster (alpha):
    error: gcp:container/cluster:Cluster resource 'alpha' has a problem: : invalid or unknown key: location

After reviewing the stack file, there is a reference to gcp provider 0.16.6, so I removed it (why not) and imported the file. Then it does this:

Previewing update (MINDBODY-Platform/alpha):

     Type                             Name                      Plan        Info
     pulumi:pulumi:Stack              viserion-alpha
 >   ├─ pulumi:pulumi:StackReference  identityStack             read
 +-  ├─ gcp:container:Cluster         alpha                     replace     [diff: +enableBinaryAuthorization,enableTpu,location-region~addonsConfig,ipAllocationPolicy,masterAuthorizedNetworksConfig,networkPolicy,privateClusterCon
 +-  ├─ gcp:container:NodePool        k8s-node-pool-private     replace     [diff: +location-region~autoscaling,management,name,nodeConfig]
 +-  └─ gcp:container:NodePool        k8s-node-pool-monitoring  replace     [diff: +location-region~autoscaling,management,name,nodeConfig]

Resources:
    +-3 to replace
    26 unchanged

I changed it back, but this is making migrating the provider untenable.

@rosskevin (Author) commented

I was looking for an old issue I created, bumped into this one, and saw it was still open. I'm no longer having problems with regional clusters, so I'm going to close this for now until I bump into it again. pulumi/pulumi#2576 is resolved, so perhaps that was indeed the cause.
