
Can't create prometheus ServiceMonitor using CustomResource #1217

Closed

affinity-build-user opened this issue Jul 23, 2020 · 4 comments

Labels: impact/panic (This bug represents a panic or unexpected crash), kind/bug (Some behavior is incorrect or out of spec), resolution/duplicate (This issue is a duplicate of another issue)

@affinity-build-user

affinity-build-user commented Jul 23, 2020

Problem description

I have the Prometheus Operator installed in my k8s cluster along with all the CRDs. I am trying to create a CustomResource object for the ServiceMonitor CRD by extending the k8s.apiextensions.CustomResource class, like the following.

import { CustomResourceOptions, Input } from '@pulumi/pulumi';
import * as k8s from '@pulumi/kubernetes';
import { meta } from '@pulumi/kubernetes/types/input';

const API_VERSION = 'monitoring.coreos.com/v1'
const KIND = 'ServiceMonitor'

export interface PrometheusServiceMonitorArgs {
  metadata?: Input<meta.v1.ObjectMeta>;
  [fields: string]: Input<any>;
}

export class PrometheusServiceMonitor extends k8s.apiextensions.CustomResource {
  constructor(name: string, args: PrometheusServiceMonitorArgs, opts?: CustomResourceOptions) {
    const inputs: k8s.apiextensions.CustomResourceArgs = {
      apiVersion: API_VERSION,
      kind: KIND,
      ...args,
    };

    super(name, inputs, opts);
  }
}

I then instantiate this object when I create a deployment for my service:

const serviceMonitorArgs: PrometheusServiceMonitorArgs = {
  metadata: backendDeployment.metadata,
  spec: {
    selector: backendDeployment.spec.selector,
    endpoints: [{ port: 'http', interval: '10s' }],
  },
};
const serviceMonitor = new PrometheusServiceMonitor(`${appName}-service-monitor`, serviceMonitorArgs);

I am getting the following error.

Errors & Logs

Diagnostics:
  kubernetes:monitoring.coreos.com:ServiceMonitor (verifier-dev-service-monitor):
    error: kubernetes:monitoring.coreos.com/v1:ServiceMonitor resource 'affinity-verifier-dev-service-monitor' has a problem: invalid use of reserved internal annotation "pulumi.com/autonamed"

There are no concrete examples of how to do something like this using the Pulumi CustomResource. Maybe something can be added to the examples or docs.

Affected product version(s)

v2.7.1

@lblackstone (Member)

invalid use of reserved internal annotation "pulumi.com/autonamed"
This error is caused by setting the pulumi.com/autonamed annotation. Perhaps you inadvertently copied that from the Prometheus Service?
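A minimal sketch of one way to avoid this, assuming the annotation was carried over by reusing another resource's metadata output: filter out Pulumi's reserved `pulumi.com/` annotations before reusing the metadata. The `stripReservedAnnotations` helper and `ObjectMetaLike` type below are hypothetical, not part of @pulumi/kubernetes.

```typescript
// Hypothetical helper: drop Pulumi-internal annotations (e.g.
// "pulumi.com/autonamed") from metadata copied from another resource's
// output before attaching it to a new resource.

interface ObjectMetaLike {
  name?: string;
  annotations?: Record<string, string>;
  [key: string]: unknown;
}

function stripReservedAnnotations(metadata: ObjectMetaLike): ObjectMetaLike {
  const annotations = Object.fromEntries(
    Object.entries(metadata.annotations ?? {}).filter(
      // Keep only annotations outside Pulumi's reserved namespace.
      ([key]) => !key.startsWith('pulumi.com/'),
    ),
  );
  return { ...metadata, annotations };
}

// Usage: metadata copied from a deployment output, with the reserved
// annotation removed and user annotations preserved.
const cleaned = stripReservedAnnotations({
  name: 'backend',
  annotations: {
    'pulumi.com/autonamed': 'true',
    'prometheus.io/scrape': 'true',
  },
});
console.log(cleaned.annotations);
```

In plain Pulumi programs it is usually simpler still to pass fresh metadata (just a name and labels) to the ServiceMonitor rather than reusing the deployment's metadata output at all, which is effectively what the author did to resolve the error.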

@affinity-build-user (Author)

That error was solved when I removed the metadata I was using from the deployment output. Now I am getting this error.

Diagnostics:
  kubernetes:monitoring.coreos.com:ServiceMonitor (affinity-verifier-dev-service-monitor):
    error: Preview failed: unable to load schema information from the API server: the server has asked for the client to provide credentials
 
  pulumi:pulumi:Stack (affinity-verifier-dev):
    error: preview failed
 
    panic: interface conversion: interface {} is resource.PropertyMap, not string
    goroutine 60 [running]:
    github.com/pulumi/pulumi/pkg/resource.PropertyValue.StringValue(...)
        /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/pkg/resource/properties.go:359
    github.com/pulumi/pulumi-kubernetes/pkg/provider.parseKubeconfigPropertyValue(0x237bf00, 0xc00030d530, 0x246afe4, 0xa, 0xc0001c2f28)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/util.go:85 +0x169
    github.com/pulumi/pulumi-kubernetes/pkg/provider.(*kubeProvider).DiffConfig(0xc00001a000, 0x26d5660, 0xc00030d4a0, 0xc00024cbd0, 0xc00001a000, 0x226b401, 0xc00008eac0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:204 +0x61b
    github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_DiffConfig_Handler.func1(0x26d5660, 0xc00030d4a0, 0x239bb00, 0xc00024cbd0, 0x23b6680, 0x32f6060, 0x26d5660, 0xc00030d4a0)
        /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/sdk/proto/go/provider.pb.go:1504 +0x86
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x26d5660, 0xc00030c1e0, 0x239bb00, 0xc00024cbd0, 0xc0004ecec0, 0xc0004ecf00, 0x0, 0x0, 0x2694b40, 0xc0002a6cb0)
        /home/travis/gopath/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20171105060200-01f8541d5372/go/otgrpc/server.go:61 +0x36e
    github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_DiffConfig_Handler(0x23f8be0, 0xc00001a000, 0x26d5660, 0xc00030c1e0, 0xc00027d6d0, 0xc00000c200, 0x26d5660, 0xc00030c1e0, 0xc000110000, 0xf98)
        /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/sdk/proto/go/provider.pb.go:1506 +0x14b
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc000216180, 0x26f1ea0, 0xc0000b4d80, 0xc000096200, 0xc0003de120, 0x32c21f8, 0x0, 0x0, 0x0)
        /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:998 +0x46a
    google.golang.org/grpc.(*Server).handleStream(0xc000216180, 0x26f1ea0, 0xc0000b4d80, 0xc000096200, 0x0)
        /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:1278 +0xd97
    google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00003c390, 0xc000216180, 0x26f1ea0, 0xc0000b4d80, 0xc000096200)
        /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:717 +0xbb
    created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:715 +0xa1

Every other resource gets created properly except for the ServiceMonitor. Do custom resources require some elevated privileges or something?

@lblackstone (Member)

I don't think this error is related to privileges. It looks like your kubeconfig is malformed: note `parseKubeconfigPropertyValue` in the stack trace and the message "the server has asked for the client to provide credentials".

Perhaps you are not setting the provider on this resource?
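The panic message ("interface {} is resource.PropertyMap, not string") suggests the provider received the kubeconfig as a structured object where a string was expected. A minimal sketch of that mismatch, assuming the kubeconfig was parsed into an object before being handed to the provider; `normalizeKubeconfig` is a hypothetical helper, not a Pulumi API:

```typescript
// Hypothetical helper illustrating the type mismatch behind the panic:
// the provider's kubeconfig setting must be a string, not a parsed object.

function normalizeKubeconfig(value: unknown): string {
  if (typeof value === 'string') {
    return value; // already in the expected form
  }
  // A structured kubeconfig object would trip the string conversion in the
  // provider, so serialize it explicitly first. (A real kubeconfig is YAML;
  // JSON is a subset of YAML, so JSON serialization is typically accepted.)
  return JSON.stringify(value);
}

// Usage: an object-valued kubeconfig becomes a string before reaching
// the provider configuration.
const asString = normalizeKubeconfig({ 'current-context': 'dev' });
console.log(typeof asString);
```

Setting an explicit `k8s.Provider` with a string kubeconfig, and passing it via the resource's `provider` option, is the direction this comment points at.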

@EvanBoyle EvanBoyle added kind/bug Some behavior is incorrect or out of spec impact/panic This bug represents a panic or unexpected crash labels Jul 3, 2023
@lblackstone lblackstone self-assigned this Jul 14, 2023
@lblackstone lblackstone added the resolution/duplicate This issue is a duplicate of another issue label Jul 14, 2023
@lblackstone (Member)

This is likely a dupe of #1032
