I'm trying to build and run Pyrra against a local Kubernetes cluster, but I get the following output when I run either
./pyrra kubernetes --disable-webhooks
or
./pyrra kubernetes
➜ pyrra git:(main) ✗ ./pyrra kubernetes --disable-webhooks
level=info ts=2023-08-13T20:38:41.415725Z caller=main.go:130 msg="using Prometheus" url=http://localhost:9090
2023-08-13T13:38:41-07:00 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8080"}
2023-08-13T13:38:41-07:00 INFO controller-runtime.builder skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called {"GVK": "pyrra.dev/v1alpha1, Kind=ServiceLevelObjective"}
2023-08-13T13:38:41-07:00 INFO controller-runtime.builder Registering a validating webhook {"GVK": "pyrra.dev/v1alpha1, Kind=ServiceLevelObjective", "path": "/validate-pyrra-dev-v1alpha1-servicelevelobjective"}
2023-08-13T13:38:41-07:00 INFO controller-runtime.webhook Registering webhook {"path": "/validate-pyrra-dev-v1alpha1-servicelevelobjective"}
2023-08-13T13:38:41-07:00 INFO setup starting manager
2023-08-13T13:38:41-07:00 INFO controller-runtime.webhook.webhooks Starting webhook server
2023-08-13T13:38:41-07:00 INFO starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
2023-08-13T13:38:41-07:00 INFO Stopping and waiting for non leader election runnables
2023-08-13T13:38:41-07:00 INFO shutting down server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
2023-08-13T13:38:41-07:00 INFO Stopping and waiting for leader election runnables
2023-08-13T13:38:41-07:00 INFO Starting EventSource {"controller": "servicelevelobjective", "controllerGroup": "pyrra.dev", "controllerKind": "ServiceLevelObjective", "source": "kind source: *v1alpha1.ServiceLevelObjective"}
2023-08-13T13:38:41-07:00 INFO Starting Controller {"controller": "servicelevelobjective", "controllerGroup": "pyrra.dev", "controllerKind": "ServiceLevelObjective"}
2023-08-13T13:38:41-07:00 INFO Starting workers {"controller": "servicelevelobjective", "controllerGroup": "pyrra.dev", "controllerKind": "ServiceLevelObjective", "worker count": 1}
2023-08-13T13:38:41-07:00 INFO Shutdown signal received, waiting for all workers to finish {"controller": "servicelevelobjective", "controllerGroup": "pyrra.dev", "controllerKind": "ServiceLevelObjective"}
2023-08-13T13:38:41-07:00 INFO All workers finished {"controller": "servicelevelobjective", "controllerGroup": "pyrra.dev", "controllerKind": "ServiceLevelObjective"}
2023-08-13T13:38:41-07:00 INFO Stopping and waiting for caches
2023-08-13T13:38:41-07:00 ERROR controller-runtime.source.EventHandler failed to get informer from cache {"error": "Timeout: failed waiting for *v1alpha1.ServiceLevelObjective Informer to sync"}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1
/Users/michael.zupan/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/source/kind.go:68
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
/Users/michael.zupan/go/pkg/mod/k8s.io/apimachinery@v0.27.4/pkg/util/wait/loop.go:49
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
/Users/michael.zupan/go/pkg/mod/k8s.io/apimachinery@v0.27.4/pkg/util/wait/loop.go:50
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
/Users/michael.zupan/go/pkg/mod/k8s.io/apimachinery@v0.27.4/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1
/Users/michael.zupan/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/source/kind.go:56
2023-08-13T13:38:41-07:00 INFO Stopping and waiting for webhooks
2023-08-13T13:38:41-07:00 INFO Wait completed, proceeding to shutdown the manager
2023-08-13T13:38:41-07:00 ERROR setup failed to run groups {"error": "open /var/folders/2c/4xz5vx451zj8x15_7w62x4qr0000gq/T/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
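Note that the final error shows the manager still tried to start the webhook server and looked for serving certificates, even with --disable-webhooks passed. As a workaround sketch (not a confirmed fix — it assumes controller-runtime's default behavior of reading certs from k8s-webhook-server/serving-certs under the temp directory, which matches the path in the error above), generating a self-signed pair at that location should at least let the process start:

```shell
# Hedged workaround sketch: place a self-signed cert where controller-runtime's
# webhook server looks by default ($TMPDIR/k8s-webhook-server/serving-certs,
# per the "no such file or directory" path in the error above).
CERT_DIR="${TMPDIR:-/tmp}/k8s-webhook-server/serving-certs"
mkdir -p "$CERT_DIR"

# Generate a throwaway self-signed key/cert pair valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CERT_DIR/tls.key" \
  -out "$CERT_DIR/tls.crt" \
  -days 365 -subj "/CN=localhost"
```

This only silences the startup error; the API server still has no route to a webhook served from a local machine, so admission validation would not actually work without additional setup.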
I did install the CRD in my local cluster, and my kube context points to that cluster:
➜ pyrra git:(main) ✗ k cluster-info
Kubernetes control plane is running at https://127.0.0.1:56562
CoreDNS is running at https://127.0.0.1:56562/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
➜ pyrra git:(main) ✗ k explain servicelevelobjectives
GROUP: pyrra.dev
KIND: ServiceLevelObjective
VERSION: v1alpha1
DESCRIPTION:
ServiceLevelObjective is the Schema for the ServiceLevelObjectives API.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
ServiceLevelObjectiveSpec defines the desired state of
ServiceLevelObjective.
status <Object>
ServiceLevelObjectiveStatus defines the observed state of
ServiceLevelObjective.
Thanks for reporting. I'll check it out on my machine. Usually I either run the binaries locally with the filesystem backend or run everything directly in Kubernetes, so there may be something off with running the binaries outside of Kubernetes against the cluster.