This directory contains a number of examples showcasing various capabilities of the `kube` crates.
All examples can be executed with:

```sh
cargo run --example $name
```

All examples enable logging via `RUST_LOG`. To enable deeper logging of the `kube` crates you can do:

```sh
RUST_LOG=info,kube=debug cargo run --example $name
```
For a basic overview of how to use the `Api` try:

```sh
cargo run --example job_api
cargo run --example pod_api
cargo run --example dynamic_api
cargo run --example dynamic_jsonpath
cargo run --example log_stream -- kafka-manager-7d4f4bd8dc-f6c44
```
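For a flavour of what these cover, a typed `Api` call boils down to something like the following minimal sketch (assuming `tokio` and `anyhow` as in the examples' own dependencies, and a hypothetical pod named `blog` in the `default` namespace):

```rust
use k8s_openapi::api::core::v1::Pod;
use kube::{Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Infer config from the environment (kubeconfig or in-cluster)
    let client = Client::try_default().await?;
    // A typed, namespaced handle to the Pod resource
    let pods: Api<Pod> = Api::namespaced(client, "default");
    // NOTE: "blog" is a hypothetical pod name for illustration
    let p = pods.get("blog").await?;
    println!("found pod {}", p.metadata.name.unwrap_or_default());
    Ok(())
}
```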
The `kubectl` light example supports `get`, `watch`, `edit`, `delete`, and `apply` on arbitrary resources:

```sh
cargo run --example kubectl -- get nodes
cargo run --example kubectl -- get pods -lapp.kubernetes.io/name=prometheus -n monitoring
cargo run --example kubectl -- watch pods --all
cargo run --example kubectl -- edit pod metrics-server-86cbb8457f-8fct5
cargo run --example kubectl -- delete pod metrics-server-86cbb8457f-8fct5
cargo run --example kubectl -- apply -f configmapgen_controller_crd.yaml
```

Supported flags are `-lLABELSELECTOR`, `-nNAMESPACE`, `--all`, and `-oyaml`.
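Handling arbitrary resources like this goes through `kube`'s discovery module and `DynamicObject`. A rough sketch (the `pods` plural filter is just an illustrative input):

```rust
use kube::{
    api::{Api, DynamicObject, ListParams},
    discovery::{Discovery, Scope},
    Client, ResourceExt,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    // Discover every api-group and resource the cluster serves
    let discovery = Discovery::new(client.clone()).run().await?;
    for group in discovery.groups() {
        for (ar, caps) in group.recommended_resources() {
            if ar.plural != "pods" {
                continue;
            }
            // Build an untyped Api from the discovered ApiResource
            let api: Api<DynamicObject> = if caps.scope == Scope::Cluster {
                Api::all_with(client.clone(), &ar)
            } else {
                Api::default_namespaced_with(client.clone(), &ar)
            };
            for item in api.list(&ListParams::default()).await? {
                println!("{}", item.name_any());
            }
        }
    }
    Ok(())
}
```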
There are also two other examples that serve as simplistic analogues of `kubectl logs` and `kubectl events`:

```sh
# tail logs
cargo run --example log_stream -- prometheus-promstack-kube-prometheus-prometheus-0 -c prometheus -f --since=3600
# get events for an object
cargo run --example event_watcher -- --for=Pod/prometheus-promstack-kube-prometheus-prometheus-0
```
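Log tailing boils down to the `log_stream` subresource call on a pod `Api`. A minimal sketch (pod and container names are placeholders, and this assumes a recent `kube` where `log_stream` yields an `AsyncBufRead`):

```rust
use futures::{AsyncBufReadExt, TryStreamExt};
use k8s_openapi::api::core::v1::Pod;
use kube::{api::LogParams, Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::default_namespaced(client);
    let lp = LogParams {
        follow: true,                          // equivalent of -f
        container: Some("prometheus".into()),  // equivalent of -c
        since_seconds: Some(3600),             // equivalent of --since=3600
        ..LogParams::default()
    };
    // NOTE: the pod name is a placeholder
    let mut lines = pods.log_stream("prometheus-0", &lp).await?.lines();
    while let Some(line) = lines.try_next().await? {
        println!("{line}");
    }
    Ok(())
}
```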
Admission controllers are a bit of a special beast. They don't actually need `kube_client` (unless you need to verify something with the api-server) or `kube_runtime` (unless you also build a complementing reconciler) because, by themselves, they simply get changes sent to them over https. You will need a webserver, certificates, and either your controller deployed behind a `Service`, or, as we do here, running locally with a private IP that your `k3d` cluster can reach:

```sh
export ADMISSION_PRIVATE_IP=192.168.1.163
./admission_setup.sh
cargo run --example admission_controller &
kubectl apply -f admission_ok.yaml # should succeed and add a label
kubectl apply -f admission_reject.yaml # should fail
```
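Webserver aside, the `kube` surface of a webhook is small: deserialize an `AdmissionReview`, build an `AdmissionResponse`, and wrap it back into a review. A rough sketch of a handler core (assuming the `admission` feature of `kube-core`; the label policy is hypothetical):

```rust
use kube::core::{
    admission::{AdmissionRequest, AdmissionResponse, AdmissionReview},
    DynamicObject,
};

// Turn an incoming review body into an outgoing review response
fn handle(review: AdmissionReview<DynamicObject>) -> AdmissionReview<DynamicObject> {
    let req: AdmissionRequest<DynamicObject> = match review.try_into() {
        Ok(req) => req,
        Err(e) => return AdmissionResponse::invalid(e.to_string()).into_review(),
    };
    // Default response allows the operation
    let mut res = AdmissionResponse::from(&req);
    if let Some(obj) = &req.object {
        // Hypothetical policy: reject anything labelled illegal=true
        let labels = obj.metadata.labels.clone().unwrap_or_default();
        if labels.get("illegal").map(String::as_str) == Some("true") {
            res = res.deny("illegal labels are not allowed");
        }
    }
    res.into_review()
}
```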
The following examples show how deriving `CustomResource` works in practice, and how it interacts with the `schemars` dependency:

```sh
cargo run --example crd_api
cargo run --example crd_derive
cargo run --example crd_derive_schema
cargo run --example crd_derive_no_schema --no-default-features --features=openssl-tls,latest
cargo run --example cert_check # showcases partial typing with Resource derive
```
The `no_schema` one opts out of the default `schema` feature from `kube-derive` (and thus the need for you to derive/impl `JsonSchema`).

However: without the `schema` feature, it's left up to you to fill in a valid openapi v3 schema, as schemas are required for `v1::CustomResourceDefinitions`, and the generated crd will be rejected by the apiserver if it's missing. As the last example shows, you can do this directly without `schemars`.
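For reference, the derive itself looks roughly like this minimal sketch (with a made-up `Foo` kind; `serde_yaml` is only used to print the generated manifest):

```rust
use kube::{CustomResource, CustomResourceExt};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Generates a `Foo` kind wrapping this spec, plus `Foo::crd()`
// for the schema-carrying CustomResourceDefinition.
#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
#[kube(group = "clux.dev", version = "v1", kind = "Foo", namespaced)]
pub struct FooSpec {
    pub name: String,
    pub info: Option<String>,
}

fn main() {
    // Print the generated CRD; apply it before using Api<Foo>
    println!("{}", serde_yaml::to_string(&Foo::crd()).unwrap());
}
```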
Note that these examples also contain tests for CI, and are invoked with the same parameters, but using `cargo test` rather than `cargo run`.
These examples watch a single resource and do some basic filtering on the watch event stream:

```sh
# watch unready pods in the current namespace
cargo run --example pod_watcher
# watch all events
cargo run --example event_watcher
# watch deployments, configmaps, secrets in the current namespace
cargo run --example multi_watcher
# watch broken nodes and cross reference with events api
cargo run --example node_watcher
# watch arbitrary, untyped objects across all namespaces
cargo run --example dynamic_watcher
# watch arbitrary, typed config map objects, with error toleration
cargo run --example errorbounded_configmap_watcher
```
The `node_` and `pod_` watchers also allow using Kubernetes 1.27 streaming lists via `WATCHLIST=1`:

```sh
WATCHLIST=1 RUST_LOG=info,kube=debug cargo run --example pod_watcher
```
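The core pattern in all of these is `kube_runtime`'s `watcher` combined with `WatchStreamExt`. A minimal sketch (the streaming-list configuration is shown as a comment, since it needs a sufficiently recent cluster):

```rust
use futures::{pin_mut, TryStreamExt};
use k8s_openapi::api::core::v1::Pod;
use kube::{
    runtime::{watcher, WatchStreamExt},
    Api, Client, ResourceExt,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::default_namespaced(client);
    let cfg = watcher::Config::default();
    // For Kubernetes >= 1.27 streaming lists, something like:
    // let cfg = watcher::Config::default().streaming_lists();
    let stream = watcher(pods, cfg).applied_objects();
    pin_mut!(stream);
    while let Some(pod) = stream.try_next().await? {
        println!("saw {}", pod.name_any());
    }
    Ok(())
}
```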
The main controller example requires you to create the custom resource first:

```sh
kubectl apply -f configmapgen_controller_crd.yaml
cargo run --example configmapgen_controller &
kubectl apply -f configmapgen_controller_object.yaml
```
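The controller examples are built around `kube_runtime`'s `Controller`. Its skeleton looks roughly like this sketch (with a hypothetical reconciler over `ConfigMap`s; the real examples reconcile a custom resource):

```rust
use std::{sync::Arc, time::Duration};
use futures::StreamExt;
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{
    runtime::{controller::Action, watcher, Controller},
    Api, Client, ResourceExt,
};

#[derive(thiserror::Error, Debug)]
#[error("reconcile failed")]
struct Error;

// Called for every changed object (and on requeues)
async fn reconcile(cm: Arc<ConfigMap>, _ctx: Arc<()>) -> Result<Action, Error> {
    println!("reconciling {}", cm.name_any());
    Ok(Action::requeue(Duration::from_secs(300)))
}

// Called when reconcile returns an Err
fn error_policy(_obj: Arc<ConfigMap>, _err: &Error, _ctx: Arc<()>) -> Action {
    Action::requeue(Duration::from_secs(5))
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let cms: Api<ConfigMap> = Api::default_namespaced(client);
    Controller::new(cms, watcher::Config::default())
        .run(reconcile, error_policy, Arc::new(()))
        .for_each(|res| async move {
            if let Err(e) = res {
                eprintln!("reconcile error: {e:?}");
            }
        })
        .await;
    Ok(())
}
```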
The finalizer example reconciles a labelled subset of configmaps:

```sh
cargo run --example secret_syncer
kubectl apply -f secret_syncer_configmap.yaml
kubectl delete -f secret_syncer_configmap.yaml
```

The finalizer is resilient against controller downtime (try stopping the controller before deleting).
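That resilience comes from `kube_runtime`'s `finalizer` helper, which splits the reconciler into apply and cleanup paths and only removes the finalizer once cleanup succeeds. A rough sketch of its use inside a reconcile function (the finalizer name and print statements are illustrative):

```rust
use std::{sync::Arc, time::Duration};
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{
    runtime::{
        controller::Action,
        finalizer::{finalizer, Error as FinalizerError, Event},
    },
    Api, ResourceExt,
};

async fn reconcile(
    cm: Arc<ConfigMap>,
    api: Api<ConfigMap>,
) -> Result<Action, FinalizerError<kube::Error>> {
    // NOTE: "example.io/cleanup" is a made-up finalizer name
    finalizer(&api, "example.io/cleanup", cm, |event| async {
        match event {
            // Object exists or changed: converge towards desired state
            Event::Apply(cm) => {
                println!("apply {}", cm.name_any());
                Ok::<_, kube::Error>(Action::requeue(Duration::from_secs(300)))
            }
            // Object is being deleted: release external state first;
            // the helper then removes the finalizer so deletion completes
            Event::Cleanup(cm) => {
                println!("cleanup {}", cm.name_any());
                Ok(Action::await_change())
            }
        }
    })
    .await
}
```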
These examples watch resources and additionally log from the reflector's queryable store:

```sh
# Watch namespaced pods and print the current pod count every event
cargo run --example pod_reflector
# Watch nodes for applied events and current active nodes
cargo run --example node_reflector
# Watch namespaced secrets for applied events and print secret keys in a task
cargo run --example secret_reflector
# Watch namespaced foo crs for applied events and print store info in task
cargo run --example crd_reflector
```
The `crd_reflector` will just await changes. You can run `kubectl apply -f crd-baz.yaml`, or `kubectl delete -f crd-baz.yaml -n default`, or `kubectl edit foos baz -n default` to verify that the events are being picked up.
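A reflector is a watcher that additionally maintains an in-memory `Store`. A minimal sketch (the store is read from the same task here for brevity; the examples typically read it from a separate task):

```rust
use futures::{pin_mut, TryStreamExt};
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{
    runtime::{reflector, watcher, WatchStreamExt},
    Api, Client, ResourceExt,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let cms: Api<ConfigMap> = Api::default_namespaced(client);
    let (reader, writer) = reflector::store::<ConfigMap>();
    // The reflector pipes watch events into the store as they arrive
    let stream = reflector(writer, watcher(cms, watcher::Config::default()))
        .applied_objects();
    pin_mut!(stream);
    while let Some(cm) = stream.try_next().await? {
        // The store is queryable while the stream is being driven
        println!("saw {} (store size: {})", cm.name_any(), reader.state().len());
    }
    Ok(())
}
```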
Disable default features and enable `openssl-tls`:

```sh
cargo run --example pod_watcher --no-default-features --features=openssl-tls,latest,runtime
```