
Add Deployment informer to replace direct API calls #68

Merged
bdehamer merged 2 commits into main from bdehamer/deployment-informer
Mar 23, 2026
Conversation

@bdehamer
Contributor

What

Adds a Deployment informer alongside the existing Pod informer so that deploymentExists() checks a local in-memory cache instead of making a live GET call to the Kubernetes API server.

Why

Every pod delete event triggers a clientset.AppsV1().Deployments().Get() call to determine if the parent deployment still exists (to distinguish scale-downs from true decommissions). When a deployment with N pods is deleted, this results in N synchronous API server round-trips checking the same deployment.
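The shift described above can be sketched in miniature. This is a hedged illustration, not the PR's actual code: it models client-go's shared-informer cache with a plain mutex-guarded map, and the names (`deploymentCache`, the event-handler methods) are hypothetical stand-ins for the real informer/lister machinery.

```go
package main

import (
	"fmt"
	"sync"
)

// deploymentCache models the informer's in-memory store: a single WATCH
// stream keeps it current, so existence checks never hit the API server.
type deploymentCache struct {
	mu    sync.RWMutex
	items map[string]bool // key: "namespace/name"
}

func newDeploymentCache() *deploymentCache {
	return &deploymentCache{items: make(map[string]bool)}
}

// onAdd and onDelete stand in for the informer's event handlers, which
// client-go invokes as WATCH events arrive from the API server.
func (c *deploymentCache) onAdd(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = true
}

func (c *deploymentCache) onDelete(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items, key)
}

// deploymentExists is the lister-style read: a pure in-memory lookup,
// replacing one synchronous GET per pod-delete event.
func (c *deploymentCache) deploymentExists(key string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.items[key]
}

func main() {
	cache := newDeploymentCache()
	cache.onAdd("default/web") // WATCH event: deployment created
	fmt.Println(cache.deploymentExists("default/web"))
	cache.onDelete("default/web") // WATCH event: deployment deleted
	fmt.Println(cache.deploymentExists("default/web"))
}
```

With the real `SharedInformerFactory`, N pod deletions for the same deployment become N cache reads against one shared WATCH stream instead of N API round-trips.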

Changes

  • internal/controller/controller.go — Added deploymentInformer and deploymentLister fields to Controller. The deployment informer is created from the same SharedInformerFactory used for the pod informer. deploymentExists() now reads from the lister cache (in-memory) instead of calling the API server.
  • internal/controller/controller_integration_test.go — Updated setup() to wait for both informer caches to sync.
  • deploy/manifest.yaml — Added list and watch verbs to the deployments RBAC rule.
  • deploy/charts/deployment-tracker/templates/clusterrole.yaml — Same RBAC update in the Helm chart.
  • README.md — Updated the RBAC permissions table.
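The RBAC change in the manifest and Helm chart likely takes this shape (the rule below is an assumption for illustration; only the added `list` and `watch` verbs are stated by the PR). Informers need `list` for the initial cache fill and `watch` for ongoing updates:

```yaml
# ClusterRole rule for deployments: list/watch added for the informer,
# alongside the pre-existing get verb.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
```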

Tradeoffs

|                   | Before (API GET)        | After (Informer Cache)                                          |
| ----------------- | ----------------------- | --------------------------------------------------------------- |
| Latency per check | Network round-trip      | In-memory (~ns)                                                 |
| API server load   | 1 GET per pod delete    | Single shared WATCH stream                                      |
| Memory            | None                    | Caches Deployment objects (small relative to already-cached Pods) |
| RBAC              | `get` on deployments    | `get`, `list`, `watch` on deployments                           |
| Staleness         | None                    | Negligible (WATCH delivers events within ms)                    |

Resolves https://github.com/github/package-security/issues/4179

Signed-off-by: Brian DeHamer <bdehamer@github.com>
Copilot AI review requested due to automatic review settings March 23, 2026 20:30
@bdehamer bdehamer requested a review from a team as a code owner March 23, 2026 20:30
Copilot AI left a comment

Pull request overview

This PR adds a Deployment informer so deploymentExists() can check a local informer cache rather than issuing synchronous Kubernetes API GET calls for each pod delete event, reducing API server load during Deployment deletions/scale-downs.

Changes:

  • Add a Deployment informer + lister to the controller and switch deploymentExists() to use the lister cache.
  • Update integration test setup to wait for both pod and deployment informer caches to sync.
  • Expand RBAC for deployments to include list/watch (manifests, Helm chart, and README table).

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.

| File | Description |
| ---- | ----------- |
| `internal/controller/controller.go` | Adds deployment informer/lister, starts and syncs both informers, and switches existence checks to the lister cache. |
| `internal/controller/controller_integration_test.go` | Waits for both pod and deployment informer caches to sync in test setup. |
| `deploy/manifest.yaml` | Updates deployments RBAC verbs to include list/watch. |
| `deploy/charts/deployment-tracker/templates/clusterrole.yaml` | Mirrors the RBAC update in the Helm chart ClusterRole. |
| `README.md` | Updates RBAC permissions table to include deployments list/watch (and replicasets get). |


Signed-off-by: Brian DeHamer <bdehamer@github.com>
@bdehamer bdehamer merged commit c6bb7f8 into main Mar 23, 2026
7 checks passed
@bdehamer bdehamer deleted the bdehamer/deployment-informer branch March 23, 2026 22:01
