
fix: commitment change API integration test #592

Merged
mblos merged 8 commits into main from cr-change-api on Mar 18, 2026

Conversation

@mblos (Collaborator) commented Mar 17, 2026

  • fix hypervisor crd Tilt reference
  • make test with proper summary based on gotestsum
  • Adding integration tests for commitment change API
  • Commitment change API handles more corner cases
  • commitment config added
  • moving endpoint to "/v1/commitments/..."

@coderabbitai (bot) commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds a CommitmentUUID field to reservation types and CRD, threads it through reservation creation/patching and sync logic, refactors commitment-change processing for deterministic iteration and aggregated failure handling with configurable watch timeouts, expands tests into an integration-style suite, and integrates gotestsum into Makefile and docs.

Changes

  • Type & CRD (api/v1alpha1/reservation_types.go, helm/library/cortex/files/crds/cortex.cloud_reservations.yaml): Add CommitmentUUID/commitmentUUID to the committed reservation spec/CRD to persist the commitment association for allocations.
  • Reservation manager & state (internal/scheduling/reservations/commitments/reservation_manager.go, internal/scheduling/reservations/commitments/state.go): Initialize, patch, and persist CommitmentUUID on reservations; update logs and adjust the CommitmentState doc comment.
  • Commitment processing logic (internal/scheduling/reservations/commitments/api_change_commitments.go): Refactor to deterministic map iteration (sortedKeys), accumulate per-commitment failures, change the watch API to return failures/errors and accept a timeout/poll interval, and consolidate rollback and aggregated rejection messaging.
  • API config (internal/scheduling/reservations/commitments/api.go, internal/scheduling/reservations/commitments/config.go): Add a Config type and DefaultConfig plus a new NewAPIWithConfig constructor; wire the config into HTTPAPI and update routes.
  • Tests (internal/scheduling/reservations/commitments/api_change_commitments_test.go): Replace prior tests with a large integration-style harness and many test helpers/types covering scheduling, VM allocation, multi-project cases, rollback, timeouts, and verification utilities.
  • Build & Docs (Makefile, docs/develop.md): Integrate gotestsum: add GOTESTSUM/GOTESTSUM_VERSION, testsum and gotestsum targets (install rule); document make testsum usage and options.
  • Module / Misc (go.mod): Dependency updates to support new tests and tooling.
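The deterministic map iteration mentioned in the walkthrough comes down to a small helper that sorts the keys before ranging. A minimal sketch, assuming a generic string-keyed helper; the actual sortedKeys in api_change_commitments.go may be shaped differently:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of a string-keyed map in ascending order,
// so callers can range over map contents deterministically.
// Illustrative sketch; not the project's actual implementation.
func sortedKeys[V any](m map[string]V) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	failed := map[string]string{
		"uuid-b": "insufficient capacity",
		"uuid-a": "watch timeout",
	}
	// Output order is now stable across runs.
	for _, k := range sortedKeys(failed) {
		fmt.Printf("commitment %s: %s\n", k, failed[k])
	}
}
```

Sorting once per loop is O(n log n), which is negligible for the handful of commitments processed per request and buys reproducible logs and rejection messages.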

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,220,255,0.5)
    participant Client as Client (Change API)
    participant HTTPAPI as HTTPAPI (with Config)
    participant Scheduler as Commitment Processor
    participant K8s as Kubernetes API (Reservations)
    participant Watcher as Reservation Watcher
    end
    Client->>HTTPAPI: POST /v1/commitments/change-commitments
    HTTPAPI->>Scheduler: enqueue/process commitments (uses Config)
    Scheduler->>K8s: List/Create/Patch Reservations (set CommitmentUUID)
    K8s-->>Scheduler: return reservation objects
    Scheduler->>Watcher: watch reservations until ready (timeout/poll from Config)
    alt reservations ready
        Watcher-->>Scheduler: ready events
        Scheduler->>HTTPAPI: respond success
    else failures/timeouts
        Watcher-->>Scheduler: failure/timeout
        Scheduler->>Scheduler: accumulate failedCommitments (reasons)
        Scheduler->>K8s: best-effort rollback (delete/patch)
        Scheduler->>HTTPAPI: respond with aggregated rejection reasons
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~65 minutes

Suggested reviewers

  • auhlig
  • SoWieMarkus
  • juliusclausnitzer
  • umswmayj

Poem

🐰 I hop through maps in tidy rows,
I tag each seat where commitment shows,
I watch and wait with patient drum,
Then roll back softly when some come undone,
And hum along while gotestsum glows.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title accurately summarizes the main change: adding integration tests for the commitment change API with proper test output via gotestsum.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 80.00%, which meets the required threshold of 80.00%.
  • Description check — ✅ Passed: The pull request description directly relates to the changeset, covering gotestsum integration, integration tests, API improvements, config additions, and endpoint changes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch cr-change-api
📝 Coding Plan
  • Generate coding plan for human review comments

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/scheduling/reservations/commitments/reservation_manager.go (1)

208-235: ⚠️ Potential issue | 🟠 Major

Clear stale metadata fields when the desired state unsets them.

This patch path is still one-way: it only updates AvailabilityZone, StartTime, and EndTime when the desired value is non-empty/non-nil. If a commitment drops an AZ restriction or expiry, the old value stays on the Reservation and later syncs keep reconciling against stale metadata. Compare against the full desired state and explicitly clear fields that are no longer set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/reservation_manager.go` around
lines 208 - 235, The current patch only writes non-empty desired fields and
leaves old values in reservation.Spec, causing stale AZ/StartTime/EndTime (and
commitment) to persist; modify the update block in reservation_manager.go
(around the MergeFrom/reservation.Spec mutations) to mirror the full desired
state: if state.AvailabilityZone == "" set reservation.Spec.AvailabilityZone =
""; if state.StartTime == nil set reservation.Spec.StartTime = nil; if
state.EndTime == nil set reservation.Spec.EndTime = nil; and similarly clear
reservation.Spec.CommittedResourceReservation.CommitmentUUID when
state.CommitmentUUID == "" so the Reservation is explicitly cleared when the
commitment/unset fields are removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 704-706: processedReserv is keyed only by reservation name so
recreated reservations with the same name are skipped; change the tracking to
include the reservation's unique identifier (e.g., use reservation.UID or a
name+UID composite) instead of just name in the processedReserv map and/or clear
the entry when a reservation delete is processed; update usages in
processReservations(), the reservation delete handler, and any tests that
reference processedReserv to use the new key (e.g., switch map[string]bool to
map[string]bool where the key is uidKey(reservation) or explicitly
delete(processedReserv, oldKey) on delete) so recreated reservations are
processed anew.
- Around line 373-380: The test suite fails to assert RetryAt behavior: update
the "Knowledge not ready - clear rejection with RetryAt" test (the
ExpectedAPIResponse created by newAPIResponse) to set RetryAtPresent=true and
then modify VerifyAPIResponse() to actually check
APIResponseExpectation.RetryAtPresent (and validate presence/absence of RetryAt
accordingly); ensure the same fix is applied to the other unasserted cases that
use APIResponseExpectation (where RetryAtPresent is relevant) so the test fails
if the retry hint is dropped.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Around line 249-272: watchReservationsUntilReady can return non-empty errors
but no failedReservations, causing requireRollback=true while failedCommitments
stays empty and resp.RejectionReason is left blank; update the rollback path
that checks the (failedReservations, errors) result from
watchReservationsUntilReady so that if failedCommitments is empty but the
returned errors slice/map is non-empty you populate resp.RejectionReason with a
fallback message that summarizes those watch errors (or list reservationsToWatch
/ still-pending reservation IDs so caller can map to commitment UUIDs); do the
same change for the other similar watch handling block that follows the same
pattern.
- Around line 42-45: The readiness loop using watchTimeout and pollInterval only
removes one reservation per poll because each ready/failed branch removes an
entry from reservationsToWatch and then breaks, causing timeouts for larger
batches; modify the loop that processes reservationsToWatch (the readiness loop
in api_change_commitments.go) to scan and remove all ready or failed
reservations in a single poll iteration instead of breaking after the first
removal — e.g., iterate over reservationsToWatch fully (or build a new slice of
remaining items) and only sleep/retry after processing the entire slice; keep
watchTimeout and pollInterval semantics but eliminate the premature break so
multiple ready items are retired per poll.

---

Outside diff comments:
In `@internal/scheduling/reservations/commitments/reservation_manager.go`:
- Around line 208-235: The current patch only writes non-empty desired fields
and leaves old values in reservation.Spec, causing stale AZ/StartTime/EndTime
(and commitment) to persist; modify the update block in reservation_manager.go
(around the MergeFrom/reservation.Spec mutations) to mirror the full desired
state: if state.AvailabilityZone == "" set reservation.Spec.AvailabilityZone =
""; if state.StartTime == nil set reservation.Spec.StartTime = nil; if
state.EndTime == nil set reservation.Spec.EndTime = nil; and similarly clear
reservation.Spec.CommittedResourceReservation.CommitmentUUID when
state.CommitmentUUID == "" so the Reservation is explicitly cleared when the
commitment/unset fields are removed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: efb68466-eed7-49c3-869a-10881d9e3d7a

📥 Commits

Reviewing files that changed from the base of the PR and between 3c025df and eb6c8fe.

📒 Files selected for processing (7)
  • Makefile
  • api/v1alpha1/reservation_types.go
  • helm/library/cortex/files/crds/cortex.cloud_reservations.yaml
  • internal/scheduling/reservations/commitments/api_change_commitments.go
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go
  • internal/scheduling/reservations/commitments/reservation_manager.go
  • internal/scheduling/reservations/commitments/state.go

Comment on lines +704 to +706
	availableResources map[string]int64 // host -> available memory MB
	processedReserv    map[string]bool  // track processed reservations
}

⚠️ Potential issue | 🟠 Major

Recreated reservations are skipped by the fake controller.

processedReserv is keyed only by reservation name and never cleared on delete. Rollback and repair paths reuse deterministic slot names, so a recreated reservation is skipped in processReservations() and never receives a fresh Ready/Failed state. That makes the rollback scenarios optimistic.

Suggested fix
 func (c *FakeReservationController) OnReservationDeleted(res *v1alpha1.Reservation) {
+	delete(c.env.processedReserv, res.Name)
 	if c.env.availableResources != nil && res.Status.Host != "" {
 		memoryQuantity := res.Spec.Resources["memory"]
 		memoryBytes := memoryQuantity.Value()

Also applies to: 717-729, 1016-1022, 1036-1040

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 704 - 706, processedReserv is keyed only by reservation name so
recreated reservations with the same name are skipped; change the tracking to
include the reservation's unique identifier (e.g., use reservation.UID or a
name+UID composite) instead of just name in the processedReserv map and/or clear
the entry when a reservation delete is processed; update usages in
processReservations(), the reservation delete handler, and any tests that
reference processedReserv to use the new key (e.g., switch map[string]bool to
map[string]bool where the key is uidKey(reservation) or explicitly
delete(processedReserv, oldKey) on delete) so recreated reservations are
processed anew.
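The suggested change can be sketched as follows; uidKey, processedReserv, and the Reservation struct here are illustrative stand-ins, not the project's actual test types:

```go
package main

import "fmt"

// Minimal stand-in for the fake controller's reservation bookkeeping.
type Reservation struct {
	Name string
	UID  string
}

// uidKey builds a name+UID composite key so a recreated reservation
// (same deterministic slot name, fresh UID) is not mistaken for an
// already-processed one.
func uidKey(r Reservation) string {
	return r.Name + "/" + r.UID
}

func main() {
	processedReserv := map[string]bool{}

	first := Reservation{Name: "slot-0", UID: "uid-1"}
	processedReserv[uidKey(first)] = true

	// Rollback deletes slot-0; the delete handler also clears the entry.
	delete(processedReserv, uidKey(first))

	// The repair path recreates slot-0 with a fresh UID.
	recreated := Reservation{Name: "slot-0", UID: "uid-2"}
	fmt.Println(processedReserv[uidKey(recreated)]) // false: processed anew
}
```

Either half of the fix (UID in the key, or clearing on delete) is sufficient on its own; doing both keeps the map from accumulating stale entries across rollback cycles.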

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (2)

719-720: ⚠️ Potential issue | 🟠 Major

Recreated reservations can still be skipped after delete.

Line 1053 marks by res.Name, and Line 731-743 does not clear processedReserv on deletion. Recreated reservations with reused names can be ignored.

🔧 Minimal fix
 func (c *FakeReservationController) OnReservationDeleted(res *v1alpha1.Reservation) {
+	delete(c.env.processedReserv, res.Name)
 	if c.env.availableResources != nil && res.Status.Host != "" {
 		memoryQuantity := res.Spec.Resources["memory"]
 		memoryBytes := memoryQuantity.Value()

Also applies to: 731-743, 1030-1033, 1053-1053

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 719 - 720, The test uses processedReserv map keyed by res.Name and
never clears entries when a reservation is deleted, so if a new reservation is
recreated with the same name it will be treated as already processed; update the
cleanup logic to remove the reservation's key from processedReserv when the test
deletes a reservation (or when handling deletion events) instead of leaving
stale entries—specifically remove processedReserv[res.Name] (or use a unique
identifier instead of Name) in the code paths around the delete/recreate
sequence referenced by processedReserv and res.Name.

374-381: ⚠️ Potential issue | 🟡 Minor

RetryAtPresent is still unasserted in response verification.

Line 564 introduces RetryAtPresent, but VerifyAPIResponse() never checks it, so retry-hint regressions can still pass unnoticed.

Also applies to: 561-565, 1141-1166

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 374 - 381, The test adds RetryAtPresent to ExpectedAPIResponse but
VerifyAPIResponse() does not assert it; update VerifyAPIResponse (used by
commitments/api_change_commitments_test.go) to compare the actual API response's
RetryAtPresent vs the expected ExpectedAPIResponse.RetryAtPresent field (and add
appropriate nil/false handling) so tests like the case "Knowledge not ready -
clear rejection with RetryAt" actually validate the retry hint; also update
other affected test cases (lines around 561-565 and 1141-1166) to ensure their
ExpectedAPIResponse includes the correct RetryAtPresent expectation.
🧹 Nitpick comments (1)
Makefile (1)

32-40: Trim testsum recipe length to satisfy checkmake.

Line 32’s body currently exceeds the configured maxbodylength warning threshold. Consider extracting flags into a variable to keep lint output clean.

♻️ Example refactor
+TESTSUM_FLAGS = \
+	$(if $(WATCH),--watch) \
+	--format $(if $(FORMAT),$(FORMAT),testname) \
+	--hide-summary=all
+
 testsum: gotestsum ## Run all tests (clean output for passing, verbose for failing). Options: WATCH=1, RUN=<pattern>, PACKAGE=<pkg>, FORMAT=<fmt> (e.g., standard-verbose for all output)
-	$(GOTESTSUM) \
-		$(if $(WATCH),--watch) \
-		--format $(if $(FORMAT),$(FORMAT),testname) \
-		--hide-summary=all \
+	$(GOTESTSUM) $(TESTSUM_FLAGS) \
 		-- \
 		$(if $(VERBOSE),-v) \
 		$(if $(RUN),-run $(RUN)) \
 		$(if $(PACKAGE),$(PACKAGE),./...)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 32 - 40, The testsum recipe body is too long for
checkmake's maxbodylength; shorten it by moving the long flags/arguments into a
Makefile variable and using that variable in the testsum target. For example,
create a variable like TESTSUM_FLAGS (or TESTSUM_OPTS) that assembles --format
$(if $(FORMAT),$(FORMAT),testname) --hide-summary=all -- $(if $(VERBOSE),-v)
$(if $(RUN),-run $(RUN)) $(if $(PACKAGE),$(PACKAGE),./...) and the conditional
--watch flag, then update the testsum target to call $(GOTESTSUM)
$(TESTSUM_FLAGS) so the testsum recipe body is concise while preserving all
conditionals and references to GOTESTSUM, WATCH, FORMAT, VERBOSE, RUN and
PACKAGE.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 731-743: The test helper duplicates returning memory to
env.availableResources in FakeReservationController.OnReservationDeleted and
again in the other deletion path (the block around lines 986-1010), causing
available capacity to be double-credited; fix by consolidating the update so
only one code path adjusts env.availableResources: keep the logic in
FakeReservationController.OnReservationDeleted (which reads
res.Spec.Resources["memory"], converts to MB, and updates
env.availableResources[res.Status.Host]) and remove or guard the duplicate
update in the other function/block (the second deletion handler) so the map is
incremented exactly once per deleted reservation.
- Around line 922-937: The test has a data race between the polling goroutine
that calls env.processReservations() and the interceptor callbacks
(OnReservationDeleted) and processNewReservation which access the same maps
(processedReserv and availableResources); protect all reads/writes to these maps
by adding a sync.RWMutex on the env struct or a dedicated mapLock and acquiring
RLock/RUnlock for readers (e.g., where the test reads processedReserv) and
Lock/Unlock for writers (processNewReservation, OnReservationDeleted, and any
other mutating code paths), or alternatively funnel all mutations through a
single goroutine/channel and stop concurrent direct map access; update
env.processReservations(), processNewReservation(), OnReservationDeleted, and
any test reads of processedReserv/availableResources to use the chosen
synchronization.
- Around line 1066-1073: The helper processNewReservation currently iterates
env.availableResources (a map) to pick selectedHost for the
"First-come-first-serve" logic, which is nondeterministic; change it to select
the first host from a deterministic ordering: build a slice of host names (keys
of env.availableResources), sort it (or use an existing deterministic
env.hostOrder if available), then iterate that ordered slice and pick the first
host whose availableMB >= memoryMB and assign it to selectedHost. Ensure you
update only the selection loop (referencing selectedHost,
env.availableResources, and memoryMB) so tests are deterministic and stable.

---

Duplicate comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 719-720: The test uses processedReserv map keyed by res.Name and
never clears entries when a reservation is deleted, so if a new reservation is
recreated with the same name it will be treated as already processed; update the
cleanup logic to remove the reservation's key from processedReserv when the test
deletes a reservation (or when handling deletion events) instead of leaving
stale entries—specifically remove processedReserv[res.Name] (or use a unique
identifier instead of Name) in the code paths around the delete/recreate
sequence referenced by processedReserv and res.Name.
- Around line 374-381: The test adds RetryAtPresent to ExpectedAPIResponse but
VerifyAPIResponse() does not assert it; update VerifyAPIResponse (used by
commitments/api_change_commitments_test.go) to compare the actual API response's
RetryAtPresent vs the expected ExpectedAPIResponse.RetryAtPresent field (and add
appropriate nil/false handling) so tests like the case "Knowledge not ready -
clear rejection with RetryAt" actually validate the retry hint; also update
other affected test cases (lines around 561-565 and 1141-1166) to ensure their
ExpectedAPIResponse includes the correct RetryAtPresent expectation.

---

Nitpick comments:
In `@Makefile`:
- Around line 32-40: The testsum recipe body is too long for checkmake's
maxbodylength; shorten it by moving the long flags/arguments into a Makefile
variable and using that variable in the testsum target. For example, create a
variable like TESTSUM_FLAGS (or TESTSUM_OPTS) that assembles --format $(if
$(FORMAT),$(FORMAT),testname) --hide-summary=all -- $(if $(VERBOSE),-v) $(if
$(RUN),-run $(RUN)) $(if $(PACKAGE),$(PACKAGE),./...) and the conditional
--watch flag, then update the testsum target to call $(GOTESTSUM)
$(TESTSUM_FLAGS) so the testsum recipe body is concise while preserving all
conditionals and references to GOTESTSUM, WATCH, FORMAT, VERBOSE, RUN and
PACKAGE.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 345fa0c1-14fe-4426-ab58-8838437495bf

📥 Commits

Reviewing files that changed from the base of the PR and between eb6c8fe and f040742.

📒 Files selected for processing (3)
  • Makefile
  • docs/develop.md
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go

Comment on lines +731 to +743

func (c *FakeReservationController) OnReservationDeleted(res *v1alpha1.Reservation) {
	if c.env.availableResources != nil && res.Status.Host != "" {
		memoryQuantity := res.Spec.Resources["memory"]
		memoryBytes := memoryQuantity.Value()
		memoryMB := memoryBytes / (1024 * 1024)

		if _, exists := c.env.availableResources[res.Status.Host]; exists {
			c.env.availableResources[res.Status.Host] += memoryMB
			c.env.T.Logf("↩ Returned %d MB to %s (now %d MB available) before deleting reservation %s",
				memoryMB, res.Status.Host, c.env.availableResources[res.Status.Host], res.Name)
		}
	}
}

⚠️ Potential issue | 🟠 Major

Deletion credits host memory in two separate paths.

Line 731-743 and Line 986-1010 both return memory for deleted reservations, which can inflate available capacity and make scheduling outcomes overly optimistic.

Also applies to: 986-1010

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 731 - 743, The test helper duplicates returning memory to
env.availableResources in FakeReservationController.OnReservationDeleted and
again in the other deletion path (the block around lines 986-1010), causing
available capacity to be double-credited; fix by consolidating the update so
only one code path adjusts env.availableResources: keep the logic in
FakeReservationController.OnReservationDeleted (which reads
res.Spec.Resources["memory"], converts to MB, and updates
env.availableResources[res.Status.Host]) and remove or guard the duplicate
update in the other function/block (the second deletion handler) so the map is
incremented exactly once per deleted reservation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (3)

1118-1123: ⚠️ Potential issue | 🟠 Major

Host choice is nondeterministic due to map iteration.

The “first-come-first-serve” behavior is currently random because map iteration order is random in Go.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 1118 - 1123, The host selection loop is nondeterministic because it
iterates over the map env.availableResources; change it to iterate in a
deterministic order by extracting the map keys into a slice, sorting that slice
(e.g., alphabetically or by a stable criterion), then loop over the sorted keys
to pick the first host with availableMB >= memoryMB and assign selectedHost.
Update the loop around env.availableResources and preserve the same comparison
using memoryMB and selectedHost.

756-758: ⚠️ Potential issue | 🔴 Critical

Shared maps are accessed concurrently without synchronization.

availableResources and processedReserv are read/written from the poller goroutine and interceptor callback paths, which can trigger concurrent map writes and flaky tests.

Also applies to: 973-984

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 756 - 758, The test uses shared maps availableResources and
processedReserv that are accessed concurrently by the poller goroutine and
interceptor callbacks causing data races; protect these maps by adding
synchronization (e.g., a sync.Mutex or sync.RWMutex field on the struct that
owns availableResources/processedReserv or replace them with sync.Map) and
ensure every read/write to availableResources and processedReserv (including in
the poller goroutine, interceptor callback functions, and any helpers referenced
in the test) acquires the appropriate lock or uses the sync.Map APIs
consistently to prevent concurrent map writes.
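The locking pattern the prompt describes can be sketched with a trimmed-down stand-in for the test environment; testEnv, markProcessed, and isProcessed are illustrative names, not the suite's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// testEnv is a minimal stand-in for the integration test's shared state;
// the point here is only the locking discipline around the two maps.
type testEnv struct {
	mu                 sync.RWMutex
	availableResources map[string]int64
	processedReserv    map[string]bool
}

// markProcessed is a writer path (e.g., processNewReservation).
func (e *testEnv) markProcessed(name string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.processedReserv[name] = true
}

// isProcessed is a reader path (e.g., the poller goroutine).
func (e *testEnv) isProcessed(name string) bool {
	e.mu.RLock()
	defer e.mu.RUnlock()
	return e.processedReserv[name]
}

func main() {
	env := &testEnv{
		availableResources: map[string]int64{},
		processedReserv:    map[string]bool{},
	}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // concurrent writers, as in poller vs. interceptor callbacks
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			env.markProcessed(fmt.Sprintf("res-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(env.isProcessed("res-42"))
}
```

Running such a test under `go test -race` would flag the unsynchronized version immediately; with the mutex in place the race detector stays quiet.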

769-777: ⚠️ Potential issue | 🟠 Major

Deletion path credits host memory twice.

Memory is returned in OnReservationDeleted() and again in processReservations() deletion handling, inflating capacity and making scheduling outcomes too optimistic.

Also applies to: 1041-1059

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 769 - 777, OnReservationDeleted is returning host memory and
processReservations also credits memory on deletion, causing double-counting;
remove or guard the credit in OnReservationDeleted so only one path adjusts
c.env.availableResources. Specifically, either delete the memory-crediting block
in FakeReservationController.OnReservationDeleted (the math using
res.Spec.Resources["memory"], Value(), memoryMB and the addition to
c.env.availableResources[res.Status.Host]) or add a clear guard (e.g., check a
deletion-processed flag or reservation status) so processReservations remains
the single place that refunds capacity; ensure references to
OnReservationDeleted and processReservations are used to locate and fix the
duplicated logic.
🧹 Nitpick comments (3)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (1)

868-871: Use maps.Copy for cloning PerHost.

This loop is a manual map copy; prefer maps.Copy for clarity and consistency with project rules.

As per coding guidelines Use maps.Copy instead of iteratively copying a map.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 868 - 871, Replace the manual loop that clones resources.PerHost
into availableResources with a single maps.Copy call: create availableResources
as make(map[string]int64) and call maps.Copy(availableResources,
resources.PerHost); also add the appropriate import for the maps package
(standard library "maps" or project's approved maps package) if not already
imported. This keeps the same semantics but uses maps.Copy for clarity and
consistency.
internal/scheduling/reservations/commitments/api_change_commitments.go (1)

263-265: Make rollback rejection output deterministic.

failedCommitments is a map; iterating it directly makes RejectionReason order unstable across runs.

♻️ Deterministic message order
-			for commitmentUUID, reason := range failedCommitments {
+			for _, commitmentUUID := range sortedKeys(failedCommitments) {
+				reason := failedCommitments[commitmentUUID]
 				reasonBuilder.WriteString(fmt.Sprintf("\n- commitment %s: %s", commitmentUUID, reason))
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`
around lines 263 - 265, The RejectionReason text is non-deterministic because
failedCommitments (a map) is iterated directly; to fix, collect the map keys
from failedCommitments, sort them (e.g., sort.Strings(keys)), then iterate the
sorted keys to append lines to reasonBuilder (instead of ranging the map), and
add the necessary import for sort if not present; reference the
failedCommitments variable and reasonBuilder.WriteString usage to locate where
to change iteration order.
internal/scheduling/reservations/commitments/api.go (1)

26-30: Normalize invalid config values in NewAPIWithConfig().

Non-positive durations can cause immediate rollbacks or effectively hot polling downstream. Consider clamping to defaults at construction time.

♻️ Proposed hardening
 func NewAPIWithConfig(client client.Client, config Config) *HTTPAPI {
+	defaults := DefaultConfig()
+	if config.ChangeAPIWatchReservationsTimeout <= 0 {
+		config.ChangeAPIWatchReservationsTimeout = defaults.ChangeAPIWatchReservationsTimeout
+	}
+	if config.ChangeAPIWatchReservationsPollInterval <= 0 {
+		config.ChangeAPIWatchReservationsPollInterval = defaults.ChangeAPIWatchReservationsPollInterval
+	}
 	return &HTTPAPI{
 		client: client,
 		config: config,
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api.go` around lines 26 - 30,
NewAPIWithConfig currently assigns config as-is which can contain non-positive
durations that cause immediate rollbacks or tight polling; update
NewAPIWithConfig to normalize/clamp duration fields on the passed Config (e.g.,
PollInterval, RetryInterval, CommitTimeout or whatever duration fields exist) to
sensible defaults before storing on the returned *HTTPAPI. Implement by copying
config into a local variable, check each duration field and if <= 0 set it to
the appropriate default constant (create
DefaultPollInterval/DefaultRetryInterval/DefaultCommitTimeout if they don't
exist), then return &HTTPAPI{client: client, config: normalizedConfig}.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Around line 246-248: The loop that populates failedCommitments incorrectly
attributes every entry in failedReservations to "not sufficient capacity";
inspect each reservation failure (from failedReservations produced by
watchReservationsUntilReady) and determine the actual cause—e.g., distinguish
capacity exhaustion vs an error returned from
Get()/watchReservationsUntilReady—and set
failedCommitments[res.Spec.CommittedResourceReservation.CommitmentUUID] to a
message derived from the real failure (propagate the error string or use a
specific "insufficient capacity" marker only when capacity is known), ensuring
you reference failedReservations, failedCommitments,
res.Spec.CommittedResourceReservation.CommitmentUUID and the watcher
(watchReservationsUntilReady/Get) when making the change.
- Around line 350-366: The loop currently only returns when allChecked is true,
but when every reservation is terminal (ConditionFalse) stillWaiting is empty
while allChecked remains false, causing an extra sleep; update the post-loop
termination check in the same block that inspects readyCond/collections
(readyCond, allChecked, stillWaiting, failedReservations, errors) to also return
immediately when len(stillWaiting) == 0 by logging and returning
failedReservations, errors (or set allChecked true) so the function exits
without another poll cycle.

---

Duplicate comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 1118-1123: The host selection loop is nondeterministic because it
iterates over the map env.availableResources; change it to iterate in a
deterministic order by extracting the map keys into a slice, sorting that slice
(e.g., alphabetically or by a stable criterion), then loop over the sorted keys
to pick the first host with availableMB >= memoryMB and assign selectedHost.
Update the loop around env.availableResources and preserve the same comparison
using memoryMB and selectedHost.
- Around line 756-758: The test uses shared maps availableResources and
processedReserv that are accessed concurrently by the poller goroutine and
interceptor callbacks causing data races; protect these maps by adding
synchronization (e.g., a sync.Mutex or sync.RWMutex field on the struct that
owns availableResources/processedReserv or replace them with sync.Map) and
ensure every read/write to availableResources and processedReserv (including in
the poller goroutine, interceptor callback functions, and any helpers referenced
in the test) acquires the appropriate lock or uses the sync.Map APIs
consistently to prevent concurrent map writes.
- Around line 769-777: OnReservationDeleted is returning host memory and
processReservations also credits memory on deletion, causing double-counting;
remove or guard the credit in OnReservationDeleted so only one path adjusts
c.env.availableResources. Specifically, either delete the memory-crediting block
in FakeReservationController.OnReservationDeleted (the math using
res.Spec.Resources["memory"], Value(), memoryMB and the addition to
c.env.availableResources[res.Status.Host]) or add a clear guard (e.g., check a
deletion-processed flag or reservation status) so processReservations remains
the single place that refunds capacity; ensure references to
OnReservationDeleted and processReservations are used to locate and fix the
duplicated logic.
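The mutex-based protection suggested for `availableResources`/`processedReserv` might look like the sketch below. The `testEnv` struct and its accessor methods are assumptions about the harness, not the exact code:

```go
package main

import (
	"fmt"
	"sync"
)

// testEnv guards the shared maps that the poller goroutine and the
// fake-client interceptors both touch.
type testEnv struct {
	mu                 sync.Mutex
	availableResources map[string]int64
	processedReserv    map[string]bool
}

// credit adds capacity back to a host under the lock.
func (e *testEnv) credit(host string, mb int64) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.availableResources[host] += mb
}

// available reads a host's capacity under the lock.
func (e *testEnv) available(host string) int64 {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.availableResources[host]
}

func main() {
	env := &testEnv{
		availableResources: map[string]int64{"host-1": 0},
		processedReserv:    map[string]bool{},
	}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); env.credit("host-1", 1) }()
	}
	wg.Wait()
	fmt.Println(env.available("host-1")) // 100
}
```

Routing every read and write through lock-holding methods keeps the maps race-free under `go test -race`; a `sync.Map` would work too, but a plain mutex keeps the arithmetic on `int64` values straightforward.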

---

Nitpick comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 868-871: Replace the manual loop that clones resources.PerHost
into availableResources with a single maps.Copy call: create availableResources
as make(map[string]int64) and call maps.Copy(availableResources,
resources.PerHost); also add the appropriate import for the maps package
(standard library "maps" or project's approved maps package) if not already
imported. This keeps the same semantics but uses maps.Copy for clarity and
consistency.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Around line 263-265: The RejectionReason text is non-deterministic because
failedCommitments (a map) is iterated directly; to fix, collect the map keys
from failedCommitments, sort them (e.g., sort.Strings(keys)), then iterate the
sorted keys to append lines to reasonBuilder (instead of ranging the map), and
add the necessary import for sort if not present; reference the
failedCommitments variable and reasonBuilder.WriteString usage to locate where
to change iteration order.

In `@internal/scheduling/reservations/commitments/api.go`:
- Around line 26-30: NewAPIWithConfig currently assigns config as-is which can
contain non-positive durations that cause immediate rollbacks or tight polling;
update NewAPIWithConfig to normalize/clamp duration fields on the passed Config
(e.g., PollInterval, RetryInterval, CommitTimeout or whatever duration fields
exist) to sensible defaults before storing on the returned *HTTPAPI. Implement
by copying config into a local variable, check each duration field and if <= 0
set it to the appropriate default constant (create
DefaultPollInterval/DefaultRetryInterval/DefaultCommitTimeout if they don't
exist), then return &HTTPAPI{client: client, config: normalizedConfig}.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 28e5237c-20a5-4a5e-982a-5c13709a373e

📥 Commits

Reviewing files that changed from the base of the PR and between f040742 and d72414d.

📒 Files selected for processing (4)
  • internal/scheduling/reservations/commitments/api.go
  • internal/scheduling/reservations/commitments/api_change_commitments.go
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go
  • internal/scheduling/reservations/commitments/config.go


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (1)

771-790: ⚠️ Potential issue | 🟠 Major

Deletion may double-credit host memory.

OnReservationDeleted returns memory at lines 776-785, and processReservations also returns memory for reservations with deletion timestamps at lines 1049-1067. If a reservation has finalizers, both paths execute for the same reservation, causing available capacity to be inflated.

🔧 Suggested fix

Track whether memory was already returned, or consolidate to a single path:

 func (c *FakeReservationController) OnReservationDeleted(res *v1alpha1.Reservation) {
-	c.env.mu.Lock()
-	defer c.env.mu.Unlock()
-
-	// Return memory when Delete() is called directly (before deletion timestamp is set)
-	if c.env.availableResources != nil && res.Status.Host != "" {
-		memoryQuantity := res.Spec.Resources["memory"]
-		memoryBytes := memoryQuantity.Value()
-		memoryMB := memoryBytes / (1024 * 1024)
-
-		if _, exists := c.env.availableResources[res.Status.Host]; exists {
-			c.env.availableResources[res.Status.Host] += memoryMB
-			c.env.T.Logf("↩ Returned %d MB to %s (now %d MB available) via OnReservationDeleted for %s",
-				memoryMB, res.Status.Host, c.env.availableResources[res.Status.Host], res.Name)
-		}
-	}
-
+	c.env.mu.Lock()
 	// Clear tracking so recreated reservations with same name are processed
 	delete(c.env.processedReserv, res.Name)
+	c.env.mu.Unlock()
+	// Memory return is handled by processReservations when deletion timestamp is set
 }

This consolidates memory return to processReservations, which handles reservations after the deletion timestamp is set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 771 - 790, OnReservationDeleted currently returns memory to
env.availableResources and processReservations also returns memory when a
deletion timestamp is present, causing double-crediting for reservations with
finalizers; remove the memory-return logic from OnReservationDeleted (the block
that reads res.Spec.Resources["memory"] and updates env.availableResources) and
instead rely on processReservations to perform the single canonical return;
ensure processedReserv and any tracking for recreated reservations remain
handled by delete(c.env.processedReserv, res.Name) so only processReservations
adjusts env.availableResources when it observes
res.DeletionTimestamp/finalizers.
internal/scheduling/reservations/commitments/api_change_commitments.go (1)

245-247: ⚠️ Potential issue | 🟠 Major

Don't map all watch failures to "not sufficient capacity".

failedReservations contains reservations where Ready=False, but the failure reason could be something other than capacity exhaustion. The actual failure reason should be extracted from the reservation's condition message.

🔧 Suggested approach
 for _, res := range failedReservations {
-    failedCommitments[res.Spec.CommittedResourceReservation.CommitmentUUID] = "not sufficient capacity"
+    reason := "scheduling failed"
+    readyCond := meta.FindStatusCondition(res.Status.Conditions, v1alpha1.ReservationConditionReady)
+    if readyCond != nil && readyCond.Message != "" {
+        reason = readyCond.Message
+    }
+    failedCommitments[res.Spec.CommittedResourceReservation.CommitmentUUID] = reason
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`
around lines 245 - 247, The loop that populates failedCommitments currently
assigns the hardcoded string "not sufficient capacity" for every entry; instead,
inspect each reservation's Ready condition (e.g., check res.Status.Conditions
for Type == "Ready" and Status == "False") and use the condition's Message (or
Reason if Message empty) as the failure text when setting
failedCommitments[res.Spec.CommittedResourceReservation.CommitmentUUID]; if no
Ready condition or no message is present, fall back to a generic message like
"unknown failure". Ensure this change is applied in the same loop that iterates
failedReservations so the actual failure reason is recorded.
🧹 Nitpick comments (3)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (2)

1588-1613: Unused function.

newHypervisorWithAZ is defined but never called. If it's intended for future tests, consider adding a comment explaining its purpose; otherwise, remove it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 1588 - 1613, The helper function newHypervisorWithAZ is defined but
unused; either remove it or document/use it: if it's unnecessary, delete the
newHypervisorWithAZ function to avoid dead code; if it's intended for future
tests, add a short comment above newHypervisorWithAZ explaining its purpose and
why it remains unused or update existing tests to call newHypervisorWithAZ
(instead of newHypervisor) where an availability-zone label is needed
(referencing the newHypervisorWithAZ symbol to locate the code).

606-609: Unused type definition.

ReservationVerification is defined but never used in this file. Consider removing it to reduce dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 606 - 609, Remove the unused type ReservationVerification from the
test file to eliminate dead code: locate the type declaration "type
ReservationVerification struct { Host string; Allocations map[string]string }"
and delete it (or, if it was intended to be used, replace its removal by
actually using it in relevant tests such as those in
api_change_commitments_test.go); ensure no other symbols reference
ReservationVerification before removing to avoid compile errors.
internal/scheduling/reservations/commitments/api_change_commitments.go (1)

172-172: Use camelCase for variable names.

Go convention uses camelCase for local variables. Consider renaming to allReservations.

🔧 Suggested fix
-			var all_reservations v1alpha1.ReservationList
+			var allReservations v1alpha1.ReservationList

Also apply to related references.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go` at
line 172, The local variable all_reservations should follow Go camelCase
conventions; rename the variable to allReservations and update every
usage/reference in the surrounding function(s) (e.g., where all_reservations is
declared and later read/written) including any return values or assignments that
reference it so compilation and behavior remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 1615-1625: The createCommitment function accepts az ...string but
discards it; update it to either persist the AZ into the TestCommitment struct
and propagate it into buildRequestJSON, or remove the az parameter and update
callers (e.g., the "Reservation repair" test) accordingly; specifically, add a
field (e.g., AZ or AvailabilityZone) to TestCommitment, set it in
createCommitment (use az[0] when len(az)>0), and modify buildRequestJSON to
include that field in the generated request, or alternatively delete the az
...string parameter from createCommitment and remove the AZ argument from tests.

---

Duplicate comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 771-790: OnReservationDeleted currently returns memory to
env.availableResources and processReservations also returns memory when a
deletion timestamp is present, causing double-crediting for reservations with
finalizers; remove the memory-return logic from OnReservationDeleted (the block
that reads res.Spec.Resources["memory"] and updates env.availableResources) and
instead rely on processReservations to perform the single canonical return;
ensure processedReserv and any tracking for recreated reservations remain
handled by delete(c.env.processedReserv, res.Name) so only processReservations
adjusts env.availableResources when it observes
res.DeletionTimestamp/finalizers.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Around line 245-247: The loop that populates failedCommitments currently
assigns the hardcoded string "not sufficient capacity" for every entry; instead,
inspect each reservation's Ready condition (e.g., check res.Status.Conditions
for Type == "Ready" and Status == "False") and use the condition's Message (or
Reason if Message empty) as the failure text when setting
failedCommitments[res.Spec.CommittedResourceReservation.CommitmentUUID]; if no
Ready condition or no message is present, fall back to a generic message like
"unknown failure". Ensure this change is applied in the same loop that iterates
failedReservations so the actual failure reason is recorded.

---

Nitpick comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 1588-1613: The helper function newHypervisorWithAZ is defined but
unused; either remove it or document/use it: if it's unnecessary, delete the
newHypervisorWithAZ function to avoid dead code; if it's intended for future
tests, add a short comment above newHypervisorWithAZ explaining its purpose and
why it remains unused or update existing tests to call newHypervisorWithAZ
(instead of newHypervisor) where an availability-zone label is needed
(referencing the newHypervisorWithAZ symbol to locate the code).
- Around line 606-609: Remove the unused type ReservationVerification from the
test file to eliminate dead code: locate the type declaration "type
ReservationVerification struct { Host string; Allocations map[string]string }"
and delete it (or, if it was intended to be used, replace its removal by
actually using it in relevant tests such as those in
api_change_commitments_test.go); ensure no other symbols reference
ReservationVerification before removing to avoid compile errors.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Line 172: The local variable all_reservations should follow Go camelCase
conventions; rename the variable to allReservations and update every
usage/reference in the surrounding function(s) (e.g., where all_reservations is
declared and later read/written) including any return values or assignments that
reference it so compilation and behavior remain unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6768eae7-ae84-44b5-8ebe-94aa4a217274

📥 Commits

Reviewing files that changed from the base of the PR and between d72414d and d514001.

📒 Files selected for processing (2)
  • internal/scheduling/reservations/commitments/api_change_commitments.go
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go

Comment on lines +1615 to +1625
// createCommitment creates a TestCommitment for use in test cases.
// The az parameter is optional - if empty string, no AZ constraint is set.
func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64, az ...string) TestCommitment {
	return TestCommitment{
		ResourceName:   liquid.ResourceName(resourceName),
		ProjectID:      projectID,
		ConfirmationID: confirmationID,
		State:          state,
		Amount:         amount,
	}
}

⚠️ Potential issue | 🟠 Major

The az parameter is accepted but never used.

The variadic az ...string parameter is declared but never used in the function body. Test case "Reservation repair" at line 366 passes "az-a" expecting it to affect the commitment, but this value is silently discarded.

🐛 Suggested fix

If the az parameter is intended for future use with per-commitment AZ constraints, either:

  1. Store it in TestCommitment and use it in buildRequestJSON:
 type TestCommitment struct {
     ResourceName   liquid.ResourceName
     ProjectID      string
     ConfirmationID string
     State          string
     Amount         uint64
+    AZ             string // optional AZ constraint
 }

 func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64, az ...string) TestCommitment {
-    return TestCommitment{
+    tc := TestCommitment{
         ResourceName:   liquid.ResourceName(resourceName),
         ProjectID:      projectID,
         ConfirmationID: confirmationID,
         State:          state,
         Amount:         amount,
     }
+    if len(az) > 0 {
+        tc.AZ = az[0]
+    }
+    return tc
 }
  2. Or remove the unused parameter to avoid confusion:
-func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64, az ...string) TestCommitment {
+func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64) TestCommitment {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 1615 - 1625, The createCommitment function accepts az ...string but
discards it; update it to either persist the AZ into the TestCommitment struct
and propagate it into buildRequestJSON, or remove the az parameter and update
callers (e.g., the "Reservation repair" test) accordingly; specifically, add a
field (e.g., AZ or AvailabilityZone) to TestCommitment, set it in
createCommitment (use az[0] when len(az)>0), and modify buildRequestJSON to
include that field in the generated request, or alternatively delete the az
...string parameter from createCommitment and remove the AZ argument from tests.


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
internal/scheduling/reservations/commitments/api_change_commitments_test.go (2)

771-790: ⚠️ Potential issue | 🟠 Major

Capacity can be double-credited on reservation deletion.

At Line 775 memory is refunded in OnReservationDeleted(), and memory is refunded again in deletion handling at Line 1048. A single delete flow can therefore inflate availableResources.

Suggested direction

Choose one canonical refund path. For this harness, keeping refunds in processReservations() and removing them from OnReservationDeleted() avoids double accounting.

 func (c *FakeReservationController) OnReservationDeleted(res *v1alpha1.Reservation) {
 	c.env.mu.Lock()
 	defer c.env.mu.Unlock()

-	// Return memory when Delete() is called directly (before deletion timestamp is set)
-	if c.env.availableResources != nil && res.Status.Host != "" {
-		memoryQuantity := res.Spec.Resources["memory"]
-		memoryBytes := memoryQuantity.Value()
-		memoryMB := memoryBytes / (1024 * 1024)
-
-		if _, exists := c.env.availableResources[res.Status.Host]; exists {
-			c.env.availableResources[res.Status.Host] += memoryMB
-			c.env.T.Logf("↩ Returned %d MB to %s (now %d MB available) via OnReservationDeleted for %s",
-				memoryMB, res.Status.Host, c.env.availableResources[res.Status.Host], res.Name)
-		}
-	}
-
 	// Clear tracking so recreated reservations with same name are processed
 	delete(c.env.processedReserv, res.Name)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 771 - 790, The OnReservationDeleted method currently refunds memory
to c.env.availableResources, which duplicates the refund already done in
processReservations and can double-credit capacity; remove the refund logic
inside OnReservationDeleted (the block that reads res.Spec.Resources["memory"],
computes memoryMB and adds it to c.env.availableResources) so only
processReservations performs refunds, and keep the final
delete(c.env.processedReserv, res.Name) behavior intact; alternatively, if you
prefer to keep OnReservationDeleted, add a clear guard (e.g., a deletion-handled
flag on the reservation or env) that processReservations sets so
OnReservationDeleted skips refunding when processReservations already refunded.

1615-1625: ⚠️ Potential issue | 🟡 Minor

az ...string is accepted but ignored in createCommitment().

Line 1617 advertises optional AZ handling, but the parameter is never persisted or serialized. This is misleading, especially with callers like Line 366 passing AZ explicitly.

Suggested cleanup

If per-commitment AZ is not supported in request JSON, remove the parameter and update callers:

-func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64, az ...string) TestCommitment {
+func createCommitment(resourceName, projectID, confirmationID, state string, amount uint64) TestCommitment {
 	return TestCommitment{
 		ResourceName:   liquid.ResourceName(resourceName),
 		ProjectID:      projectID,
 		ConfirmationID: confirmationID,
 		State:          state,
 		Amount:         amount,
 	}
 }

And update Line 366 call accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 1615 - 1625, The createCommitment function signature advertises an
optional az ...string but never stores it; remove the unused az parameter
instead of leaving misleading API surface: update the createCommitment signature
to drop az ...string, remove the az mention from its doc comment, and update all
call sites (tests that call createCommitment with an AZ argument) to stop
passing AZ; ensure the TestCommitment construction remains unchanged and run
tests to confirm no remaining callers reference the removed parameter.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 771-790: The OnReservationDeleted method currently refunds memory
to c.env.availableResources, which duplicates the refund already done in
processReservations and can double-credit capacity; remove the refund logic
inside OnReservationDeleted (the block that reads res.Spec.Resources["memory"],
computes memoryMB and adds it to c.env.availableResources) so only
processReservations performs refunds, and keep the final
delete(c.env.processedReserv, res.Name) behavior intact; alternatively, if you
prefer to keep OnReservationDeleted, add a clear guard (e.g., a deletion-handled
flag on the reservation or env) that processReservations sets so
OnReservationDeleted skips refunding when processReservations already refunded.
- Around line 1615-1625: The createCommitment function signature advertises an
optional az ...string but never stores it; remove the unused az parameter
instead of leaving misleading API surface: update the createCommitment signature
to drop az ...string, remove the az mention from its doc comment, and update all
call sites (tests that call createCommitment with an AZ argument) to stop
passing AZ; ensure the TestCommitment construction remains unchanged and run
tests to confirm no remaining callers reference the removed parameter.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 57044766-a4ce-46b0-a6cc-765fa70d26a1

📥 Commits

Reviewing files that changed from the base of the PR and between d514001 and b8175ab.

📒 Files selected for processing (2)
  • internal/scheduling/reservations/commitments/api.go
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go

@github-actions
Contributor

Test Coverage Report

Test Coverage 📊: 69.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:21:							NewMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:39:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:45:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:37:			NewCinderAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:45:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:68:			GetAllStoragePools			73.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:27:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:40:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:51:			SyncAllStoragePools			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:46:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:49:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:52:			UnmarshalJSON				93.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:131:			MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:60:				Reconcile				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:239:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:35:			NewIdentityAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:39:			Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:59:			GetAllDomains				66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:83:			GetAllProjects				72.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:26:			Init					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:41:			Sync					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:54:			SyncDomains				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:74:			SyncProjects				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:16:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:19:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:47:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:50:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:45:			NewLimesAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:50:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:74:			GetAllCommitments			90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:124:			getCommitments				86.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:28:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:41:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:52:			SyncCommitments				63.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:69:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:72:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:41:			NewManilaAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:46:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:69:			GetAllStoragePools			75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:28:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:41:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:52:			SyncAllStoragePools			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:47:			UnmarshalJSON				87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:137:			MarshalJSON				72.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:234:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:237:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:54:				NewNovaAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:59:				Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:85:				GetAllServers				69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:145:				GetDeletedServers			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:200:				GetAllHypervisors			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:254:				GetAllFlavors				68.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:291:				GetAllMigrations			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:344:				GetAllAggregates			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:29:				Init					90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:53:				Sync					50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:75:				SyncAllServers				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:98:				SyncDeletedServers			64.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:128:			SyncAllHypervisors			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:152:			SyncAllFlavors				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:174:			SyncAllMigrations			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:196:			SyncAllAggregates			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:36:			UnmarshalJSON				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:59:			MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:79:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:82:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:119:			UnmarshalJSON				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:142:			MarshalJSON				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:162:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:165:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:198:			UnmarshalJSON				80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:234:			MarshalJSON				85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:266:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:269:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:289:			UnmarshalJSON				54.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:312:			MarshalJSON				55.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:333:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:336:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:360:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:363:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:384:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:387:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:48:		NewPlacementAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:53:		Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:77:		GetAllResourceProviders			66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:105:		GetAllTraits				90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:155:		getTraits				90.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:179:		GetAllInventoryUsages			71.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:229:		getInventoryUsages			77.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:28:		Init					62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:46:		Sync					71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:62:		SyncResourceProviders			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:83:		SyncTraits				57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:112:		SyncInventoryUsages			57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:17:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:20:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:31:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:34:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:74:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:77:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/supported_syncers.go:22:			getSupportedSyncer			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:51:				Reconcile				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:201:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:32:					newTypedSyncer				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:100:					fetch					79.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:205:					getSyncWindowStart			81.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:245:					sync					68.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:295:					Sync					70.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:7:				TriggerMetricAliasSynced		0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:12:				TriggerMetricTypeSynced			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:42:					TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:43:					Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:44:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:45:					GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:46:					GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:47:					With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:90:					TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:91:					Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:92:					GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:93:					GetTimestamp				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:94:					GetValue				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:95:					With					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:145:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:146:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:147:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:148:				GetTimestamp				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:149:				GetValue				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:150:				With					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:170:				TableName				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:171:				Indexes					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:172:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:173:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:174:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:175:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:211:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:212:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:213:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:214:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:215:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:216:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:242:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:243:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:244:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:245:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:246:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:247:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:282:				TableName				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:285:				Indexes					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:286:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:287:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:288:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:289:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:50:								FromSecretRef				6.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:131:								SelectTimed				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:140:								CreateTable				70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:157:								AddTable				66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:168:								TableExists				58.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:197:								ReplaceAll				62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:224:								BulkInsert				86.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:21:								newMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:63:								Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:73:								Collect					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:21:					GetPort					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:25:					Init					70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:69:					Close					50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/env.go:24:							SetupDBEnv				59.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:46:							Reconcile				54.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:237:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:26:							NewMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:44:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:50:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:69:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:78:							monitorFeatureExtractor			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:97:							Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:28:						Init					87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:45:						ExtractSQL				83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:58:						Extracted				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/flavor_groups.go:67:				Extract					77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_az.go:31:					Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_capabilities.go:35:				Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_details.go:59:				Extract					87.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_pinned_projects.go:45:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_utilization.go:47:				Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/libvirt_domain_cpu_steal_pct.go:35:		Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_host_residency.go:53:				Extract					85.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:52:				extractHistogramBuckets			89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:97:				Extract					88.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_long_term.go:39:	Extract					82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_short_term.go:39:	Extract					82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_resolver.go:33:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_project_noisiness.go:33:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/storage/storage_pool_cpu_usage.go:35:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:42:							Reconcile				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:95:							findDependentKnowledge			96.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:142:							triggerKnowledgeReconciliation		100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:175:							enqueueKnowledgeReconciliation		81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:201:							getResourceType				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:213:							mapDatasourceToKnowledge		100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:234:							mapKnowledgeToKnowledge			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:255:							SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:54:							Reconcile				42.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:108:							InitAllKPIs				83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:148:							getJointDB				27.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:186:							handleKPIChange				52.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:311:							handleDatasourceChange			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:336:							handleDatasourceCreated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:346:							handleDatasourceUpdated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:364:							handleDatasourceDeleted			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:376:							handleKnowledgeChange			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:401:							handleKnowledgeCreated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:411:							handleKnowledgeUpdated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:429:							handleKnowledgeDeleted			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:439:							SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:21:								Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:26:								Collect					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:32:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:37:								GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/base.go:24:							Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:32:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:36:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:54:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:58:				Collect					71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:28:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:32:					Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:49:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:54:					Collect					82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:42:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:46:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:69:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:73:				Collect					66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:27:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:31:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:43:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:47:				Collect					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:24:				getBuildingBlock			75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:42:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:46:				Init					87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:138:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:146:				Collect					90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:223:				exportCapacityMetricKVM			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:29:			GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:33:			Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:75:			Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:80:			Collect					67.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:153:			exportCapacityMetricVMware		92.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:30:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:34:					Init					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:81:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:89:					convertLimesMemory			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:106:					Collect					89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:29:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:33:					Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:46:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:50:					Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:57:					collectVMBuckets			71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:28:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:32:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:45:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:49:				Collect					69.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:52:				Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:52:				Collect					94.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:52:				Collect					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:32:					GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:35:					Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:49:					Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:52:					Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:52:				Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:28:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:32:				Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:49:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:54:				Collect					82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/math/histogram.go:7:							Histogram				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/decisions_cleanup.go:30:						DecisionsCleanup			76.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:21:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:26:							checkCinderSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:42:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:50:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:57:					canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:80:					inferPipelineName			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:90:					CinderExternalScheduler			68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:49:				PipelineType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:54:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:74:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:113:				process					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:143:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:156:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:47:						shouldReconcileDecision			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:65:						Reconcile				81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:85:						reconcileHistory			80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:147:						reconcileExplanation			64.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:171:						StartupCallback				77.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:194:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:28:						NewExplainer				75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:41:						Explain					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:46:						getResourceType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:62:						calculateScoreGap			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:80:						fetchDecisionChain			92.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:120:						buildHostSegments			95.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:171:						detectLoop				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:183:						findWinner				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:212:						calculateScoresFromSteps		93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:257:						calculateScoresWithoutStep		83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:271:						findCriticalSteps			78.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:315:						calculateStepImpacts			97.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:397:						buildContextData			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:419:						buildHistoryData			94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:461:						buildWinnerData				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:496:						buildInputData				95.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:548:						buildCriticalStepsData			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:566:						buildDeletedHostsData			90.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:616:						buildChainData				86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:655:						ExplainWithTemplates			87.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:18:						NewTemplateManager			81.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:59:						RenderExplanation			80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:68:						formatTemplateDuration			90.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:12:							NoEffect				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:15:							Norm					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:21:							Apply					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:22:							NewSchedulerMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:32:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:36:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:50:							Callback				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:56:							Respond					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:53:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:64:							Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:75:							CheckKnowledges				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:26:						NewDetectorPipelineMonitor		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:46:						SubPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:52:						Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:58:						Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:76:						monitorDetector				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:99:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:107:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:112:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:33:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:63:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:98:						Combine					97.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_step_opts.go:15:						Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter.go:31:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:23:							monitorFilter				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:36:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:41:							Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:46:							Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:22:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:28:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:33:						validateFilter				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:38:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:45:						InitNewFilterWeigherPipeline		86.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:138:					runFilters				73.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:166:					runWeighers				81.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:206:					normalizeInputWeights			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:215:					applyWeights				80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:251:					sortHostsByWeights			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:261:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:36:					NewPipelineMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:90:					SubPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:97:					observePipelineResult			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:118:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:130:				Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:48:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:63:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:75:					IncludeAllHostsFromRequest		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:85:					PrepareStats				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:42:				monitorStep				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:65:				RunWrapped				48.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:215:				impact					94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_opts.go:15:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:36:						InitAllPipelines			93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:61:						handlePipelineChange			77.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:174:						HandlePipelineCreated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:188:						HandlePipelineUpdated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:201:						HandlePipelineDeleted			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:213:						handleKnowledgeChange			71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:246:						HandleKnowledgeCreated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:259:						HandleKnowledgeUpdated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:281:						HandleKnowledgeDeleted			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:38:						ValidateCreate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:47:						ValidateUpdate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:56:						ValidateDelete				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:65:						validatePipeline			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:149:						SetupWebhookWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:7:								clamp					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:22:								MinMaxScale				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:35:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:40:								Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:45:								CheckKnowledges				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:23:							monitorWeigher				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:36:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:41:							Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:46:							Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:22:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:28:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:33:						validateWeigher				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:38:						Run					81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:53:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:57:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:77:				ProcessNewMachine			89.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:134:			process					70.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:185:			InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:198:			handleMachine				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:238:			SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:21:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:25:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:34:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:44:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/decisions_cleanup.go:32:						DecisionsCleanup			77.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:34:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:39:							checkManilaSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:42:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:50:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:57:					canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:80:					inferPipelineName			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:90:					ManilaExternalScheduler			68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:49:				PipelineType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:54:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:74:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:113:				process					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:143:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:156:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:35:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:53:			Init					60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:64:			Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:110:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/candidate_gatherer.go:29:						MutateWithAllCandidates			94.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/decisions_cleanup.go:30:						DecisionsCleanup			79.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:23:						Start					82.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:62:						Reconcile				70.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:94:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:45:						Reconcile				68.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:256:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_cycle_breaker.go:17:						Filter					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:43:					PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:48:					InitPipeline				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:66:					CreateDeschedulingsPeriodically		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:127:					Reconcile				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:132:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:61:							getHypervisors				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:104:							prepare					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:257:							randomRequest				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:330:							checkNovaSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:360:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:50:						NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:59:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:66:						canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:89:						inferPipelineName			96.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:149:					limitHostsToRequest			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:171:					NovaExternalScheduler			68.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:52:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:57:				Reconcile				91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:77:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:116:				process					80.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:168:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:181:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:48:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:82:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:111:				Reconcile				93.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:178:				handleRemoteHypervisor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:207:				predicateRemoteHypervisor		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:220:				SetupWithManager			33.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:57:							NewNovaClient				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:61:							Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:99:							Get					75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:108:							LiveMigrate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:119:							GetServerMigrations			74.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/pipeline_webhook.go:16:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:26:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:39:				Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:49:				Run					86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:85:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:22:			Run					87.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:54:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:25:				hvToNovaCapabilities			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:48:				Run					81.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:119:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:21:				Run					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:65:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:28:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:43:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:23:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:36:			Run					94.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:86:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:21:			Run					91.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:55:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:24:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:44:			Run					76.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:304:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:24:			Run					95.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:89:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:21:			Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:44:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:19:			Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:54:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:22:		Run					88.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:99:		init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:22:				checkHasSufficientFeatures		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:51:				Run					94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:112:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:30:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:43:			Run					98.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:135:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:23:			Run					93.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:88:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:17:						GetResource				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:18:						GetReason				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:19:						GetHost					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:20:						WithReason				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:29:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:72:					Run					90.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:141:					calcVMResources				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:154:					init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:26:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:30:			GetFailoverHostWeight			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:37:			GetDefaultHostWeight			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:54:			Run					93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:116:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:29:		Run					94.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:85:		init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:29:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:60:			Run					92.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:157:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:29:		Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:44:		Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:55:		Run					81.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:93:		init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:35:	Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:53:	Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:64:	Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:111:	init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:35:	Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:53:	Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:64:	Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:111:	init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:32:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:75:				Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:88:				Run					80.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:165:				calcHostCapacity			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:177:				calcHostAllocation			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:187:				calcVMResources				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:200:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:52:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:56:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:76:				ProcessNewPod				89.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:133:				process					71.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:196:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:209:				handlePod				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:249:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:12:						GetPodResourceRequests			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:31:						AddResourcesInto			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:41:						MaxResourcesInto			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:22:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:26:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:43:				matchesNodeAffinity			88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:62:				matchesNodeSelectorTerm			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:71:				matchesNodeSelectorRequirement		90.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:124:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:21:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:25:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:29:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:42:				isNodeHealthy				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:70:				isNodeSchedulable			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:74:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:22:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:26:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:45:				hasCapacityForPod			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:60:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:21:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:25:					Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:34:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:44:					init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:21:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:25:					Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:29:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:42:					canScheduleOnNode			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:53:					hasToleration				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:67:					init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:21:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:34:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:48:					calculateBinpackScore			85.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:83:					init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:22:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:26:					NewAPIWithConfig			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:33:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:28:			sortedKeys				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:45:			HandleChangeCommitments			76.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:103:			processCommitmentChanges		79.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:288:			watchReservationsUntilReady		63.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:22:					HandleInfo				72.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:58:					buildServiceInfo			22.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_report_capacity.go:19:			HandleReportCapacity			78.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:24:					NewCapacityCalculator			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:29:					CalculateCapacity			91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:60:					calculateAZCapacity			71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:86:					getAvailabilityZones			55.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:46:					NewCommitmentsClient			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:50:					Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:109:					ListProjects				90.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:128:					ListCommitmentsByID			79.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:172:					listCommitments				90.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/config.go:17:					DefaultConfig				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:135:					UnmarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:158:					MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:25:			NewReservationManager			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:46:			ApplyCommitmentState			89.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:201:			syncReservationMetadata			93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:248:			newReservation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:23:					getFlavorGroupNameFromResource		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:51:					FromCommitment				75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:95:					FromChangeCommitmentTargetState		93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:148:					FromReservations			86.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:40:					NewSyncer				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:47:					Init					66.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:54:					getCommitmentStates			62.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:124:					SyncReservations			60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:13:					GetMaxSlotIndex				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:30:					GetNextSlotIndex			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:36:					extractCommitmentUUID			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:76:					Reconcile				53.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:347:					reconcileAllocations			17.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:426:					Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:440:					listServersByProjectID			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:470:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:32:					NewControllerMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:47:					Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:53:					Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:25:						Get					85.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:46:						GetAllFlavorGroups			85.7%
total:																(statements)				69.6%
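The per-function listing above is standard `go tool cover -func` output. The PR description says the `make test` target now produces its summary via gotestsum; a minimal sketch of such a Makefile target follows (the target name, flags, and the `cover.out` path are assumptions for illustration, not copied from this PR):

```make
# Hypothetical Makefile target: run the test suite through gotestsum for
# readable per-test output, then print per-function coverage like the
# report above. Profile path "cover.out" is an assumed convention.
test:
	gotestsum --format testname -- -coverprofile=cover.out ./...
	go tool cover -func=cover.out
```

The final `go tool cover -func` line emits the aggregate `total: (statements) NN.N%` row seen at the end of the report.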

@mblos mblos merged commit 1669faa into main Mar 18, 2026
8 checks passed
@mblos mblos deleted the cr-change-api branch March 18, 2026 07:36