Conversation
Allow setting an alternate image via variables to "make deploy". For example: IMAGE_PULL_SECRET_PATH="/path/to/pull/secret.yaml" SQL_IMAGE="registry.redhat.io/rhel8/postgresql-12" SQL_VERSION="1-181"
Walkthrough

This pull request introduces a comprehensive policy management system for device updates in the FlightCtl project. The changes span multiple files and packages, adding new functionality to control and schedule device updates.

Changes
Sequence Diagram

sequenceDiagram
    participant Agent
    participant PolicyManager
    participant SpecManager
    participant Device
    Agent->>PolicyManager: Sync(desired spec)
    PolicyManager-->>Agent: Validate policy readiness
    Agent->>SpecManager: CheckPolicy(download/update)
    SpecManager-->>Agent: Policy check result
    alt Policy Allowed
        Agent->>Device: Apply Update/Download
    else Policy Not Ready
        Agent->>PolicyManager: Requeue
    end
Actionable comments posted: 2
🧹 Nitpick comments (20)
internal/agent/device/spec/queue.go (8)
30-47: Consider concurrency safeguards if the queueManager’s internal state can be accessed from multiple goroutines.
Even though the current usage may be single-threaded, you might want to protect fields like `failedVersions` and `requeueLookup` with a mutex if concurrent access can occur in the future.
48-64: Validate arguments in `newPriorityQueue` more defensively.
While it’s valid to rely on the caller, consider returning an error or setting defaults if `maxSize < 0`, `maxRetries < 0`, `delayThreshold < 0`, or `delayDuration` is negative. This helps avoid undefined behavior.
95-96: Add guard for non-existing version.
Currently, `Remove` calls `q.queue.Remove(...)` without verifying whether the version string is valid or actually in the queue. This might be fine, but consider logging or ignoring the request if the version doesn't exist.
99-128: Improve clarity on re-adding items to the queue.
The re-check logic for policy enforcement and scheduling is good. However, the early re-add in lines 112-113 can repeatedly re-pop and re-add items in near succession if the queue is otherwise empty. Consider a short sleep or a fallback approach to avoid spamming logs.
174-185: Mitigate multiple calls to `getOrCreateRequeueState`.
Repeated calls with the same version might mean multiple logs for initialization. Possibly track a creation check or unify repeated calls behind a single log statement.
201-204: Ensure `hasExceededMaxRetries` aligns with long-running attempts.
If the code runs for extended periods, consider counting total attempts across restarts or incorporate persistent storage for retry counts if critical.
227-245: Review memory usage in `pruneRequeueStatus`.
Pruning evicts only one item when `len(m.requeueLookup)` exceeds `5*maxSize`. If you have a large queue backlog, consider a partial or incremental eviction strategy.
307-314: Ensure `Remove` handles items not in the heap gracefully.
Currently, you log a debug message if the item is found. If it is not found, the call silently does nothing, which is typical but might hide unwanted states.

internal/agent/device/device.go (1)
409-411: Policy manager is already reconciled early
This note is critical for maintainers to remember the logic order. Nicely commented.

internal/agent/device/policy/manager.go (2)
14-19: `manager` struct
Storing distinct schedules for `download` and `update` is appropriate. Ensuring concurrency safety if used from multiple threads is recommended, potentially with synchronization or by invoking it only from a single routine.
63-83: `IsReady` method
- Returns true when no schedule is provided, enabling an “always ready” fallback.
- Clean branching for `Download` or `Update` schedules.

internal/agent/device/policy/manager_test.go (1)
71-80: Rename the test case to avoid confusion.
The name is "not ready: current time equals grace period end", yet `expectedReady` is set to `true`. Consider updating the name to reflect that the schedule is ready at that time, preventing confusion among maintainers.

- name: "not ready: current time equals grace period end"
+ name: "ready: current time equals grace period end"

internal/agent/device/spec/queue_test.go (3)
25-25: Correct the typo in the test name.
“ensure priory ordering” contains a typo; rename it to “ensure priority ordering.”

- name: "ensure priory ordering",
+ name: "ensure priority ordering",
50-50: Update the outdated comment referencing “maxRetries.”
The comment “remove item after maxRetries” may be misleading, since the code no longer references `maxRetries` in this test.

- expectOrder: []string{"1"}, // remove item after maxRetries
+ expectOrder: []string{"1"}, // item remains because size is 1
Line range hint 97-159: Prevent flakiness in time-based tests.
Using real delays (e.g., `time.Sleep` or `require.Eventually`) can cause flakiness in CI/CD environments. Consider using a mocked clock or a shorter threshold to reduce timing-based failures.

internal/agent/device/spec/mock_spec.go (1)
2-2: Generated file notice.
This line indicates the file is generated by MockGen. Direct modifications in this file might be overwritten next time mocks are regenerated.internal/agent/device/spec/manager.go (1)
52-58: Instantiating `policyManager` within the queue.
This setup is clear. The queue is now aware of the policy checks it must handle. Verify that the queue and manager orchestrate policies in the correct order.

Consider verifying concurrency safety if multiple specs or policies get queued simultaneously.
internal/agent/device/spec/spec_test.go (3)
694-694: Validate rollback logic upon queue failure.
Immediately calling `SetFailed` after a rollback operation is consistent with a revert flow. Check whether you need to add a test verifying the final device state.
781-781: Enhance resilience in the error path.
When `IsFailed` returns `false`, an external call fails with a 503. Evaluate adding a transition to the `failed` state if recoverable retries are not feasible.
811-813: Confirm concurrency safety in queue calls.
Multiple back-to-back calls to `IsFailed`, `Add`, and `Next` may benefit from extra concurrency checks if used in parallel code.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- Containerfile.api (1 hunks)
- internal/agent/agent.go (3 hunks)
- internal/agent/device/device.go (7 hunks)
- internal/agent/device/errors/errors.go (2 hunks)
- internal/agent/device/policy/doc.go (1 hunks)
- internal/agent/device/policy/manager.go (1 hunks)
- internal/agent/device/policy/manager_test.go (1 hunks)
- internal/agent/device/policy/mock_policy.go (1 hunks)
- internal/agent/device/policy/policy.go (1 hunks)
- internal/agent/device/spec/docs.go (1 hunks)
- internal/agent/device/spec/manager.go (9 hunks)
- internal/agent/device/spec/mock_spec.go (5 hunks)
- internal/agent/device/spec/queue.go (2 hunks)
- internal/agent/device/spec/queue_test.go (4 hunks)
- internal/agent/device/spec/spec.go (3 hunks)
- internal/agent/device/spec/spec/mock_spec.go (0 hunks)
- internal/agent/device/spec/spec_test.go (10 hunks)
- internal/store/device.go (1 hunks)
- internal/tasks/fleet_rollout.go (1 hunks)
- internal/tasks/fleet_validate.go (1 hunks)
💤 Files with no reviewable changes (1)
- internal/agent/device/spec/spec/mock_spec.go
✅ Files skipped from review due to trivial changes (3)
- internal/agent/device/policy/doc.go
- internal/agent/device/spec/docs.go
- internal/agent/device/policy/mock_policy.go
🔇 Additional comments (59)
internal/agent/device/spec/queue.go (7)
16-28: Ensure requeueState struct fields remain cohesive and well-documented.
The requeueState struct holds multiple flags (downloadPolicySatisfied, updatePolicySatisfied) and counters (tries). It's essential to keep the meaning of each field unambiguous, documented, and used strictly for that purpose.
68-92: Prevent adding items if the context is canceled and optimize repeated requeue pruning calls.
- The pruning in `Add` is beneficial, but consider adding a check or a timed condition to ensure it doesn’t run too frequently in a tight loop.
- The context-check logic is properly early-returning.
133-155: Confirm out-of-band calls to CheckPolicy vs. letting queueManager handle it internally.
This method is a helpful direct checker, but ensure the system remains consistent if CheckPolicy is called externally while the queue is also re-checking policies. Alternatively, manage all decisions inside the queue if needed.
189-199: Avoid concurrency race in scheduling.
If delayThreshold is reached and nextAvailable is set, ensure the rest of the logic (like removing or re-adding) doesn’t override it.
250-286: Prevent heap-skip edge cases in queue.Add.
When maxSize is enforced, you pop the lowest version from the heap. Ensure you’re not evicting the needed item if versions arrive out of order. Also confirm that if items are re-added, the logic doesn’t repeatedly bump out the same new additions.
288-296: Pop is straightforward; no issues found.
The logic is clear; item is popped from the heap and from the map.
298-306: Size and IsEmpty are well-structured.
Well done, they rely on len(q.items) as the source of truth.
internal/agent/device/device.go (7)
18-18: Importing the policy package
The import statement is correct, ensuring that references to policy.Type and policy.Manager are properly available.
40-40: New policyManager field
Storing the policy manager is a good design choice for centralizing checks. Ensure concurrency issues are avoided if multiple device operations call policyManager simultaneously.
69-69: Adding policyManager to NewAgent constructor
This is consistent with the new policy-based design. Make sure the caller provides a valid non-nil policyManager to avoid unexpected panics.
86-86: Initialization of policyManager
No problems seen. The constructor sets the manager in the agent.
135-159: Call policyManager.Sync first to reconcile policy
The flow is logical: reconcile policy ahead of device changes. Good error messages, capturing download policy: %w. The added status update for “preparing” in lines 149-158 is also clear.
164-182: Check update policy prior to finalizing changes
Ensuring device readiness with policy.Update is consistent. The logs and status updates for “ready to apply update” appear well handled.
529-537: Enhanced logging for errors
Distinguishing between non-retryable (logged with .Error) and retryable (logged with .Warn) is a great improvement in clarity.
internal/agent/device/policy/policy.go (1)
1-19: Initial definition of the policy package
- The `Type` string plus constants `Update` and `Download` are straightforward.
- The `Manager` interface is well-defined, covering `Sync` and `IsReady`.
Containerfile.api (1)
16-17: Switch to ubi base and use dnf; also include tzdata
- The update from `ubi-minimal` to `ubi` and the switch from `microdnf` to `dnf` are consistent. Including `tzdata` is helpful for correct local times.
- Copying zone info from the `certs` stage to the final image is a nice detail for time-dependent processes.
Also applies to: 23-23
internal/agent/device/policy/manager.go (5)
21-25: NewManager
The constructor appropriately focuses on injecting a logger. Future expansions could accept config arguments if additional policy customizations become necessary.
27-61: Sync method
- The separation of schedules into `download` vs. `update` is logically handled.
- Good checks for `desired.UpdatePolicy == nil`. Consider logging if partial schedules exist.
- The parse logic defers to `schedule.Parse`, which is consistent.
85-101: schedule struct
Holds location, cron schedule, intervals, and the ability to override nowFn. Good design for testability.
103-140: Parse method
- Properly loads the time zone from `TimeZone`.
- Uses the `robfig/cron` parser with minimal fields, which is correct.
- Grace durations are validated.
142-157: IsReady method
- Checking if current time is within or equal to the grace period is well-structured.
- Detailed logs clarify scheduling logic.
internal/agent/device/policy/manager_test.go (2)
49-58: Good use of table-driven tests for time-based checks.
The test "ready: within grace period America/New_York" effectively verifies that schedules are respected across time zones. Using real time zone data (e.g., nyLoc) and well-defined times ensures that the logic remains robust.
109-109: Consider adding tests for Update schedules.
Currently, s := newSchedule(Download) is being tested. Testing with newSchedule(Update) would ensure coverage of both policy types, especially helpful for verifying the function’s behavior across different policy modes.
internal/agent/device/errors/errors.go (2)
65-67: Naming consistency looks good.
The newly introduced error variables (ErrDownloadPolicyNotReady, ErrUpdatePolicyNotReady, ErrInvalidPolicyType) align with the existing naming pattern and promote clarity within the codebase.
79-80: Add or update tests for newly retryable policy errors.
ErrDownloadPolicyNotReady and ErrUpdatePolicyNotReady have been marked retryable. Ensure coverage in your tests (e.g., IsRetryable tests) to verify that logic.
internal/agent/device/spec/spec.go (1)
64-65: Ensure consistent usage of CheckPolicy.
Both Manager and PriorityQueue interfaces now declare a CheckPolicy method. Confirm that callers consistently use it for each policy phase (e.g., Download, Update) and handle error conditions accordingly.
Also applies to: 80-81
internal/agent/agent.go (4)
21-21: Importing the policy package looks good.
This addition aligns with the new policy management functionality required by the agent.
122-122: Initializing policyManager.
This newly introduced policyManager instance is a good injection point for the manager, enabling the agent to handle policy logic seamlessly.
128-128: Passing policyManager into spec.NewManager.
This properly provides the policyManager to the spec manager, ensuring it can check or sync policies as needed.
236-236: Propagating policyManager into device.NewAgent.
Incorporating the policy manager into the agent is essential for ensuring end-to-end policy enforcement. Ensure the agent’s internal methods handle potential sync or readiness checks gracefully.
internal/tasks/fleet_validate.go (1)
74-74: Including UpdatePolicy in the templateVersion.Status.
Storing the fleet’s UpdatePolicy in each template version improves traceability and can be crucial for future reference, especially when diagnosing upgrade or rollback issues.
internal/agent/device/spec/mock_spec.go (8)
6-6: Package comment ensures clarity.
This line clarifies the generation command. Typically, no issues, but ensure it’s consistent with your actual generation process.
18-18: Importing policy.
The new import references the policy package for policy checks. This keeps the mocks in sync with the actual interfaces.
61-73: New CheckPolicy in MockManager.
Great addition to align the mock manager with the real interface. This ensures your tests can simulate policy checks accurately.
323-325: Changed signature for Add in MockPriorityQueue.
You’ve added context and changed the signature to receive a RenderedDeviceSpec. This is consistent with your shift toward context-aware queue operations.
334-338: Adding CheckPolicy to MockPriorityQueue.
This ensures that queue operations which check policy readiness can be tested consistently.
Also applies to: 342-345
348-351: Rename IsVersionFailed → IsFailed.
Renaming these methods improves clarity and consistency with the new approach. Code referencing them should also be updated accordingly.
Also applies to: 356-359
363-366: Modified Next method to accept a context.Context and return a RenderedDeviceSpec.
This update clarifies the queue’s behavior and is consistent with context-based operations.
Also applies to: 372-374
389-392: Rename SetVersionFailed → SetFailed.
The shorter name is clearer for usage and aligns with the rest of the interface changes for policy handling and queue management.
Also applies to: 395-398
internal/agent/device/spec/manager.go (11)
15-15: Import of policy in manager.go.
Bringing in the policy package is appropriate for implementing the newly introduced checks.
46-46: Adding policyManager to NewManager method signature.
Incorporating the policyManager here is a clean way to ensure the manager can handle policy logic without overcomplicating other components.
68-68: Consolidating the cache creation.
Directly calling newCache(log) clarifies the code. Good housekeeping.
125-126: Using context-aware queue.Add(context.TODO(), desired).
This shift to context-based queue operations can better handle cancellations or deadlines if required in the future.
195-195: Switched from SetVersionFailed to SetFailed.
Matches interface changes and keeps method naming consistent throughout the codebase.
242-242: Still calling SetFailed when rolling back the desired version.
This is a straightforward approach to flag the currently desired version as failed in rollback scenarios.
310-313: Queue usage updated in GetDesired with context.
Re-adding the spec to the queue is now done with the context, enabling a more robust approach to handle interruptions or timeouts.
316-316: Retrieving next spec from the queue with getSpecFromQueue(ctx).
Ensures the main manager loop is context-aware for spec retrieval.
322-324: Refined queue’s Next(ctx context.Context).
You are properly integrating context into the queue retrieval mechanism. This is good practice for graceful shutdowns.
373-375: Implementing CheckPolicy method.
This ties policy checks directly to the queue mechanism. Make sure to handle potential policy readiness errors appropriately, possibly re-queuing or logging detailed status.
410-410: Changed IsVersionFailed → IsFailed usage.
The rename improves readability and matches new queue method names.
internal/tasks/fleet_rollout.go (1)
187-187: Ensure comprehensive handling of the new UpdatePolicy field.
It’s good to see that UpdatePolicy is now passed into DeviceSpec. Double-check that all callers and downstream validations account for a possible nil or empty value.
internal/store/device.go (1)
589-589: Validate newly exposed UpdatePolicy in rendered device spec.
Exposing UpdatePolicy here is logical. Confirm that tests cover scenarios where UpdatePolicy might be unset or contain invalid data.
internal/agent/device/spec/spec_test.go (7)
193-193: Confirm correctness of additional parameter in Add.
The new Add(gomock.Any(), gomock.Any()) call suggests an extended method signature. Verify that all test suites are aligned with this change.
202-202: Avoid unintentional re-queues.
Check that calling Add(gomock.Any(), gomock.Any()) in repeated tests does not cause side effects like duplicate queue entries.
683-683: Ensure error path coverage when calling SetFailed.
When copying specs fails, the queue is marked as failed. Confirm that subsequent rollback attempts handle this failure state properly.
767-767: Use of IsFailed in error handling.
Returning false ensures the test scenario handles normal logic flow. Implicitly, if the queue is actually failed, we might skip certain checks. Consider adding an explicit test for the true case.
795-797: Test that the queue is updated with new specs.
The calls to IsFailed, Add, and Next confirm the queue transitions. If possible, expand coverage for verifying the correct number of queued items.
825-827: Write operation failure scenario.
When writing the desired spec fails, you handle it by returning errors.ErrWritingRenderedSpec. Confirm that Add calls are also rolled back, if needed, to avoid stale data.
1066-1066: Account for the new IsFailed check in getRenderedVersion.
Ensure the desiredRenderedVersion is not incorrectly overshadowed when IsFailed returns true.
| func (m *queueManager) updatePolicy(ctx context.Context, requeue *requeueState) bool { | ||
| changed := false | ||
| if !requeue.downloadPolicySatisfied { | ||
| if m.policyManager.IsReady(ctx, policy.Download) { | ||
| changed = true | ||
| requeue.downloadPolicySatisfied = true | ||
| } | ||
| } | ||
|
|
||
| if !requeue.updatePolicySatisfied { | ||
| if m.policyManager.IsReady(ctx, policy.Update) { | ||
| changed = true | ||
| requeue.updatePolicySatisfied = true | ||
| } | ||
| } | ||
| return changed | ||
| } |
🛠️ Refactor suggestion
Confirm both Download and Update policies can be satisfied in a single pass.
updatePolicy sets downloadPolicySatisfied and updatePolicySatisfied if ready. If the system requires them to be toggled at different times, ensure no partial updates occur.
|
|
||
| func stringToInt64(s string) int64 { | ||
| i, _ := strconv.ParseInt(s, 10, 64) | ||
| return i | ||
| } |
🛠️ Refactor suggestion
Handle potential parse errors in stringToInt64.
The code discards the parsing error. If RenderedVersion or external calls can pass invalid strings, consider logging or returning an error.
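A variant that surfaces the parse error could look like this sketch (the wrapped message is illustrative; callers that cannot propagate an error could instead log it and fall back to zero explicitly):

```go
package main

import (
	"fmt"
	"strconv"
)

// stringToInt64 returns the parse error instead of silently returning 0,
// so invalid rendered versions become visible at the call site.
func stringToInt64(s string) (int64, error) {
	i, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing rendered version %q: %w", s, err)
	}
	return i, nil
}

func main() {
	v, err := stringToInt64("42")
	fmt.Println(v, err) // 42 <nil>
	_, err = stringToInt64("not-a-version")
	fmt.Println(err != nil) // true
}
```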
|
@coderabbitai generate docstrings |
✅ Actions performed: Docstrings generation started. |
|
Caution An unexpected error occurred while opening a pull request: Not Found - https://docs.github.com/rest/git/refs#get-a-reference |
|
https://github.com/coderabbitai generate docstrings |
|
@coderabbitai generate docstrings |
✅ Actions performed: Docstrings generation started. |
|
Caution An unexpected error occurred while opening a pull request: Not Found - https://docs.github.com/rest/git/refs#get-a-reference |
|
@coderabbitai create a test plan |
|
Tip: For best results, initiate chat on the files or code changes.

@romfreiman Below is a proposed test plan covering the new functionalities introduced in this PR. This plan includes both unit and integration testing, as well as guidance for manual validation. It focuses on the policy management feature, the queue’s re-queue mechanism, the device specifications’ updates, and ensuring that the newly added fields behave as expected.

1. Policy Manager & Scheduling
1.1 Unit Tests
1.2 Integration Tests
2. Queue & Re-queue Functionality
2.1 Unit Tests
2.2 Integration Tests
3. Device Specification Changes
3.1 Unit Tests
3.2 Integration Tests
4. Manual Validation & Observability
4.1 Logging and Metrics
4.2 CLI or API Commands
5. Regression Testing

Conclusion: This test plan aims to validate the newly introduced policy management logic and queue re-queue functionality. Feel free to adjust or extend this plan based on additional environment considerations or deployment workflows.
PR Summary
This PR introduces significant policy management capabilities for device updates and downloads. Here are the key new changes not covered in the previous review:
- Added schedule-based policy enforcement with timezone support and grace periods in `policy/manager.go`
- Integrated policy checks into the device sync workflow to control when downloads and updates can occur
- Enhanced error handling to distinguish between policy-related retryable and non-retryable errors
- Improved queue management with policy state tracking and requeue logic
- Added comprehensive test coverage for policy timing scenarios across different timezones
Key points to note:
- The policy manager properly validates cron expressions and timezones before applying schedules
- Download and update policies are checked at appropriate points in the device sync flow
- The queue system now tracks policy satisfaction state to prevent premature updates
- Error handling clearly differentiates between temporary policy blocks and permanent failures
The changes appear well-structured and maintain good separation of concerns between policy management, queue handling, and device synchronization.
💡 (1/5) You can manually trigger the bot by mentioning @greptileai in a comment!
20 file(s) reviewed, 17 comment(s)
Edit PR Review Bot Settings | Greptile
| // the new policy will go into affect on the next sync. | ||
| if err := a.policyManager.Sync(ctx, desired); err != nil { |
syntax: 'effect' is misspelled as 'affect' — the comment should read "go into effect"
| name: "not ready: current time equals grace period end", | ||
| updateSchedule: &v1alpha1.UpdateSchedule{ | ||
| TimeZone: util.StrToPtr("UTC"), | ||
| At: "0 12 * * *", // 12:00 pm | ||
| StartGraceDuration: util.StrToPtr("5m"), | ||
| }, | ||
| currentTime: time.Date(2024, 12, 20, 12, 5, 0, 0, time.UTC), // 12:05 (end of grace) | ||
| expectedReady: true, | ||
| expectedNextRun: time.Date(2024, 12, 21, 12, 0, 0, 0, time.UTC), // 12:00 next day | ||
| }, |
logic: Test case name indicates 'not ready' but expectedReady is true, which contradicts the test name. This needs to be clarified whether the end of grace period should be considered ready or not.
| if !ready { | ||
| nextRun := s.cron.Next(tt.currentTime) | ||
| require.Equal(tt.expectedNextRun, nextRun, "expected next run time") | ||
| } |
style: nextRun is only checked when ready=false, but it would be valuable to verify nextRun for ready=true cases as well to ensure correct scheduling
| func (s *schedule) IsReady(log *log.PrefixLogger) bool { | ||
| now := s.nowFn().In(s.location) | ||
| lastRun := s.cron.Next(now.Add(-s.interval)) | ||
| graceEnd := lastRun.Add(s.startGraceDuration) | ||
| nextRun := s.cron.Next(now) | ||
| log.Debugf("Policy %s current time: %s, last run: %s, next run: %s, grace ends: %s", s.policyType, now, lastRun, nextRun, graceEnd) | ||
|
|
||
| if now.Equal(lastRun) || (now.After(lastRun) && !now.After(graceEnd)) { | ||
| log.Infof("Policy %s schedule is ready", s.policyType) | ||
| return true | ||
| } |
logic: IsReady() may have edge cases during DST transitions since it uses local timezone calculations. Consider testing with timezone changes.
| nextRun := s.cron.Next(s.nowFn().In(s.location)) | ||
| secondRun := s.cron.Next(nextRun) | ||
| s.interval = secondRun.Sub(nextRun) |
logic: Calculating interval between runs assumes consistent scheduling. For expressions like '0 1 1,15 * *' this will give incorrect intervals.
| mockPolicyManager := policy.NewMockManager(ctrl) | ||
| mockPolicyManager.EXPECT().IsReady(gomock.Any(), policy.Download).Return(true).Times(1) | ||
| mockPolicyManager.EXPECT().IsReady(gomock.Any(), policy.Update).Return(true).Times(1) |
logic: Policy checks are hardcoded to return true and only called once each. Should test failure cases and multiple policy check scenarios
| maxSize := 1 | ||
| q := newQueue(log, 0, maxSize, requeueDelayThreshold, requeueDelayDuration) | ||
| item := newItem(&v1alpha1.RenderedDeviceSpec{RenderedVersion: renderedVersion}) | ||
| maxRetries := 0 |
style: maxRetries=0 means infinite retries - should test with actual retry limits
| require.Eventually(func() bool { | ||
| item, ok := q.Next() | ||
| return ok && item.Spec.RenderedVersion == renderedVersion | ||
| item, ok := q.Next(ctx) | ||
| return ok && item.RenderedVersion == renderedVersion | ||
| }, time.Second, time.Millisecond*10, "retrieval after threshold duration should succeed") |
style: Test could be flaky due to timing dependency. Consider using a mock clock for deterministic testing
| // defaultMaxRetries is the default number of retries for a spec item set to 0 for infinite retries. | ||
| defaultSpecRequeueMaxRetries = 0 |
style: comment for defaultMaxRetries mentions 'defaultMaxRetries' but constant is named 'defaultSpecRequeueMaxRetries'
| Add(item *Item) error | ||
| // Remove removes an item from the queue. | ||
| // Add adds a new spec to the scheduler | ||
| Add(ctx context.Context, spec *v1alpha1.RenderedDeviceSpec) |
logic: Add method lacks error return value which could lead to silent failures
EDM-953: Use approved Postgres
EDM-922: Fine-tune HTTP server settings
Removes the extra endpoints from configuration and enables the agent-server to handle both HTTPS/OpenAPI requests and gRPC over HTTP/2.0. gRPC over HTTP/2.0 didn't work with http.Serve(..), so we moved it to server.ServeTLS(listener, "", ""), where the TLS configuration is preset on the server.
Now all gRPC communication is consolidated under a single HTTP endpoint.
EDM-955: Use approved Redis
EDM-818: gRPC and agent API under the same https endpoint
EDM-954: Use approved RabbitMQ
Signed-off-by: Lily Sturmann <lkatalin@users.noreply.github.com>
EDM-390: Set enrolled devices to 'Enrolled' lifecycle status
field_selectors_test EDM-781: Field selectors tests based on: 1) Field selectors https://github.com/flightctl/flightctl/blob/main/docs/user/field-selectors.md
EDM-781: Field selectors tests
Fix container port due to discussion here: https://redhat-internal.slack.com/archives/C01KER8QV33/p1733905990408119 Signed-off-by: Eldar Weiss <eweiss@redhat.com>
EDM-768: Fix OpenAPI linting issues
…lightctl#751) * EDM-980: Migrate labels and annotations from arrays to JSONB with GIN indexes - Refactored labels and annotations fields to use JSONB instead of arrays. - Added conditional logic in migrations to create GIN indexes for efficient querying. * EDM-981: Integrate Label Selector into the New Selector Infrastructure for Enhanced Querying * EDM-981: Integrate Annotation Selector into the New Selector Infrastructure for Enhanced Querying * EDM-982: Remove deprecated Filtering and Transition to Field-Selector Framework * EDM-981: Make invert Parameter Mandatory in Label and Annotation Selectors
Container port
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
Actionable comments posted: 7
♻️ Duplicate comments (7)
internal/agent/device/policy/manager_test.go (1)
71-80: ⚠️ Potential issue: Test case name contradicts the expected result.
The test case name indicates "not ready" but `expectedReady` is set to `true`, which contradicts the test name. Apply this diff to fix the contradiction:

- name: "not ready: current time equals grace period end",
+ name: "ready: current time equals grace period end",

internal/agent/device/spec/queue_test.go (3)
104-106: 🛠️ Refactor suggestion: Add test cases for policy check failures.
The current test only verifies the happy path where both policy checks return true. Consider adding test cases for:
- Download policy not ready
- Update policy not ready
- Both policies not ready
 mockPolicyManager := policy.NewMockManager(ctrl)
-mockPolicyManager.EXPECT().IsReady(gomock.Any(), policy.Download).Return(true).Times(1)
-mockPolicyManager.EXPECT().IsReady(gomock.Any(), policy.Update).Return(true).Times(1)
+
+// Test cases for different policy states
+testCases := []struct {
+	name          string
+	downloadReady bool
+	updateReady   bool
+	expectSuccess bool
+}{
+	{"both_ready", true, true, true},
+	{"download_not_ready", false, true, false},
+	{"update_not_ready", true, false, false},
+	{"neither_ready", false, false, false},
+}
+
+for _, tc := range testCases {
+	t.Run(tc.name, func(t *testing.T) {
+		mockPolicyManager.EXPECT().
+			IsReady(gomock.Any(), policy.Download).
+			Return(tc.downloadReady).
+			Times(1)
+		mockPolicyManager.EXPECT().
+			IsReady(gomock.Any(), policy.Update).
+			Return(tc.updateReady).
+			Times(1)
+		// ... test logic ...
+	})
+}
110-120: 🛠️ Refactor suggestion: Avoid using maxRetries=0 for infinite retries.
Setting maxRetries to 0 for infinite retries is not a good practice as it could lead to resource exhaustion in production. Consider:
- Testing with explicit retry limits
- Adding test cases for retry exhaustion
-maxRetries := 0
+maxRetries := 3 // Set a reasonable retry limit
154-157: 🛠️ Refactor suggestion: Use mock clock to prevent test flakiness.
The test relies on real time.Sleep which can make it flaky, especially in CI environments. Consider using a mock clock for deterministic testing.
+type mockClock struct {
+	currentTime time.Time
+}
+
+func (m *mockClock) Now() time.Time {
+	return m.currentTime
+}
+
+func (m *mockClock) Sleep(d time.Duration) {
+	m.currentTime = m.currentTime.Add(d)
+}
+
 require.Eventually(func() bool {
-	item, ok := q.Next(ctx)
-	return ok && item.RenderedVersion == renderedVersion
-}, time.Second, time.Millisecond*10, "retrieval after threshold duration should succeed")
+	mockClock := &mockClock{currentTime: time.Now()}
+	q.clock = mockClock
+
+	// Advance time past the delay threshold
+	mockClock.Sleep(delayDuration + time.Millisecond)
+
+	item, ok := q.Next(ctx)
+	return ok && item.RenderedVersion == renderedVersion
+}, time.Second, time.Millisecond*10, "retrieval after threshold duration should succeed")
internal/agent/device/spec/queue.go (3)
227-247: ⚠️ Potential issue: Fix potential unbounded memory growth when maxSize is 0.
Early return when maxSize=0 prevents pruning, which could lead to unbounded memory growth.
 func (m *queueManager) pruneRequeueStatus() {
-	maxRequeueSize := 5 * m.queue.maxSize
-	if m.queue.maxSize == 0 {
-		return
-	}
+	const defaultMaxRequeueSize = 1000
+	maxRequeueSize := defaultMaxRequeueSize
+	if m.queue.maxSize > 0 {
+		maxRequeueSize = 5 * m.queue.maxSize
+	}
133-155: ⚠️ Potential issue: Use a non-retryable error when requeue state is not found.
Using `ErrRetryable` when requeue state is not found could cause infinite retry loops.
- return fmt.Errorf("%w: policy check failed: not found: version: %d", errors.ErrRetryable, v)
+ return fmt.Errorf("policy check failed: requeue state not found for version: %d", v)
368-371: ⚠️ Potential issue: Handle potential parse errors in `stringToInt64`.
The code discards the parsing error. If `RenderedVersion` or external calls can pass invalid strings, consider logging or returning an error.
-func stringToInt64(s string) int64 {
-	i, _ := strconv.ParseInt(s, 10, 64)
-	return i
+func stringToInt64(s string) (int64, error) {
+	i, err := strconv.ParseInt(s, 10, 64)
+	if err != nil {
+		return 0, fmt.Errorf("failed to parse version string %q: %w", s, err)
+	}
+	return i, nil
 }
🧹 Nitpick comments (88)
docs/user/field-selectors.md (1)
Line range hint 1-70: Consider adding examples for all operator types.
While the documentation comprehensively covers the supported operators and their usage by field type, it would be helpful to include practical examples demonstrating each operator type (String, Timestamp, Number, Boolean, Array) in action.
Would you like me to generate examples for each operator type to enhance the documentation?
test/util/util.go (2)
134-134: Confirm security requirements for the raw TCP listener.
Switching from `middleware.NewTLSListener` to a raw `net.Listen("tcp", "")` means you're taking direct responsibility for TLS upgrades or encryption. Ensure this approach still meets your security requirements and consider any additional middleware you may need.
184-184: IPv6 support is enabled.
Adding `"::"` to `cfg.Service.AltNames` is helpful for test scenarios involving IPv6. Ensure you have corresponding tests and environment configurations to validate IPv6 connectivity if intended.
internal/api_server/middleware/server_test.go (1)
55-55: Rename local variable to avoid overshadowing the `config` import
Using the same identifier `config` for both the package import and a local variable can cause confusion. Consider renaming the local variable to something more descriptive (e.g., `cfg`) to improve clarity.
 serverLog := log.InitLogs()
-config := config.NewDefault()
+cfg := config.NewDefault()
 cfg.Service.CertStore = tempDir
 ...
-srv := middleware.NewHTTPServerWithTLSContext(testTLSCNServer{}, serverLog, listener.Addr().String(), config)
+srv := middleware.NewHTTPServerWithTLSContext(testTLSCNServer{}, serverLog, listener.Addr().String(), cfg)
deploy/quadlets/flightctl-kv/flightctl-kv.container (1)
21-43: Consider external config handling
Generating Redis configuration via `ExecStartPre` is convenient but can introduce complexity if concurrency or ephemeral restarts occur. Consider bundling a default config in the image or a well-tested external script.
internal/kvstore/kvstore.go (4)
28-39: Connection initialization with ping
Using `redis.NewClient` and a ping check helps validate connectivity. Consider a shorter timeout if frequent connection failures are expected to avoid long startup delays.
53-57: Returning kvStore
Returning the private `kvStore` maintains abstraction. If concurrency is expected, ensure logger fields are safe and consistent across threads.
77-81: SetNX logic
Using `SetNX` ensures only new keys are set. If you need ephemeral locks, consider adding an expiration for more robust concurrency control.
135-144: PrintAllKeys for debugging
This method is helpful for diagnostics. Consider using structured logs instead of prints to unify logging.
internal/store/model/templateversion.go (1)
17-22: Transition to JSONB fields
Switching Labels and Annotations to JSONMap with `type:jsonb` offers more flexibility. If you query by these fields, consider proper indexing strategies.
internal/api_server/agentserver/server.go (1)
51-62: TLS config injection
Allowing a custom TLS config fosters flexible deployment scenarios, such as certificate rotations or advanced cipher suites.
internal/store/selector/types.go (3)
90-94: Validate or sanitize empty names.
While `selectorName` is straightforward, consider enforcing validation for empty strings. For example, you might return an error if `strings.TrimSpace(name)` is an empty string, to avoid confusions or edge cases with unused selectors.
 func NewSelectorName(name string) SelectorName {
+	trimmed := strings.TrimSpace(name)
+	if trimmed == "" {
+		// Optionally implement error handling or fallback
+		// e.g., log.Warn("Selector name is empty")
+	}
 	return selectorName{trimmed}
 }
105-109: Unify with `selectorName`.
Because `hiddenSelectorName` has similar logic to `selectorName`, consider refactoring both to share the same struct, perhaps with an internal “hidden” attribute or separate flag.
110-114: Consider explicit “hidden” indicator.
A constructor with a “hidden” parameter might simplify maintenance if more behaviors differ in the future.
internal/service/templateversion.go (1)
110-113: Consider concurrency or batch size control.
Reading 100 records, deleting them, and repeating in a loop is fine for moderate data sets. If you expect a large number of template versions, you could consider parallelization or adjusting the batch size.
internal/api_server/agentserver/grpc.go (1)
42-45: Revisit TLS strategy.
`PrepareGRPCService` provides a non-TLS gRPC server. If TLS is a requirement, consider providing an optional TLS configuration or ensuring that external layers handle encryption.
internal/store/enrollmentrequest.go (2)
21-21: Add docstring for new interface method.
Consider documenting `Update()` to clarify expected behavior and error cases.
77-81: Naming consistency.
`Create()` internally calls `createOrUpdate(...)` with `ModeCreateOnly`. This is fine, but a direct naming alignment might improve clarity down the road.
pkg/queryparser/sqljsonb/sqljsonb.go (5)
14-17: Add doc comments.
Consider adding doc comments for these type definitions, particularly beneficial for `FunctionHandler` and `verificationHandler` to clarify usage patterns.
23-27: Rename “handle” to something more descriptive.
Using “function” or “runner” might communicate its purpose more clearly.
74-87: Improve error detail.
When a function does not exist, the error simply says “does not exist.” Including the function name in the message would help with debugging.
 if !exists {
-	return fmt.Errorf("does not exist")
+	return fmt.Errorf("function '%s' does not exist", name)
 }
100-150: Refactor predefined function creation if it grows.
All the built-in function definitions are inline; factoring them out into a helper could simplify reading and maintenance once the library expands.
152-155: Avoid shadowing standard “parser.”
Renaming this struct might reduce confusion with language-level parsers.
internal/store/resourcesync.go (1)
49-51: Check migration error handling thoroughly.
It’s good to see an immediate return on migration failure. Consider adding logging or wrapped context if you want more insight in case of errors.
internal/store/selector/label.go (4)
22-41: Use of panic in `NewLabelSelectorFromMapOrDie`.
Panics can surprise users. For library-like code, returning an error is often safer. If the panic is desired, ensure it’s well documented.
74-91: `NewLabelSelectorOrDie` mirrors the MapOrDie design.
Similar caution about panics applies here. Document clearly that this is best used in contexts where a panic is acceptable.
179-241: `Tokenize` method for generating parser tokens.
This effectively transforms label requirements into tokens. If usage grows, consider caching or short-circuiting logic for repeated tokens.
299-309: `queryField` override.
The override is succinct and trusts prior checks. Ensure future changes do not bypass validations.
internal/store/selector/selector.go (1)
186-187: Assigning the `Name` field explicitly.
This ensures consistent usage of the `SelectorName`, but confirm that name transformations or mapping are handled properly upstream.
internal/store/selector/annotation.go (2)
24-43: `NewAnnotationSelectorFromMapOrDie`: caution with panics.
Similar to the label counterpart, consider returning errors for library scenarios or clarifying usage scope.
76-93: `NewAnnotationSelectorOrDie`: identical pattern to labels.
Keep usage consistent with `NewAnnotationSelectorFromMapOrDie`. Panics should be well documented in library code.
internal/store/certificatesigningrequest.go (1)
46-48: Handle migration error at once.
Returning early on migration error is a good practice. Consider logging the error for debugging.
internal/service/enrollmentrequest.go (2)
403-403: Return typed HTTP error for device creation failures
Currently, a generic Golang error is returned. Consider returning a typed server response (e.g., 500) for better clarity and alignment with the other methods that use structured error responses.
443-447: Unimplemented endpoint
The `PatchEnrollmentRequestStatus` function is not implemented yet. If this endpoint needs to remain as a placeholder, add a TODO comment or a clarifying error message. Otherwise, consider completing the implementation before merging.
internal/store/selector/field.go (3)
39-56: NewFieldSelectorFromMapOrDie
Panicking on error is suitable for internal usage in code where failing to construct the selector is catastrophic. No issues found.
111-137: NewFieldSelector refactor
Creating the base `FieldSelector` from a raw string follows the new data flow to a typed selector. Check if additional validations are needed for complex cross-field relationships.
600-600: queryField override
Bypassing column name verification is acceptable given prior checks.
internal/api/server/server.gen.go (1)
Line range hint 1-6473: Consider adding OpenAPI documentation for PATCH operations
While the implementation is solid, it would be beneficial to enhance the OpenAPI documentation to clearly specify:
- The expected format of PATCH requests (JSON Patch, Merge Patch, etc.)
- Sample request/response payloads
- Validation rules for partial updates
test/login/login.go (1)
10-34: Consider enhancing error handling and token validation
While the implementation is functional, there are several areas that could be improved:
- Token validation could be more robust
- Error messages could be more descriptive
- Consider adding timeout to context for API calls
- The retry logic could benefit from a max retry count
func LoginToAPIWithToken(harness *e2e.Harness) { + const maxRetries = 3 + ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() // Login Arguments and token var token string loginArgs := []string{"login", "${API_ENDPOINT}", "--insecure-skip-tls-verify"} if token != "" { loginArgs = append(loginArgs, "--token", token) } - // login - out, err := harness.CLI(loginArgs...) + + var out string + var err error + for retries := 0; retries < maxRetries; retries++ { + // login + out, err = harness.CLI(loginArgs...) + if !strings.Contains(out, "You must obtain an API token by visiting") { + break + } + + // Get new token + token, err = harness.SH("oc", "whoami", "-t") + if err != nil { + continue + } + + token = strings.TrimSpace(token) + if token == "" { + continue + } + + loginArgs = append(loginArgs, "--token", token) + } - if strings.Contains(out, "You must obtain an API token by visiting") { - token, err = harness.SH("oc", "whoami", "-t") - token = strings.TrimSpace(token) - // Validate Token Retrieval - Expect(err).ToNot(HaveOccurred(), "Failed to retrieve token") - Expect(token).ToNot(BeEmpty(), "Token from 'oc whoami' should not be empty") - - // Retry login with the retrieved token - loginArgs = append(loginArgs, "--token", token) - out, err = harness.CLI(loginArgs...) - } // Validate Login Expect(err).ToNot(HaveOccurred()) Expect(strings.TrimSpace(out)).To(BeElementOf("Auth is disabled", "Login successful")) }internal/store/model/resource.go (1)
26-28: Enhanced metadata storage with JSONB
The change from `pq.StringArray` to `JSONMap[string, string]` for Labels and Annotations:
- Improves query flexibility with JSONB
- Provides better type safety
- Enables more efficient indexing
- Maintains compatibility with Kubernetes metadata patterns
Consider adding database indexes on commonly queried label/annotation keys to improve query performance:
CREATE INDEX idx_resource_labels ON resources USING gin (labels);
CREATE INDEX idx_resource_annotations ON resources USING gin (annotations);
Also applies to: 30-32
internal/store/model/jsonfield.go (1)
73-86: Validate error handling.
The `Scan` method correctly handles different source types. Ensure that calling this for an unexpected source type is tested, verifying that it returns the expected error message.
internal/agent/device/resource/cpu_test.go (1)
94-94: Reusing variables for updates.
Here you reassign `monitorSpec` and call `rm.FromCpuResourceMonitorSpec(monitorSpec)` again. Just confirm that it's necessary to re-instantiate `ResourceMonitor` each time if you want to avoid potential state leakage.
internal/api_server/middleware/tls.go (2)
23-31: Dynamic configuration of server timeouts.
Reading timeouts from `cfg.Service.*` is a good approach. Be sure to document these fields in the config struct so that future maintainers know the valid ranges and default values.
87-103: Good request size limiter middleware.
This is a solid approach to prevent oversized requests and potential DoS vectors. Consider logging attempts that exceed the limits for troubleshooting.
internal/agent/device/resource/memory_test.go (1)
92-92: CPU Spec in Memory Monitor Test
The struct name references CPU in a memory test, which might be slightly confusing. If the logic is shared between CPU and memory, consider adding clarifying comments or using a more generic name.
cmd/flightctl-api/main.go (1)
99-104: Graceful error handling and logging.
Creating the TCP listener with `net.Listen("tcp", cfg.Service.AgentEndpointAddress)` is straightforward, but consider wrapping any errors with contextual information (e.g., the address and parameters used) to facilitate debugging. The current logging message is acceptable, but more details can improve troubleshooting.
internal/api_server/server.go (2)
106-107: Reinforce request validity.
Combining `middleware.RequestSize(...)` and `tlsmiddleware.RequestSizeLimiter(...)` ensures robust protection against overlong URLs and headers. Confirm that correct error responses are returned when these limits are breached.
Do you want me to propose test scenarios that intentionally breach these limits to confirm error behavior?
132-132: Expanding TLS config usage.
`tlsmiddleware.NewHTTPServer(router, s.log, s.cfg.Service.Address, s.cfg)` now includes `s.cfg`. This may allow advanced TLS configuration. Ensure that any relevant fields in `s.cfg` are validated for correctness (e.g., ciphers, TLS versions).
test/integration/kvstore/kvstore_test.go (2)
19-19: Consider abstracting logger usage.
Using `logrus.Logger` directly is perfectly fine, though you could alternatively declare a `logrus.FieldLogger` interface to allow more flexible injection of different logger implementations in future.
27-27: Avoid hardcoded credentials in test code.
Embedding `"adminpass"` might be acceptable in test scope, but consider using environment variables or config constants to prevent potential leakage.
internal/store/model/selectors_test.go (1)
49-49: Enrich the error message.
Returning `"does not exist in API schema"` is correct, but consider including the specific missing selector for clearer debugging.
internal/store/selector/selector_test.go (1)
153-153: Potentially extend the unknown selector test.
The code checks for an error on unknown selectors, which is good. Consider verifying the error message (if relevant) to ensure clarity for debugging and consistent error handling.
internal/config/config.go (1)
37-56: Consider validating newly added HTTP config fields.
These new fields in `svcConfig` introduce adjustable HTTP request limits and timeouts, which is great for flexibility. However, you may want to validate invalid or extreme values (e.g., negative timeouts, excessively large sizes, etc.) to safeguard performance and memory usage.
 func Validate(cfg *Config) error {
+	// Example validation checks
+	svc := cfg.Service
+	if svc.HttpReadTimeout < 0 || svc.HttpWriteTimeout < 0 {
+		return fmt.Errorf("HTTP timeouts must be non-negative")
+	}
 	return nil
 }
pkg/k8s/selector/labels/selector.go (2)
29-59: Clarify the difference between ParseSelectorOrDie and Parse.
The docstring states that these functions parse "different selectors with different syntaxes," but the distinctions are subtle. Consider elaborating on what specifically differs, or unify them if most logic is duplicated.
60-66: Avoid panicking on parse failures if possible.
`ParseSelectorOrDie` panics when it encounters parsing errors. This is acceptable for test code or initialization, but in library code, returning an error is often preferable. If this is intended for quick-fail scenarios, you may leave it as is; otherwise, consider refactoring to return an error.
 func ParseSelectorOrDie(s string) selector.Selector {
 	parsedSelector, err := Parse(s)
 	if err != nil {
-		panic(err)
+		// return a default selector or propagate error
+		// return nil
 	}
 	return parsedSelector
 }
internal/store/templateversion.go (2)
44-46: Consider logging migration errors for better traceability
While returning the error is good, adding a log statement before returning would help debug migration failures without having to rely solely on the caller’s error handling.
 if err := s.db.AutoMigrate(&model.TemplateVersion{}); err != nil {
+	s.log.Errorf("AutoMigrate error: %v", err)
 	return err
 }
48-60: DRY approach for index creation
You have a repetitive block of code to check the dialect and create the GIN index. Consider abstracting this logic into a helper function to avoid duplication.
test/e2e/field_selectors/field_selectors_test.go (2)
78-83: Avoid fixed sleeps in E2E tests
Using `time.Sleep(30 * time.Second)` can cause slow test runs and flakiness. Consider repeatedly polling until the fleet is ready or a timeout occurs.
-time.Sleep(30 * time.Second) // to Establish fleet before adding device
+// Poll until the fleet is ready, or timeout after 30s
+eventuallyFleetIsReady := func() bool {
+	// Insert logic that checks readiness
+	return checkFleetReady()
+}
+Eventually(eventuallyFleetIsReady, 30*time.Second, 5*time.Second).Should(BeTrue())
196-209: Potentially unify YAML parsing
You have a custom YAML parsing flow for extracting device names. Consider reusing a shared YAML parsing utility if you parse other resources similarly, to avoid code duplication.
api/v1alpha1/util.go (1)
365-397: Strengthen tests for operator-specific paths.
The `String()` method for `MatchExpression` returns different outputs based on the operator (Exists, DoesNotExist, In, NotIn). Consider adding specific unit tests for each operator branch to ensure no future modifications break these logic branches. The fallback to an empty string for unsupported operators is clear and sensible.
internal/tasks/resourcesync.go (2)
97-101: Validate newly constructed FieldSelector.
The creation of `fs` via `NewFieldSelectorFromMap` is appropriately guarded by an error check on line 99. Consider logging a more descriptive message that includes the invalid map or additional details for easier troubleshooting. Otherwise, the approach looks fine.
103-104: Use consistent paging limit strategy.Currently, the limit is hard-coded to 100. Confirm that it aligns with broader pagination strategies used throughout the codebase. If different listings use different default limits, consider centralizing them into a constants file for maintainability and clarity.
test/integration/store/repository_test.go (1)
172-173: Ensure the label selector creation is appropriate for test usage.
In testing scenarios, using `selector.NewLabelSelectorFromMapOrDie` is convenient, but it will panic upon errors. This is acceptable in test code, yet consider catching errors if there is a need for partial test execution or a more graceful failure.
api/v1alpha1/agent/spec.gen.go (1)
22-125: Consider externalizing the large embedded Swagger spec.
Embedding a large Base64-encoded, gzipped specification can generate verbose diffs and complicate version control. An external file with a build step to embed the spec could improve maintainability.
internal/service/fleet.go (1)
366-369: Method is not implemented yet.
A placeholder returning a "not yet implemented" error is acceptable temporarily. However, adding a TODO or reference to a tracking issue would help future maintainers understand when to complete it.
internal/service/device.go (2)
303-303: Retaining existing logic for GetRenderedDeviceSpec.
No direct changes to the function body, but keep an eye on how potential new controller logic might require adjustments for rendered specs.
381-384: PatchDeviceStatus remains unimplemented.
Same as in `fleet.go`, adding a TODO comment or referencing a tracking ticket helps manage pending functionality.
internal/store/selector/field_test.go (1)
61-67: Verify Input Validation for Bad Strings
The approach to continue upon error is consistent, but consider adding logs or counters to track how many failures occur for better debugging.
internal/cli/get.go (1)
85-86: Consider clarifying equality operators in the help text.
Both lines reference `'='` and `'=='` as valid operators. While this is technically correct, it may cause confusion among users. Recommend clarifying the usage or aligning the example with the actual implementation for better consistency.
internal/api/server/agent/server.gen.go (4)
434-441: Add unit tests for the new 403 response scenario.
Now that we've introduced a distinct 403 error response, ensure coverage in integration tests. This helps confirm correct behavior and expected responses for forbidden scenarios.
452-459: Add unit tests for the new 503 response scenario.
Include tests to confirm behavior when the service is unavailable, ensuring that error handling and user feedback are properly managed.
487-494: Add 403 scenario tests for 'ReadEnrollmentRequest'.
Having a dedicated 403 response path suggests new ACL or permission handling. Recommend adding at least a single test verifying the correct HTTP status, JSON structure, and message.
505-512: Add 503 scenario tests for 'ReadEnrollmentRequest'.
Newly introduced 503 handling can be tested by simulating downtime or forced unavailability, ensuring your code paths set the correct status and payload.
deploy/kind.yaml (1)
20-21: LGTM! Consider updating deployment documentation
The port mapping change is consistent with the Keycloak service updates. Consider updating any deployment documentation or README files that might reference the old port number.
deploy/helm/flightctl/charts/keycloak/templates/keycloak-service.yaml (1)
Line range hint 1-1: Verify deployment update strategy
These Keycloak port changes appear to be infrastructure-related modifications in a PR described as "Policy v4". Please clarify:
- The relationship between these port changes and the policy management features
- The deployment update strategy to handle existing installations
- Any required updates to client configurations that might be connecting to Keycloak
Consider documenting the port change in release notes and providing migration steps for existing deployments.
deploy/helm/flightctl/charts/ui/templates/flightctl-ui-cm.yaml (1)
Line range hint 15-15: Consider documenting the rationale for the port change.
The port change from 8080 to 8081 appears to be consistently applied across all configurations. However, it would be helpful to:
- Document the reason for this change in the PR description
- Add upgrade instructions for existing deployments
- Update any relevant documentation
- Consider adding a note about potential conflicts with other services that might use port 8081
This will help maintainers understand the motivation behind these changes and ensure smooth upgrades for users.
Also applies to: 20-21, 19-19
.github/workflows/pr-e2e-testing.yaml (1)
68-74: LGTM! Good addition of log collection.
The log collection step is well-configured to run even on failure and collects logs from both namespaces.
Consider adding log retention configuration to manage artifact storage:
 - name: Collect and Upload Logs
   if: always() && steps.filter.outputs.notdocs == 'true'
   uses: ./.github/actions/collect-logs
   with:
     namespace-external: 'flightctl-external'
     namespace-internal: 'flightctl-internal'
     log-directory: 'e2e-logs'
+    retention-days: 7 # Optional: Configure log retention period
deploy/helm/flightctl/templates/flightctl-db-deployment.yaml (1)
20-23: Fix indentation in imagePullSecrets block.The conditional block for imagePullSecrets is functionally correct, but the indentation should be adjusted to match Kubernetes manifest conventions (2 spaces).
Apply this diff to fix the indentation:
 {{- if .Values.global.imagePullSecretName }}
-      imagePullSecrets:
-      - name: {{ .Values.global.imagePullSecretName }}
+  imagePullSecrets:
+    - name: {{ .Values.global.imagePullSecretName }}
 {{- end }}
🧰 Tools
🪛 yamllint (1.35.1)
[warning] 21-21: wrong indentation: expected 4 but found 6
(indentation)
[error] 20-20: syntax error: expected the node content, but found '-'
(syntax)
.github/actions/collect-logs/action.yml (3)
33-47: Reduce code duplication in log collection loops.
The log collection logic for deployments and statefulsets is very similar and could be refactored into a function for better maintainability.
Consider refactoring like this:
timestamp=$(date +%Y%m%d_%H%M%S) log_files_collected=false + collect_resource_logs() { + local resource_type=$1 + local resource_name=$2 + local namespace=$3 + + echo "Collecting logs for $resource_name..." + if kubectl logs -n $namespace $resource_type/$resource_name --all-containers=true \ + > ${{ inputs.log-directory }}/${resource_name}_${timestamp}.log; then + log_files_collected=true + else + echo "Error collecting logs for $resource_name" + fi + } + for deployment in flightctl-api flightctl-db flightctl-periodic flightctl-worker; do - echo "Collecting logs for $deployment..." namespace="" if [[ "$deployment" == "flightctl-api" ]]; then namespace="${{ inputs.namespace-external }}" else namespace="${{ inputs.namespace-internal }}" fi - if kubectl logs -n $namespace deployment/$deployment --previous --all-containers=true \ - > ${{ inputs.log-directory }}/${deployment}_${timestamp}.log; then - log_files_collected=true - else - echo "Error collecting logs for $deployment" - fi + collect_resource_logs "deployment" "$deployment" "$namespace" done for statefulset in flightctl-kv flightctl-rabbitmq; do - echo "Collecting logs for $statefulset..." - if kubectl logs -n ${{ inputs.namespace-internal }} statefulset/$statefulset --all-containers=true \ - > ${{ inputs.log-directory }}/${statefulset}_${timestamp}.log; then - log_files_collected=true - else - echo "Error collecting logs for $statefulset" - fi + collect_resource_logs "statefulset" "$statefulset" "${{ inputs.namespace-internal }}" doneAlso applies to: 49-57
25-27: Consider compressing old logs before deletion.
Instead of directly deleting logs older than 7 days, consider compressing them first for archival purposes.
 mkdir -p ${{ inputs.log-directory }}
 # Cleanup logs older than 7 days
-find ${{ inputs.log-directory }} -type f -name "*.log" -mtime +7 -delete
+find ${{ inputs.log-directory }} -type f -name "*.log" -mtime +7 -exec gzip {} \;
+find ${{ inputs.log-directory }} -type f -name "*.log.gz" -mtime +14 -delete
59-62: Enhance error reporting in the output.
Consider adding more detailed error information to the GitHub output for better debugging.
 if [[ "$log_files_collected" == false ]]; then
   echo "No log files were collected."
 fi
-echo "log_files_collected=$log_files_collected" >> $GITHUB_OUTPUT
+{
+  echo "log_files_collected=$log_files_collected"
+  echo "timestamp=$timestamp"
+  echo "total_size=$(du -sh ${{ inputs.log-directory }} | cut -f1)"
+} >> $GITHUB_OUTPUT
deploy/helm/flightctl/templates/flightctl-rabbitmq-statefulset.yaml (1)
70-72: Consider adding additional configuration mount.
While the basic configuration is present, consider mounting a full RabbitMQ configuration file for more advanced settings.
deploy/helm/flightctl/templates/flightctl-kv-statefulset.yaml (1)
20-36: Review Redis configuration parameters.
The Redis configuration includes essential settings, but consider adding:
- `maxmemory` and `maxmemory-policy`
- `timeout` for client connections
- `tcp-keepalive` for connection health
 redis.conf: |-
   bind 0.0.0.0
   protected-mode yes
   port 6379
   dir /var/lib/redis/data
+  maxmemory 1536mb
+  maxmemory-policy allkeys-lru
+  timeout 300
+  tcp-keepalive 300
test/scripts/deploy_with_helm.sh (1)
7-8: Consider improving the comment format for IMAGE_PULL_SECRET_PATH.
The comment should follow shell script documentation conventions and include example usage.
-# If using images from a private registry, specify a path to a Kubernetes Secret yaml for your pull secret (in the flightctl-internal namespace)
-# IMAGE_PULL_SECRET_PATH=
+# IMAGE_PULL_SECRET_PATH - Path to a Kubernetes Secret yaml for private registry pull secret
+#   Must be in the flightctl-internal namespace
+#   Example: IMAGE_PULL_SECRET_PATH="./secrets/registry-pull-secret.yaml"
docs/developer/architecture/field-selectors.md (2)
22-31: Improve documentation formatting and clarity.
The section on hidden and private selectors has some formatting inconsistencies:
- Inconsistent list marker style (mix of asterisks and dashes)
- Loose punctuation marks
- Redundant word "Selectors" in the heading
-#### Using Hidden and Private Selectors -Selectors can be further annotated as hidden or private: +#### Hidden and Private Selectors +Selectors can be annotated as hidden or private: ```go Labels JSONMap[string, string] `gorm:"type:jsonb" selector:"metadata.labels,hidden,private"`-*
hidden: The selector will not be exposed to end-users during discovery. It will not appear in the list of available selectors.-*
private: The selector cannot be directly used by the field-selector. However, it may still be utilized internally by other selectors, such as the label selector.
+-hidden: The selector will not be exposed to end-users during discovery. It will not appear in the list of available selectors.
+-private: The selector cannot be directly used by the field-selector. However, it may still be utilized internally by other selectors, such as the label selector.<details> <summary>🧰 Tools</summary> <details> <summary>🪛 LanguageTool</summary> [duplication] ~22-~22: Possible typo: you repeated a word Context: ...elector. #### Using Hidden and Private Selectors Selectors can be further annotated as hidden or p... (ENGLISH_WORD_REPEAT_RULE) --- [uncategorized] ~28-~28: Loose punctuation mark. Context: ....labels,hidden,private"` ``` * `hidden`: The selector will not be exposed to end... (UNLIKELY_OPENING_PUNCTUATION) --- [uncategorized] ~30-~30: Loose punctuation mark. Context: ...ist of available selectors. * `private`: The selector cannot be directly used by... (UNLIKELY_OPENING_PUNCTUATION) </details> </details> --- `52-55`: **Fix code block indentation.** The code block contains hard tabs. Replace them with spaces for consistent rendering across different platforms. ```diff - return []selector.SelectorName{ - selector.NewSelectorName("metadata.name"), - selector.NewSelectorName("metadata.alias"), - } + return []selector.SelectorName{ + selector.NewSelectorName("metadata.name"), + selector.NewSelectorName("metadata.alias"), + }🧰 Tools
🪛 Markdownlint (0.37.0)
52-52: Column: 1
Hard tabs(MD010, no-hard-tabs)
53-53: Column: 1
Hard tabs(MD010, no-hard-tabs)
54-54: Column: 1
Hard tabs(MD010, no-hard-tabs)
55-55: Column: 1
Hard tabs(MD010, no-hard-tabs)
internal/agent/agent.go (1)
122-123: Add error handling for policy manager initialization.Consider handling potential initialization errors from
policy.NewManager(). If the policy manager fails to initialize, it could lead to runtime issues.-policyManager := policy.NewManager(a.log) +policyManager, err := policy.NewManager(a.log) +if err != nil { + return fmt.Errorf("failed to initialize policy manager: %w", err) +}Also applies to: 128-128
internal/agent/device/spec/queue.go (1)
312-324: Optimize heap traversal in Remove method.The linear search through the heap could be inefficient for large queues. Consider maintaining a reverse index from version to heap position.
type queue struct { heap ItemHeap items map[int64]*Item + // Track heap positions for O(1) removal + positions map[int64]int maxSize int log *log.PrefixLogger } func (q *queue) Remove(version int64) { if _, ok := q.items[version]; ok { delete(q.items, version) + if pos, ok := q.positions[version]; ok { + heap.Remove(&q.heap, pos) + delete(q.positions, version) + } q.log.Debugf("Forgetting item version %v", version) } - // ensure heap removal - for i, heapItem := range q.heap { - if heapItem.Version == version { - q.log.Debugf("Removing item version from heap: %d", version) - heap.Remove(&q.heap, i) - break - } - } }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (149)
.github/actions/collect-logs/action.yml(1 hunks).github/workflows/pr-e2e-testing.yaml(1 hunks).github/workflows/pr-smoke-testing.yaml(2 hunks)Containerfile.api(1 hunks)api/v1alpha1/agent/openapi.yaml(2 hunks)api/v1alpha1/agent/spec.gen.go(1 hunks)api/v1alpha1/openapi.yaml(19 hunks)api/v1alpha1/spec.gen.go(1 hunks)api/v1alpha1/types.gen.go(12 hunks)api/v1alpha1/util.go(1 hunks)api/v1alpha1/validation.go(1 hunks)cmd/flightctl-api/main.go(3 hunks)cmd/flightctl-worker/main.go(2 hunks)deploy/helm/flightctl/charts/keycloak/templates/keycloak-deployment.yaml(3 hunks)deploy/helm/flightctl/charts/keycloak/templates/keycloak-gateway-route.yaml(1 hunks)deploy/helm/flightctl/charts/keycloak/templates/keycloak-route.yaml(2 hunks)deploy/helm/flightctl/charts/keycloak/templates/keycloak-service.yaml(1 hunks)deploy/helm/flightctl/charts/ui/templates/flightctl-ui-cm.yaml(1 hunks)deploy/helm/flightctl/templates/flightctl-api-config.yaml(2 hunks)deploy/helm/flightctl/templates/flightctl-api-deployment.yaml(0 hunks)deploy/helm/flightctl/templates/flightctl-api-gateway-route-agent-grpc.yaml(0 hunks)deploy/helm/flightctl/templates/flightctl-api-route-agent-grpc.yaml(0 hunks)deploy/helm/flightctl/templates/flightctl-api-service-agent-grpc.yaml(0 hunks)deploy/helm/flightctl/templates/flightctl-db-deployment.yaml(1 hunks)deploy/helm/flightctl/templates/flightctl-kv-statefulset.yaml(2 hunks)deploy/helm/flightctl/templates/flightctl-rabbitmq-statefulset.yaml(3 hunks)deploy/helm/flightctl/values.dev.yaml(2 hunks)deploy/helm/flightctl/values.yaml(3 hunks)deploy/kind.yaml(1 hunks)deploy/podman/compose.yaml(1 hunks)deploy/quadlets/flightctl-kv/flightctl-kv.container(1 hunks)docs/developer/architecture/field-selectors.md(2 hunks)docs/user/auth-resources.md(1 hunks)docs/user/building-images.md(1 hunks)docs/user/field-selectors.md(3 hunks)docs/user/getting-started.md(1 hunks)examples/device.yaml(1 hunks)go.mod(3 hunks)internal/agent/agent.go(5 hunks)internal/agent/client/client.go(1 
hunks)internal/agent/client/enrollment.go(1 hunks)internal/agent/config.go(0 hunks)internal/agent/device/device.go(8 hunks)internal/agent/device/errors/errors.go(2 hunks)internal/agent/device/policy/doc.go(1 hunks)internal/agent/device/policy/manager.go(1 hunks)internal/agent/device/policy/manager_test.go(1 hunks)internal/agent/device/policy/mock_policy.go(1 hunks)internal/agent/device/policy/policy.go(1 hunks)internal/agent/device/resource/common.go(1 hunks)internal/agent/device/resource/common_test.go(2 hunks)internal/agent/device/resource/cpu_test.go(3 hunks)internal/agent/device/resource/memory_test.go(3 hunks)internal/agent/device/resource/resource.go(2 hunks)internal/agent/device/spec/docs.go(1 hunks)internal/agent/device/spec/manager.go(10 hunks)internal/agent/device/spec/mock_spec.go(5 hunks)internal/agent/device/spec/queue.go(2 hunks)internal/agent/device/spec/queue_test.go(4 hunks)internal/agent/device/spec/spec.go(3 hunks)internal/agent/device/spec/spec/mock_spec.go(0 hunks)internal/agent/device/spec/spec_test.go(11 hunks)internal/agent/device/status/status.go(1 hunks)internal/api/client/agent/client.gen.go(4 hunks)internal/api/client/client.gen.go(36 hunks)internal/api/server/agent/server.gen.go(4 hunks)internal/api/server/server.gen.go(26 hunks)internal/api_server/agentserver/grpc.go(1 hunks)internal/api_server/agentserver/server.go(6 hunks)internal/api_server/middleware/server_test.go(1 hunks)internal/api_server/middleware/tls.go(3 hunks)internal/api_server/server.go(3 hunks)internal/cli/get.go(1 hunks)internal/client/config.go(2 hunks)internal/config/config.go(4 hunks)internal/console/console.go(1 hunks)internal/container/container_test.go(1 hunks)internal/crypto/tls.go(2 hunks)internal/flterrors/flterrors.go(1 hunks)internal/kvstore/kvstore.go(2 hunks)internal/service/certificatesigningrequest.go(1 hunks)internal/service/common/enrollmentrequest.go(1 hunks)internal/service/console.go(1 hunks)internal/service/device.go(6 
hunks)internal/service/device_test.go(1 hunks)internal/service/enrollmentconfig.go(0 hunks)internal/service/enrollmentrequest.go(6 hunks)internal/service/fleet.go(4 hunks)internal/service/fleet_test.go(2 hunks)internal/service/handler.go(2 hunks)internal/service/repository.go(2 hunks)internal/service/resourcesync.go(2 hunks)internal/service/templateversion.go(2 hunks)internal/store/certificatesigningrequest.go(2 hunks)internal/store/common.go(1 hunks)internal/store/common_test.go(0 hunks)internal/store/device.go(8 hunks)internal/store/enrollmentrequest.go(5 hunks)internal/store/fleet.go(4 hunks)internal/store/model/certificatesigningrequest.go(2 hunks)internal/store/model/device.go(2 hunks)internal/store/model/enrollmentrequest.go(2 hunks)internal/store/model/fleet.go(2 hunks)internal/store/model/jsonfield.go(2 hunks)internal/store/model/repository.go(2 hunks)internal/store/model/resource.go(1 hunks)internal/store/model/resourcesync.go(2 hunks)internal/store/model/selectors.go(2 hunks)internal/store/model/selectors_test.go(4 hunks)internal/store/model/templateversion.go(3 hunks)internal/store/repository.go(2 hunks)internal/store/resourcesync.go(2 hunks)internal/store/selector/annotation.go(1 hunks)internal/store/selector/annotation_test.go(1 hunks)internal/store/selector/field.go(18 hunks)internal/store/selector/field_test.go(4 hunks)internal/store/selector/label.go(1 hunks)internal/store/selector/label_test.go(1 hunks)internal/store/selector/selector.go(7 hunks)internal/store/selector/selector_test.go(4 hunks)internal/store/selector/types.go(2 hunks)internal/store/store.go(2 hunks)internal/store/templateversion.go(1 hunks)internal/tasks/callback_manager_test.go(2 hunks)internal/tasks/fleet_rollout.go(4 hunks)internal/tasks/fleet_selector.go(3 hunks)internal/tasks/fleet_validate.go(1 hunks)internal/tasks/resourcesync.go(2 hunks)internal/util/util.go(1 hunks)internal/worker_server/server.go(1 hunks)pkg/k8s/selector/labels/selector.go(1 
hunks)pkg/queryparser/sqljsonb/sqljsonb.go(1 hunks)test/e2e/agent/agent_test.go(1 hunks)test/e2e/agent/agent_updates_test.go(1 hunks)test/e2e/field_selectors/field_selectors_test.go(1 hunks)test/harness/test_harness.go(1 hunks)test/integration/kvstore/kvstore_test.go(1 hunks)test/integration/store/device_test.go(10 hunks)test/integration/store/enrollmentrequest_test.go(5 hunks)test/integration/store/fleet_test.go(12 hunks)test/integration/store/repository_test.go(2 hunks)test/integration/store/resourcesync_test.go(3 hunks)test/integration/tasks/repotester_conditions_test.go(2 hunks)test/integration/tasks/repotester_test.go(1 hunks)test/login/login.go(1 hunks)test/scripts/deploy_with_helm.sh(5 hunks)test/scripts/kind_cluster.yaml(1 hunks)test/util/create_utils.go(2 hunks)test/util/util.go(3 hunks)
💤 Files with no reviewable changes (8)
- internal/service/enrollmentconfig.go
- internal/agent/config.go
- deploy/helm/flightctl/templates/flightctl-api-service-agent-grpc.yaml
- deploy/helm/flightctl/templates/flightctl-api-gateway-route-agent-grpc.yaml
- deploy/helm/flightctl/templates/flightctl-api-route-agent-grpc.yaml
- deploy/helm/flightctl/templates/flightctl-api-deployment.yaml
- internal/store/common_test.go
- internal/agent/device/spec/spec/mock_spec.go
✅ Files skipped from review due to trivial changes (5)
- internal/console/console.go
- internal/agent/device/resource/common.go
- internal/agent/device/resource/common_test.go
- examples/device.yaml
- docs/user/building-images.md
🚧 Files skipped from review as they are similar to previous changes (11)
- internal/agent/device/spec/docs.go
- internal/agent/device/policy/doc.go
- Containerfile.api
- internal/tasks/fleet_rollout.go
- internal/tasks/fleet_validate.go
- internal/agent/device/policy/policy.go
- internal/agent/device/spec/spec.go
- internal/store/device.go
- internal/agent/device/policy/manager.go
- internal/agent/device/policy/mock_policy.go
- internal/agent/device/spec/mock_spec.go
🧰 Additional context used
🪛 yamllint (1.35.1)
deploy/helm/flightctl/templates/flightctl-db-deployment.yaml
[warning] 21-21: wrong indentation: expected 4 but found 6
(indentation)
[error] 20-20: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/flightctl/templates/flightctl-rabbitmq-statefulset.yaml
[error] 2-2: syntax error: expected '', but found ''
(syntax)
deploy/helm/flightctl/templates/flightctl-kv-statefulset.yaml
[error] 60-60: syntax error: could not find expected ':'
(syntax)
🪛 LanguageTool
docs/developer/architecture/field-selectors.md
[duplication] ~22-~22: Possible typo: you repeated a word
Context: ...elector. #### Using Hidden and Private Selectors Selectors can be further annotated as hidden or p...
(ENGLISH_WORD_REPEAT_RULE)
[uncategorized] ~28-~28: Loose punctuation mark.
Context: ....labels,hidden,private" ``` *hidden`: The selector will not be exposed to end...
(UNLIKELY_OPENING_PUNCTUATION)
[uncategorized] ~30-~30: Loose punctuation mark.
Context: ...ist of available selectors. * private: The selector cannot be directly used by...
(UNLIKELY_OPENING_PUNCTUATION)
[style] ~68-~68: To form a complete sentence, be sure to include a subject.
Context: ...e resource model. > - NewSelectorName can be replaced with `NewHiddenSelectorName...
(MISSING_IT_THERE)
🪛 Markdownlint (0.37.0)
docs/developer/architecture/field-selectors.md
66-66: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
67-67: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
68-68: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
52-52: Column: 1
Hard tabs
(MD010, no-hard-tabs)
53-53: Column: 1
Hard tabs
(MD010, no-hard-tabs)
54-54: Column: 1
Hard tabs
(MD010, no-hard-tabs)
55-55: Column: 1
Hard tabs
(MD010, no-hard-tabs)
56-56: Column: 1
Hard tabs
(MD010, no-hard-tabs)
57-57: Column: 1
Hard tabs
(MD010, no-hard-tabs)
61-61: Column: 1
Hard tabs
(MD010, no-hard-tabs)
🔇 Additional comments (401)
docs/user/field-selectors.md (4)
17-17: LGTM! Clear guidance on label filtering. The note effectively guides users to use Label Selectors for label filtering, which aligns with Kubernetes conventions.
49-49: LGTM! Example correctly demonstrates label filtering. The example properly demonstrates the separation between field selectors and label selectors using the `-l` flag, following Kubernetes conventions.
57-58: LGTM! Example demonstrates complex filtering. The example effectively shows how to combine field selectors for status with label filtering using the `-l` flag.
70-70: LGTM! Error message accurately reflects supported fields. The error message provides a comprehensive list of supported fields, helping users understand valid selector options.
deploy/helm/flightctl/values.dev.yaml (2)
14-14: Verify nodePort availability and client configurations
The Keycloak nodePort has been changed to 8081. This change requires verification of:
- Port availability in the development environment
- Client configurations that might reference this port
- Documentation updates if the port is mentioned
Run this script to check for port references in documentation and configuration files:
✅ Verification successful
Port change verification successful
The port change to 8081 for Keycloak is consistently reflected across all configurations, documentation, and client references. No conflicts or outdated references found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for port references in docs and configs

# Search for port references in documentation
echo "Checking documentation for port references..."
fd -e md -e rst -e txt | xargs rg "8080|8081"

# Search for port references in configuration files
echo "Checking configuration files for port references..."
fd -e yaml -e json -e conf -e ini | xargs rg "8080|8081"
```

Length of output: 2623
4-4: Verify Keycloak service configuration alignment
The internal OIDC authority URL port has been updated to 8081. Please ensure this change is synchronized with the Keycloak service configuration in other deployment files.
Run this script to verify the port configuration across all Helm templates:
✅ Verification successful
Port configuration is properly synchronized
The port change from 8080 to 8081 is consistently implemented across all Keycloak components, including service definitions, deployment configuration, and routes. All necessary configurations are properly aligned.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify Keycloak port configuration consistency across Helm templates

# Check port references in Keycloak-related templates
echo "Checking Keycloak port configuration in templates..."
rg -l "port.*8081|8080" deploy/helm/flightctl/templates/keycloak* -A 2

# Check for any remaining references to port 8080
echo "Checking for any remaining references to old port..."
rg "8080" deploy/helm/flightctl/
```

Length of output: 1165
Script:
```shell
#!/bin/bash
# First find all Keycloak-related files
echo "Finding Keycloak-related files..."
fd keycloak deploy/helm/flightctl/

echo -e "\nChecking port configurations in values files..."
rg "keycloak.*port|port.*keycloak|oidc.*authority" deploy/helm/flightctl/ -A 2

echo -e "\nChecking service definitions..."
rg "kind:\s+Service" deploy/helm/flightctl/ -A 10 | rg -B 10 -A 2 "keycloak"
```

Length of output: 5845
Script:
```shell
#!/bin/bash
echo "Checking Keycloak service port configuration..."
cat deploy/helm/flightctl/charts/keycloak/templates/keycloak-service.yaml

echo -e "\nChecking Keycloak deployment port configuration..."
cat deploy/helm/flightctl/charts/keycloak/templates/keycloak-deployment.yaml

echo -e "\nChecking for specific port references in Keycloak files..."
rg "8080|8081" deploy/helm/flightctl/charts/keycloak/
```

Length of output: 3821
internal/agent/device/spec/spec_test.go (6)
193-193: LGTM: Updated mock expectations for the new `Add` method signature. The test correctly updates the mock expectations to match the new method signature that takes two arguments.
Also applies to: 202-202
683-683: LGTM: Renamed method from `SetVersionFailed` to `SetFailed`. The test correctly updates the mock expectations to use the new method name, maintaining consistency with the interface changes.
Also applies to: 694-694
767-767: LGTM: Comprehensive updates to mock expectations in `TestGetDesired`. The test cases have been properly updated to:

- Use the renamed `IsFailed` method
- Include the new `Add` method signature
- Add proper `Next` method expectations

Also applies to: 781-781, 795-797, 811-813, 825-827
1066-1066: LGTM: Updated mock expectation in version test. The test correctly uses the renamed `IsFailed` method in the version reconciliation test.
1081-1083: LGTM: Added OS specification in test helper. The test helper function has been updated to include the OS specification, which is necessary for the new policy management features.
Line range hint
1-1100: Verify interface changes across the codebase. The test file shows consistent updates to the priority queue interface methods. Let's verify these changes are reflected in the implementation:

- Renamed: `SetVersionFailed` → `SetFailed`
- Renamed: `IsVersionFailed` → `IsFailed`
- Updated: `Add` method signature
Interface changes are properly implemented and consistent
The changes to the PriorityQueue interface methods are consistently implemented across the codebase, including the concrete implementation in queue.go and mock implementation in mock_spec.go. All method signatures match the interface definition and test usage.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify interface changes are consistent across the codebase

# Check for any remaining occurrences of old method names
echo "Checking for old method names..."
rg "SetVersionFailed|IsVersionFailed"

# Verify the new Add method signature
echo "Checking Add method implementation..."
ast-grep --pattern 'interface PriorityQueue { $$$ Add($_, $_) $$$ }'
```

Length of output: 277
Script:
```shell
#!/bin/bash
# Find interface definition and implementations

# Find the interface definition
echo "Looking for PriorityQueue interface definition..."
ast-grep --pattern 'type PriorityQueue interface { $$$ }'

# Find implementations of new methods
echo "Looking for SetFailed implementations..."
ast-grep --pattern 'func $_($_) SetFailed($_) { $$$ }'

echo "Looking for IsFailed implementations..."
ast-grep --pattern 'func $_($_) IsFailed($_) { $$$ }'

echo "Looking for Add implementations..."
ast-grep --pattern 'func $_($_) Add($_) { $$$ }'
```

Length of output: 1757
Script:
```shell
#!/bin/bash
# Find concrete implementations
echo "Looking for concrete implementation file..."
rg -l "type.*struct.*PriorityQueue"

echo "Looking for concrete implementations with full type name..."
ast-grep --pattern 'func ($_ *$_) SetFailed(version string) { $$$ }'
ast-grep --pattern 'func ($_ *$_) IsFailed(version string) bool { $$$ }'
ast-grep --pattern 'func ($_ *$_) Add(ctx context.Context, spec *v1alpha1.RenderedDeviceSpec) { $$$ }'
```

Length of output: 3692
test/util/util.go (2)
128-128: Capture the correct TLS configuration parameter.
Here, you discard the first return value and store the second as `tlsConfig`. Double-check that this usage is valid, aligning with the intended output of `TLSConfigForServer`.
141-141: Pass `tlsConfig` carefully to `agentserver.New`.
The additional argument to `agentserver.New` ensures the agent server uses the proper TLS configuration. Verify that `agentserver.New` indeed expects this parameter and confirm it aligns with the rest of your server’s TLS handling.
internal/api_server/middleware/server_test.go (1)
48-48: Confirm that ignoring the server certificates is intentional
Currently, the returned server certificates fromTLSConfigForServerare discarded. Ensure that this is indeed the desired behavior and that no additional validation or usage is required.As a follow-up, run the following script to see all references of
TLSConfigForServerin the repo and confirm that ignoring the first return value is acceptable in all usages:✅ Verification successful
Ignoring the first TLS config return value is correct
The code is using the second return value (agentTlsConfig) which is specifically configured for mTLS with client certificate verification, which is the appropriate configuration for this test case. This is consistent with other usages in the codebase.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Searching references for TLSConfigForServer usage to verify if ignoring the server certificates is consistent across the codebase
rg -A 5 $'TLSConfigForServer('
```

Length of output: 120
Script:
```shell
#!/bin/bash
# First, let's find the implementation of TLSConfigForServer
ast-grep --pattern 'func TLSConfigForServer($$$) ($$$)'

# Also search with ripgrep as backup, properly escaped
rg -A 5 "TLSConfigForServer"

# Search for the crypto package to find the implementation
fd -t f "crypto" --exec cat {}
```

Length of output: 4245
deploy/quadlets/flightctl-kv/flightctl-kv.container (3)
8-8: Use pinned image for reproducible builds
Specifying a versioned Redis image benefits deployment consistency and prevents unexpected breaks due to latest-tag updates.
13-14: Validate volume usage
The volume mapping has changed from `flightctl-kv` to `flightctl-redis`, along with a new config volume. Confirm that the named volume `flightctl-redis` is properly initialized and data is retained across restarts.
17-17: Ensure systemd notify sequence
Switching the service type to `notify` is beneficial if the container indeed sends systemd-ready signals. Verify that the container's entrypoint or command chain calls `systemd-notify`.
6-9: New imports for Redis integration
The addition of `github.com/redis/go-redis/v9` and `github.com/sirupsen/logrus` aligns with Redis functionality and structured logging. No immediate issues found.
18-18: Error return for DeleteAllKeys
Having the method return an error is a solid improvement, allowing the caller to handle potential failures.
23-25: Struct fields for logging and scripts
Storing `log`, `client`, and `getSetNxScript` is well-structured. Ensure the `client` gets properly closed and reconnected during lifecycle events if needed.
41-41: Logging upon connection
Emitting an informational message confirms successful connections for debugging.
44-51: Lua script for atomic get/set
The script-based approach ensures concurrency safety when retrieving or setting keys.
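For readers unfamiliar with the pattern, the guarantee the script provides — return the existing value if the key is present, otherwise set and return the new one, with no interleaving — can be modeled in-process with a mutex standing in for Redis's atomic script execution. This is a sketch of the contract only, not of the go-redis API:

```go
package main

import (
	"fmt"
	"sync"
)

// memStore mimics the GetOrSetNX contract: callers either observe the
// value already present under key, or atomically install their own.
type memStore struct {
	mu   sync.Mutex
	data map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{data: make(map[string][]byte)}
}

// GetOrSetNX returns the stored value if the key exists; otherwise it
// stores value and returns it. The mutex plays the role of Redis's
// single-threaded Lua script execution.
func (s *memStore) GetOrSetNX(key string, value []byte) []byte {
	s.mu.Lock()
	defer s.mu.Unlock()
	if existing, ok := s.data[key]; ok {
		return existing
	}
	s.data[key] = value
	return value
}

func main() {
	s := newMemStore()
	first := s.GetOrSetNX("template", []byte("v1"))
	second := s.GetOrSetNX("template", []byte("v2")) // loses the race: v1 wins
	fmt.Println(string(first), string(second))       // prints "v1 v1"
}
```

The property worth testing is that a late writer always observes the first writer's value, which is exactly what makes the operation safe as a distributed "first write wins" primitive.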
61-64: Close method logs failures
Logging any shutdown errors helps detect resource leaks or connection issues that occur on close.
67-72: FlushAll usage
`FlushAll` resets all keys in the current Redis instance. Carefully confirm that this won't remove data belonging to other services in the same instance.
86-93: Get method returns nil for missing keys
This behavior aligns with typical KV store semantics and clarifies handling of absent keys.
97-110: Lua-based GetOrSetNX
Relying on Redis scripting for atomic operations is correct. The result-type switch ensures robust handling of Lua outputs.
114-130: Prefix-based key deletion
This scanning logic is valid for removing multiple keys. Be mindful of performance if your prefix can match many keys.internal/store/model/templateversion.go (2)
67-68: Safeguard against nil maps
Using `lo.FromPtrOr` is a concise way to avoid nil references and ensures you'll always have a valid map at runtime.
97-98: Ensure map usage for output
Applying `util.EnsureMap` ensures your JSON fields remain valid maps, preventing unexpected nil pointer concerns.
5-5: TLS & gRPC imports
Introducing `crypto/tls` and `google.golang.org/grpc` aligns the server with secure gRPC capabilities.
34-41: Expanded AgentServer fields
Adding a `tlsConfig` and `grpcServer` field improves clarity when combining HTTP and gRPC endpoints in a single server.
70-72: Expose GetGRPCServer
Offering direct gRPC server access is convenient. Confirm this does not allow unwanted external modifications.
75-101: Run method integrates gRPC and HTTP
Hosting both protocols on a single listener is efficient. The graceful shutdown approach looks correct and properly closes the server on context cancellation.
Line range hint
104-138: prepareHTTPHandler improvements
Sending a clear error if the Swagger spec fails to load is beneficial for troubleshooting. The updated middleware ordering to limit request sizes early is a good security measure.
140-148: grpcMuxHandlerFunc dispatch
This well-known pattern checks the `application/grpc` header to dispatch to gRPC; otherwise it follows HTTP. Very clean approach.
70-75: Parsing FieldSelector
Replacing complex match-expression code with `FieldSelector.Parse` improves readability and maintainability.
78-85: LabelSelector
Using `selector.NewHiddenSelectorName` ensures that labels are transparent to users, yet properly managed in the storage layer.
Line range hint
87-93: AnnotationSelector
Mirroring label-based parsing for annotations leads to consistency. However, ensure that large annotation sets are handled efficiently.internal/store/selector/types.go (6)
86-88: Elevate interface usage.
Introducing an interface for `SelectorName` is a flexible design choice. Ensuring all selector name types conform to a single contract will help maintain consistent usage throughout the codebase.
95-98: Check for empty or invalid names.
Similarly, in `NewSelectorName`, you might want to bail out or log a warning if the trimmed name is empty. This prevents potential confusion in other parts of the code that assume a non-blank name.
100-102: Looks good.
The `String()` method is concise and clear.
115-117: No issues found.
The `String()` method aligns with the interface and remains consistent with `selectorName`.
194-196: Good use of map-based set.
Using `map[string]struct{}` is a neat and idiomatic pattern for sets in Go. This is perfectly acceptable.
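A tiny illustration of the idiom — the empty struct costs zero bytes per entry, so the map stores only keys (the helper name is made up for the sketch):

```go
package main

import "fmt"

// toSet builds a zero-byte-per-value set from a slice of keys.
func toSet(keys []string) map[string]struct{} {
	set := make(map[string]struct{}, len(keys))
	for _, k := range keys {
		set[k] = struct{}{} // duplicates collapse
	}
	return set
}

func main() {
	options := toSet([]string{"hidden", "private", "hidden"})
	_, isHidden := options["hidden"]
	fmt.Println(len(options), isHidden) // prints "2 true"
}
```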
202-202: Ensure `Options` is initialized.
If code elsewhere expects a non-nil map, confirm that `SelectorField.Options` is properly instantiated to avoid potential nil map assignments.
internal/service/templateversion.go (4)
44-48: Verify invalid or empty `request.Fleet`.
When you set `fieldSelector` to `{"metadata.owner": request.Fleet}`, consider handling cases where `request.Fleet` might be empty. An invalid or empty value could disrupt the query.
49-49: Merging additional field selectors is well-handled.
Your approach of parsing and merging is clean and consistent.
Also applies to: 51-55
58-63: Label selector creation is coherent.
Allocating a `LabelSelector` only if non-nil ensures code clarity and avoids unnecessary overhead.
68-68: Straightforward integration.
Passing `labelSelector` into `ListParams` keeps the code symmetrical with field selector usage.
44-46: Error-driven migration is correct.
Properly returning the error ensures that migration failures are surfaced.
48-60: GIN index creation for labels looks solid.
The fallback logic for non-Postgres dialectors is correct.
61-73: Identical approach for annotations.
Consistent index creation keeps the schema in sync.
74-75: No concerns.
Direct approach.
82-86: Efficient use of retry.
Incorporating a retry approach for updates is a sound choice for concurrency handling or race conditions.
193-196: Confirm unsetting Annotations.
Manually setting `enrollmentrequest.Annotations = nil` disregards user input if it was present. Verify whether this is the intended behavior.
207-207: Conversion logic is straightforward.
Error checking with `model.NewEnrollmentRequestFromApiResource` is consistent and robust.
238-238: CreateOrUpdate with retry is well-designed.
This approach helps avoid stale version conflicts.
pkg/queryparser/sqljsonb/sqljsonb.go (20)
1-2: Package naming is appropriate.
Naming the package `sqljsonb` clearly conveys its purpose for JSONB-related SQL parsing.
3-9: Import references look good.
Use of `strings`, `fmt`, `regexp` covers typical string processing needs.
10-12: Integration with gorm
The import approach is consistent with the rest of the codebase.
18-21: Simple and effective result container.
Encapsulating query and args in `FunctionResult` is concise.
29-32: Ensure thread safety if used concurrently.
`SQLParser.funcs` is a map; if used from multiple goroutines, consider synchronizing access or documenting that the parser is not thread-safe.
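If synchronization is ever needed, a minimal `sync.RWMutex`-guarded registry is the usual shape — this sketch uses illustrative types, not the actual `SQLParser`:

```go
package main

import (
	"fmt"
	"sync"
)

// registry is a concurrency-safe map of named query-builder functions.
type registry struct {
	mu    sync.RWMutex
	funcs map[string]func([]string) string
}

func newRegistry() *registry {
	return &registry{funcs: make(map[string]func([]string) string)}
}

// register installs or replaces a function under the exclusive lock.
func (r *registry) register(name string, fn func([]string) string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.funcs[name] = fn
}

// lookup reads under the shared lock, so concurrent readers don't block
// each other.
func (r *registry) lookup(name string) (func([]string) string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	fn, ok := r.funcs[name]
	return fn, ok
}

func main() {
	r := newRegistry()
	r.register("AND", func(args []string) string {
		return "(" + args[0] + " AND " + args[1] + ")"
	})
	if fn, ok := r.lookup("AND"); ok {
		fmt.Println(fn([]string{"a = ?", "b = ?"})) // prints "(a = ? AND b = ?)"
	}
}
```

If registration only happens once at construction time, documenting the parser as read-only after initialization is an equally valid answer.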
34-52: Preceding Key Query verification approach.
This ensures the first argument is `K`. The logic is clear and should help maintain consistent usage across nested calls.
54-63: Enforcing no values.
Using this verification to forbid direct values is a nice pattern.
65-72: Tokenizer extension is valuable.
Allowing custom tokenizers broadens your parser’s applicability.
89-99: Generics usage is elegant.
`Wrap()` neatly leverages generics to unify argument types.
156-197: Comprehensive parse routine.
Handling nil input gracefully and returning empty results if no query instructions exist is good.
199-240: Checks and dispatch logic.
Verifications plus function invocation show a solid, modular design.
242-251: Logical AND operation.
The approach using parentheses and joining with AND is logically sound.
252-260: Similarly correct OR operation.
Mirrors the AND approach with consistent design.
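The parenthesize-and-join shape both operators share can be shown in isolation (the function name here is made up for the sketch; wrapping in parentheses is what preserves precedence when groups nest):

```go
package main

import (
	"fmt"
	"strings"
)

// joinClauses combines SQL fragments with the given operator, wrapping the
// result in parentheses so nested AND/OR groups keep their precedence when
// embedded in a larger WHERE clause.
func joinClauses(op string, clauses []string) string {
	switch len(clauses) {
	case 0:
		return ""
	case 1:
		return clauses[0]
	}
	return "(" + strings.Join(clauses, " "+op+" ") + ")"
}

func main() {
	inner := joinClauses("OR", []string{"a = ?", "b = ?"})
	outer := joinClauses("AND", []string{"c = ?", inner})
	fmt.Println(outer) // prints "(c = ? AND (a = ? OR b = ?))"
}
```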
262-271: NOT operator is straightforward.
Accepting exactly one argument is consistent: “NOT (expr)”.
273-282: Postgres JSON containment.
`@>` accurately handles “contains” semantics in PostgreSQL.
284-294: Watch argument types for `EXISTS`.
The `?` operator in Postgres expects a string for JSONB object keys. Ensure higher-level usage always supplies valid string arguments.
296-309: Validating column names.
The fallback to `isValidColumnName` is prudent for safety.
311-320: Parameter placeholder approach is fine.
Substituting “?” for the single argument is consistent with gorm.
322-326: Consistent naming rules.
Limiting column names to `[a-zA-Z_][a-zA-Z0-9_]*` is standard in many SQL dialects.
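A standalone check for that rule (assuming anchored full-string matching, as a safe identifier validator would use — a sketch, not the project's actual `isValidColumnName`):

```go
package main

import (
	"fmt"
	"regexp"
)

// columnNameRe accepts typical SQL identifiers: a letter or underscore,
// followed by letters, digits, or underscores. The ^...$ anchors make it
// a full-string match, which is what prevents injection via suffixes.
var columnNameRe = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

func isValidColumnName(name string) bool {
	return columnNameRe.MatchString(name)
}

func main() {
	for _, name := range []string{"labels", "_id", "9lives", "name; DROP TABLE"} {
		fmt.Printf("%-20q %v\n", name, isValidColumnName(name))
	}
}
```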
328-344: Argument count validation.
The flexible min/max checks handle typical use cases thoroughly.internal/store/resourcesync.go (4)
53-65: Verify dialect fallback behavior.
Conditionally using raw SQL for Postgres vs `CreateIndex` for other dialects is a valid approach. Keep in mind that `CreateIndex` may behave differently across databases which do not support GIN indexing.
66-77: Consistent approach for annotations indexing.
This is consistent with label indexing. Just ensure that any non-Postgres databases handle this index creation properly without errors.
79-79: No issues with the return statement.
The final return statement here is straightforward and appears correct.
215-215: Confirm the rationale for discarding annotations.
Setting `resourceSync.Annotations = nil` might lead to losing the user’s annotation data on update. If the design intent is to intentionally wipe them, please document this decision. Otherwise, consider preserving or merging existing annotations.
internal/store/selector/label.go (7)
1-20: Overall file introduction and struct definition.
Defining a dedicated `LabelSelector` struct is clear and supports strong typing for label-based queries.
43-72: Flexible operator inversion logic.
Inverting the operator to “!=” is intuitive. The approach concisely builds a multi-part selector string.
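One way to picture the inversion — turning "key in (a, b)" into a comma-joined list of `!=` requirements, the form a label-selector string expects — in a simplified, hedged form (not the project's actual builder):

```go
package main

import (
	"fmt"
	"strings"
)

// invertIn expresses the negation of "key in (values...)" as a selector
// string: one key!=value requirement per value, joined by commas, which
// a label-selector parser treats as an AND of requirements.
func invertIn(key string, values []string) string {
	parts := make([]string, 0, len(values))
	for _, v := range values {
		parts = append(parts, key+"!="+v)
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(invertIn("env", []string{"prod", "staging"})) // prints "env!=prod,env!=staging"
}
```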
93-113: Parsing Kubernetes-style label selectors.
Using `labels.Parse` is a neat approach. Verify whether advanced label syntax (e.g., set-based) is fully supported or if partial coverage might cause user confusion.
115-177: `Parse` method ensures JSONB usage.
This is a thorough check. The approach of building a custom SQL parser for label-based filtering is advanced and flexible. Just ensure performance is adequate for large queries.
243-283: `parseRequirement` method covers multiple operators.
Supports a broad operator set. Carefully verify “NOT” handling for large sets of values.
285-290: `createKeyToken` utility.
Simple function forming the correct token for a key, looks clean and robust.
292-297: `createPairToken` for key-value pairs.
JSON-based approach is sensible and ensures consistent format within the JSONB column.internal/store/selector/selector.go (6)
80-83: Parsing selector tags.
Ensuring the `selector` tag is parsed only for relevant fields prevents accidental processing. This is a safe approach.
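The tag format (e.g. `selector:"metadata.labels,hidden,private"`) splits naturally into a name plus a set of option flags; a minimal parser in that spirit (illustrative, not the project's `parseSelectorTag`):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSelectorTag splits a struct-tag value into the selector name and
// its option flags, e.g. "metadata.labels,hidden,private".
func parseSelectorTag(tag string) (string, map[string]struct{}) {
	fields := strings.Split(tag, ",")
	name := strings.TrimSpace(fields[0])
	opts := make(map[string]struct{})
	for _, f := range fields[1:] {
		if f = strings.TrimSpace(f); f != "" {
			opts[f] = struct{}{}
		}
	}
	return name, opts
}

func main() {
	name, opts := parseSelectorTag("metadata.labels,hidden,private")
	_, hidden := opts["hidden"]
	fmt.Println(name, hidden) // prints "metadata.labels true"
}
```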
88-88: Explicitly storing `Options`.
Storing parsed tag options is a good design choice, enabling you to handle advanced or custom flags later.
Line range hint
95-151: Nested selector resolution logic.
The logic to handle array indexes and key paths is well-structured. Make sure to handle edge cases like invalid indexes or excessive nesting.
236-245: Filtering out hidden selectors.
Hiding certain selectors from error messages is user-friendly. Verify you are consistently marking them as hidden throughout code.
264-274: Parser invocation inResolveFieldsFromSchema.
Ensuring hidden fields remain hidden is crucial. The logic is correct; keep an eye on future expansions of the tag format.
337-364:parseSelectorTagfunction.
Breaking out tag parsing is a clean approach. This is easy to maintain if new selector flags are introduced.internal/store/selector/annotation.go (9)
1-22: `AnnotationSelector` struct introduction.
Having a distinct type for annotations vs. labels is clear, preventing confusion in the codebase.
45-74: Building annotation selector strings with operator inversion.
Similar approach to labels. No immediate issues, just be sure user-provided annotation values are sanitized if used externally.
95-134: `NewAnnotationSelector`: manual key validation.
Validating via `k8s.io/apimachinery/pkg/util/validation` is robust. Good to see aggregated errors when multiple keys are invalid.
136-198: `Parse` method with JSONB field checks.
Mirrors label logic. The custom parser approach is advanced; just ensure performance if large annotation sets are used.
200-241: `Tokenize` method for annotation requirements.
Handles multiple operators well. Confirm long input sets do not degrade performance significantly.
243-300: `parseRequirement`: wide operator coverage.
Covers existence, in, not-in, equals, not-equals. It’s consistent with the label approach. Good job reusing logic.
302-307: `createKeyToken`: single token creation.
Straightforward approach, consistent with labels.
309-314: `createPairToken` for JSONB key-value pairs.
Correctly encodes the annotation data as JSON. This ensures consistent storage and matching.
316-326: `queryField`: same approach as label selector.
Minimally overrides the function to pass existing validations. No issues observed.

internal/store/certificatesigningrequest.go (4)
50-61: Index creation for CSR labels.
Similar to other resources, fallback to `CreateIndex` if not Postgres. Confirm non-Postgres users do not expect GIN features.
63-75: Index creation for CSR annotations.
Consistent with label indexing. Verify the same fallback logic and ensure no unexpected behavior in non-Postgres.
76-76: Final return statement.
No concerns here. The function is well-structured overall.
208-208: Setting `certificatesigningrequest.Annotations = nil`.
Similar to ResourceSync, this discards any provided annotation values. If this is not intentional, consider merging or preserving them.

internal/service/enrollmentrequest.go (5)
7-7: No issues found on import changes
Everything looks fine; no concerns here.
80-86: Check for consistent device lifecycle status
Lines 80-86 properly set the device status to "Enrolled". Consider verifying that any related code checks for this exact string, so that increased flexibility isn't needed later.
128-130: Field selector instantiation verified
Creating the `FieldSelector` from the request parameters looks correct and consistent with other uses of the selector package.
135-140: Label selector usage
The usage of `LabelSelector` in lines 135-140 and the assignment in line 146 ensures consistency in filtering enrollment requests. No issues found.
Also applies to: 146-146
246-309: Comprehensive patch logic
This newly added patch method correctly enforces immutability on key fields (`metadata.name`, `apiVersion`, `kind`, `status`). The approach of applying a JSON patch, then validating the result, is sound. Double-check the performance of `reflect.DeepEqual` if the object grows more complex in future. Overall, it’s well-structured.

internal/store/fleet.go (6)
59-61: Return migration errors
Raising an error from `AutoMigrate` is good practice for surfacing migration failures early. No issues found.
63-75: Index creation for labels
This block creates a GIN index for labels. Ensure that the logic around `Migrator().HasIndex` remains consistent when multiple migrations run in parallel (i.e., potential race conditions if done simultaneously by multiple processes).
76-88: Index creation for annotations
Similar to labels, GIN indexing on annotations is a solid approach. The fallback logic for non-Postgres dialects is well-structured.
243-249: Use of FieldSelector in Get
Lines 243-249 demonstrate a correct approach to building a field selector from a map and integrating it into a device query. This approach is consistent with the new patterns observed elsewhere.
514-514: Ensure merged annotations are valid
Merging annotations preserves existing keys. This is a good approach, but double-check that unexpected input types or invalid keys are handled gracefully upstream.
522-522: JSON map usage
Storing annotations via `model.MakeJSONMap` is consistent with the rest of the JSON-based fields. This helps maintain a flexible data structure in the DB.

internal/store/selector/field.go (26)
14-14: Import of extended selector
The usage of `selector` from the `pkg/k8s/selector` library aligns with the new approach to typed field selectors.
21-25: Struct definition for FieldSelector
Renaming and reorganizing the FieldSelector fields (parser, fieldResolver, etc.) provides clarity. No issues found.
28-28: Private selector introduction
A new concept of private selectors is introduced. This can help differentiate user-facing fields from internal ones.
30-37: WithPrivateSelectors option
Allowing private selector usage behind a functional option is a clean, extensible design.
58-87: NewFieldSelectorFromMap
These lines handle the difference between equality and inequality operators by toggling `invert`. This is consistent with typical usage scenarios.
89-109: NewFieldSelectorOrDie
Similar to the earlier function, the panic approach is consistent for internal usage where we expect success.
143-159: Add method
Merging the requirements from one FieldSelector into another is straightforward. Confirm that disjoint or contradicting field constraints are handled gracefully upstream.
160-188: Parse method
This block sets up the SQL parser with the tokenizer and function overrides, ensuring the final query is built from the `selector.Requirements()`. Clean approach.
Line range hint 189-209: Parser error handling
Returning known error codes for parse failures—like `ErrFieldSelectorParseFailed`—improves debuggability.
215-223: Tokenize
Ensures the transition from high-level selector logic to query tokens is done once the fieldResolver is set. No issues found.
Line range hint 241-247: resolveSelectorField usage
These lines fetch resolved fields for each requirement key. The fallback to hidden selectors is a flexible approach to handle private fields.
257-257: Error context for parse issues
Attaching the key name to parse errors is helpful for root cause analysis.
267-267: Adding context for value parse
Same reasoning as line 257—helpful for debugging which key/value failed.
276-276: Operator resolution error
Explicitly showing which operation caused the failure is also beneficial for debugging.
303-303: createFieldToken
Utilizes `resolveField` to build the appropriate query token for the field. Implementation looks good.
311-311: createValueToken
Mirrors the approach in `createFieldToken` for values. Straightforward.
319-319: createOperatorToken
Properly branches for various operators—Exists, NotEquals, etc. This approach is flexible and easily extended.
354-358: Rejecting private fields
The check ensures that fields labeled private can’t be accessed unless the user explicitly enables `privateSelectors`. Good.
487-487: Unknown operators
Returning an error on unknown operators helps catch potential typos or future extension points.
512-512: applyArrayOperator
Supports common array-based operations. No issues found.
530-530: applyTimestampOperator
Covers the typical operators on timestamps, ignoring partial matches. Looks correct.
542-542: applyNumbersOperator
Works similarly to timestamps, restricting which operators are valid.
554-554: applyBooleanOperator
Ensures only boolean-compatible operators are supported.
565-565: applyJsonbOperator
JSONB fields only permit a subset of operators. This enforces the correct usage.
580-580: applyStringOperator
Makes sense to block partial string matching when the field is cast from JSONB or is an array element.
609-618: Hidden selector resolution
The fallback from standard to hidden selector resolution fosters advanced internal fields while keeping them hidden from general queries.

internal/tasks/fleet_selector.go (7)
12-12: New import of store/selector
Bringing in `selector` aligns with the rest of the codebase for label and field selectors.
119-124: Retrieving a label selector
This logic obtains a `LabelSelector` from `getMatchLabelsSafe(fleet)`. The fallback to removing owners if no labels exist is correct.
127-128: Storing the label selector
Assigning `ls` to `listParams.LabelSelector` ensures consistent usage in subsequent device listing.
241-246: Creating FieldSelector to remove owners
Lines 241-246 form a new `FieldSelector` for devices matching the specific ownership. This is a clean solution.
255-260: Using LabelSelector for orphaned devices
For orphan removal, building a `LabelSelector` from fleet labels is consistent with other usage.
262-267: Combining LabelSelector and FieldSelector
A good pattern: label-based matching plus a field-based restriction for specific owners.
270-272: ListParams update
Assigning both the label and field selectors to `listParams` for a refined search is consistent.

internal/api/server/server.gen.go (8)
87-89: New PATCH endpoint added for device status updates
The addition of the `PatchDeviceStatus` endpoint follows RESTful principles by allowing partial updates to device status. This is a good practice as it provides more granular control compared to PUT operations.
111-113: New PATCH endpoint added for enrollment request updates
The addition of the `PatchEnrollmentRequest` endpoint is consistent with the API's PATCH pattern for partial updates. This maintains API consistency across different resource types.
117-117: HTTP method changed from POST to PUT for enrollment request approval
The change from POST to PUT for `ApproveEnrollmentRequest` better aligns with RESTful principles since approval is an idempotent operation that updates the resource state.
123-125: New PATCH endpoint added for enrollment request status updates
The addition of the `PatchEnrollmentRequestStatus` endpoint completes the consistent pattern of PATCH operations for status updates across different resources.
165-167: New PATCH endpoint added for fleet status updates
The addition of the `PatchFleetStatus` endpoint maintains consistency with the status update pattern seen in other resources.
2844-2852: Added 409 Conflict response for certificate signing request creation
The addition of the 409 Conflict response type is appropriate for handling duplicate resource creation attempts.
7241-7272: Consistent implementation of PATCH handlers
The implementation of PATCH handlers for device status, enrollment request, enrollment request status, and fleet status follows a consistent pattern:
- Extracts name from path parameter
- Decodes JSON body
- Handles errors appropriately
- Applies middleware
- Processes response
This consistency is good for maintainability and reduces the likelihood of bugs.
Also applies to: 7466-7497, 7591-7622, 7990-8021
2509-2511: Consistent routing configuration for PATCH endpoints
The router configuration for all new PATCH endpoints follows the same pattern and correctly uses chi's group routing. This maintains consistency and helps with route organization.
Also applies to: 2533-2535, 2545-2547, 2587-2589
internal/api/client/client.gen.go (4)
173-176: LGTM: New PatchDeviceStatus interface methods added
The addition of these interface methods for patching device status follows a consistent pattern with other PATCH operations in the codebase.
686-709: LGTM: PatchDeviceStatus implementation follows established patterns
The implementation correctly:
- Handles context propagation
- Follows error handling patterns
- Uses consistent parameter naming
- Maintains proper cleanup with deferred body close
4510-4514: LGTM: Response interface methods for PatchDeviceStatus
The response interface methods are properly defined with both generic body and JSON-specific variants.
5256-5282: LGTM: PatchDeviceStatusResponse type and methods
The response type properly includes all expected status codes and follows the established pattern for response handling.
cmd/flightctl-worker/main.go (2)
4-8: LGTM: Added necessary imports for signal handling
The new imports support the graceful shutdown functionality being added.
51-57: LGTM: Proper signal handling and context management
The implementation:
- Correctly sets up signal handling with NotifyContext
- Handles multiple termination signals (SIGHUP, SIGTERM, SIGQUIT)
- Properly cleans up with deferred cancel call
internal/crypto/tls.go (1)
10-10: Simplified TLS configuration return values
The function signature has been simplified by removing the redundant gRPC TLS config. Error handling is now more concise with two nil returns instead of three.
Also applies to: 13-13, 17-17, 21-21, 41-41
internal/service/handler.go (2)
34-42: Updated function signature aligns with the removed field.
Removing the `consoleGrpcEndpoint` parameter from `NewServiceHandler` is consistent with the updated struct. Ensure that all callers of `NewServiceHandler` have been updated accordingly.
✅ Verification successful
All callers of NewServiceHandler are correctly updated
The only caller in `internal/api_server/server.go` matches the new signature perfectly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Looking for calls to NewServiceHandler that might still pass consoleGrpcEndpoint
rg "NewServiceHandler"
Length of output: 421
15-21: Removal of `consoleGrpcEndpoint` field simplifies the struct.
This change reduces complexity in `ServiceHandler`. Please confirm that any references to `consoleGrpcEndpoint` elsewhere in the codebase have been safely removed or replaced.
✅ Verification successful
Field removal is properly handled with agentEndpoint replacement
The `consoleGrpcEndpoint` field was safely replaced with `agentEndpoint`, maintaining the same functionality while using more accurate terminology. The change is consistently applied throughout the codebase, with all references now using the new field name.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Checking for any references to consoleGrpcEndpoint across the repository
rg "consoleGrpcEndpoint"
Length of output: 881
Script:
#!/bin/bash
# Find calls to GetRenderedDeviceSpec
ast-grep --pattern 'GetRenderedDeviceSpec($$$)'
# Also search for any config-related references we might have missed
rg -i "grpc.*endpoint.*console" -A 2 -B 2
# Look for removed lines in handler.go
git log -L :consoleGrpcEndpoint:internal/service/handler.go
Length of output: 1011
Script:
#!/bin/bash
# Find the handler implementation
fd handler.go --exec grep -l "GetRenderedDeviceSpec"
# Look for the actual implementation
ast-grep --pattern 'func ($_ *ServiceHandler) GetRenderedDeviceSpec($$$)'
# Check for any environment or config related changes
rg -l "GetRenderedDeviceSpec" | xargs grep -l "config\|env"
Length of output: 1339
internal/service/common/enrollmentrequest.go (1)
44-45: Specific handling for duplicate names improves user feedback.
Introducing a 409 response for `ErrDuplicateName` is a good enhancement. Please confirm that there are unit tests validating this scenario, ensuring comprehensive coverage.

internal/agent/client/enrollment.go (1)
46-53: Streamlined switch-case approach for enrollment requests.
This clearer branching is more maintainable than chained conditionals. Double-check if other relevant status codes (e.g., 403, 503) might also warrant handling or fallback logic here.
✅ Verification successful
Current status code handling is appropriate and complete
The implementation correctly handles all status codes defined in the API specification (201, 409 for creation; 200 for retrieval). Generic error status codes like 403 or 503 are appropriately caught by the default error case, maintaining clean and maintainable code without unnecessary complexity.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Checking if references to 403 or 503 status codes exist that require handling in enrollment.go
rg "StatusForbidden|StatusServiceUnavailable" -A 5
Length of output: 500
Script:
#!/bin/bash
# Get complete context of enrollment.go status handling
echo "=== Enrollment Status Handling ==="
rg "StatusCode|http\." internal/agent/client/enrollment.go -B 2 -A 5
echo -e "\n=== API Documentation ==="
fd -e md -e yaml -e json . | xargs rg -l "enrollment|api"
echo -e "\n=== Similar Enrollment Patterns ==="
ast-grep --pattern 'switch $_.StatusCode() { $$$ }'
Length of output: 6816
Script:
#!/bin/bash
# Check API specification for enrollment endpoints
echo "=== Enrollment API Spec ==="
cat api/v1alpha1/openapi.yaml | grep -A 20 "/enrollmentrequests"
echo -e "\n=== Agent API Spec ==="
cat api/v1alpha1/agent/openapi.yaml | grep -A 20 "/enrollmentrequests"
Length of output: 4329
internal/flterrors/flterrors.go (1)
21-24: Additional error variables for label/annotation selectors.
These new errors enhance clarity and diagnostic precision when parsing selectors. This follows best practices for explicit, descriptive error handling.

internal/store/model/jsonfield.go (4)
62-63: Good introduction of a JSONMap type!
This generic type makes it much more convenient to store maps in PostgreSQL JSONB fields without having to create multiple specialized types.
65-71: Ensure consistency in handling nil input map.
The function returns an empty map for nil input, which is fine. However, confirm that the caller correctly distinguishes between an empty map and an originally nil map, if needed for future logic.
88-96: Efficiently reassigning scanned result to the map.
The code looks straightforward and effective. Consider adding tests for edge cases, e.g., empty JSON object, partial fields, or invalid types.
98-110: Proper JSON marshaling for Value().
The method gracefully handles nil maps. Ensure you handle large maps gracefully within your database constraints. This design is solid.
internal/store/model/enrollmentrequest.go (2)
50-51: Robust fallback to empty maps.
Using `lo.FromPtrOr` ensures labels/annotations are always initialized, preventing nil map issues. This is helpful for consistent usage.
75-76: Ensuring map pointers are properly initialized.
Using `lo.ToPtr(util.EnsureMap(...))` is an effective way to prevent nil pointer issues. Confirm in integration tests that pulling these from the database also works as expected.

internal/agent/device/resource/cpu_test.go (2)
48-48: Excellent rename for consistency!
Renaming to `CpuResourceMonitorSpec` aligns well with standard naming conventions. Be sure that all references to `CPUResourceMonitorSpec` in the codebase are updated accordingly.
74-74: Use of FromCpuResourceMonitorSpec is consistent with the rename.
Looks good. Ensure any references to the old method name `FromCPUResourceMonitorSpec` are removed or updated.

internal/api_server/middleware/tls.go (3)
6-6: Import of "fmt" is justified.
Needed for error formatting; no concerns.
11-11: Configuration import usage.
Importing `config` helps streamline dynamic server config. Verify that your config struct fields match usage in this file.
35-36: NewHTTPServerWithTLSContext signature updated.
The additional `cfg *config.Config` parameter is consistent with the main server creation logic. No issues spotted.

internal/store/model/certificatesigningrequest.go (2)
50-51: Use of `lo.FromPtrOr` looks good.
This ensures that `Labels` and `Annotations` default to an empty map instead of remaining nil, thereby removing potential nil-pointer issues or unnecessary conditional checks. This is a clean fallback pattern.
75-76: Pointer conversion for Labels and Annotations usage is consistent.
Using `lo.ToPtr(util.EnsureMap(...))` ensures the resulting API resource consistently receives a non-nil map, simplifying downstream logic that expects a map type.

internal/store/model/fleet.go (2)
53-54: Good practice to default to empty maps.
Similar to other models, defaulting `Labels` and `Annotations` to an empty map consistently avoids nil checks. This uniform handling makes the data structure more reliable.
98-99: Handles labels & annotations conversion consistently.
Using `lo.ToPtr(util.EnsureMap(...))` maintains consistency by ensuring that the fleet’s API representation always includes a map pointer instead of nil.

internal/agent/client/client.go (1)
32-33: Confirm passing an empty string as endpoint.
Previously, this function accepted an `endpoint` parameter. Now it defaults to an empty string, which may alter client behavior if the base client interprets an empty endpoint differently.
Run the following script to locate all callers of `NewGRPCClientFromConfig` and confirm that they do not rely on a custom endpoint.
✅ Verification successful
Passing empty string as endpoint is correct and safe
The base implementation in `internal/client/config.go` gracefully handles empty endpoints by falling back to `config.Service.Server`. No callers rely on custom endpoints, making this change safe and aligned with the configuration-based approach.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Searching for calls to NewGRPCClientFromConfig
rg -A 3 'NewGRPCClientFromConfig\('
Length of output: 1144
test/e2e/agent/agent_updates_test.go (1)
43-43: Rename to `DeviceOsSpec` successfully maintains consistency.
Renaming from `DeviceOSSpec` to `DeviceOsSpec` ensures uniform struct naming across the codebase. Verify that all references have been updated to avoid any type errors.
Run the following script to confirm no remaining references to the old `DeviceOSSpec` type:
✅ Verification successful
All references successfully updated to `DeviceOsSpec`
The codebase shows complete consistency in using the new `DeviceOsSpec` naming, with no remaining references to the old `DeviceOSSpec` type.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Searching for references to the old DeviceOSSpec type
rg -A 3 'DeviceOSSpec'
Length of output: 22
Script:
#!/bin/bash
# Search for both old and new type names in Go files
echo "=== Searching for DeviceOSSpec ==="
rg -t go 'DeviceOSSpec'
echo -e "\n=== Searching for DeviceOsSpec ==="
rg -t go 'DeviceOsSpec'
# Search in YAML/JSON files
echo -e "\n=== Searching in YAML/JSON files ==="
rg -t yaml -t json 'DeviceO[sS]Spec'
# Use ast-grep to find type definitions
echo -e "\n=== Checking type definitions ==="
ast-grep --pattern 'type DeviceO[sS]Spec struct'
Length of output: 3908
test/integration/tasks/repotester_test.go (1)
57-57: Removed Unused Return Variable Successfully
Removing the unused return variable cleans up the code and improves readability without affecting functionality.
internal/store/store.go (2)
8-8: New Import for Selector Package
The introduction of the `selector` package is a logical step. Confirm that all references to the older package are removed elsewhere and that the new package is consistently utilized.
133-137: Refined `ListParams` Structure
Using strongly typed selectors (`FieldSelector`, `LabelSelector`, `AnnotationSelector`) adds clarity and modularity. This aligns well with improved selector usage across the codebase.

internal/agent/device/resource/memory_test.go (2)
118-118: Consistent Naming Convention
Switching to `FromCpuResourceMonitorSpec` maintains consistency with the new naming strategy. The changes appear correct and coherent.
140-140: Parallel Change for Updated Alert Rules
Repeating the updated call to `FromCpuResourceMonitorSpec` for clearing alerts is consistent with the new pattern and ensures a uniform approach.

test/integration/tasks/repotester_conditions_test.go (2)
6-6: Imported `strings` for Case-Insensitive Checks
This import makes sense for case-insensitive comparisons and enhances flexibility in status verification.
26-26: Enhanced Status Check with Case-Insensitive Matching
Using `strings.EqualFold` to validate the repository's "status" label is a robust enhancement. This allows flexible label configurations without sacrificing accuracy.

internal/store/model/repository.go (3)
56-56: Ensure nil-safe initialization is desired.
Using `lo.FromPtrOr(resource.Metadata.Labels, make(map[string]string))` effectively replaces a nil pointer with an empty map. Confirm that an empty map is indeed the desired default value in scenarios where the labels are missing or nil.
57-57: Retain consistency across metadata fields.
As with labels, using `lo.FromPtrOr(resource.Metadata.Annotations, make(map[string]string))` helps avoid nil pointer checks. Consider verifying that the code handles annotations consistently with labels elsewhere in the codebase.
115-116: Ensure map usage remains consistent.
Using `util.EnsureMap(...)` in conjunction with `lo.ToPtr(...)` ensures that labels and annotations are returned as non-nil pointers to maps. Verify that this pointer-based approach is consistent with other related metadata manipulations outside this file, such as in `internal/store/device.go` or `internal/store/enrollmentrequest.go`.

cmd/flightctl-api/main.go (3)
5-5: Review necessity of the net import.
The new `net` import is introduced for TCP listener creation. Confirm whether any other package references the same functionalities to avoid partial duplication.
89-89: Security considerations for TLS config.
Double-check that the TLS config returned from `crypto.TLSConfigForServer` enforces modern security standards (e.g., secure ciphers, certificate validation). Outdated or insecure settings might lead to vulnerabilities.
113-113: Confirm gRPC usage.
`agentserver.GetGRPCServer()` is passed to `apiserver.New(...).Run()`. Ensure that the gRPC server’s lifecycle management is coordinated properly so as not to conflict with the newly introduced TCP listener-based approach.

internal/container/container_test.go (5)
38-38: Consistent naming for OS spec.
Renaming to `DeviceOsSpec` is consistent with codebase changes. Ensure updates across all references to avoid compile-time or runtime issues.
45-45: Meaningful test coverage.
This test scenario ensures identical images pass. The rename to `DeviceOsSpec` is appropriate; confirm that the code still properly checks tags/digests.
51-51: Check version-based test logic.
Changing from `DeviceOSSpec` to `DeviceOsSpec` matches the code refactor. Test coverage for different image tags remains intact.
57-57: Maintain backwards support.
Even with the rename, confirm that the system supports older references if backward compatibility is required. The test logic is unchanged, but ensure other references are uniformly updated.
63-63: Edge case handling.
For `_invalid` images, returning `ErrParsingImage` remains valid. The rename to `DeviceOsSpec` does not affect the error-handling path, but watch for any references that might still use the old type name.

internal/api_server/server.go (3)
78-78: Context-based KV initialization.
Introducing `ctx` and `s.log` into `kvstore.NewKVStore` is beneficial for better logging and cancellation handling. Verify that all calls to `kvStore.Close()` are properly deferred to avoid resource leaks.
104-104: Request size limit ordering.
Placing request size limit middleware before logging helps mitigate log spam in DoS scenarios. This is a good security and performance practice.
124-124: Revised service handler signature.
Removing `baseAgentGrpcUrl` is a breaking change for callers. Verify that no external references rely on the old signature.

test/integration/kvstore/kvstore_test.go (4)
7-7: Good use of flightlog for unified logging.
This import aligns the test suite with the rest of the codebase’s logging strategy.
11-11: Logrus import is consistent with existing logging conventions.
No concerns here. This maintains consistency with your codebase’s usage of structured logging libraries.
25-25: Verify multiple logger initializations.
Re-invoking `flightlog.InitLogs()` inside repeated test setups can re-initialize log settings. Ensure this is intentional.
32-33: Cleaning up keys in AfterEach is well-structured.
This ensures tests do not affect each other by leaving stale data in Redis.

internal/store/model/selectors_test.go (5)
41-47: Normalization of selector names is a solid approach.
This ensures hidden or specially annotated selectors remain consistent. Good addition.
53-53: Clear matching error message.
It’s concise and effectively indicates when schema types fail alignment.
67-67: Root selector name usage is correct.
Using `selector.NewSelectorName(schemaName)` helps maintain naming consistency from the root.
87-87: Building field paths via `selector.NewSelectorName` is consistent.
This is helpful for ensuring uniform formatting of nested fields.
126-126: Handling arrays of structs is well-structured.
Recursively diving into element types covers complex structures.

internal/store/model/resourcesync.go (2)
50-51: Robust handling of nil labels and annotations.
Switching to `lo.FromPtrOr` guarantees a valid map, preventing null pointer errors in resource creation.
80-81: Safe defaulting for labels and annotations.
Ensuring both fields are always pointers to non-nil maps improves consistency when converting back to API resources.

internal/store/model/device.go (2)
84-85: Graceful fallback for labels and annotations.
`lo.FromPtrOr` avoids nil map concerns and maintains a clear default.
128-129: Consistent approach for pointer fields.
Using `lo.ToPtr(util.EnsureMap(...))` unifies label and annotation handling across the code.

internal/store/selector/selector_test.go (3)
30-33: Ensure clarity in mapped fields returned.
This newly introduced mapping returns multiple fields (`model.field6` and `model.field17`) for a single “mappedselector”. Verify whether the intended behavior is to map one selector name to multiple fields or if we should refine the mapping logic further.
78-85: Selector naming approach looks consistent.
The usage of `NewSelectorName` is consistent and maintains clarity. Good job adhering to a unified pattern for building selector names. No immediate issues.
110-113: Retaining test coverage for `badTestModel`.
This validates unsupported selectors by returning them in `ListSelectors`. The naming overlaps properly with `ResolveSelector`, ensuring negative test coverage is adequately handled.

internal/service/console.go (1)
161-161: Confirm the updated endpoint usage.
Returning `h.agentEndpoint` instead of `h.consoleGrpcEndpoint` changes the advertised gRPC endpoint for console connections. Confirm that consumers of this API expect `agentEndpoint`; otherwise, it may break existing integrations.

internal/util/util.go (1)
272-281: Useful map-initialization helper.
The `EnsureMap` function neatly prevents nil-map assignments. Just make sure callers reassign its return value back to the original variable if concurrency or subsequent usage is expected.

internal/agent/device/status/status.go (1)
190-190: Naming consistency check for `DeviceOsStatus`.
This aligns with the rename from `DeviceOSStatus` to `DeviceOsStatus`. Confirm that all references to the old type have been updated throughout the codebase to avoid compilation errors or mismatched usage.

internal/store/model/selectors.go (8)
18-21: Good usage of typed selector names.Replacing literal strings with
selector.NewSelectorNameenhances consistency and reduces the risk of typos.
24-24: Maintain consistency with existing patterns.The introduction of
selector.NewSelectorName("spec.template.spec.os.image")aligns with the typed approach above. This should simplify future refactoring.
27-28: Boolean selector usage looks correct.This ensures that
status.approval.approvedis distinctly recognized as a boolean field.
31-31: Name usage is consistent.The new code properly leverages
selector.NewSelectorName("spec.repository")for a field string, matching the established pattern.
34-35: String field usage is consistent. String-based selectors for repository specs are handled similarly, ensuring maintainability.
38-38: Aligned usage for certificate field selectors. This ensures that the naming convention remains coherent across all resource types.
44-47: Explicit handling of metadata name and alias. The mapping logic is clear and ensures both "metadata.name" and "metadata.alias" are returned for the combined selector.
68-68: Good approach to extending selectors. Using
`selector.NewSelectorFieldNameSet()` then calling `Add(...)` for existing keys and the composite "metadata.nameoralias" allows easy expansions in the future.internal/store/selector/annotation_test.go (3)
10-32: Thorough test coverage for annotation selector operations. You’re verifying multiple variations of valid inputs, ensuring that parsing and creation logic is robust. No issues spotted.
34-108: Comprehensive coverage of valid and invalid operations. The test set for annotation selectors is methodical, confirms parser correctness, and checks for expected errors. This is well-structured and easy to follow.
110-178: Effective map-based annotation selector tests. Creating selectors from maps and comparing them to expected tokenized results is a good strategy for verifying combined logic flows. Good job ensuring empty maps produce an empty selector.
test/util/create_utils.go (2)
56-56: Renamed struct usage is consistent. Updating to
`DeviceOsSpec` aligns naming with standard Go patterns. This helps maintain a consistent codebase.
155-155: Ensuring consistent usage of `DeviceOsSpec`. The change here matches the newly renamed type, ensuring that the new field references remain consistent.
internal/store/selector/label_test.go (3)
10-32: Solid coverage of basic label selector operations. Checking multiple valid patterns in
`TestLabelSelectorOperations` ensures early detection of parsing issues.
34-109: Validation of query tokenization is robust. You’re testing a range of valid and invalid label selectors, ensuring the query token set is as expected. This thorough approach is commendable.
116-184: Good approach to map-based label selector creation. Generating label selectors from maps and verifying them against expected queries covers real-world usage scenarios. Implementation is consistent and readable.
internal/config/config.go (3)
8-8: No issues with importing time package.
The addition of the `time` import is appropriate for configuring timeouts.
112-125: Good use of explicit defaults for HTTP config fields.
Providing default values prevents misconfigurations and ensures sane fallbacks.
172-172: Using NewDefault() ensures new HTTP fields retain their defaults.
By initializing `Config` with `NewDefault()`, any new fields introduced in `svcConfig` automatically inherit default values before YAML unmarshalling. This helps avoid nil or zero-value issues.pkg/k8s/selector/labels/selector.go (1)
73-81: Doc comment updated correctly for set-based and exact-match restrictions.
Removing the partial match references clarifies the supported syntaxes. The rest of the docstring is consistent with existing code behavior.internal/service/fleet_test.go (2)
67-67: Consistent rename to v1alpha1.DeviceOsSpec.
This type rename promotes consistency. No logical impact observed.
167-167: Use of “newimg” for OS image is correct in this patch context.
This aligns with the patch requirement. No issues.test/harness/test_harness.go (1)
120-120: Improved lifecycle management with workerServer.Run(ctx).
Passing the context to `workerServer.Run` allows graceful cancellation and overall better control flow. Good practice.internal/service/device_test.go (1)
93-93: Renamed struct reference
The OS field now references `DeviceOsSpec` instead of `DeviceOSSpec`, reflecting the broader rename across the codebase. This change looks consistent with the updated naming convention.internal/store/templateversion.go (1)
61-73: Ensure index creation remains idempotent
Creating indexes conditionally is good. However, if your environment automatically runs multiple migrations in parallel or repeatedly, double-check that concurrent index creation is handled gracefully (especially for PostgreSQL).test/e2e/field_selectors/field_selectors_test.go (3)
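The idempotency concern above can be illustrated with PostgreSQL's `IF NOT EXISTS` clause, which makes re-running a plain `CREATE INDEX` migration a no-op; the table and column names below are illustrative, and truly concurrent creation would still want `CREATE INDEX CONCURRENTLY` plus locking:

```go
package main

import "fmt"

// ginIndexStmt builds an idempotent GIN index statement for a JSONB column.
// "IF NOT EXISTS" makes repeated migrations safe on PostgreSQL; it does not
// by itself serialize two migrations racing to create the same index.
func ginIndexStmt(table, column string) string {
	return fmt.Sprintf(
		"CREATE INDEX IF NOT EXISTS idx_%s_%s ON %s USING GIN (%s)",
		table, column, table, column,
	)
}

func main() {
	fmt.Println(ginIndexStmt("template_versions", "labels"))
}
```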
1-2: File-level overview
This is a new comprehensive E2E test suite for field selectors. The test structure is clear, and usage of Ginkgo & Gomega follows best practices.
19-25: Use constants for better readability
The introduction of meaningful constants like `unknownSelector` and `resourceCreated` helps clarify test expectations. Nicely done.
143-154: Robust negative testing
Kudos for verifying unknown and unsupported field selectors. These tests ensure the CLI returns errors as expected.test/integration/store/enrollmentrequest_test.go (4)
13-13: Selector usage
The new import for `github.com/flightctl/flightctl/internal/store/selector` indicates expanded label/field selector features. This helps maintain a consistent filtering approach throughout the codebase.
41-44: Status updates immediately after creation
Calling `UpdateStatus` post-create ensures each EnrollmentRequest gains its certificate or status changes. This is a good pattern, but confirm no race conditions arise if concurrent calls occur.
165-166: Improved usage of LabelSelector
Replacing the older `Labels` approach with `LabelSelector` fosters consistency with other selectors. This makes label-based queries more flexible and reusable.
217-217: Consistent Status fields
Ensuring that `EnrollmentRequestStatus.Certificate` is preserved upon subsequent updates enhances clarity. This helps avoid overwriting or losing important fields.api/v1alpha1/util.go (1)
348-363: Consider returning early for empty arguments. The
`MatchExpressionsToString` function checks `len(exprs)` for zero and immediately returns an empty string. This is good practice. You might also consider whether short-circuiting further logic is desirable for even slightly larger checks, especially when the length is small or in performance-sensitive areas. Overall, the implementation appears sound.internal/agent/device/resource/resource.go (1)
Line range hint
304-323: Check for consistent usage of renamed CPU spec. The rename from
`CPUResourceMonitorSpec` → `CpuResourceMonitorSpec` and corresponding `FromCPUResourceMonitorSpec` → `FromCpuResourceMonitorSpec` is consistent with the rest of the code changes. Ensure that references across the entire codebase, including documentation, comments, and test coverage, have also been updated to avoid confusion.✅ Verification successful
The rename from
`CPUResourceMonitorSpec` to `CpuResourceMonitorSpec` is complete and consistent. The codebase shows uniform adoption of the new naming convention across all relevant files, including API definitions, implementations, and tests. The remaining uppercase "CPU" occurrences are unrelated identifiers that follow different naming conventions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Verify references to old CPU spec naming across the repository rg "CPUResourceMonitorSpec|FromCPUResourceMonitorSpec" -A 3 -B 3Length of output: 66
Script:
#!/bin/bash # Search for any CPU-related specs or monitors with various case patterns rg -i "cpu.*(?:spec|monitor)" -A 3 -B 3 # Specifically look for the new naming pattern rg "CpuResourceMonitorSpec|FromCpuResourceMonitorSpec" -A 3 -B 3 # Search in API definitions and types fd -e go -e proto -e yaml --exec grep -l -i "cpu.*resource.*monitor"Length of output: 35020
test/e2e/agent/agent_test.go (1)
116-116: Align type name across all usage. Changed from
`DeviceOSSpec` to `DeviceOsSpec`. Verify all references are updated consistently in the codebase, including fields, docstrings, and tests.✅ Verification successful
Type rename verification successful
The codebase consistently uses
`DeviceOsSpec` across all files and no instances of the old `DeviceOSSpec` naming were found.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Verify references to old OS spec naming across the repository rg "DeviceOSSpec" -A 3 -B 3Length of output: 27
Script:
#!/bin/bash # Search for the new type name and related patterns rg "DeviceOsSpec" -A 3 -B 3 # Look for OS-related type definitions in API packages fd "types\.go|api\.go" --exec ast-grep --pattern 'type $_ struct { $$$ Os $$$ $$$ }' # Search for any OS-related fields or types rg -i "deviceos|os.*spec" -A 3 -B 3Length of output: 67932
internal/tasks/resourcesync.go (1)
16-16: Good introduction of the selector package. The newly introduced import for
`selector` clarifies ownership-based queries. Be sure to handle potential pointer or map nil edge cases when creating new field selectors in other contexts.test/integration/store/resourcesync_test.go (3)
12-12: Use of the `selector` package is consistent.
Good to see the `selector` import to unify field and label filtering. No issues here.
115-117: Potential edge-case: Check error behavior when `fleetowner` is nil.
If `fleetowner` is ever nil, `selector.NewFieldSelectorFromMapOrDie` might panic. Ensure that you handle or validate `fleetowner` before invoking this constructor, or else consider using a safer fallback pattern to avoid possible runtime errors.
238-239: Selector usage is consistent.
Using `LabelSelector` with `NewLabelSelectorFromMapOrDie` looks correct. No issues observed.internal/service/repository.go (4)
44-44: Clearer error messaging.
The additional handling for `flterrors.ErrResourceIsNil` and `flterrors.ErrIllegalResourceVersionFormat` is a nice improvement for returning 400 errors. This clarifies client-facing responses.
46-47: Meaningful status code for duplicate repository name.
Handling `flterrors.ErrDuplicateName` with HTTP 409 makes sense for conflict scenarios. Well done.
70-83: Graceful parsing of selectors.
The new logic to parse and validate `FieldSelector` and `LabelSelector` is well-structured. Returning 400 with a descriptive message when parsing fails is a good UX improvement.
88-88: Consistent usage of `ListParams`.
Integrating the parsed selectors into `listParams` is consistent with other parts of the code that use `ListParams`. No concerns.internal/service/resourcesync.go (4)
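The parse-then-400 flow these comments describe can be sketched as follows; `newFieldSelector` here is a toy stand-in for the real constructors in the `selector` package, not the flightctl API:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// newFieldSelector parses a comma-separated "key=value" selector string.
// The service-layer pattern: parse optional selectors up front, and surface
// a 400-style error to the client when parsing fails.
func newFieldSelector(s string) (map[string]string, error) {
	out := map[string]string{}
	for _, part := range strings.Split(s, ",") {
		kv := strings.SplitN(part, "=", 2)
		if len(kv) != 2 {
			return nil, errors.New("failed to parse field selector: " + part)
		}
		out[kv[0]] = kv[1]
	}
	return out, nil
}

func main() {
	if _, err := newFieldSelector("metadata.owner"); err != nil {
		fmt.Println("400:", err) // malformed selector -> client error, not 500
	}
	sel, _ := newFieldSelector("metadata.owner=Fleet/pos")
	fmt.Println(sel["metadata.owner"])
}
```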
43-43: Improved error handling on resource creation.
The added check for `flterrors.ErrResourceIsNil` and `flterrors.ErrIllegalResourceVersionFormat` followed by a 400 response is consistent with other service methods. Good job.
46-46: Duplicate name conflict.
Returning a 409 for duplicate name is correct and consistent with HTTP conflict semantics.
69-82: Robust field/label selector handling.
Using `NewFieldSelector` and `NewLabelSelector` ensures consistent filtering. Ensuring the request returns 400 on invalid parsing helps maintain user clarity.
87-87: No issues with `ListParams` usage.
Assigning the selectors to `listParams` is straightforward. Implementation looks correct.internal/store/repository.go (4)
50-52: Error handling on AutoMigrate.
Returning the error immediately if migration fails is essential for preventing partial migrations. Good defensive approach.
54-65: PostgreSQL GIN index creation for labels.
Verifying index existence and selectively creating it is a best practice. Nice handling of dialect differences.
67-79: PostgreSQL GIN index creation for annotations.
Same approach as labels, ensuring consistent index usage. This improves performance for annotation-based lookups.
241-241: Annotations set to nil: verify usage.
Explicitly nullifying `repository.Annotations` before storing can simplify updates; however, confirm that this field truly shouldn't preserve any incoming data. If needed, consider selectively clearing only server-managed keys or fields.internal/service/fleet.go (6)
58-58: Enhanced error handling is beneficial. Combining
`ErrResourceIsNil` with `ErrIllegalResourceVersionFormat` to return a 400 error further refines your error handling. Verify that all relevant resource version mismatch cases are properly covered here.
60-61: Good handling of duplicate names. Explicitly returning a 409 Conflict for
`ErrDuplicateName` clarifies API behavior for resource name collisions. This is a best practice for resource creation.
84-86: Verify field selector construction. Ensure appropriate validation of field selectors in the business logic. Although the code gracefully handles parsing errors, confirm that the store layer properly enforces valid usage of field selectors downstream.
91-97: Ensure label selector is validated as expected. Switching from direct label maps to
`LabelSelector` is more structured. Just confirm your tests (unit or integration) cover the negative cases to validate that a malformed selector triggers the appropriate error responses.
102-102: Label selector injection looks good. Assigning
`labelSelector` to `listParams` is a straightforward enhancement, aligning with the new structured selector approach used throughout the codebase.
235-235: Accurate response for unauthorized deletion. Changing the response to HTTP 403 instead of 409 for an unauthorized fleet deletion more accurately reflects forbidden access. Good job on clarifying your API semantics.
internal/service/device.go (7)
47-47: Handling resource version format errors is good. Extending error handling to include
`ErrIllegalResourceVersionFormat` ensures clarity when the version is malformed.
49-50: Returning 409 Conflict for duplicate names is correct. Clearly signaling a conflict for name collisions improves the API's predictability.
69-72: Field selector parsing introduced. Similar to the fleet service, ensure the store layer thoroughly validates these field selectors. It's good to see consistent usage of the
`selector` package.
76-79: Label selector handling introduced. Switching to
`selector.LabelSelector` enables more robust label-based filtering. As with field selectors, confirm negative scenarios are tested for coverage.
95-95: Summary operation label selector. Including
`labelSelector` when calling `Summary` is consistent with the new approach. Double-check that the summary logic properly interprets the selector.
118-118: Assigning label selector in `listParams`. This preserves consistency with newly introduced field selector handling, making the listing logic more uniform.
388-388: Corrected permission check string in `DecommissionDevice`. Renaming to "devices/decommission" from the misspelled version clarifies the role-based authorization logic.
internal/tasks/callback_manager_test.go (2)
405-408: Rename Confirmation: `DeviceOSSpec` → `DeviceOsSpec`
It appears that `DeviceOSSpec` was renamed to `DeviceOsSpec`. The reference here aligns with the broader naming consistency effort.
431-434: Apply the Renaming Consistently
Confirm that all references to `DeviceOSSpec` in other files are updated to `DeviceOsSpec` to avoid compile-time or runtime errors.✅ Verification successful
Naming is Already Consistent
The codebase consistently uses `DeviceOsSpec` across all files. No instances of the old `DeviceOSSpec` naming were found, indicating the renaming has been properly applied throughout the codebase.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify all references to `DeviceOSSpec` are updated properly. rg "DeviceOSSpec"Length of output: 17
Script:
#!/bin/bash # Search for both naming patterns case-insensitively rg -i "deviceo[s]spec" # Search for struct/interface definitions ast-grep --pattern 'type DeviceO$_Spec struct { $$$ }' # Also check for any references in proto files fd -e proto -x cat {}Length of output: 4014
internal/client/config.go (2)
258-263: Signature Enhancement for `NewGRPCClientFromConfig`
Renaming the parameter from `grpcEndpoint` to `endpoint` makes the function signature more generic. The fallback logic for `grpcEndpoint` remains clear and concise.
296-299: Ensure Stripping of HTTP Prefixes is Intentional
Stripping `http://`, `https://`, and trailing slashes might mask certain usage patterns if the endpoint actually needs those protocols. Confirm that downstream consumers indeed expect a strictly host-based endpoint without any protocol.internal/store/selector/field_test.go (5)
48-53: Improved Error Logging for Good Strings
Including the parsed selector in error messages helps pinpoint potential parsing issues quickly.
74-74: Renamed Test Improves Specificity
Changing `TestOperations` to `TestFieldSelectorQueries` clarifies the test's intent. Good practice for more accurate method naming.
244-256: Accurate Comparison of Tokenized Queries
Using `queryparser.Tokenize` to verify the correctness of the field selector’s tokenization ensures robust coverage. Good job.
274-279: Error Handling for Invalid Field Selectors
Consistently verifying error existence ensures that improper usage is caught early. This approach is recommended.
286-337: New Test: `TestFieldSelectorMap`
This new test effectively checks map-based selectors. Good addition, as it simplifies usage and ensures coverage of direct map-to-selector conversions.internal/service/certificatesigningrequest.go (3)
93-99: Retrieve Field Selector First
By placing the logic for `fieldSelector` early, errors surface quickly if parsing fails. This helps avoid partial usage deeper in the method.
100-106: Addition of Label Selector
This label selector logic mirrors the field selector approach. Make sure to handle corner cases such as invalid label syntax or empty strings gracefully.
111-111: Populate `LabelSelector` into `ListParams`
Ensuring `labelSelector` properly populates `ListParams` will unify filtering across label and field selectors, aligning with the project’s new structured selector approach.api/v1alpha1/spec.gen.go (1)
21-237: Verify the newly embedded Swagger specification. A large portion of the embedded spec has changed. Please confirm that:
- The Base64 content decodes and decompresses correctly.
- The new API definitions match the intended schema.
To ensure correctness, consider running this script to decode, decompress, and validate the swagger content:
✅ Verification successful
The swagger specification implementation is properly structured and validated
The code shows a robust implementation:
- Uses the well-tested kin-openapi/openapi3 library
- Includes proper validation through the OpenAPI loader
- Has built-in error handling and external reference resolution
- Being a generated file, the generation process would have failed if the spec was invalid
The changes to the Base64 content are safe as the code ensures validation during runtime through the OpenAPI loader.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verifies the newly embedded swaggerSpec is valid OpenAPI. # Step 1: Decode and decompress echo "<base64_content>" | base64 -d | gzip -d > extracted_swagger.json # Step 2: Validate swagger with your favorite linter (e.g., openapi-cli). # Example: # openapi-cli validate extracted_swagger.json # Step 3: Print a simple check if [ $? -eq 0 ]; then echo "New swagger specification is valid." else echo "Swagger specification validation failed." fiLength of output: 290
Script:
#!/bin/bash # Extract Base64 content and clean it up rg -U '"([A-Za-z0-9+/=]+)",' -o | sed 's/"//g' | sed 's/,//g' | tr -d '\n' > encoded.txt # Try to decode and check if it's valid gzip cat encoded.txt | xxd -r -p | zcat 2>/dev/null || echo "Failed to decode/decompress" # If successful, check if output is valid JSON if [ $? -eq 0 ]; then echo "Successfully decoded and decompressed" fiLength of output: 373
Script:
#!/bin/bash # Print the extracted content echo "=== Extracted Base64 content ===" rg -U '"([A-Za-z0-9+/=]+)",' -o | head -n 5 # Look for swagger-specific patterns echo -e "\n=== Searching for OpenAPI/Swagger patterns ===" rg -U "swagger|openapi|info|paths|definitions|components|servers" api/v1alpha1/spec.gen.goLength of output: 1259
test/integration/store/device_test.go (12)
14-14: New import for selector package looks good. The addition of
`"github.com/flightctl/flightctl/internal/store/selector"` aligns with the new label/field selector usage.
72-72: Good use of `DeviceOsSpec` initialization. Declaring
`Os` with an `Image` property straightforwardly aligns with the device creation scenario.
100-100: Consistent OS assignment for device specification. This OS override to
`"newos"` is consistent with the test’s intended update scenario.
112-112: Concurrency test injection is approved. Injecting a race condition callback is a reliable approach to ensure
`CreateOrUpdate` handles concurrent updates properly.
282-283: Label-based filter logic is correct. The test verifies that only devices matching
`labelSelector=key=value-1` are returned.
292-293: Field-based filter logic is correct. The test ensures devices are filtered by
`status.updated.status` fields, confirming the new field-selector approach.
304-306: Owner-based field selector usage looks good. Filtering by
`metadata.owner` demonstrates the new field selector’s flexibility in scoping device queries to a particular fleet.
313-315: Filtering devices for alternate fleets is correct. This block confirms multiple owners can be used in separate listing scenarios.
322-323: `in`-operator usage for owner field selector. Validates that multiple ownership values can be queried in a single request.
337-337: Device creation with OS specification. Introduces
`DeviceOsSpec` with a custom image, ensuring new device creation is tested.
356-356: Updating device OS in test scenario. Replacing the OS image confirms that updates to spec fields persist.
414-414: Confirm OS field consistency with status updates. The line properly sets
`Os: &api.DeviceOsSpec{Image: "newos"}`, while verifying that conditions and OS remain distinct.internal/api/client/agent/client.gen.go (8)
476-476: Introduced error response field for HTTP 403 in CreateEnrollmentRequestResponse. Good addition for more granular error handling. This field correctly supports the scenario where the server returns a 403 status, allowing the client to parse the response body into a structured error.
478-478: Introduced error response field for HTTP 503 in CreateEnrollmentRequestResponse. This follows a similar pattern as the 403 error field and helps handle server unavailability or maintenance scenarios in a structured way.
502-502: Introduced error response field for HTTP 403 in ReadEnrollmentRequestResponse. This is consistent with the CreateEnrollmentRequestResponse changes, handling forbidden scenarios with a descriptive error definition.
504-504: Introduced error response field for HTTP 503 in ReadEnrollmentRequestResponse. Consistent approach for handling a 503 error in ReadEnrollmentRequestResponse, matching the new pattern across enrollment endpoints.
704-710: New 403-case parsing block in ParseCreateEnrollmentRequestResponse. Checks for 403 status code and unmarshals the response body into the newly introduced JSON403 field. The error handling is appropriate, and the logic is aligned with the other status code blocks.
718-724: New 503-case parsing block in ParseCreateEnrollmentRequestResponse. Adds robust error handling for service unavailability. The approach mirrors the other status code blocks well.
758-764: New 403-case parsing block in ParseReadEnrollmentRequestResponse. Parses “forbidden” error responses for the ReadEnrollmentRequest operation. The approach is consistent with the rest of the response parsing logic.
772-778: New 503-case parsing block in ParseReadEnrollmentRequestResponse. Handles the “service unavailable” case in a structured manner, completing support for new error types in the ReadEnrollmentRequestResponse.
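The per-status-code pattern these generated `Parse*Response` functions follow can be sketched like this; the struct field names (`Message`, `JSON403`, `JSON503`) are assumptions for illustration, not the generated client's exact types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// apiError mirrors the shape of a generated error body (assumed fields).
type apiError struct {
	Message string `json:"message"`
}

type readResponse struct {
	StatusCode int
	JSON403    *apiError
	JSON503    *apiError
}

// parseResponse unmarshals the body into a status-code-specific field,
// mirroring the switch blocks added to the generated parsers.
func parseResponse(status int, body []byte) (*readResponse, error) {
	resp := &readResponse{StatusCode: status}
	switch status {
	case 403:
		var dest apiError
		if err := json.Unmarshal(body, &dest); err != nil {
			return nil, err
		}
		resp.JSON403 = &dest
	case 503:
		var dest apiError
		if err := json.Unmarshal(body, &dest); err != nil {
			return nil, err
		}
		resp.JSON503 = &dest
	}
	return resp, nil
}

func main() {
	resp, err := parseResponse(403, []byte(`{"message":"forbidden"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.JSON403.Message)
}
```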
api/v1alpha1/validation.go (1)
209-209: Renamed call to `AsCpuResourceMonitorSpec`. This renaming ensures consistency for CPU resource specifications. Confirm any external calls have also transitioned to the updated method name.
test/integration/store/fleet_test.go (12)
13-13: New import path for `selector`. The introduction of this package indicates a more structured approach to label or field selectors. Ensure the new package is thoroughly tested in the integration tests.
236-237: Added label-based listing logic with `selector.NewLabelSelectorFromMapOrDie`. This improvement clarifies that we only want fleets matching
`key=value-1`. It’s well-structured and explicit.
247-253: New usage of `selector.NewLabelSelectorOrDie` with an `In` expression. The code is readable and self-explanatory. The test ensures that only fleets matching the given label set are returned. Looks good.
262-268: New usage of `selector.NewLabelSelectorOrDie` with the `NotIn` operator. Ensures fleets not matching a particular label value are returned. The test coverage is appropriate; no issues spotted.
279-284: New usage of `selector.NewLabelSelectorOrDie` with `Exists`. Tests the presence of a label key. Straightforward usage. Good addition to extending label-based filtering tests.
296-301: New usage of `selector.NewLabelSelectorOrDie`, checking the `Exists` operator for a non-existent key. Covers an edge case where the key is missing from all fleets, verifying the result set is empty.
310-315: New usage of `selector.NewLabelSelectorOrDie` with the `DoesNotExist` operator. Correctly tests the negative condition: no label with that key. Confirm coverage for fleets that do or do not have this key set.
324-329: Additional test for the `DoesNotExist` operator when the key really doesn’t exist. Ensures the entire set of fleets is returned, validating the negative filter. Good coverage on the label-based selector logic.
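The four operators exercised above have simple set-based semantics; this minimal matcher is an illustration of those semantics, not the flightctl selector implementation:

```go
package main

import "fmt"

type op string

const (
	opIn           op = "In"
	opNotIn        op = "NotIn"
	opExists       op = "Exists"
	opDoesNotExist op = "DoesNotExist"
)

type requirement struct {
	key    string
	op     op
	values []string
}

// matches evaluates one set-based requirement against a label map.
func matches(r requirement, labels map[string]string) bool {
	v, ok := labels[r.key]
	switch r.op {
	case opExists:
		return ok
	case opDoesNotExist:
		return !ok
	case opIn:
		if !ok {
			return false
		}
		for _, want := range r.values {
			if v == want {
				return true
			}
		}
		return false
	case opNotIn:
		// A missing key vacuously satisfies NotIn, matching the tests
		// above where absent keys still return the fleet.
		if !ok {
			return true
		}
		for _, want := range r.values {
			if v == want {
				return false
			}
		}
		return true
	}
	return false
}

func main() {
	labels := map[string]string{"key": "value-1"}
	fmt.Println(matches(requirement{key: "key", op: opIn, values: []string{"value-1"}}, labels))
	fmt.Println(matches(requirement{key: "other", op: opDoesNotExist}, labels))
}
```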
442-442: Update to OS image within a FleetSpec. Ensures we can update a Fleet’s device OS. Confirm that the code updates
`Spec.Template.Spec.Os` as intended in downstream logic.
468-468: Additional assignment logic to `fleet.Spec.Template.Spec.Os`. Similar to the previous snippet, handles a different test scenario for OS updates. Looks fine.
507-507: Ensures updating the OS image while preserving an owner reference. This test verifies that modifications to the spec can coexist with an existing (unchanged) owner field. Good scenario coverage.
658-660: New field selector usage for owner-based queries. Introduces
`FieldSelector` to filter Fleets by `metadata.owner`. This approach is consistent with label selectors, using the new `selector` package.api/v1alpha1/types.gen.go (16)
311-313: Renamed `CPUResourceMonitorSpec` to `CpuResourceMonitorSpec`. This aligns naming with standard Go naming conventions (pronounceable identifiers). Ensure references are updated accordingly everywhere else.
449-450: Renamed `OS` property from `DeviceOSSpec` to `DeviceOsSpec`. Minor naming changes can be beneficial if they ensure consistent code style. Looks fine as long as references were updated across the codebase.
455-456: Renamed `OS` status field to `DeviceOsStatus`. Further standardization of naming. Keep an eye out for any lingering references to the old naming.
487-488: Changed `DeviceSpec.Os` to `DeviceOsSpec`. Part of the larger naming refactor. This is consistent with the other
`Os` fields in the file.
527-527: Unmodified logic referencing `DeviceOsStatus`. Though only a single line was flagged, it’s consistent with the renamed type used for the OS status struct.
568-568: Renamed field: `DownloadSchedule` to reflect new or reorganized scheduling attributes. The field itself appears unchanged except for the line annotation. Verify references remain correct throughout scheduling logic.
571-571: Renamed field: `UpdateSchedule` to reflect updated scheduling attributes. Similar note as above. Ensure existing usage in the code (both agent & server side) references these new fields.
1397-1397: New docstring content or minor tweak for `UpdateSchedule`. No functional changes are visible. The docstring enhancements clarify the cron expression mechanics.
1562-1564: Definition of `PatchDeviceStatusApplicationJSONPatchPlusJSONRequestBody`. This new type clarifies a specialized patch operation for device status, promoting clarity in the request body definition.
1571-1573: Definition of `PatchEnrollmentRequestApplicationJSONPatchPlusJSONRequestBody`. Consistent approach for enrollment patch operations, matching the device patch pattern.
1580-1582: Definition of `PatchEnrollmentRequestStatusApplicationJSONPatchPlusJSONRequestBody`. Expands the patch model for enrollment request status. Keeping patch bodies separate is a good practice for clarity.
1595-1597: Definition of `PatchFleetStatusApplicationJSONPatchPlusJSONRequestBody`. Mirrors the patch strategy used for device and enrollment statuses. This fosters consistent patch endpoints across the codebase.
2197-2199: Renamed method to `AsCpuResourceMonitorSpec`. Ensures consistent method naming, matching the new type’s name. No immediate concerns.
2204-2205: Renamed method to `FromCpuResourceMonitorSpec`. Matches the new type name change. Maintains clarity in resource monitor transformations.
2212-2213: Renamed method to `MergeCpuResourceMonitorSpec`. Follows the pattern established by the prior two method renames.
2296-2296: Updated switch case to handle "CPU" for `AsCpuResourceMonitorSpec`. Completes the rename references. Everything looks consistent for CPU monitor handling.
deploy/helm/flightctl/charts/keycloak/templates/keycloak-gateway-route.yaml (1)
17-17: Verify comprehensive port updates across related configurations. The port change from 8080 to 8081 is consistent with the deployment strategy. However, we should ensure all related configurations are updated, particularly the internal service discovery and authentication configurations.
✅ Verification successful
Port configurations are correctly aligned ✅
All Keycloak-related configurations consistently use port 8081, and there are no remaining references to port 8080. The port change has been comprehensively applied across all relevant configurations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any remaining references to port 8080 in Keycloak-related configurations rg --type yaml "port.*8080" deploy/helm/flightctl/charts/keycloak/Length of output: 68
Script:
#!/bin/bash # Search for all port configurations in Keycloak charts rg --type yaml "port:.*" deploy/helm/flightctl/charts/keycloak/ # Also check for containerPort configurations rg --type yaml "containerPort:.*" deploy/helm/flightctl/charts/keycloak/Length of output: 805
deploy/helm/flightctl/charts/keycloak/templates/keycloak-service.yaml (1)
14-16: LGTM! Verify service connectivity after deployment. The port updates are consistent across all fields. Ensure to:
- Test service connectivity after deployment
- Verify that any health checks or readiness probes are updated to use the new port
✅ Verification successful
Port configurations are properly aligned, no probe updates needed
The service port (8081) matches the container port in the deployment manifest. No health check or readiness probe configurations are present that would require updates.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for health check and probe configurations that might need updates rg --type yaml "livenessProbe|readinessProbe" deploy/helm/flightctl/charts/keycloak/Length of output: 86
Script:
#!/bin/bash # Check deployment manifest for container configuration and port references rg -A 5 -B 5 "containerPort|port:|ports:" deploy/helm/flightctl/charts/keycloak/templates/keycloak-deployment.yamlLength of output: 506
deploy/helm/flightctl/charts/keycloak/templates/keycloak-route.yaml (1)
15-15: Verify port alignment with Keycloak service configuration. The port change from 8080 to 8081 looks correct. However, let's verify that this aligns with the Keycloak service configuration.
✅ Verification successful
Port configuration is consistent across Keycloak components
The port change to 8081 in the route configuration is correctly aligned with the Keycloak service configuration. No inconsistencies found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify port consistency across Keycloak configurations # Check for any remaining references to port 8080 in Keycloak-related configurations rg -l "8080" "deploy/helm/flightctl/charts/keycloak/" # Check if the Keycloak service is configured to expose port 8081 rg "port.*8081" "deploy/helm/flightctl/charts/keycloak/templates/keycloak-service.yaml"Length of output: 156
test/scripts/kind_cluster.yaml (1)
20-21: Update local development documentation.The port change in the Kind cluster configuration needs to be reflected in the local development documentation to ensure developers use the correct port when accessing Keycloak locally.
deploy/helm/flightctl/charts/ui/templates/flightctl-ui-cm.yaml (1)
19-19: Document migration steps for existing deployments.The change in Keycloak's internal URL will require existing deployments to be updated. Consider adding migration notes to guide users through the upgrade process.
✅ Verification successful
No hardcoded URLs found, ConfigMap update is sufficient
The port change in the Keycloak URL appears to be safely contained within the ConfigMap. No hardcoded references to the old port (8080) were found in the UI codebase, suggesting that the UI properly relies on the configured values.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for hardcoded URLs in UI code # Search for any hardcoded references to the old Keycloak URL rg -t js -t ts "keycloak:8080" "ui/" rg -t js -t ts "8080/realms" "ui/"Length of output: 155
Script:
#!/bin/bash # First, find UI-related directories echo "=== Finding UI directories ===" fd -t d "ui|frontend|web" --exclude node_modules # Then, search for any hardcoded Keycloak URLs in JavaScript/TypeScript files echo -e "\n=== Searching for Keycloak URLs ===" fd -e js -e ts -e jsx -e tsx . --exclude node_modules --exec rg -l "keycloak.*8080|8080/realms" # Also search for general Keycloak configuration echo -e "\n=== Searching for Keycloak config patterns ===" fd -e js -e ts -e jsx -e tsx . --exclude node_modules --exec rg -l "keycloak.*config|OIDC.*URL"Length of output: 576
internal/worker_server/server.go (1)
43-43: LGTM! Good context propagation pattern.The changes correctly propagate the context through the dependency chain, enabling proper cancellation and deadline handling.
Let's verify the context handling in dependent packages:
Also applies to: 50-50, 56-56
✅ Verification successful
Context handling verified and properly implemented
The context is correctly propagated through the dependency chain:
- `NewKVStore` implements proper timeout handling and cleanup for Redis operations
- `LaunchConsumers` passes the context through to consumer operations and task dispatching

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify context handling in kvstore and tasks packages # Check if context is properly handled in kvstore package ast-grep --pattern 'func NewKVStore(ctx context.Context, $$$) { $$$ select { case <-ctx.Done(): $$$ } $$$ }' # Check if context is properly handled in tasks package ast-grep --pattern 'func LaunchConsumers(ctx context.Context, $$$) { $$$ select { case <-ctx.Done(): $$$ } $$$ }'Length of output: 254
Script:
#!/bin/bash # Find function declarations with context parameter echo "=== Function declarations with context ===" ast-grep --pattern 'func NewKVStore(ctx context.Context, $$$)' ast-grep --pattern 'func LaunchConsumers(ctx context.Context, $$$)' # Find any usage of context within these functions echo -e "\n=== Context usage in functions ===" rg -U "func (NewKVStore|LaunchConsumers).*context\.Context.*\{(?s:.)*?ctx\." --multilineLength of output: 3419
deploy/helm/flightctl/charts/keycloak/templates/keycloak-deployment.yaml (1)
26-26: Verify port changes across dependent services.The Keycloak port change from 8080 to 8081 needs to be synchronized across all dependent services.
Let's verify the port configuration across the deployment:
Also applies to: 58-58
✅ Verification successful
Port change is properly synchronized across the system ✅
The Keycloak port change from 8080 to 8081 is consistently configured across all dependent services, including API and UI configurations. The remaining 8080 references belong to the UI component's own port configuration and are unrelated to Keycloak connectivity.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify Keycloak port configuration consistency # Check for any remaining references to port 8080 in Helm charts and configurations rg -g '*.{yaml,yml,json}' '8080' deploy/helm/flightctl/ # Check for new port configuration rg -g '*.{yaml,yml,json}' '8081' deploy/helm/flightctl/Length of output: 2014
.github/workflows/pr-smoke-testing.yaml (1)
76-82: LGTM! Good addition of log collection.The log collection step is well-configured:
- Runs even on failure with `always()`
- Properly conditioned with the not-docs filter
- Correctly specifies all required inputs
.github/actions/collect-logs/action.yml (1)
1-15: LGTM! Well-structured input parameters.The action's inputs are well-defined with clear descriptions and sensible defaults.
deploy/podman/compose.yaml (3)
93-94: Ensure data persistence configuration.The volume configuration looks good, mapping to Redis's default data directory.
97-99: Network and restart policy configuration is appropriate.The network configuration and restart policy are well-configured for a production environment.
91-92: Verify Redis version and configuration.The transition to Redis 7.4.1 looks good, but ensure this version is compatible with your requirements. The basic Redis server configuration might need additional parameters for production use.
Consider adding Redis configuration parameters such as:
- Memory limits (maxmemory and policy)
- Connection limits
- Persistence configuration
```diff
-  command: redis-server
+  command: redis-server --maxmemory 2gb --maxmemory-policy allkeys-lru --appendonly yes
```

✅ Verification successful
Redis configuration is appropriate for local development
The current Redis configuration in the compose file is suitable for a development environment, with proper volume persistence, networking, and restart policies already in place. Production-grade configurations (including password protection) are correctly implemented in the Helm charts for Kubernetes deployment.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash # Search for Redis configuration files and documentation echo "=== Searching for Redis config files ===" fd -e conf -e config -e yaml -e yml . | grep -i redis || true echo -e "\n=== Searching for Redis in documentation ===" fd README.md | xargs grep -i redis || true echo -e "\n=== Checking compose files for Redis patterns ===" fd compose.ya?ml | xargs grep -i redis -A 5 -B 5 || true echo -e "\n=== Looking for Redis environment variables ===" rg -i "REDIS_" || trueLength of output: 1318
deploy/helm/flightctl/templates/flightctl-rabbitmq-statefulset.yaml (3)
2-11: ConfigMap configuration looks good.The RabbitMQ server command configuration is appropriately managed via ConfigMap.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 2-2: syntax error: expected '', but found ''
(syntax)
13-20: Secure password management implementation.Good practice using Kubernetes Secret for password management with proper base64 encoding.
59-62: Verify password reference in environment variables.The password is now securely referenced from the Secret, which is a security improvement.
Run this script to ensure all RabbitMQ password references use Secrets:
deploy/helm/flightctl/templates/flightctl-kv-statefulset.yaml (3)
57-58: Security context is well configured.Good practice setting specific user/group ID (1001) for both pod and container security contexts.
Also applies to: 70-72
79-83: Verify Redis password configuration.The Redis password configuration using environment variables is secure. However, ensure the password meets complexity requirements.
Run this script to check password configuration across the codebase:
✅ Verification successful
Redis password configuration is properly secured ✅
The implementation follows security best practices:
- Password is stored in Kubernetes Secrets
- Proper secret injection via environment variables
- No hardcoded credentials
Password complexity can be enforced through Helm values when deploying the chart.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for Redis password references to ensure they're all using Secrets rg -i "REDIS.*PASS.*=" --type yamlLength of output: 36
Script:
#!/bin/bash # Search for Redis password references in YAML files rg -i "redis.*password" --type yaml -A 3 # Search for Redis-related Secret definitions rg -i "kind:\s+Secret.*redis" --type yaml -A 10 # Get the complete Redis StatefulSet configuration fd -e yaml -x grep -l "redis.*statefulset" {} \; | xargs catLength of output: 1989
90-93: Review resource allocation.The resource requests (1 CPU core, 2Gi memory) seem appropriate, but verify these values against your expected workload.
Consider adding resource limits in addition to requests:
```diff
   resources:
     requests:
       cpu: "1000m"
       memory: "2Gi"
+    limits:
+      cpu: "2000m"
+      memory: "3Gi"
```

deploy/helm/flightctl/templates/flightctl-api-config.yaml (2)
20-26: LGTM! The new HTTP configuration parameters enhance security and reliability.The new parameters provide good defaults while allowing customization:
- Timeouts: 5m for read/write operations
- Request limits: 32 headers, ~32KB header size, 2KB URL length, ~50MB request size
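Most of these limits map directly onto fields of Go's standard `net/http` server, which is one way to picture their interplay. The sketch below uses only stdlib names; the concrete values mirror the defaults listed above, while the address and handler are illustrative (URL-length and header-count caps are enforced elsewhere and not shown):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// newServer builds an *http.Server with explicit read/write deadlines and a
// header-size cap. MaxHeaderBytes bounds the total request-header block;
// body size is enforced per request via http.MaxBytesReader in the handler.
func newServer() *http.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Cap the request body at ~50MB; reads beyond the limit fail.
		r.Body = http.MaxBytesReader(w, r.Body, 50<<20)
		w.WriteHeader(http.StatusOK)
	})
	return &http.Server{
		Addr:           ":8080", // illustrative address
		Handler:        mux,
		ReadTimeout:    5 * time.Minute, // deadline for reading the whole request
		WriteTimeout:   5 * time.Minute, // deadline for writing the response
		MaxHeaderBytes: 32 << 10,        // ~32KB of request headers
	}
}

func main() {
	s := newServer()
	fmt.Println(s.ReadTimeout, s.MaxHeaderBytes) // prints "5m0s 32768"
}
```

The read timeout covers header and body together, so slow clients uploading large bodies consume the same budget as slow header writers; that is worth keeping in mind when tuning the 5m default.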
69-69: Verify Keycloak port change impact.The Keycloak port has been changed from 8080 to 8081. This change is consistent with the values.yaml update.
✅ Verification successful
Port change to 8081 consistently applied across all configurations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for any hardcoded references to Keycloak port 8080 # that might have been missed during the port change. echo "Searching for hardcoded Keycloak port 8080 references..." rg --type yaml --type go "keycloak.*8080" || trueLength of output: 173
Script:
#!/bin/bash # Broader search for Keycloak port references echo "Searching for any Keycloak port references..." rg -i "keycloak.*:(80|808|8081)" || true echo -e "\nChecking for the new port 8081 references..." rg -i "keycloak.*:8081" || true echo -e "\nChecking Keycloak-related configuration files..." fd -e yaml -e yml -e json -e env keycloak --exec cat {} \; || trueLength of output: 75853
deploy/helm/flightctl/values.yaml (2)
34-34: LGTM! Added support for private registry authentication. The new `imagePullSecretName` parameter allows configuring image pull secrets when needed, with an appropriate empty default.
91-97: Verify Redis configuration for production readiness.The switch to Redis includes several changes:
- Using official Redis image (7.4.1)
- Changed save policy from 60s to 300s (5 minutes)
- Simplified logging configuration
Consider the following recommendations:
- The save policy of 300s might be too infrequent for critical data
- Consider adding Redis persistence configuration
- Consider adding Redis security settings (maxmemory, maxclients)
✅ Verification successful
Redis configuration is production-ready ✅
The current Redis setup already implements recommended production practices including:
- Persistence through mounted volumes
- Security features (protected mode, authentication)
- Resource limits (CPU: 1000m, Memory: 2Gi)
- High-availability through StatefulSet deployment
The 300s save policy appears to be an intentional performance optimization choice and should be acceptable given the persistence configuration in place.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for any remaining references to Valkey # that might have been missed during the Redis migration. echo "Searching for remaining Valkey references..." rg -i "valkey" || trueLength of output: 219
Script:
#!/bin/bash # Search for Redis-related configurations and usage echo "Searching for Redis configurations..." rg -i "redis" -g "*.yaml" -g "*.yml" -A 5 echo -e "\nSearching for Redis client configurations in code..." ast-grep --pattern 'redis.New$_($$$)'Length of output: 5736
internal/agent/device/policy/manager_test.go (2)
119-122: Improve test coverage for ready cases. The `nextRun` verification is only performed when `ready=false`. Consider verifying `nextRun` for all cases to ensure correct scheduling. Apply this diff to improve the test coverage:

```diff
- if !ready {
-     nextRun := s.cron.Next(tt.currentTime)
-     require.Equal(tt.expectedNextRun, nextRun, "expected next run time")
- }
+ nextRun := s.cron.Next(tt.currentTime)
+ require.Equal(tt.expectedNextRun, nextRun, "expected next run time")
```
14-103: LGTM! Comprehensive test coverage for policy scheduling.The test cases cover various scenarios including:
- Different time zones
- Grace period boundaries
- Schedule intervals
- Edge cases
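The grace-period scenarios above boil down to a window check: an update is allowed only when the current time falls between a scheduled instant and that instant plus the grace period. A minimal stdlib-only sketch of that check follows; the real policy manager derives the scheduled time from a cron expression, which is omitted here, and the function name is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// withinWindow reports whether now lies in [scheduled, scheduled+grace].
// Both instants are RFC3339 strings so the example stays self-contained.
func withinWindow(nowRFC3339, scheduledRFC3339 string, graceMinutes int) bool {
	now, err := time.Parse(time.RFC3339, nowRFC3339)
	if err != nil {
		return false
	}
	sched, err := time.Parse(time.RFC3339, scheduledRFC3339)
	if err != nil {
		return false
	}
	grace := time.Duration(graceMinutes) * time.Minute
	// Ready only once the scheduled time has passed and before grace expires.
	return !now.Before(sched) && now.Sub(sched) <= grace
}

func main() {
	// 5 minutes after the scheduled time with a 10-minute grace: still ready.
	fmt.Println(withinWindow("2025-01-06T10:05:00Z", "2025-01-06T10:00:00Z", 10)) // true
	// 20 minutes after: the window has closed, so the device must requeue.
	fmt.Println(withinWindow("2025-01-06T10:20:00Z", "2025-01-06T10:00:00Z", 10)) // false
}
```

The boundary cases the tests exercise fall out of the two comparisons: exactly at the scheduled time and exactly at `scheduled+grace` both count as ready, one second past either edge does not.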
docs/user/auth-resources.md (1)
24-24: LGTM! Documentation updates are accurate.The changes correctly document:
- Fixed typo in the decommission endpoint resource name
- Added new PATCH endpoint for enrollment requests
Also applies to: 30-30
internal/agent/device/errors/errors.go (2)
63-67: LGTM! Well-defined error types for policy management.The new error types are clear, descriptive, and align well with the policy management system.
79-80: LGTM! Appropriate error handling for policy readiness.Making policy readiness errors retryable is the correct approach, as these conditions are typically temporary.
test/scripts/deploy_with_helm.sh (1)
86-86: Verify port change impact.The Keycloak port has been changed from 8080 to 8081. Ensure this change is reflected in all dependent configurations and documentation.
✅ Verification successful
Port change is properly reflected in configurations
The Keycloak port change to 8081 is already consistently reflected across all relevant configurations. The 8080 references found in the codebase are unrelated and belong to the UI service.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any hardcoded references to port 8080 that might need updating rg -l "8080" --type yaml --type mdLength of output: 314
Script:
#!/bin/bash # Check content of files containing 8080 to understand context echo "=== UI Service YAML files with 8080 ===" rg "8080" --type yaml -C 2 # Search for keycloak-related port configurations echo -e "\n=== Keycloak port references ===" rg -i "keycloak.*port|port.*keycloak" -C 2Length of output: 7818
api/v1alpha1/agent/openapi.yaml (1)
143-148: LGTM! Enhanced error handling.The addition of 403 (Forbidden) and 503 (Service Unavailable) responses improves the API's error handling capabilities. The implementation is consistent across endpoints and follows OpenAPI best practices.
Also applies to: 155-160, 192-197, 204-209
internal/agent/agent.go (2)
21-21: LGTM!The import statement is correctly added and follows the project's import organization pattern.
198-198: Verify the impact of gRPC client function signature change. The change from `Config` to `ManagementService` is a breaking change. Ensure all callers of `newGrpcClient` have been updated accordingly. Also applies to: 264-264
go.mod (1)
36-36: Verify Redis client version for security and compatibility.The Redis client is pinned to version v9.7.0. Ensure this version is secure and compatible with your Redis server version.
✅ Verification successful
Redis client version v9.7.0 is secure and up-to-date
You are using the latest available version of the Redis client with no known security vulnerabilities.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for security advisories and latest versions of the Redis client. # Check for latest version on pkg.go.dev curl -s "https://pkg.go.dev/github.com/redis/go-redis/v9?tab=versions" | grep -o 'v9\.[0-9]\+\.[0-9]\+' # Check for security advisories gh api graphql -f query=' { securityVulnerabilities(first: 5, ecosystem: GO, package: "github.com/redis/go-redis") { nodes { advisory { summary severity publishedAt } vulnerableVersionRange firstPatchedVersion { identifier } } } }'Length of output: 43142
internal/agent/device/spec/queue.go (1)
209-225: Confirm bothDownloadandUpdatepolicies can be satisfied in a single pass.The method sets both policy flags in a single pass. If the system requires them to be toggled at different times, ensure no partial updates occur.
internal/agent/device/spec/manager.go (5)
15-15: LGTM!The policy package import and policyManager parameter are correctly integrated into the manager implementation.
Also applies to: 46-46
52-58: LGTM!The queue initialization is properly updated to include policy management, and the cache initialization is simplified.
Also applies to: 68-68
125-125: Pass a proper context from the caller. Using `context.TODO()` here could mask context cancellation. Consider passing a proper context from the caller.
373-375: Add error validation and logging for policy check failures.Consider adding debug logging to help diagnose policy check failures.
195-195: LGTM!The queue method renames make the API more concise while maintaining clarity.
Also applies to: 410-410
internal/agent/device/device.go (5)
18-18: LGTM!The policy manager is correctly integrated into the agent implementation.
Also applies to: 40-40, 69-69, 86-86
135-139: Fix grammatical error in comment.The word "affect" in the comment should be "effect".
141-143: LGTM!The policy checks are strategically placed before their respective operations with clear error messages.
Also applies to: 164-166
148-158: LGTM!The status updates provide clear progress information with consistent error handling.
Also applies to: 171-181
529-529: LGTM!The error logging improvements use appropriate log levels for different error types.
Also applies to: 537-537
api/v1alpha1/openapi.yaml (4)
1184-1238: Well-structured patch endpoints implementation!The new patch endpoints for device status, enrollment requests, and fleet status are consistently implemented following REST API best practices:
- Use of standard application/json-patch+json content type
- Consistent error response structure
- Proper HTTP status codes
Also applies to: 1759-1819, 2950-3004
Line range hint
1-4800: Excellent API consistency!The API specification maintains strong consistency across all endpoints in terms of:
- Error response structures
- Authentication handling
- HTTP method usage
- URL path patterns
- Query parameter patterns
3662-3668: Verify impact of schema type renames.The following type renames follow Go naming conventions but may introduce breaking changes:
- `DeviceOSSpec` → `DeviceOsSpec`
- `CPUResourceMonitorSpec` → `CpuResourceMonitorSpec`

Please ensure all clients are updated to use the new type names.
Run this script to find any remaining usages of the old type names:
Also applies to: 3888-3890, 4558-4558, 4595-4595
✅ Verification successful
Schema type renames are already consistently implemented
The codebase is already using the new camelCase naming convention (
`DeviceOsSpec`, `CpuResourceMonitorSpec`) consistently across all files. No instances of the old uppercase format were found.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for old type names in the codebase echo "Searching for old type names..." rg -i "DeviceOSSpec|CPUResourceMonitorSpec" --type goLength of output: 5623
Line range hint
1-50: Verify impact of removed API parameters.The following parameters have been removed:
- `repository` from `/api/v1/resourcesyncs`
- `statusFilter` from `/api/v1/devices`
- `owner` from `/api/v1/fleets`
- `grpc-management-endpoint` from `EnrollmentConfig`

Please ensure all clients are updated to handle the removal of these parameters.
Run this script to find any remaining usages of the removed parameters:
```diff
 func NewTestApiServer(log logrus.FieldLogger, cfg *config.Config, store store.Store, ca *crypto.CA, serverCerts *crypto.TLSCertificateConfig, provider queues.Provider) (*apiserver.Server, net.Listener, error) {
 	// create a listener using the next available port
-	tlsConfig, _, _, err := crypto.TLSConfigForServer(ca.Config, serverCerts)
+	tlsConfig, _, err := crypto.TLSConfigForServer(ca.Config, serverCerts)
```
💡 Codebase verification
Confirmed: Incorrect TLS config capture on line 109
The TLSConfigForServer function returns two distinct TLS configurations. The codebase consistently captures the second return value for server configuration, as seen in other files. Line 109 should be changed from:
```go
tlsConfig, _, err := crypto.TLSConfigForServer(ca.Config, serverCerts)
```

to:

```go
_, tlsConfig, err := crypto.TLSConfigForServer(ca.Config, serverCerts)
```

🔗 Analysis chain
Inconsistent return usage from TLSConfigForServer.
On line 109, you capture the first return value as tlsConfig, discarding the second. Compare this to line 128, which captures the second return value as tlsConfig. The official signature of TLSConfigForServer is likely (tlsConfig *tls.Config, cert *x509.Certificate, err error). Please confirm that you're using the correct return index in each case to avoid potential TLS misconfiguration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Validate that the correct TLS configuration targets the correct variables in each call to TLSConfigForServer.
rg -A 4 "TLSConfigForServer"
Length of output: 2096
```go
var fieldSelector *selector.FieldSelector
if fieldSelector, err = selector.NewFieldSelectorFromMap(map[string]string{"metadata.owner": request.Fleet}, false); err != nil {
	return server.DeleteTemplateVersions403JSONResponse{Message: Forbidden}, nil
}

// Iterate through the relevant templateVersions, 100 at a time, and delete each one's config storage
```
🛠️ Refactor suggestion
Return status may be inconsistent.
Here, a parse error leads to a 403 response (Forbidden) rather than 400. Consider returning 400 (Bad Request) for parse failures to maintain consistency.
```diff
- return server.DeleteTemplateVersions403JSONResponse{Message: Forbidden}, nil
+ return server.DeleteTemplateVersions400JSONResponse{Message: fmt.Sprintf("failed to parse field selector: %v", err)}, nil
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 var fieldSelector *selector.FieldSelector
 if fieldSelector, err = selector.NewFieldSelectorFromMap(map[string]string{"metadata.owner": request.Fleet}, false); err != nil {
-	return server.DeleteTemplateVersions403JSONResponse{Message: Forbidden}, nil
+	return server.DeleteTemplateVersions400JSONResponse{Message: fmt.Sprintf("failed to parse field selector: %v", err)}, nil
 }

 // Iterate through the relevant templateVersions, 100 at a time, and delete each one's config storage
```
```go
// (PATCH /api/v1/devices/{name}/status)
PatchDeviceStatus(ctx context.Context, request PatchDeviceStatusRequestObject) (PatchDeviceStatusResponseObject, error)
```
💡 Codebase verification
Implement error handling following established patterns
The PATCH endpoints need to implement error handling following the patterns established in the READ and REPLACE endpoints:
- Authorization checks using `auth.GetAuthZ().CheckPermission()` with proper resource/action pairs
- Input validation including resource name consistency checks
- Proper error responses:
- 400 for validation errors
- 401/403 for authorization errors
- 404 for non-existent resources
- 503 for service unavailability
🔗 Analysis chain
Verify error handling in new PATCH endpoints
Let's verify that error handling is properly implemented in the service layer for these new PATCH endpoints.
Also applies to: 6497-6499, 6509-6511, 6551-6553
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check error handling implementation in service layer
# Search for error handling patterns in service implementations
rg -A 5 'func.*Patch(Device|EnrollmentRequest|Fleet)Status' --type go
# Search for error types used in PATCH operations
rg -A 2 'type.*Error.*struct' --type go
Length of output: 29881
Script:
#!/bin/bash
# Search for similar status-related endpoints and their error handling
rg -A 10 'func.*Status.*Request.*Object.*error' --type go internal/service/
Length of output: 9039
```bash
if [ ! -z "$IMAGE_PULL_SECRET_PATH" ]; then
  PULL_SECRET_NAME=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.name)
  PULL_SECRET_NAMESPACE=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.namespace)

  if [ "$PULL_SECRET_NAMESPACE" != "flightctl-internal" ]; then
    echo "Namespace for IMAGE_PULL_SECRET_PATH must be flightctl-internal"
    exit 1
  fi

# if we need another database image we can set it with PGSQL_IMAGE (i.e. arm64 images)
if [ ! -z "$PGSQL_IMAGE" ]; then
  DB_IMG="localhost/flightctl-db:latest"
  # we send the image directly to kind, so we don't need to inject the credentials in the kind cluster
  podman pull "${PGSQL_IMAGE}"
  podman tag "${PGSQL_IMAGE}" "${DB_IMG}"
  kind_load_image ${DB_IMG}
  HELM_DB_IMG="--set flightctl.db.image.image=${DB_IMG} --set flightctl.db.image.tag=latest"

  SQL_ARG="$SQL_ARG --set global.imagePullSecretName=${PULL_SECRET_NAME}"
  kubectl apply -f "$IMAGE_PULL_SECRET_PATH"
```
🛠️ Refactor suggestion
Enhance image pull secret validation.
The current validation only checks the namespace. Consider adding:
- Validation for secret existence before applying
- Verification of secret format
- Error handling for invalid YAML
```diff
 if [ ! -z "$IMAGE_PULL_SECRET_PATH" ]; then
+  if [ ! -f "$IMAGE_PULL_SECRET_PATH" ]; then
+    echo "Error: IMAGE_PULL_SECRET_PATH file does not exist: $IMAGE_PULL_SECRET_PATH"
+    exit 1
+  fi
+
+  if ! yq . "$IMAGE_PULL_SECRET_PATH" >/dev/null 2>&1; then
+    echo "Error: Invalid YAML in IMAGE_PULL_SECRET_PATH"
+    exit 1
+  fi
+
   PULL_SECRET_NAME=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.name)
   PULL_SECRET_NAMESPACE=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.namespace)
   if [ "$PULL_SECRET_NAMESPACE" != "flightctl-internal" ]; then
     echo "Namespace for IMAGE_PULL_SECRET_PATH must be flightctl-internal"
     exit 1
   fi
+  if ! kubectl get secret -n flightctl-internal "$PULL_SECRET_NAME" >/dev/null 2>&1; then
+    echo "Creating pull secret..."
     kubectl apply -f "$IMAGE_PULL_SECRET_PATH"
+  else
+    echo "Pull secret already exists, updating..."
+    kubectl replace -f "$IMAGE_PULL_SECRET_PATH"
+  fi
 fi
```

📝 Committable suggestion
```bash
if [ ! -z "$IMAGE_PULL_SECRET_PATH" ]; then
  if [ ! -f "$IMAGE_PULL_SECRET_PATH" ]; then
    echo "Error: IMAGE_PULL_SECRET_PATH file does not exist: $IMAGE_PULL_SECRET_PATH"
    exit 1
  fi

  if ! yq . "$IMAGE_PULL_SECRET_PATH" >/dev/null 2>&1; then
    echo "Error: Invalid YAML in IMAGE_PULL_SECRET_PATH"
    exit 1
  fi

  PULL_SECRET_NAME=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.name)
  PULL_SECRET_NAMESPACE=$(cat "$IMAGE_PULL_SECRET_PATH" | yq .metadata.namespace)
  if [ "$PULL_SECRET_NAMESPACE" != "flightctl-internal" ]; then
    echo "Namespace for IMAGE_PULL_SECRET_PATH must be flightctl-internal"
    exit 1
  fi
  if ! kubectl get secret -n flightctl-internal "$PULL_SECRET_NAME" >/dev/null 2>&1; then
    echo "Creating pull secret..."
    kubectl apply -f "$IMAGE_PULL_SECRET_PATH"
  else
    echo "Pull secret already exists, updating..."
    kubectl replace -f "$IMAGE_PULL_SECRET_PATH"
  fi
  SQL_ARG="$SQL_ARG --set global.imagePullSecretName=${PULL_SECRET_NAME}"
fi
```
```bash
SQL_VERSION=${SQL_VERSION:-"latest"}
SQL_IMAGE=${SQL_IMAGE:-"quay.io/sclorg/postgresql-12-c8s"}
```
💡 Codebase verification
Replace "latest" with a specific PostgreSQL version tag
Using the "latest" tag for SQL_VERSION can lead to inconsistent deployments. Since you're using a private PostgreSQL image from Red Hat's registry (quay.io/sclorg/postgresql-12-c8s), you should:
- Replace
`SQL_VERSION=${SQL_VERSION:-"latest"}` with a specific version tag from your registry
- Ensure your deployment environment has access to the private registry
The Redis version (7.4.1) is already properly pinned, which is good practice.
🔗 Analysis chain
Verify the default image versions.
The script uses some hardcoded default versions:
- SQL_VERSION: "latest"
- KV_VERSION: "7.4.1"
Using "latest" tag for SQL_VERSION is not recommended for production deployments as it can lead to inconsistent behavior.
Also applies to: 13-14
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the SQL image exists and get available tags
skopeo list-tags docker://${SQL_IMAGE} 2>/dev/null || echo "Image not found or private"
# Verify Redis version exists
skopeo inspect docker://${KV_IMAGE}:${KV_VERSION} 2>/dev/null || echo "Version not found"
Length of output: 164
Script:
#!/bin/bash
# Find KV_IMAGE and KV_VERSION definitions
echo "=== Searching for KV variables ==="
rg "KV_(IMAGE|VERSION)" -A 1 -B 1
echo -e "\n=== Available PostgreSQL image tags ==="
skopeo list-tags docker://quay.io/sclorg/postgresql-12-c8s 2>/dev/null || echo "Image not found or private"
Length of output: 1105
Script:
#!/bin/bash
echo "=== Checking Redis 7.4.1 availability ==="
skopeo inspect docker://docker.io/redis:7.4.1 2>/dev/null || echo "Redis version not found"
echo -e "\n=== Checking PostgreSQL alternatives ==="
skopeo list-tags docker://docker.io/postgres 2>/dev/null || echo "Postgres image not found"
Length of output: 400
```go
// requeueState represents the state of a queued template version.
type requeueState struct {
	// nextAvailable is the time when the template version is available to be retrieved from the queue.
	nextAvailable time.Time
	// tries is the number of times the template version has been requeued.
	tries int
	// downloadPolicySatisfied indicates if the download policy is satisfied.
	// this state only needs to be met once.
	downloadPolicySatisfied bool
	// updatePolicySatisfied indicates if the update policy is satisfied. this
	// state only needs to be met once.
	updatePolicySatisfied bool
}
```
🛠️ Refactor suggestion
Add mutex for thread-safe access to requeueState.
The struct is accessed from multiple methods (Next, CheckPolicy, updatePolicy) without synchronization. Consider adding a mutex to prevent race conditions.
```diff
 type requeueState struct {
+	mu sync.RWMutex
 	nextAvailable time.Time
 	tries int
 	downloadPolicySatisfied bool
 	updatePolicySatisfied bool
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// requeueState represents the state of a queued template version.
type requeueState struct {
	// mu protects concurrent access to requeueState fields
	mu sync.RWMutex
	// nextAvailable is the time when the template version is available to be retrieved from the queue.
	nextAvailable time.Time
	// tries is the number of times the template version has been requeued.
	tries int
	// downloadPolicySatisfied indicates if the download policy is satisfied.
	// this state only needs to be met once.
	downloadPolicySatisfied bool
	// updatePolicySatisfied indicates if the update policy is satisfied. this
	// state only needs to be met once.
	updatePolicySatisfied bool
}
```
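As a sketch of how the suggested mutex would be used, the accessors below take the write lock when mutating and the read lock when reading. The method names here are hypothetical; the project's actual `Next`/`CheckPolicy`/`updatePolicy` methods would guard their field access the same way:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// requeueState mirrors the reviewed struct with the suggested mutex added.
// The accessor methods are a sketch of the locking pattern, not the
// project's actual API.
type requeueState struct {
	mu                      sync.RWMutex
	nextAvailable           time.Time
	tries                   int
	downloadPolicySatisfied bool
	updatePolicySatisfied   bool
}

// markDownloadPolicySatisfied mutates state under the write lock.
func (s *requeueState) markDownloadPolicySatisfied() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.downloadPolicySatisfied = true
}

// isDownloadPolicySatisfied reads state under the read lock, allowing
// concurrent readers while excluding writers.
func (s *requeueState) isDownloadPolicySatisfied() bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.downloadPolicySatisfied
}

func main() {
	s := &requeueState{nextAvailable: time.Now()}
	var wg sync.WaitGroup
	// Concurrent writers and readers: safe (and race-detector clean)
	// with the mutex in place.
	for i := 0; i < 10; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); s.markDownloadPolicySatisfied() }()
		go func() { defer wg.Done(); _ = s.isDownloadPolicySatisfied() }()
	}
	wg.Wait()
	fmt.Println(s.isDownloadPolicySatisfied()) // true
}
```

Running this under `go run -race` would confirm the absence of data races, whereas the unguarded struct would trip the race detector.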
```diff
   client-certificate-data: LS0tLS1CRUdJTiBD...
   client-key-data: LS0tLS1CRUdJTiBF...
-  enrollment-ui-endpoint: https://ui.flightctl.127.0.0.1.nip.io:8080
+  enrollment-ui-endpoint: https://ui.flightctl.127.0.0.1.nip.io:8081
```
💡 Codebase verification
Port 8081 in enrollment UI endpoint is inconsistent with deployment configurations
The UI service consistently uses port 8080 across all deployment configurations. Port 8081 is reserved for the Keycloak service. The enrollment UI endpoint should use port 8080 to maintain consistency.
🔗 Analysis chain
Verify port consistency across deployment configurations.
The port change from 8080 to 8081 for the enrollment UI endpoint should be consistent with other port configurations in the codebase.
Run the following script to verify port configurations:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to port 8080 in deployment configurations.
# Test: Search for port 8080 in deployment files
echo "Checking for port 8080 references in deployment files..."
rg -A 5 '8080' 'deploy/'
# Test: Search for port 8081 in deployment files to verify the change
echo "Checking for port 8081 references in deployment files..."
rg -A 5 '8081' 'deploy/'
Length of output: 11129
Summary by CodeRabbit

New Features

- New `UpdatePolicy` field in the `RenderedDeviceSpec` and `DeviceSpec` structures, enhancing device specification details.

Bug Fixes

Tests

Documentation

- Documentation for the `policy` package, detailing its functionality and usage.