
Version 1 #38

Open
alvagante wants to merge 44 commits into main from 0100

Conversation

@alvagante (Member) commented Mar 15, 2026

  • Fix MigrationRunner.ts - use db.getPlaceholder() instead of hardcoded ?
  • Fix ExecutionRepository.ts - use db.getPlaceholder() for all SQL queries (INSERT, UPDATE, WHERE, LIMIT/OFFSET)
  • Fix IntegrationManager.ts - clear timeout timers in finally block to prevent leaks
  • Fix facts.ts - clear timeout timer in per-source facts fetching
  • Fix ProxmoxIntegration.ts - fix double proxmox: prefix in nodeUri (use target directly)
  • Fix ProxmoxClient.ts - normalize HTTP response headers, URL-encode task ID and node in waitForTask()
  • Fix migrations 002 and 007 - replace SQLite-specific datetime('now') with portable CURRENT_TIMESTAMP
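The dialect-aware placeholder fix can be sketched as follows. This stand-alone helper is illustrative only — the real code calls `db.getPlaceholder()` on the adapter instance, per the fix list above:

```typescript
// Illustrative sketch of dialect-aware SQL placeholders.
// SQLite uses positional "?"; Postgres uses numbered "$1", "$2", ...
type Dialect = "sqlite" | "postgres";

function getPlaceholder(dialect: Dialect, index: number): string {
  return dialect === "sqlite" ? "?" : `$${index}`;
}

// Build the same INSERT for either backend (table/columns are examples).
function buildInsert(dialect: Dialect): string {
  const p1 = getPlaceholder(dialect, 1);
  const p2 = getPlaceholder(dialect, 2);
  return `INSERT INTO executions (id, status) VALUES (${p1}, ${p2})`;
}
```

Hardcoding `?` works only for SQLite, which is why the queries listed above had to be switched to the adapter-provided placeholder.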


alvagante and others added 11 commits March 11, 2026 18:14
- Delete orphaned database schema files (audit-schema.sql, migrations.sql, rbac-schema.sql)
- Rename initial migration to 000_initial_schema.sql for clarity
- Update DatabaseService to load only numbered migrations instead of base schema files
- Move all schema definitions to the migrations directory as single source of truth
- Add documentation in copilot-instructions.md explaining migration-first approach
- Create internal task documentation for database schema cleanup process
- Eliminate duplicate table definitions that existed across base schemas and migrations
…a file

- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by avoiding need to update Dockerfile when new database files are added
…tern

- Add new Proxmox integration with ProxmoxClient, ProxmoxService, and ProxmoxIntegration classes
- Implement Proxmox API client with authentication and resource management capabilities
- Add comprehensive test coverage for Proxmox integration and service layer
- Update IntegrationManager to register and manage Proxmox integration
- Add dedicated Proxmox routes handler for API endpoints
- Update integration types to include Proxmox configuration schema
- Refactor ConfigService and schema to support Proxmox settings
- Update server configuration to include Proxmox routes
- Add Kiro specification files for puppet-pabawi refactoring workflow
- Update vitest configuration for improved test execution
- Improves infrastructure flexibility by adding virtualization platform support alongside existing integrations
…ycle management

- Add ProvisionPage with dynamic form generation for VM and LXC creation
- Add ManageTab component for node lifecycle actions (start, stop, reboot, destroy)
- Add ProxmoxProvisionForm and ProxmoxSetupGuide components for integration setup
- Add formGenerator utility for dynamic form rendering based on capability metadata
- Add permissions system for RBAC-aware UI element visibility
- Add comprehensive validation and error handling for provisioning operations
- Add test utilities and generators for provisioning-related tests
- Add documentation for Proxmox setup, provisioning, permissions, and management workflows
- Add Kiro specification files for Proxmox frontend UI and integration features
- Update Navigation component to include new Provision page route
- Update IntegrationSetupPage to support Proxmox configuration
- Update API client with provisioning endpoints and type definitions
- Update package.json with required dependencies
- Move 7 completed task documents from .kiro/todo to .kiro/done directory
- Add comprehensive REMAINING_TODOS_REPORT.md with prioritized task breakdown
- Include test failure analysis, RBAC issues, and environment configuration items
- Add SQLite test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update frontend logger with minor improvements
- Consolidate task tracking and provide clear roadmap for remaining work
- Add getNodes() method to retrieve PVE cluster nodes with status and resource info
- Add getNextVMID() method to fetch next available VM ID from cluster
- Add getISOImages() method to list ISO images available on node storage
- Add getTemplates() method to list OS templates available on node storage
- Implement caching for node and resource queries (60-120 second TTL)
- Add corresponding integration layer methods to expose Proxmox service capabilities
- Update frontend navigation and routing to support new provisioning workflows
- Enhance ProxmoxProvisionForm with node selection and resource discovery UI
- Update API client and type definitions for provisioning operations
- Improve error handling and logging across Proxmox integration layer
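The 60–120 second TTL caching described above can be sketched as a small cache class; `TtlCache` and its API are illustrative, not the actual ProxmoxService code:

```typescript
// Minimal TTL cache sketch; the `now` parameter is injectable for testing.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now: number = Date.now()): T | undefined {
    const hit = this.store.get(key);
    if (!hit || hit.expiresAt <= now) {
      // Expired or missing: evict and report a miss so callers re-fetch.
      this.store.delete(key);
      return undefined;
    }
    return hit.value;
  }
}
```

A node-list query would check the cache first and only call the Proxmox API on a miss, which is what keeps repeated resource-discovery requests cheap.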
- Update foundFilter initial state from 'all' to 'found' for better UX
- Aligns with filterMode default behavior to show relevant data by default
- Reduces noise by filtering out not-found entries on initial load
…ved API handling

- Fix ProxmoxClient to use form-urlencoded encoding for POST/PUT/DELETE requests instead of JSON, matching Proxmox API requirements
- Add detailed error messages in API responses by including response body text for better diagnostics
- Add getStorages() method to ProxmoxService and ProxmoxIntegration to query available storage devices on nodes with optional content type filtering
- Add getNetworkBridges() method to ProxmoxService and ProxmoxIntegration to query network interfaces on nodes with bridge type filtering
- Implement caching for both storage and network queries with 120-second TTL to reduce API calls
- Update ProxmoxProvisionForm frontend component to use new storage and network discovery endpoints
- Extend provisioning types to support storage and network configuration options
- Update API client to expose new storage and network discovery endpoints
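The form-urlencoded requirement above can be illustrated with a small helper; the function name `encodeForm` and the example fields are illustrative, not the actual ProxmoxClient code:

```typescript
// Encode a flat parameter object as application/x-www-form-urlencoded,
// which the Proxmox API expects for write requests (per the fix above).
function encodeForm(params: Record<string, string | number | boolean>): string {
  return new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)])
  ).toString();
}

// Usage sketch (fetch call shown for shape only):
// await fetch(url, {
//   method: "POST",
//   headers: { "Content-Type": "application/x-www-form-urlencoded" },
//   body: encodeForm({ vmid: 101, node: "pve1", start: true }),
// });
```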
… and code quality improvements

Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- DatabaseAdapter interface with query, execute, transaction support
- SQLiteAdapter: full implementation with WAL mode, foreign keys
- PostgresAdapter: full implementation with pg pool, transaction client
- AdapterFactory: creates correct adapter based on DB_TYPE env var
- Error types: DatabaseQueryError, DatabaseConnectionError
- Tests for all three components (39 tests passing)
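A minimal sketch of the `DB_TYPE`-driven selection that AdapterFactory performs, assuming only the env variables and error type named above; the shapes here are illustrative, not the factory's actual code:

```typescript
// Illustrative env-driven dialect selection (sqlite default,
// postgres requires a connection string).
class DatabaseConnectionError extends Error {}

type DbDialect = "sqlite" | "postgres";

function selectDialect(env: Record<string, string | undefined>): DbDialect {
  const dbType = (env.DB_TYPE ?? "sqlite").toLowerCase();
  if (dbType === "sqlite") return "sqlite";
  if (dbType === "postgres") {
    if (!env.DATABASE_URL) {
      throw new DatabaseConnectionError("DATABASE_URL is required when DB_TYPE=postgres");
    }
    return "postgres";
  }
  throw new DatabaseConnectionError(`Unsupported DB_TYPE: ${dbType}`);
}
```

Failing fast with a typed connection error (rather than falling back silently) keeps misconfiguration visible at startup.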
Copilot AI review requested due to automatic review settings March 15, 2026 15:28
Copilot AI (Contributor) left a comment


Pull request overview

This PR introduces a database-adapter abstraction (SQLite + PostgreSQL), migrates services/tests to the new adapter API, and adds Proxmox integration endpoints/capabilities (including lifecycle actions and provisioning discovery), along with version bumps and Docker packaging updates.

Changes:

  • Replace direct sqlite3 usage with a new DatabaseAdapter interface, plus SQLiteAdapter and PostgresAdapter, updating services and tests accordingly.
  • Add Proxmox integration wiring (server registration, provisioning discovery endpoint, node lifecycle/destroy endpoints) and enhance inventory node linking to retain per-source identifiers.
  • Update build/Docker packaging to ship migration files (migration-first), add pg dependency, and bump version labels.

Reviewed changes

Copilot reviewed 87 out of 199 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
backend/test/integration/batch-execution.test.ts Switch test DB usage to SQLiteAdapter/DatabaseAdapter helpers.
backend/test/integration/auth-flow.test.ts Update direct SQL execution to new adapter execute().
backend/test/database/rbac-schema.test.ts Migrate schema verification tests to adapter queryOne/execute.
backend/test/database/migration-integration.test.ts Replace sqlite callback wrappers with adapter query/queryOne.
backend/test/database/index-verification.test.ts Use adapter queries for index/pragma validation checks.
backend/test/database/SQLiteAdapter.test.ts Add test coverage for SQLiteAdapter lifecycle/query/transactions.
backend/test/database/PostgresAdapter.test.ts Add basic instantiation/placeholder/init-failure tests for PostgresAdapter.
backend/test/database/MigrationRunner.test.ts Update migration runner tests for adapter + dialect-specific migration selection.
backend/test/database/ExecutionRepository.test.ts Replace sqlite3 DB with adapter-backed in-memory DB.
backend/test/database/AdapterFactory.test.ts Add tests for adapter factory + new DB error types.
backend/test/UserService.test.ts Migrate to adapter + adjust role assignment expectations (default Viewer role).
backend/test/UserService.password.test.ts Migrate to adapter + seed Viewer role; execute schema via adapter.
backend/test/RoleService.test.ts Migrate to adapter; execute schema statements via adapter.
backend/test/GroupService.test.ts Migrate to adapter; simplify query helpers and schema init.
backend/src/services/UserService.ts Replace sqlite3-specific DB helpers with DatabaseAdapter methods.
backend/src/services/SetupService.ts Replace sqlite callbacks with adapter queryOne/execute + structured logging.
backend/src/services/RoleService.ts Replace sqlite helpers with adapter execute/query/queryOne.
backend/src/services/PermissionService.ts Replace sqlite helpers with adapter execute/query/queryOne.
backend/src/services/GroupService.ts Replace sqlite helpers with adapter execute/query/queryOne.
backend/src/services/BatchExecutionService.ts Replace sqlite callbacks with adapter calls; simplify DB actions.
backend/src/services/AuthenticationService.ts Replace sqlite helpers with adapter calls for auth/lockout tables.
backend/src/services/AuditLoggingService.ts Replace sqlite helpers with adapter calls; return changes via adapter execute.
backend/src/server.ts Register Proxmox plugin when configured; add initialization logging.
backend/src/routes/streaming.ts Allow EventSource auth via token query param -> Authorization header fallback.
backend/src/routes/inventory.ts Add Proxmox node lifecycle and destroy endpoints.
backend/src/routes/integrations/status.ts Add “proxmox not configured” status entry for UI.
backend/src/routes/integrations/provisioning.ts New provisioning discovery endpoint for frontend integration listing.
backend/src/routes/integrations.ts Mount Proxmox + provisioning routers with optional auth/RBAC.
backend/src/middleware/rbacMiddleware.ts Switch middleware DB dependency to DatabaseAdapter.
backend/src/middleware/authMiddleware.ts Switch middleware DB dependency to DatabaseAdapter.
backend/src/integrations/types.ts Add ProvisioningCapability and parameter validation metadata.
backend/src/integrations/proxmox/types.ts New Proxmox integration type definitions and error classes.
backend/src/integrations/proxmox/ProxmoxIntegration.ts New Proxmox plugin implementing inventory + execution tool behaviors.
backend/src/integrations/NodeLinkingService.ts Extend linked nodes to keep per-source IDs/URIs; adjust identifier extraction.
backend/src/integrations/IntegrationManager.ts Return LinkedNode[] in aggregated inventory; add provisioning capability aggregation.
backend/src/database/rbac-schema.sql Remove legacy RBAC schema file (migration-first).
backend/src/database/migrations/000_initial_schema.sql Add header docs; align with migration-first approach.
backend/src/database/migrations.sql Remove legacy monolithic migrations file (migration-first).
backend/src/database/errors.ts New DB error classes for query/connection errors.
backend/src/database/audit-schema.sql Remove legacy audit schema file (migration-first).
backend/src/database/SQLiteAdapter.ts New sqlite-backed DatabaseAdapter implementation (WAL/FKs + query helpers).
backend/src/database/PostgresAdapter.ts New pg-backed DatabaseAdapter implementation with tx support.
backend/src/database/MigrationRunner.ts Switch to adapter; add dialect-specific migration file selection.
backend/src/database/ExecutionRepository.ts Switch to adapter; remove sqlite callback wrappers.
backend/src/database/DatabaseService.ts Use adapter factory; simplify init to “migrations only” approach.
backend/src/database/DatabaseAdapter.ts New unified DB interface shared across services.
backend/src/database/AdapterFactory.ts New factory selecting sqlite vs postgres based on env.
backend/src/config/schema.ts Add Zod schema/types for Proxmox integration config.
backend/src/config/ConfigService.ts Parse Proxmox env vars into integration config structure.
backend/package.json Bump version, add pg deps, update build to copy migrations directory.
Dockerfile.ubuntu Update image label version.
Dockerfile.alpine Update image label version.
Dockerfile Update image label version; copy full database dir into dist in container.
CLAUDE.md Update docs to reflect migration-first DB schema approach.
.kiro/todo/proxmox-restart-required.md Add operational note about restarting server to clear cached code.
.kiro/todo/expert-mode-prototype-pollution.md Add security TODO note about prototype-pollution sanitization.
.kiro/todo/REMAINING_TODOS_REPORT.md Add report summarizing remaining work/test failures.
.kiro/steering/security-best-practices.md Add allowlist-secret guidance note.
.kiro/specs/pabawi-release-1-0-0/.config.kiro Add spec config metadata for release work.
.kiro/specs/090/puppet-pabawi-refactoring/tasks.md Add refactoring plan/tasks document.
.kiro/specs/090/puppet-pabawi-refactoring/requirements.md Add requirements document for Puppet module refactor.
.kiro/specs/090/puppet-pabawi-refactoring/.config.kiro Add spec config metadata for Puppet refactor.
.kiro/specs/090/proxmox-integration/.config.kiro Add spec config metadata for Proxmox integration.
.kiro/specs/090/proxmox-frontend-ui/requirements.md Add frontend requirements spec for provisioning UI.
.kiro/specs/090/proxmox-frontend-ui/.config.kiro Add spec config metadata for frontend UI feature.
.kiro/done/proxmox-ssl-fix.md Document completed Proxmox SSL fix.
.kiro/done/provisioning-endpoint-fix.md Document completed provisioning endpoint addition.
.kiro/done/node-linking-redesign.md Document completed node linking redesign.
.kiro/done/docker-missing-schema-files.md Document completed Docker schema packaging fix.
.kiro/done/database-schema-cleanup-task.md Document DB migration-first cleanup completion.
.kiro/database-cleanup-prompt.md Add DB cleanup prompt/context document.
.github/copilot-instructions.md Update DB schema docs to migration-first guidance.

const result = await proxmoxTool.executeAction({
  type: "task",
  target: nodeId,
  action: "destroy_vm",
Comment on lines +52 to +63
// Handle token from query parameter (EventSource doesn't support headers)
// Only move to Authorization header when no Authorization header is already present,
// then remove from query to reduce the chance of it being logged downstream.
if (
  typeof req.query.token === "string" &&
  !req.headers.authorization
) {
  req.headers.authorization = `Bearer ${req.query.token}`;
  // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
  delete (req.query as Record<string, unknown>).token;
}

Comment on lines +49 to +60
// Determine integration status based on health check
let status: 'connected' | 'degraded' | 'not_configured' = 'not_configured';
const healthCheck = proxmox.getLastHealthCheck();

if (healthCheck) {
  if (healthCheck.healthy) {
    status = 'connected';
  } else if (healthCheck.message?.includes('not initialized') || healthCheck.message?.includes('disabled')) {
    status = 'not_configured';
  } else {
    status = 'degraded';
  }
Backend:
- AWS plugin with EC2 inventory, provisioning, lifecycle, health check
- Journal service with timeline aggregation and note support
- Integration config service with AES-256-GCM encryption and merge
- Proxmox VM/LXC compute type routing and separate create methods
- API routes for journal, integration config, and AWS endpoints
- ConfigService and schema updated for AWS env vars
- Database migrations for journal_entries and integration_configs

Frontend:
- AWS provisioning form and setup guide
- Proxmox VM and LXC provision forms (split from single form)
- Journal timeline component with note form
- Integration config management page
- RBAC UI updated for new permission types
- Navigation and routing updates

Fixes:
- Markdown table formatting in docs/integrations/aws.md
- Allowlisted example AWS keys in AWSSetupGuide.svelte
- Updated .secrets.baseline
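The AES-256-GCM encryption mentioned above can be sketched with Node's built-in `crypto` module. Key handling, the `iv.tag.ciphertext` payload layout, and the function names are assumptions for illustration, not the actual IntegrationConfigService code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Encrypt a config blob with AES-256-GCM; returns base64 "iv.tag.ciphertext".
function encryptConfig(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authenticates ciphertext integrity
  return [iv, tag, ct].map((b) => b.toString("base64")).join(".");
}

function decryptConfig(payload: string, key: Buffer): string {
  const [iv, tag, ct] = payload.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // must be set before final() or it throws
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means a tampered ciphertext fails decryption loudly instead of yielding garbage, which matters for stored credentials.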
Copilot AI review requested due to automatic review settings March 16, 2026 11:33
Copilot AI (Contributor) left a comment


Pull request overview

Adds “Version 1” backend foundations for provisioning integrations (Proxmox + AWS), node linking improvements, and a migration-first, adapter-based database layer (SQLite/Postgres), with packaging updates for Docker builds.

Changes:

  • Introduces Proxmox and AWS provisioning integrations and related API routes (including provisioning discovery).
  • Refactors inventory node deduplication into a linking model with per-source node identity (sourceData), and updates aggregation types accordingly.
  • Reworks database access around a DatabaseAdapter abstraction, adds Postgres support, and moves schema fully into numbered migrations; updates Docker/build scripts to ship migrations.

Reviewed changes

Copilot reviewed 70 out of 239 changed files in this pull request and generated 15 comments.

Show a summary per file
File Description
backend/src/routes/integrations/provisioning.ts New endpoint to discover provisioning integrations and capabilities.
backend/src/routes/integrations/aws.ts New AWS integration API routes for inventory/provision/lifecycle/resource discovery.
backend/src/routes/integrations.ts Mounts new Proxmox + provisioning routers; adds RBAC/auth gating.
backend/src/routes/integrationConfig.ts New integration-config CRUD routes, using auth/RBAC.
backend/src/middleware/rbacMiddleware.ts Switches RBAC middleware DB type to DatabaseAdapter.
backend/src/middleware/authMiddleware.ts Switches auth middleware DB type to DatabaseAdapter.
backend/src/integrations/types.ts Adds ProvisioningCapability and capability parameter validation metadata.
backend/src/integrations/proxmox/types.ts Adds Proxmox integration types + error classes.
backend/src/integrations/proxmox/ProxmoxIntegration.ts New Proxmox plugin implementation (inventory/actions/provisioning/journal).
backend/src/integrations/proxmox/ProxmoxClient.ts New Proxmox API client with retries and task polling.
backend/src/integrations/aws/types.ts Adds AWS integration types + AWSAuthenticationError.
backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts Adds health check unit tests for AWS plugin.
backend/src/integrations/aws/AWSPlugin.ts New AWS plugin implementation (inventory/actions/provisioning/journal).
backend/src/integrations/NodeLinkingService.ts Extends linked nodes with per-source identity (sourceData) and alters linking behavior.
backend/src/integrations/IntegrationManager.ts Inventory now returns LinkedNode[]; adds provisioning capability aggregation.
backend/src/database/rbac-schema.sql Removes legacy RBAC schema file (migration-first).
backend/src/database/audit-schema.sql Removes legacy audit schema file (migration-first).
backend/src/database/migrations.sql Removes legacy aggregated migrations file (migration-first).
backend/src/database/migrations/000_initial_schema.sql Documents initial schema migration (header).
backend/src/database/migrations/007_permissions_and_provisioner_role.sql Seeds new permissions/roles for provisioning/journal/config resources.
backend/src/database/migrations/008_journal_entries.sql Adds journal_entries table migration.
backend/src/database/migrations/009_integration_configs.sql Adds per-user integration_configs table migration.
backend/src/database/errors.ts Adds typed DB error classes for adapters.
backend/src/database/DatabaseAdapter.ts Introduces unified DB adapter interface.
backend/src/database/AdapterFactory.ts Adds adapter factory controlled by DB_TYPE / DATABASE_URL.
backend/src/database/SQLiteAdapter.ts Implements DatabaseAdapter using sqlite3.
backend/src/database/PostgresAdapter.ts Implements DatabaseAdapter using pg.
backend/src/database/MigrationRunner.ts Refactors migration runner to use DatabaseAdapter + dialect-specific migration selection.
backend/src/database/ExecutionRepository.ts Refactors repository to use DatabaseAdapter (still sqlite-style SQL).
backend/src/database/DatabaseService.ts Refactors service to create/init adapter and run migrations only.
backend/src/config/schema.ts Adds config schemas for Proxmox + AWS integrations.
backend/src/config/ConfigService.ts Parses Proxmox/AWS config from env; adds getAWSConfig().
backend/package.json Adds AWS SDK + pg deps; updates build to ship migrations directory.
Dockerfile Copies full DB directory into image; bumps version label.
Dockerfile.alpine Bumps version label.
Dockerfile.ubuntu Bumps version label.
CLAUDE.md Updates docs to reflect migration-first DB approach.
.pre-commit-config.yaml Tightens duplicate/backup filename detection regex.
.kiro/todo/proxmox-restart-required.md Adds operational note about restarting backend to pick up changes.
.kiro/todo/expert-mode-prototype-pollution.md Adds security TODO doc for prototype pollution risk.
.kiro/todo/REMAINING_TODOS_REPORT.md Adds TODO status report.
.kiro/steering/security-best-practices.md Updates security steering doc with allowlist-secret guidance.
.kiro/specs/pabawi-release-1-0-0/.config.kiro Adds spec config metadata.
.kiro/specs/090/puppet-pabawi-refactoring/tasks.md Adds implementation plan for puppet module refactoring.
.kiro/specs/090/puppet-pabawi-refactoring/requirements.md Adds requirements for puppet module refactoring.
.kiro/specs/090/puppet-pabawi-refactoring/.config.kiro Adds spec config metadata.
.kiro/specs/090/proxmox-integration/requirements.md Adds Proxmox integration requirements spec.
.kiro/specs/090/proxmox-integration/.config.kiro Adds spec config metadata.
.kiro/specs/090/proxmox-frontend-ui/tasks.md Adds frontend UI implementation plan spec.
.kiro/specs/090/proxmox-frontend-ui/requirements.md Adds frontend UI requirements spec.
.kiro/specs/090/proxmox-frontend-ui/.config.kiro Adds spec config metadata.
.kiro/done/proxmox-ssl-fix.md Adds “done” note about Proxmox SSL handling approach.
.kiro/done/provisioning-endpoint-fix.md Adds “done” note about provisioning endpoint addition.
.kiro/done/node-linking-redesign.md Adds “done” note about node linking redesign.
.kiro/done/docker-missing-schema-files.md Adds “done” note about Docker copying DB artifacts.
.kiro/done/database-schema-cleanup-task.md Adds “done” note about migration-first DB cleanup.
.kiro/database-cleanup-prompt.md Adds internal prompt doc for DB cleanup work.
.github/copilot-instructions.md Updates repo instructions to reflect migration-first DB approach.

Comment on lines 188 to +193
private async recordMigration(migration: MigrationFile): Promise<void> {
  const now = new Date().toISOString();

  return new Promise((resolve, reject) => {
    this.db.run(
      "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
      [migration.id, migration.filename, now],
      (err) => {
        if (err) {
          reject(new Error(`Failed to record migration: ${err.message}`));
        } else {
          resolve();
        }
      }
    );
  });
  await this.db.execute(
    "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
    [migration.id, migration.filename, now]
  );
Comment on lines +23 to 27
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}
this.ensureInitialized();

const startedAt = new Date().toISOString();
const target = Array.isArray(action.target) ? action.target[0] : action.target;
return {
  id: `aws-provision-${Date.now()}`,
  type: "task",
  targetNodes: [target],
Comment on lines +316 to +317
{
  nodeId: instanceId,
Comment on lines +112 to +114
const linkedNode: LinkedNode = {
  id: primaryName, // Use name as primary ID for lookups
  name: primaryName,
Comment on lines +63 to +76
if (db) {
  const authMiddleware = createAuthMiddleware(db, jwtSecret);
  const rbacMiddleware = createRbacMiddleware(db);

  router.use(
    "/provisioning",
    asyncHandler(authMiddleware),
    asyncHandler(rbacMiddleware('provisioning', 'read')),
    createProvisioningRouter(integrationManager)
  );
} else {
  // Fallback for cases where database is not available (e.g., tests)
  router.use("/provisioning", createProvisioningRouter(integrationManager));
}
Comment on lines +35 to +39
const configService = new IntegrationConfigService(
  databaseService.getConnection(),
  process.env.JWT_SECRET ?? "",
);
const authMiddleware = createAuthMiddleware(databaseService.getConnection());
Comment on lines +50 to +61
// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
  this.logger.warn(
    "Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
    "Proxmox connections will use the default TLS verification. " +
    "Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
    {
      component: "ProxmoxClient",
      operation: "constructor",
    }
  );
Comment on lines +2 to +3
import { z } from "zod";
import { ZodError } from "zod";
Copilot AI review requested due to automatic review settings March 20, 2026 07:38
Copilot AI (Contributor) left a comment


Pull request overview

Adds initial provisioning/integration support (Proxmox + AWS), expands inventory/facts aggregation behavior, and refactors the database layer toward an adapter-based, migration-first approach (with optional PostgreSQL support).

Changes:

  • Introduce provisioning discovery + AWS integration routes, plus Proxmox integration plugin/types/tests.
  • Refactor database access behind a DatabaseAdapter abstraction, add SQLite/PostgreSQL adapters, and migrate schema management to migrations/.
  • Update inventory node linking model and add multi-source facts endpoint with per-source timeouts.
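The per-source timeout mentioned above can be sketched as a `Promise.race` helper; `withTimeout` is an illustrative name, not the actual facts.ts code. Clearing the timer in `finally` matches the timer-leak fix noted earlier in this PR:

```typescript
// Race a source's fetch against a timeout, and always clear the timer
// so a fast resolution doesn't leave a dangling timeout behind.
async function withTimeout<T>(p: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  try {
    return await Promise.race([
      p,
      new Promise<never>((_, reject) => {
        timer = setTimeout(
          () => reject(new Error(`${label} timed out after ${ms}ms`)),
          ms
        );
      }),
    ]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

Gathering facts would then map each source through `withTimeout` and run them in parallel, so one slow source cannot stall the whole `/:id/facts` response.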

Reviewed changes

Copilot reviewed 72 out of 250 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
backend/src/routes/integrations/provisioning.ts New endpoint to list provisioning integrations + capabilities based on health state.
backend/src/routes/integrations/aws.ts New AWS integration API routes (inventory, provisioning, lifecycle, region/resources discovery).
backend/src/routes/integrations.ts Mount Proxmox + provisioning routers; update DB type to DatabaseAdapter for auth/RBAC wiring.
backend/src/routes/integrationConfig.ts New routes to CRUD per-user integration configs with auth + RBAC.
backend/src/routes/facts.ts Add /:id/facts endpoint to gather facts from all sources in parallel with timeouts.
backend/src/middleware/rbacMiddleware.ts Switch RBAC middleware DB dependency to DatabaseAdapter.
backend/src/middleware/authMiddleware.ts Switch auth middleware DB dependency to DatabaseAdapter.
backend/src/integrations/types.ts Add ProvisioningCapability and extend capability parameter metadata with validation.
backend/src/integrations/proxmox/types.ts New Proxmox integration domain types + error classes.
backend/src/integrations/proxmox/tests/ProxmoxIntegration.test.ts Unit tests for ProxmoxIntegration behaviors and journaling hooks.
backend/src/integrations/proxmox/ProxmoxIntegration.ts New Proxmox plugin implementing InformationSource + ExecutionTool + journaling.
backend/src/integrations/proxmox/ProxmoxClient.ts New Proxmox API HTTP client with retry + task polling.
backend/src/integrations/aws/types.ts New AWS integration types + AWSAuthenticationError.
backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts Unit tests for AWSPlugin health check behavior.
backend/src/integrations/NodeLinkingService.ts Extend linked nodes to preserve per-source IDs/URIs via sourceData; adjust identifier extraction.
backend/src/integrations/IntegrationManager.ts Inventory now returns LinkedNode[]; add provisioning capability aggregation; add per-source inventory timeout; delegate dedup to NodeLinkingService.
backend/src/database/rbac-schema.sql Remove legacy RBAC schema file (migration-first).
backend/src/database/migrations/009_integration_configs.sql Add migration for integration_configs table and indexes.
backend/src/database/migrations/008_journal_entries.sql Add migration for journal_entries table and indexes.
backend/src/database/migrations/007_permissions_and_provisioner_role.sql Seed new permissions + Provisioner role; expand admin permissions.
backend/src/database/migrations/000_initial_schema.sql Add migration header/comments for initial schema.
backend/src/database/migrations.sql Remove legacy adhoc migrations file (migration-first).
backend/src/database/errors.ts New typed DB error classes for query/connection failures.
backend/src/database/audit-schema.sql Remove legacy audit schema file (migration-first).
backend/src/database/SQLiteAdapter.ts Add SQLite DatabaseAdapter implementation (WAL + FK).
backend/src/database/PostgresAdapter.ts Add Postgres DatabaseAdapter implementation with pool + transactions.
backend/src/database/MigrationRunner.ts Refactor migration runner to use DatabaseAdapter and support dialect-specific migration filenames.
backend/src/database/ExecutionRepository.ts Refactor repository to use DatabaseAdapter methods instead of sqlite3 callbacks.
backend/src/database/DatabaseService.ts Refactor service to initialize via adapter factory, run migrations, and expose adapter.
backend/src/database/DatabaseAdapter.ts Define unified DB adapter interface used by services/middleware.
backend/src/database/AdapterFactory.ts Add env-driven adapter selection (sqlite default, postgres with DATABASE_URL).
backend/src/config/schema.ts Add Proxmox + AWS integration configuration schemas to config model.
backend/src/config/ConfigService.ts Parse Proxmox + AWS configs from environment; add getAWSConfig().
backend/package.json Bump version; add AWS SDK + pg deps; adjust build script to copy migrations directory.
Dockerfile.ubuntu Update OCI image version label.
Dockerfile.alpine Update OCI image version label.
Dockerfile Update OCI image version label; change openbolt install step; copy full database directory; add PROXMOX/AWS env flags.
CLAUDE.md Update documentation to reflect migration-first DB schema.
.pre-commit-config.yaml Tighten duplicate/backup filename regex.
.kiro/todo/proxmox-restart-required.md New internal note about cached server code / restart.
.kiro/todo/expert-mode-prototype-pollution.md New internal note documenting prototype pollution finding.
.kiro/todo/REMAINING_TODOS_REPORT.md New report summarizing remaining TODOs and test failures.
.kiro/steering/security-best-practices.md Document detect-secrets allowlist pragma usage.
.kiro/steering/git-best-practices.md Document pre-commit hooks and allowlist pragma usage.
.kiro/specs/pabawi-release-1-0-0/.config.kiro Add spec config metadata file.
.kiro/specs/090/puppet-pabawi-refactoring/tasks.md Add refactoring implementation plan doc.
.kiro/specs/090/puppet-pabawi-refactoring/requirements.md Add refactoring requirements doc.
.kiro/specs/090/puppet-pabawi-refactoring/.config.kiro Add spec config metadata file.
.kiro/specs/090/proxmox-integration/requirements.md Add Proxmox integration requirements doc.
.kiro/specs/090/proxmox-integration/.config.kiro Add spec config metadata file.
.kiro/specs/090/proxmox-frontend-ui/tasks.md Add Proxmox frontend UI implementation plan doc.
.kiro/specs/090/proxmox-frontend-ui/requirements.md Add Proxmox frontend UI requirements doc.
.kiro/specs/090/proxmox-frontend-ui/.config.kiro Add spec config metadata file.
.kiro/done/proxmox-ssl-fix.md Add internal note describing prior SSL fix approach.
.kiro/done/provisioning-endpoint-fix.md Add internal note describing provisioning endpoint fix.
.kiro/done/node-linking-redesign.md Add internal note describing node linking redesign.
.kiro/done/docker-missing-schema-files.md Add internal note describing Docker schema copy fix.
.kiro/done/database-schema-cleanup-task.md Add internal note documenting migration-first DB cleanup.
.kiro/database-cleanup-prompt.md Add internal prompt describing DB cleanup task.
.github/copilot-instructions.md Update project guidance to reflect migration-first schema policy.

build-essential \
ansible \
&& gem install openbolt -v 5.1.0 --no-document \
openbolt \

Copilot AI Mar 20, 2026

This line appears to execute openbolt during the image build rather than installing it (the previous gem install openbolt -v ... was removed). This will likely fail the Docker build (command not found / non-zero exit) and also makes the build non-reproducible. Install the gem explicitly (pin version if needed) or install via the appropriate package manager, but avoid invoking the runtime binary in a RUN install step.

const eventType = this.mapActionToEventType(action.action);
const entry: CreateJournalEntry = {
nodeId: target,
nodeUri: `proxmox:${target}`,

Copilot AI Mar 20, 2026

`target` is already a Proxmox node identifier like `proxmox:pve1:100`, so `` nodeUri: `proxmox:${target}` `` produces `proxmox:proxmox:pve1:100`. This will break downstream consumers that expect a stable URI format. Use the actual node URI (e.g., from the node data), or avoid double-prefixing by storing `target` directly if it already includes the scheme/prefix.

Suggested change
nodeUri: `proxmox:${target}`,
nodeUri: target,

Comment on lines +49 to +61
// NOTE: Setting NODE_TLS_REJECT_UNAUTHORIZED is process-wide and would disable TLS verification
// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
this.logger.warn(
"Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
"Proxmox connections will use the default TLS verification. " +
"Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
{
component: "ProxmoxClient",
operation: "constructor",
}
);

Copilot AI Mar 20, 2026

The integration accepts ssl.rejectUnauthorized=false (and ProxmoxIntegration logs that TLS verification is disabled), but the client currently does not apply any TLS override—so self-signed Proxmox setups will still fail. Either: (1) implement a supported per-request TLS configuration for Node's fetch (e.g., undici dispatcher/Agent) and ensure the dependency is present, or (2) explicitly document and/or implement the process-wide NODE_TLS_REJECT_UNAUTHORIZED=0 approach with clear warnings. Right now the config option is effectively non-functional.

Suggested change
// NOTE: Setting NODE_TLS_REJECT_UNAUTHORIZED is process-wide and would disable TLS verification
// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
this.logger.warn(
"Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
"Proxmox connections will use the default TLS verification. " +
"Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
{
component: "ProxmoxClient",
operation: "constructor",
}
);
// NOTE: Ideally we would configure TLS verification per-client (e.g. via a custom HTTPS
// agent / undici dispatcher). Given the current architecture and constraints, we fall
// back to the process-wide NODE_TLS_REJECT_UNAUTHORIZED mechanism when explicitly
// requested via config.ssl.rejectUnauthorized === false.
//
// This is inherently dangerous because it disables TLS verification for ALL HTTPS
// traffic in this Node.js process, not just Proxmox. We therefore:
// - Only change NODE_TLS_REJECT_UNAUTHORIZED when it is unset;
// - Log loud warnings about the security implications;
// - Encourage configuring a proper CA (config.ssl.ca) instead whenever possible.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
const currentTlsSetting = process.env.NODE_TLS_REJECT_UNAUTHORIZED;
if (currentTlsSetting === undefined) {
// Only set the env var when it is not already defined, to avoid surprising
// behavior if the operator configured it explicitly at process start.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
this.logger.warn(
"Proxmox ssl.rejectUnauthorized=false is set. NODE_TLS_REJECT_UNAUTHORIZED was not set; " +
"it has now been set to '0', disabling TLS certificate verification for ALL HTTPS " +
"connections in this process. This is insecure and should only be used in trusted " +
"environments. Prefer configuring a trusted CA certificate via ssl.ca whenever possible.",
{
component: "ProxmoxClient",
operation: "constructor",
}
);
} else if (currentTlsSetting === "0") {
// TLS verification is already globally disabled; align with that and log at info level.
this.logger.info(
"Proxmox ssl.rejectUnauthorized=false is set and NODE_TLS_REJECT_UNAUTHORIZED is already '0'. " +
"TLS certificate verification is disabled process-wide.",
{
component: "ProxmoxClient",
operation: "constructor",
}
);
} else {
// The operator explicitly configured a different global setting. Respect it, but make
// clear that the Proxmox option cannot override it.
this.logger.warn(
"Proxmox ssl.rejectUnauthorized=false is set, but NODE_TLS_REJECT_UNAUTHORIZED is already " +
`set to '${currentTlsSetting}'. The global setting will take precedence and TLS ` +
"certificate verification will NOT be disabled unless NODE_TLS_REJECT_UNAUTHORIZED is '0'.",
{
component: "ProxmoxClient",
operation: "constructor",
}
);
}
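The per-client TLS override the original comment prefers can be sketched with a dedicated HTTPS agent, so only the Proxmox client skips verification while every other connection in the process keeps the default behavior. This is a sketch using the Node.js standard library, not the project's actual implementation; with global `fetch`, the equivalent mechanism is an undici `Agent` passed as `dispatcher`.

```typescript
import * as https from "node:https";

// A per-client TLS override: requests made with this agent skip certificate
// verification, but the rest of the process keeps default TLS behavior.
// This is strictly scoped, unlike NODE_TLS_REJECT_UNAUTHORIZED=0.
const insecureAgent = new https.Agent({
  rejectUnauthorized: false, // applies ONLY to requests using this agent
});

// Hypothetical usage (host name is illustrative):
// https.request(
//   "https://proxmox.example:8006/api2/json/version",
//   { agent: insecureAgent },
//   (res) => { /* ... */ },
// );
```

Preferring a scoped agent over the environment variable keeps the blast radius of disabled verification limited to the one integration that opted in.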

Comment on lines +190 to +193
await this.db.execute(
"INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
[migration.id, migration.filename, now]
);

Copilot AI Mar 20, 2026

This hardcodes SQLite-style ? placeholders. When DB_TYPE=postgres, this insert will fail because pg expects $1/$2/$3. Since DatabaseAdapter already exposes getPlaceholder(index), build the VALUES placeholder list using that API (or provide an adapter-level helper) so migration tracking works across dialects.
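The dialect-aware VALUES clause the comment describes can be sketched against the `getPlaceholder(index)` API it names. The adapter interface below is a minimal stand-in for illustration, assuming `?` for SQLite and 1-based `$n` for Postgres:

```typescript
// Minimal stand-in for the DatabaseAdapter surface referenced in the review.
interface DatabaseAdapter {
  getPlaceholder(index: number): string;
}

// Build a dialect-correct VALUES clause for `count` parameters (1-based).
function buildValuesClause(db: DatabaseAdapter, count: number): string {
  const placeholders = Array.from({ length: count }, (_, i) =>
    db.getPlaceholder(i + 1),
  );
  return `(${placeholders.join(", ")})`;
}

const sqlite: DatabaseAdapter = { getPlaceholder: () => "?" };
const postgres: DatabaseAdapter = { getPlaceholder: (i) => `$${i}` };

// Under Postgres this yields "... VALUES ($1, $2, $3)"; under SQLite, "... VALUES (?, ?, ?)".
const sql = `INSERT INTO migrations (id, name, appliedAt) VALUES ${buildValuesClause(postgres, 3)}`;
```

The same helper keeps migration tracking portable: the SQL text is derived from the active adapter instead of being hardcoded for one dialect.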

Comment on lines +451 to +452
router.get(
"/:id/facts",

Copilot AI Mar 20, 2026

Express route matching is order-dependent. If this router already defines a more general /:id (or similar) route earlier in the file, then /:id will also match /<nodeId>/facts and prevent this handler from running. To avoid accidental shadowing, register /:id/facts before any /:id-style routes, or consider a less ambiguous path shape (e.g., /:id/facts/all or a non-overlapping prefix).
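Express dispatches to the first registered pattern that matches, which is what makes registration order significant. The toy matcher below (not Express itself) makes the semantics concrete. Note that a plain single-segment `/:id` pattern does not actually match the two-segment `/<nodeId>/facts`, so the shadowing risk arises mainly between same-shape patterns or wildcard routes:

```typescript
type Route = { pattern: string; name: string };

// Return the name of the first registered route whose pattern matches `path`.
// ":param" matches exactly one path segment, mirroring Express semantics.
function dispatch(routes: Route[], path: string): string | undefined {
  const want = path.split("/").filter(Boolean);
  for (const r of routes) {
    const have = r.pattern.split("/").filter(Boolean);
    if (have.length !== want.length) continue;
    const ok = have.every((seg, i) => seg.startsWith(":") || seg === want[i]);
    if (ok) return r.name;
  }
  return undefined;
}

// A general "/:id" registered first shadows a same-length specific route:
const shadowed = dispatch(
  [{ pattern: "/:id", name: "byId" }, { pattern: "/nodes", name: "list" }],
  "/nodes",
); // matched by "byId"; the specific handler never runs

// Registering the specific pattern first avoids the shadowing:
const correct = dispatch(
  [{ pattern: "/nodes", name: "list" }, { pattern: "/:id", name: "byId" }],
  "/nodes",
);
```

With single-segment params, `/:id/facts` is safe after `/:id` (different segment counts), but registering specific routes first remains the defensive default.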

Comment on lines +484 to +489
const timeoutPromise = new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error(`Timeout after ${SOURCE_TIMEOUT_MS}ms`)),
SOURCE_TIMEOUT_MS,
),
);

Copilot AI Mar 20, 2026

The per-source timeout uses setTimeout(...) but the timer is never cleared when source.getNodeFacts() resolves first. This creates unnecessary pending timers under load (they still fire later, even though the race is already decided). Capture the timeout handle and clearTimeout(...) in a finally block around the raced await to keep the event loop/timer queue clean.
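The fix suggested here can be sketched as a small helper that races the work against a timer and always clears the timer in a `finally`, so no pending timeout survives the race. Names are illustrative, not the project's actual API:

```typescript
// Race `work` against a timeout, always clearing the timer afterwards so
// no stray setTimeout handle remains once the race is decided.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let handle: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    handle = setTimeout(() => reject(new Error(`Timeout after ${ms}ms`)), ms);
  });
  return Promise.race([work, timeout]).finally(() => {
    if (handle !== undefined) clearTimeout(handle);
  });
}

// Usage sketch: await withTimeout(source.getNodeFacts(nodeId), SOURCE_TIMEOUT_MS)
```

Because `finally` runs whether the race resolves or rejects, the timer queue stays clean under load either way.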

Comment on lines +454 to +459
const timeoutPromise = new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)),
SOURCE_TIMEOUT_MS,
),
);

Copilot AI Mar 20, 2026

Similar to the facts endpoint, this timeout promise leaves a live timer even if the source returns quickly. In inventory polling scenarios across many sources/users, this can create avoidable timer churn and log noise. Consider clearing the timeout in a finally once either branch wins the race (or switch to an AbortController-based approach so the underlying request can be cancelled where supported).
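The `AbortController` alternative mentioned here goes one step further than clearing the timer: the timeout also signals the underlying operation to stop, where the source supports it. A sketch, with `fetchFacts` standing in for a hypothetical signal-aware source call:

```typescript
// AbortController-based timeout: the timer both bounds the wait and signals
// the underlying operation to cancel. `fetchFacts` is a hypothetical
// signal-aware source call, not a real API from this codebase.
async function getFactsWithTimeout(
  fetchFacts: (signal: AbortSignal) => Promise<unknown>,
  ms: number,
): Promise<unknown> {
  const controller = new AbortController();
  const handle = setTimeout(() => controller.abort(), ms);
  try {
    return await fetchFacts(controller.signal);
  } finally {
    clearTimeout(handle); // no stray timer if the source finished first
  }
}
```

Where the source's HTTP layer accepts an `AbortSignal`, this cancels the in-flight request instead of merely abandoning it.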

…sment

- Add Azure support design and requirements specifications
- Add ManageTab lifecycle flow analysis documenting end-to-end error tracing
- Add missing lifecycle actions bugfix documentation and Kiro config
- Add workspace analysis summary and product/tech steering documents
- Update ProxmoxClient with improved integration handling
- Update inventory routes with enhanced node lifecycle actions support
- Update ManageTab component with better state management
- Update API client with refined lifecycle actions fetching
- Update NodeDetailPage with improved node detail rendering
- Add security assessment documentation for v0.10.0
Copilot AI review requested due to automatic review settings March 24, 2026 11:34
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 72 out of 264 changed files in this pull request and generated 7 comments.

// Fetch nodes and groups in parallel
// Fetch nodes and groups in parallel with per-source timeout
// Prevents a single slow source from blocking the entire inventory
const SOURCE_TIMEOUT_MS = 15_000;

Copilot AI Mar 24, 2026

The timeoutPromise created with setTimeout() is never cleared when Promise.all(...) finishes first. This can lead to unhandled rejections firing after the race is already resolved (and also leaks timers). Use a cancellable timeout (store the timer handle + clearTimeout in a finally), or use AbortController/signal-aware APIs for cancellation to avoid late rejections.

Comment on lines +454 to +464
const timeoutPromise = new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)),
SOURCE_TIMEOUT_MS,
),
);

const [nodes, groups] = await Promise.race([
Promise.all([
source.getInventory(),
source.getGroups().catch((error: unknown) => {

Copilot AI Mar 24, 2026

The timeoutPromise created with setTimeout() is never cleared when Promise.all(...) finishes first. This can lead to unhandled rejections firing after the race is already resolved (and also leaks timers). Use a cancellable timeout (store the timer handle + clearTimeout in a finally), or use AbortController/signal-aware APIs for cancellation to avoid late rejections.

Comment on lines 188 to 194
private async recordMigration(migration: MigrationFile): Promise<void> {
const now = new Date().toISOString();

return new Promise((resolve, reject) => {
this.db.run(
"INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
[migration.id, migration.filename, now],
(err) => {
if (err) {
reject(new Error(`Failed to record migration: ${err.message}`));
} else {
resolve();
}
}
);
});
await this.db.execute(
"INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
[migration.id, migration.filename, now]
);
}

Copilot AI Mar 24, 2026

MigrationRunner uses hardcoded SQLite-style ? placeholders, which will fail when the active adapter is Postgres (expects $1, $2, ...). Since DatabaseAdapter already exposes getPlaceholder(index), build this SQL using adapter placeholders (or have the adapter expose a helper to interpolate placeholders) so migrations can be recorded on both dialects.

@@ -223,7 +223,7 @@ export class ExecutionRepository {
const sql = `UPDATE executions SET ${updateFields.join(", ")} WHERE id = ?`;

Copilot AI Mar 24, 2026

ExecutionRepository still embeds SQLite ? placeholders in queries. After switching the repository to DatabaseAdapter, this will break under the Postgres adapter. Consider centralizing placeholder generation (e.g., a small helper that replaces ? with adapter.getPlaceholder(i) sequentially), or rewrite queries to use adapter-generated placeholders directly.
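The "small helper" suggested here can be sketched as a sequential rewrite of `?` markers into whatever the active adapter emits. This is illustrative only, and deliberately naive about `?` inside string literals, so it should only be applied to trusted, repository-authored SQL:

```typescript
// Rewrite "?" markers to the adapter's dialect, numbering them left to right.
// NOTE: naive about "?" inside string literals; use only on trusted,
// repository-authored SQL text.
function toDialect(
  sql: string,
  getPlaceholder: (index: number) => string,
): string {
  let n = 0;
  return sql.replace(/\?/g, () => getPlaceholder(++n));
}

// Under Postgres ($n placeholders) this yields
// "UPDATE executions SET status = $1 WHERE id = $2":
const pg = toDialect(
  "UPDATE executions SET status = ? WHERE id = ?",
  (i) => `$${i}`,
);
```

Centralizing this in one place means each repository query can keep its readable `?` form while still working under both adapters.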

@@ -247,7 +247,7 @@ export class ExecutionRepository {
const sql = "SELECT * FROM executions WHERE id = ?";

Copilot AI Mar 24, 2026

ExecutionRepository still embeds SQLite ? placeholders in queries. After switching the repository to DatabaseAdapter, this will break under the Postgres adapter. Consider centralizing placeholder generation (e.g., a small helper that replaces ? with adapter.getPlaceholder(i) sequentially), or rewrite queries to use adapter-generated placeholders directly.

Comment on lines +112 to +121
const linkedNode: LinkedNode = {
id: primaryName, // Use name as primary ID for lookups
name: primaryName,
uri: node.uri, // Will be overwritten with combined URIs
transport: node.transport,
config: node.config,
sources: [],
linked: false,
sourceData: {},
};

Copilot AI Mar 24, 2026

The previous implementation spread ...node into linkedNode, but the new object literal no longer carries over other Node fields (notably source, which is present in several integration nodes and often used downstream for UI labels/logic). If Node.source is required/relied on elsewhere, this is a behavioral regression. Either preserve the original spread (...node) and then overwrite id/name as needed, or explicitly set a stable top-level source (e.g., highest-priority source / first source) while still keeping sources + sourceData for attribution.

Comment on lines +517 to +521
// Build a Response-compatible object so the rest of the client code is unchanged
const response = new Response(bodyText, {
status: res.statusCode ?? 500,
statusText: res.statusMessage ?? "",
headers: new Headers(res.headers as Record<string, string>),

Copilot AI Mar 24, 2026

res.headers values can be string | string[] | undefined in Node, and new Headers(...) expects an iterable/record of string values. Casting to Record<string, string> can hide runtime failures (e.g., Set-Cookie arrays) or produce incorrect header values. Normalize header entries (join arrays, skip undefined) before constructing Headers to avoid runtime exceptions and preserve correct semantics.

Suggested change
// Build a Response-compatible object so the rest of the client code is unchanged
const response = new Response(bodyText, {
status: res.statusCode ?? 500,
statusText: res.statusMessage ?? "",
headers: new Headers(res.headers as Record<string, string>),
// Normalize Node.js IncomingHttpHeaders (string | string[] | undefined) to string values
const headerEntries: [string, string][] = [];
for (const [key, value] of Object.entries(res.headers)) {
if (typeof value === "string") {
headerEntries.push([key, value]);
} else if (Array.isArray(value)) {
headerEntries.push([key, value.join(", ")]);
}
// Skip undefined values
}
// Build a Response-compatible object so the rest of the client code is unchanged
const response = new Response(bodyText, {
status: res.statusCode ?? 500,
statusText: res.statusMessage ?? "",
headers: new Headers(headerEntries),

- Add ALLOW_DESTRUCTIVE_PROVISIONING environment variable to control whether destructive provisioning actions (destroy VM/LXC, terminate EC2) are allowed
- Add ProvisioningConfig schema and provisioning safety configuration to ConfigService
- Add isDestructiveProvisioningAllowed() method to ConfigService for checking if destructive actions are enabled
- Add /api/config/provisioning endpoint to expose provisioning safety settings to frontend
- Add allowDestructiveActions guard to AWS EC2 terminate endpoint, returning 403 DESTRUCTIVE_ACTION_DISABLED when disabled
- Add allowDestructiveActions guard to Proxmox destroy endpoints, returning 403 when disabled
- Pass provisioning safety options through integration routers (AWS, Proxmox) from main integrations router
- Update ManageTab.svelte frontend to respect provisioning safety configuration
- Update configuration documentation and provisioning guide with safety settings
- Update setup.sh to include provisioning safety configuration
- Defaults to false (destructive actions disabled) for safer out-of-box experience
- Add AWS_REGIONS configuration to support querying multiple regions via JSON array or comma-separated string
- Update ConfigService to parse and validate multi-region configuration
- Add regions field to AWSConfigSchema for type safety
- Improve AWS credential validation to support default credential chain (env vars, ~/.aws/credentials, instance profile)
- Remove requirement for explicit credentials or profile when using default chain
- Update AWSPlugin health check to include regions in diagnostic output
- Fix NodeLinkingService to skip AWS URIs when extracting hostname identifiers
- Add source field to linked nodes for backward compatibility with code reading node.source
- Update .env.example with comprehensive AWS and Proxmox configuration documentation
- Clean up test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update integration and performance tests to reflect configuration changes
Copilot AI review requested due to automatic review settings March 25, 2026 20:38
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 73 out of 270 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (1)

Dockerfile:1

  • This changes OpenBolt installation from a pinned RubyGems install to an apt-get install openbolt. On Debian/Ubuntu, openbolt is unlikely to exist as an apt package (and even if it does, it removes the previously pinned version). This is likely to break image builds or change behavior unexpectedly. Consider restoring gem install openbolt -v <pinned> (or verify the apt package exists and pin its version explicitly).
# Stage 1: Build frontend with Vite

const eventType = this.mapActionToEventType(action.action);
const entry: CreateJournalEntry = {
nodeId: target,
nodeUri: `proxmox:${target}`,

Copilot AI Mar 25, 2026

Same issue as in recordJournal(): nodeUri will become proxmox:proxmox:... when target is already prefixed. Align nodeUri with the canonical URI format used elsewhere (or use target directly if that’s what consumers expect).

Suggested change
nodeUri: `proxmox:${target}`,
nodeUri: target,

Comment on lines +190 to +191
await this.db.execute(
"INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",

Copilot AI Mar 25, 2026

MigrationRunner now supports Postgres via DatabaseAdapter, but it still hardcodes SQLite ? placeholders. This will fail with pg which expects $1, $2, .... Use this.db.getPlaceholder(1..n) to build dialect-correct SQL (or expose an adapter helper for parameterized INSERTs).

Suggested change
await this.db.execute(
"INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
const placeholders = [
this.db.getPlaceholder(1),
this.db.getPlaceholder(2),
this.db.getPlaceholder(3),
].join(", ");
await this.db.execute(
`INSERT INTO migrations (id, name, appliedAt) VALUES (${placeholders})`,


try {
await this.run(sql, params);
await this.db.execute(sql, params);

Copilot AI Mar 25, 2026

ExecutionRepository has been switched to DatabaseAdapter, but the repository SQL (e.g., WHERE id = ?, VALUES (?, ?, ...)) is still SQLite-style. This will break when DB_TYPE=postgres. To truly support both dialects, build queries using db.getPlaceholder(n) and avoid SQLite-specific syntax (or provide a small query builder that emits the right placeholders).

Comment on lines +247 to +260
getAllProvisioningCapabilities(): Array<{
source: string;
capabilities: Array<{
name: string;
description: string;
operation: "create" | "destroy";
parameters: Array<{
name: string;
type: string;
required: boolean;
default?: unknown;
}>;
}>;
}> {

Copilot AI Mar 25, 2026

This duplicates the provisioning capability shape inline instead of reusing the existing ProvisioningCapability (and parameter) types. That increases the chance of drift between plugins and the manager. Prefer returning Array<{ source: string; capabilities: ProvisioningCapability[] }> (or a shared DTO type) so changes to capability contracts are type-checked centrally.
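The shared-DTO approach can be sketched by declaring the contract once and reusing it in the manager's signature. Field names below follow the inline shape quoted above; the project's real `ProvisioningCapability` type may differ:

```typescript
// Declare the capability contract once, so plugins and the manager cannot
// drift apart. Shapes mirror the inline type quoted above (illustrative).
interface ProvisioningParameter {
  name: string;
  type: string;
  required: boolean;
  default?: unknown;
}

interface ProvisioningCapability {
  name: string;
  description: string;
  operation: "create" | "destroy";
  parameters: ProvisioningParameter[];
}

interface SourceCapabilities {
  source: string;
  capabilities: ProvisioningCapability[];
}

// The manager method then becomes:
//   getAllProvisioningCapabilities(): SourceCapabilities[] { ... }
const sample: SourceCapabilities = { source: "proxmox", capabilities: [] };
```

Any change to the capability contract then surfaces as a single type error at every site that consumes it, instead of silently diverging.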

Comment on lines 72 to 76
private getMigrationFiles(): MigrationFile[] {
const dialect = this.db.getDialect();

try {
const files = readdirSync(this.migrationsDir);

Copilot AI Mar 25, 2026

The migration selection logic is now dialect-aware (shared vs *.sqlite.sql / *.postgres.sql precedence) and is critical for correctness, but there are no tests shown covering: (1) precedence when both shared + dialect-specific exist, (2) ignoring the other dialect, and (3) stable ordering. Adding unit tests for getMigrationFiles()/getPendingMigrations() would help prevent subtle migration regressions.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 75 out of 273 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (1)

Dockerfile:1

  • openbolt was previously installed via gem install openbolt -v 5.1.0 (RubyGems). Installing it via apt-get install openbolt is likely to fail unless the base image repo actually provides such a package. Consider restoring the gem install (ideally pinned to a known-good version) or switch to the correct OS package name if one exists in the target distro repositories.
# Stage 1: Build frontend with Vite

if (typeof target === "string") {
// Map target to node ID with bolt: prefix
nodes.push(`bolt:${target}`);
nodes.push(target);

Copilot AI Mar 30, 2026

This changes NodeGroup.nodes entries from bolt:${...} to raw target names. To keep groups usable, NodeGroup.nodes should match whatever identifier the inventory API uses for linked nodes (often LinkedNode.id). If the Bolt inventory sometimes omits name (causing linked IDs to fall back to prefixed IDs), group membership resolution can break; consider pushing the exact canonical ID used in inventory (or explicitly use node.name as the group member key).

const targetName = typeof targetObj.name === "string" ? targetObj.name : null;
if (targetName) {
nodes.push(`bolt:${targetName}`);
nodes.push(targetName);

Copilot AI Mar 30, 2026

This changes NodeGroup.nodes entries from bolt:${...} to raw target names. To keep groups usable, NodeGroup.nodes should match whatever identifier the inventory API uses for linked nodes (often LinkedNode.id). If the Bolt inventory sometimes omits name (causing linked IDs to fall back to prefixed IDs), group membership resolution can break; consider pushing the exact canonical ID used in inventory (or explicitly use node.name as the group member key).

sources: ["ansible"],
linked: false,
nodes: hosts.map(host => `ansible:${host}`), // Prefix hosts with ansible:
nodes: hosts,

Copilot AI Mar 30, 2026

Same concern as the Bolt group change: NodeGroup.nodes should consistently reference the same identifier used by the inventory nodes returned to the UI. If Ansible inventory nodes are not keyed by plain hosts values in all cases, group membership may no longer resolve correctly; consider normalizing group node IDs to the canonical inventory ID (or ensure inventory node name is always set to host so linked IDs match).

Comment on lines +153 to +164
getDialect(): "sqlite" | "postgres" {
return "postgres";
}

getPlaceholder(index: number): string {
if (!Number.isInteger(index) || index < 1) {
throw new Error(
`Invalid placeholder index ${index} for Postgres; parameter positions are 1-based.`,
);
}
return "$" + String(index);
}

Copilot AI Mar 30, 2026

The new Postgres adapter introduces dialect-specific placeholder rules and transaction behavior that are easy to regress (and migration code currently still uses ?). Add unit tests covering placeholder generation, basic query/execute in and out of transactions, and a minimal migration runner path in Postgres mode to ensure the new DB_TYPE=postgres pathway works end-to-end.
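A minimal shape for the placeholder coverage this comment asks for, written here with plain assertions against a stand-alone copy of the logic quoted above (the real suite would import `PostgresAdapter` and use the project's test runner, e.g. vitest's `expect()`):

```typescript
// Stand-alone copy of the placeholder logic quoted above, for illustration.
function pgPlaceholder(index: number): string {
  if (!Number.isInteger(index) || index < 1) {
    throw new Error(
      `Invalid placeholder index ${index} for Postgres; parameter positions are 1-based.`,
    );
  }
  return "$" + String(index);
}

function assertThrows(fn: () => void): void {
  let threw = false;
  try {
    fn();
  } catch {
    threw = true;
  }
  if (!threw) throw new Error("expected an error");
}

// 1-based, sequential placeholders
if (pgPlaceholder(1) !== "$1" || pgPlaceholder(3) !== "$3") {
  throw new Error("placeholder generation wrong");
}
// invalid positions are rejected
assertThrows(() => pgPlaceholder(0));
assertThrows(() => pgPlaceholder(1.5));
```

Equivalent cases for query/execute inside and outside transactions would round out the `DB_TYPE=postgres` pathway.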

alvagante and others added 2 commits March 30, 2026 09:27
- Add KiroGraph sync hooks for file create, delete, and save events
- Configure MCP server with KiroGraph tools and auto-approval settings
- Add KiroGraph steering guide with tool usage documentation and workflow
- Initialize KiroGraph configuration with language support and exclusion patterns
- Enable semantic code graph indexing for improved exploration capabilities
…eaders/encoding

Agent-Logs-Url: https://github.com/example42/pabawi/sessions/21235efe-0ec2-4f7c-b639-3c95b5857cf2

Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
Copilot AI review requested due to automatic review settings March 30, 2026 07:38
@alvagante alvagante review requested due to automatic review settings March 30, 2026 07:38
Contributor

Copilot AI commented Mar 30, 2026

@copilot apply changes based on the comments in this thread

Applied all changes from the review thread in commit 50c1133:

  • MigrationRunner.ts & ExecutionRepository.ts: Replaced hardcoded ? SQLite placeholders with db.getPlaceholder(i) calls so queries work correctly with both SQLite and PostgreSQL adapters
  • IntegrationManager.ts & facts.ts: Fixed timer leaks — captured timeoutHandle from setTimeout and call clearTimeout() in a finally block so timers are always cleaned up whether the race resolves or rejects
  • ProxmoxIntegration.ts: Fixed double proxmox: prefix — target is already proxmox:node:id, so nodeUri now uses target directly instead of `proxmox:${target}`
  • ProxmoxClient.ts: Normalized Node.js IncomingHttpHeaders (which can be string | string[] | undefined) before constructing new Headers(), and added encodeURIComponent() for taskId and node in the task status endpoint
  • Migrations 002 & 007: Replaced SQLite-specific datetime('now') with portable CURRENT_TIMESTAMP that works in both dialects

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • https://api.github.com/repos/example42/pabawi/pulls/38/reviews/3950306141/comments
    • Triggering command: /usr/bin/curl curl -s REDACTED (http block)
  • httpstat.us
  • Triggering command: /opt/hostedtoolcache/node/24.14.0/x64/bin/node /opt/hostedtoolcache/node/24.14.0/x64/bin/node --conditions node --conditions development --experimental-import-meta-resolve --require /home/REDACTED/work/pabawi/pabawi/node_modules/vitest/suppress-warnings.cjs /home/REDACTED/work/pabawi/pabawi/node_modules/vitest/dist/workers/forks.js (dns block)
  • puppetdb.example.com
    • Triggering command: /opt/hostedtoolcache/node/24.14.0/x64/bin/node /opt/hostedtoolcache/node/24.14.0/x64/bin/node --conditions node --conditions development --experimental-import-meta-resolve --require /home/REDACTED/work/pabawi/pabawi/node_modules/vitest/suppress-warnings.cjs /home/REDACTED/work/pabawi/pabawi/node_modules/vitest/dist/workers/forks.js (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Copilot AI review requested due to automatic review settings March 30, 2026 08:27

Copilot AI left a comment


Pull request overview

Copilot reviewed 75 out of 273 changed files in this pull request and generated 7 comments.

"Proxmox password authentication requires a realm"
);
}


Copilot AI Mar 30, 2026


Password authentication validation is incomplete: username is not required when config.password is set. In ProxmoxClient.authenticate() the username is constructed from ${this.config.username ?? ""}@${this.config.realm ?? ""}, so a missing username will produce an invalid value like "@pam" and guarantee authentication failure. Require config.username (and non-empty) whenever config.password is provided.

Suggested change
// Validate username for password authentication
if (config.password && (!config.username || typeof config.username !== "string")) {
throw new Error(
"Proxmox password authentication requires a non-empty username"
);
}

Comment on lines +278 to +286
const target = Array.isArray(action.target) ? action.target[0] : action.target;

try {
const result = await this.service!.executeAction(action);
await this.recordJournal(action, target, result);
return result;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
await this.recordJournalFailure(action, target, errorMessage);

Copilot AI Mar 30, 2026


When action.target is an array, only the first target is used for journal recording (both success and failure). This loses audit/journal coverage for the remaining targets and can be misleading in multi-target executions. If multi-target actions are supported, record a journal entry per target (or explicitly reject array targets for this integration and validate accordingly).

Suggested change
-      const target = Array.isArray(action.target) ? action.target[0] : action.target;
+      const targets = Array.isArray(action.target) ? action.target : [action.target];

       try {
         const result = await this.service!.executeAction(action);
-        await this.recordJournal(action, target, result);
+        await Promise.all(
+          targets.map((target) => this.recordJournal(action, target, result)),
+        );
         return result;
       } catch (error) {
         const errorMessage = error instanceof Error ? error.message : String(error);
-        await this.recordJournalFailure(action, target, errorMessage);
+        await Promise.all(
+          targets.map((target) =>
+            this.recordJournalFailure(action, target, errorMessage),
+          ),
+        );

Comment on lines +335 to +338
// Check if error is retryable
const isRetryable = this.retryConfig.retryableErrors.some((errCode) =>
lastError?.message.includes(errCode)
);

Copilot AI Mar 30, 2026


Retryability is determined by substring-matching the error message, which is brittle (messages vary by Node version and library). Prefer checking a structured error code (e.g., (err as NodeJS.ErrnoException).code) and/or handling fetch/network errors explicitly, then map to retryable conditions. This will make retries more reliable and easier to maintain.
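A code-based check along these lines could look like the following sketch; isRetryableError and retryableCodes are illustrative names, not the project's actual API:

```typescript
// Hypothetical sketch: classify retryability by structured error codes
// instead of message substrings. Names here are illustrative.
function isRetryableError(err: unknown, retryableCodes: string[]): boolean {
  // Node network errors carry a structured `code` (e.g. "ECONNRESET")
  const code = (err as { code?: string } | null)?.code;
  if (typeof code === "string") return retryableCodes.includes(code);
  // fetch() wraps network failures in a TypeError whose `cause` holds the code
  const cause = (err as { cause?: { code?: string } } | null)?.cause;
  return typeof cause?.code === "string" && retryableCodes.includes(cause.code);
}
```

The retryConfig could then hold code lists like ["ECONNRESET", "ETIMEDOUT"] instead of message fragments, which survive Node and library upgrades.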

Comment on lines 156 to +160
       // Split into statements (handle multi-statement migrations)
       const statements = sql
         .split(";")
-        .map(s => s.trim())
-        .filter(s => {
-          // Filter out empty statements and comment-only statements
+        .map((s) => s.trim())
+        .filter((s) => {

Copilot AI Mar 30, 2026


Splitting SQL by ; is unsafe for PostgreSQL migrations (and some SQLite scripts) because semicolons can appear inside function bodies, triggers, or string literals. Now that Postgres is supported, this will break valid migrations. A practical fix is: for Postgres, execute the whole migration file as-is (node-postgres accepts multiple statements in one query string), and only do statement splitting for SQLite if needed; or use a more robust statement parser.
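The dialect switch described here can be sketched as follows; planStatements is an illustrative name, not the actual MigrationRunner API, and the naive split remains a stopgap even for SQLite:

```typescript
// Sketch: run Postgres migration files whole, split only for SQLite.
function planStatements(dialect: "sqlite" | "postgres", sql: string): string[] {
  if (dialect === "postgres") {
    // node-postgres accepts multiple statements in one simple query string,
    // so the migration file can be executed as-is and embedded semicolons
    // inside function bodies or string literals stay intact
    return [sql];
  }
  // SQLite drivers typically take one statement at a time
  return sql
    .split(";")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```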

Comment on lines +179 to +181
// Set source (singular) to the primary source for backward compatibility
// This ensures code that reads node.source still works correctly
linkedNode.source = linkedNode.sources[0];

Copilot AI Mar 30, 2026


linkedNode.sources[0] depends on iteration order and may not be stable/deterministic, which can cause regressions in code paths that still rely on node.source as a single canonical source. Prefer selecting the primary source deterministically (e.g., highest plugin priority or a fixed precedence list), and keep linkedNode.sources ordered accordingly.
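One possible deterministic rule, sketched with an assumed precedence list (the source names are examples, not the project's configured set):

```typescript
// Sketch: pick a stable primary source via a fixed precedence list,
// falling back to alphabetical order for unknown sources.
const SOURCE_PRECEDENCE = ["puppetserver", "puppetdb", "ansible", "proxmox"];

function pickPrimarySource(sources: string[]): string {
  const rank = (s: string): number => {
    const i = SOURCE_PRECEDENCE.indexOf(s);
    return i === -1 ? SOURCE_PRECEDENCE.length : i;
  };
  // Copy before sorting so the caller's array order is untouched
  return [...sources].sort((a, b) => rank(a) - rank(b) || a.localeCompare(b))[0];
}
```

Keeping linkedNode.sources sorted by the same rule would make node.source independent of aggregation order.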

 {
   "name": "backend",
-  "version": "0.8.1",
+  "version": "0.10.0",

Copilot AI Mar 30, 2026


The PR metadata/description focuses on placeholder portability, timeout cleanup, Proxmox URI fixes, and timestamp portability, but the diff also includes a version bump to 0.10.0, introduces PostgreSQL support (pg, new adapters), adds AWS SDK deps, and changes Docker’s openbolt installation. Consider updating the PR title/description to reflect these additional scope changes (or splitting into smaller PRs) so reviewers can track intent vs. implementation.

Comment on lines 72 to +80
 private getMigrationFiles(): MigrationFile[] {
+  const dialect = this.db.getDialect();
+
   try {
     const files = readdirSync(this.migrationsDir);

-    return files
-      .filter(file => file.endsWith(".sql"))
-      .map(filename => {
-        // Extract migration ID from filename (e.g., "001_initial_rbac.sql" -> "001")
-        const match = /^(\d+)_(.+)\.sql$/.exec(filename);
-        if (!match) {
-          throw new Error(`Invalid migration filename format: ${filename}. Expected format: NNN_name.sql`);
-        }
-
-        return {
-          id: match[1],
-          filename,
-          path: join(this.migrationsDir, filename)
-        };
-      })
-      .sort((a, b) => a.id.localeCompare(b.id));
+    // Regex matches: NNN_name.sql, NNN_name.sqlite.sql, NNN_name.postgres.sql
+    const migrationRegex = /^(\d+)_(.+?)(?:\.(sqlite|postgres))?\.sql$/;


Copilot AI Mar 30, 2026


The new dialect-aware migration file selection and precedence logic (shared vs dialect-specific) is important and non-trivial, but no tests are added here. Add unit tests to cover: (1) shared-only selection, (2) dialect-specific overriding shared for the same ID, (3) ignoring the other dialect’s files, and (4) invalid filename formats raising the expected error.
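The four cases could be exercised roughly like this, using plain assertions against an illustrative re-implementation of the selection rule (the real tests would target MigrationRunner directly):

```typescript
// Sketch of the selection rule under test: shared files apply to both
// dialects, a dialect-specific file overrides the shared one with the
// same ID, and the other dialect's files are ignored.
const migrationRegex = /^(\d+)_(.+?)(?:\.(sqlite|postgres))?\.sql$/;

function selectMigrations(files: string[], dialect: "sqlite" | "postgres"): string[] {
  const byId = new Map<string, string>();
  for (const file of [...files].sort()) {
    const match = migrationRegex.exec(file);
    if (!match) {
      throw new Error(`Invalid migration filename format: ${file}. Expected NNN_name[.dialect].sql`);
    }
    const [, id, , fileDialect] = match;
    if (fileDialect !== undefined && fileDialect !== dialect) continue; // other dialect
    if (!byId.has(id) || fileDialect !== undefined) byId.set(id, file); // override shared
  }
  return [...byId.entries()].sort(([a], [b]) => a.localeCompare(b)).map(([, f]) => f);
}
```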

alvagante and others added 4 commits April 1, 2026 10:17
- Delete orphaned database schema files (audit-schema.sql, migrations.sql, rbac-schema.sql)
- Rename initial migration to 000_initial_schema.sql for clarity
- Update DatabaseService to load only numbered migrations instead of base schema files
- Move all schema definitions to the migrations directory as single source of truth
- Add documentation in copilot-instructions.md explaining migration-first approach
- Create internal task documentation for database schema cleanup process
- Eliminate duplicate table definitions that existed across base schemas and migrations
…a file

- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by avoiding need to update Dockerfile when new database files are added
Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
alvagante added 13 commits April 1, 2026 19:58
- 1.1 Delete IntegrationConfigService source, types, and tests
- 1.2 Delete IntegrationConfigRouter and route tests
- 1.3 Create migration 010 to drop integration_configs table
- 1.4 Remove IntegrationConfigService from server.ts, plugins use ConfigService directly
…shboard

- 2.1 Rewrite IntegrationConfigPage.svelte as status dashboard with color-coded indicators
- 2.2 Refactor test connection API functions to use .env-sourced config (no body)
- 3.1 Refactor ProxmoxSetupGuide to pure .env snippet generator
- 3.2 Refactor AWSSetupGuide to pure .env snippet generator
- 3.3 Refactor remaining guides (Bolt, PuppetDB, SSH, Hiera, Puppetserver, Ansible)
…fix TS errors

- 4.1 Remove saveIntegrationConfig, getIntegrationConfig, getIntegrationConfigs, deleteIntegrationConfig, saveProxmoxConfig, saveAWSConfig, IntegrationConfigRecord from api.ts
- 4.2 Fix all TypeScript compilation errors across frontend (33 errors in 7 files)
- 5.1 Fix migration test to expect 11 migrations (including 010)
- 5.2 Fix all lint and type errors across both workspaces (274+ errors resolved)
…d components

- 6.1 Add ConfigService property tests (Property 1: env parsing round-trip)
- 6.2 Add IntegrationManager property tests (Property 8: graceful degradation)
- 6.3 Add Integration Status Dashboard tests (Properties 2, 3, 4)
- 6.4 Add Env Snippet Wizard tests (Properties 5, 6, 7)
- 6.5 Verify obsolete IntegrationConfigService tests removed

Note: pre-existing ProxmoxService.test.ts failure unrelated to this work
- 7.1 Update README.md with v1.0.0 version, .env-only config, new features
- 7.2 Update configuration, API, and architecture docs
- 7.3 Update integration setup guides and Docker deployment docs
- 7.4 Add v1.0.0 CHANGELOG entry with breaking changes
- 8.1 Verify all IntegrationConfigService references removed from source code
- 8.1 Clean stale entries from .secrets.baseline
- 8.2 Audit dependencies — all production deps actively used
- 9.1 Update package.json (root, backend, frontend) to 1.0.0
- 9.2 Update product.md steering, health endpoint, verify docker-compose
- 10.1 Fix trailing whitespace, detect-secrets false positives, ESLint warnings
- All pre-commit hooks pass (except hadolint-docker which requires Docker Desktop)
- 11.1 Update Dockerfile version labels and descriptions
- 11.2 Align integration ENV defaults across all Dockerfiles
- 11.3 Fix migration copy in alpine/ubuntu Dockerfiles
- 11.4 Add backend-deps stage for production deps in alpine/ubuntu
- 11.5 Rewrite .env.docker with comprehensive integration examples
- 11.6 Update docker-compose.yml with volume mounts and health check
- 11.7 Standardize docker-entrypoint.sh across all Dockerfiles
- 11.8 Update Docker deployment documentation
- 12.1 Sync .env.example with ConfigService parsing (add missing vars, fix gaps)
- 12.2 Fix setup guide env var mismatches (BoltSetupGuide, PuppetdbSetupGuide)
- 12.3 Verify .env.docker matches ConfigService and Docker ENV defaults
- 12.4 Reconcile Dockerfile ENV defaults with ConfigService Zod schema
- 13.1 npm test: 2893 passed (1 pre-existing ProxmoxService failure)
- 13.1 npm run lint: zero errors
- 13.1 tsc --noEmit: zero type errors
- 13.1 pre-commit run --all-files: all hooks pass
Copilot AI review requested due to automatic review settings April 2, 2026 21:26

Copilot AI left a comment


Pull request overview

Copilot reviewed 89 out of 312 changed files in this pull request and generated 7 comments.

-      // Fetch nodes and groups in parallel
+      // Fetch nodes and groups in parallel with per-source timeout
+      // Prevents a single slow source from blocking the entire inventory
+      const SOURCE_TIMEOUT_MS = 15_000;

Copilot AI Apr 2, 2026


The per-source timeout rejects the Promise.race(...), but this block doesn't catch that rejection to degrade gracefully for the timed-out source. As written, a timeout can still fail the whole aggregation path (depending on surrounding code). If the intent is to keep inventory responsive even when one source is slow, catch timeout/errors around the race, log them, and return empty { nodes: [], groups: [] } for that source.
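A wrapper along those lines might look like this sketch, assuming each source exposes getInventory() and getGroups(); the function name and the 15s default are illustrative:

```typescript
// Sketch: race a source against a timeout, degrade to an empty slice on
// failure, and always clear the timer to avoid leaks.
async function fetchSourceSafely(
  name: string,
  source: { getInventory(): Promise<string[]>; getGroups(): Promise<string[]> },
  timeoutMs = 15_000,
): Promise<{ nodes: string[]; groups: string[] }> {
  let timeoutHandle: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timeoutHandle = setTimeout(() => {
      reject(new Error(`Source '${name}' timed out after ${String(timeoutMs)}ms`));
    }, timeoutMs);
  });
  try {
    const [nodes, groups] = await Promise.race([
      Promise.all([source.getInventory(), source.getGroups()]),
      timeout,
    ]);
    return { nodes, groups };
  } catch (error) {
    // Log and keep the aggregation alive instead of failing it wholesale
    console.warn(`Skipping source '${name}':`, error instanceof Error ? error.message : error);
    return { nodes: [], groups: [] };
  } finally {
    clearTimeout(timeoutHandle);
  }
}
```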

Comment on lines +456 to +461
const timeoutPromise = new Promise<never>((_, reject) => {
timeoutHandle = setTimeout(
() => { reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)); },
SOURCE_TIMEOUT_MS,
);
});

Copilot AI Apr 2, 2026



Comment on lines +465 to +469
try {
[nodes, groups] = await Promise.race([
Promise.all([
source.getInventory(),
source.getGroups().catch((error: unknown) => {
Copy link

Copilot AI Apr 2, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.


Comment on lines +180 to +182
// Set source (singular) to the primary source for backward compatibility
// This ensures code that reads node.source still works correctly
linkedNode.source = linkedNode.sources[0];

Copilot AI Apr 2, 2026


linkedNode.sources[0] is order-dependent and may not reflect a stable/meaningful 'primary' source (e.g., it might vary with aggregation order). Since this field is used for backward compatibility, it should be deterministic and ideally reflect priority (or at minimum prefer a well-defined source like puppetserver when present). Consider selecting the primary source via an explicit rule (priority map / preferred-source list) rather than array insertion order.

Comment on lines +29 to +32
throw new DatabaseConnectionError(
`Failed to connect to PostgreSQL: ${message}`,
this._databaseUrl,
);

Copilot AI Apr 2, 2026


Passing this._databaseUrl into DatabaseConnectionError risks leaking credentials (username/password) if the error is logged or returned upstream. Consider redacting the password portion (or only storing host/db name) before attaching it to the error.
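One way to redact, sketched with the WHATWG URL parser built into Node (the function name is illustrative):

```typescript
// Sketch: strip credentials from a connection string before it reaches
// error payloads or logs.
function redactDatabaseUrl(databaseUrl: string): string {
  try {
    const url = new URL(databaseUrl);
    if (url.username) url.username = "REDACTED";
    if (url.password) url.password = "REDACTED";
    return url.toString();
  } catch {
    // Not URL-shaped (e.g. a bare SQLite file path): avoid echoing it back
    return "<unparseable database url>";
  }
}
```

DatabaseConnectionError could then receive redactDatabaseUrl(this._databaseUrl), keeping host and database name for diagnostics while dropping secrets.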

-        // Get hosts (direct members)
-        const hosts = Array.isArray(group.hosts) ? group.hosts : [];
+        // Get hosts (direct members) and prefix with source
+        const hosts = Array.isArray(group.hosts) ? group.hosts.map((h: string) => `ansible:${h}`) : [];

Copilot AI Apr 2, 2026


This mapping assumes group.hosts is a string[]. If inventory parsing ever yields non-string entries, the resulting node IDs will be ansible:[object Object]/ansible:undefined, which can break linking and lookups. Safer is to filter to strings before prefixing (and/or coerce via String(h) only after validating the expected shape).

Suggested change
-const hosts = Array.isArray(group.hosts) ? group.hosts.map((h: string) => `ansible:${h}`) : [];
+const hosts = Array.isArray(group.hosts)
+  ? group.hosts
+      .filter((h): h is string => typeof h === "string")
+      .map((h) => `ansible:${h}`)
+  : [];

Comment on lines +2 to +22
-- Add integration_configs table for storing per-user integration configurations
-- with encrypted sensitive fields and unique constraint per user/integration.
-- Requirements: 32.1, 32.2, 32.3, 32.4

-- Integration configs table: Stores per-user integration settings
CREATE TABLE IF NOT EXISTS integration_configs (
id TEXT PRIMARY KEY,
userId TEXT NOT NULL,
integrationName TEXT NOT NULL,
config TEXT NOT NULL, -- JSON, sensitive fields encrypted
isActive INTEGER NOT NULL DEFAULT 1,
createdAt TEXT NOT NULL,
updatedAt TEXT NOT NULL,
FOREIGN KEY (userId) REFERENCES users(id) ON DELETE CASCADE,
UNIQUE(userId, integrationName)
);

-- Performance indexes for integration config queries
CREATE INDEX IF NOT EXISTS idx_integration_configs_user ON integration_configs(userId);
CREATE INDEX IF NOT EXISTS idx_integration_configs_name ON integration_configs(integrationName);
CREATE INDEX IF NOT EXISTS idx_integration_configs_active ON integration_configs(isActive);

Copilot AI Apr 2, 2026


This PR introduces migration 009_integration_configs.sql but also adds migration 010_drop_integration_configs.sql to remove the same table. If v1.0.0 no longer supports DB-stored integration configs, consider removing migration 009 from this branch (or clearly justifying it as historical/backfill). Creating a table only to drop it immediately adds complexity and can confuse operators reviewing migration history.

Suggested change
--- Add integration_configs table for storing per-user integration configurations
--- with encrypted sensitive fields and unique constraint per user/integration.
--- Requirements: 32.1, 32.2, 32.3, 32.4
--- Integration configs table: Stores per-user integration settings
-CREATE TABLE IF NOT EXISTS integration_configs (
-  id TEXT PRIMARY KEY,
-  userId TEXT NOT NULL,
-  integrationName TEXT NOT NULL,
-  config TEXT NOT NULL, -- JSON, sensitive fields encrypted
-  isActive INTEGER NOT NULL DEFAULT 1,
-  createdAt TEXT NOT NULL,
-  updatedAt TEXT NOT NULL,
-  FOREIGN KEY (userId) REFERENCES users(id) ON DELETE CASCADE,
-  UNIQUE(userId, integrationName)
-);
--- Performance indexes for integration config queries
-CREATE INDEX IF NOT EXISTS idx_integration_configs_user ON integration_configs(userId);
-CREATE INDEX IF NOT EXISTS idx_integration_configs_name ON integration_configs(integrationName);
-CREATE INDEX IF NOT EXISTS idx_integration_configs_active ON integration_configs(isActive);
+-- This migration number is intentionally retained as a no-op placeholder.
+-- DB-stored integration configs were removed before release, so the
+-- integration_configs table is not created in v1.0.0.
+-- Keeping this file avoids renumbering later migrations while preventing
+-- creation of a table that is immediately removed by a subsequent migration.
