feat: Phase 1 Wave 1B — Enhanced discovery, renderTargets gating, --only flag #12
JustAGhosT merged 15 commits into main
Conversation
…nly flag

- §11: Enhanced discover.mjs with framework detection (frontend, backend, CSS, ORM, state management), testing tool detection, documentation artifact detection, design system detection, and cross-cutting concern detection (logging, auth, caching, error handling, API patterns, feature flags, environment config)
- §13: Enforce renderTargets gating in sync.mjs — all tool-specific sync functions now gated behind overlay renderTargets. Always-on: root docs, GitHub infra, editor configs. Backward compat: missing renderTargets generates everything.
- §13e: Add --only flag to sync command for targeted regeneration (e.g. --only claude,cursor). Updated CLI help and flag validation.

Co-Authored-By: Warp <agent@warp.dev>
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (36)
Walkthrough
This PR introduces comprehensive project scaffolding infrastructure via AgentKit Forge, adding project configuration validation (project.yaml), enhancing the sync pipeline with conditional/loop template rendering and manifest management, extending discovery reporting with framework/testing/documentation detection, and creating extensive documentation templates alongside development tooling configuration.
Changes
Sequence Diagram(s)
sequenceDiagram
participant User
participant CLI as Sync CLI
participant Engine as Template Engine
participant Vars as Variable Builder
participant Manifest as Manifest Manager
participant FS as File System
User->>CLI: sync --only frontend
CLI->>Engine: resolveRenderTargets(targets, flags)
Engine->>Vars: flattenProjectYaml(project)
Vars->>Vars: flattenCrosscutting(cc)
Vars-->>Engine: combined vars
Engine->>Engine: resolveConditionals(template, vars)
Engine->>Engine: resolveEachBlocks(template, vars)
Engine->>Engine: Simple placeholder replacement
Engine->>Manifest: Generate file with SHA-256 hash
Manifest->>FS: Write generated files
Manifest->>FS: Read previous manifest.json
Manifest->>Manifest: Compare hashes
Manifest->>FS: Delete orphaned stale files
FS-->>User: Sync complete
sequenceDiagram
participant Discovery as Discovery Engine
participant Detectors as Detection Helpers
participant PackageMgr as Package Managers
participant FileSystem as File System
participant Report as Report Generator
Discovery->>PackageMgr: getNodeDeps, getCsprojContent, etc.
PackageMgr-->>Discovery: Dependency data
Discovery->>FileSystem: Scan for markers/configs
FileSystem-->>Discovery: File contents
Discovery->>Detectors: detectFromList(FRAMEWORK_DETECTORS)
Detectors->>Detectors: Check deps, configs, markers, refs
Detectors-->>Discovery: frameworks{frontend, backend, css, orm, stateManagement}
Discovery->>Detectors: detectFromList(TESTING_DETECTORS)
Detectors-->>Discovery: testing tools list
Discovery->>Detectors: detectFromList(CROSSCUTTING_DETECTORS)
Detectors-->>Discovery: crosscutting{logging, auth, caching, errorHandling, apiPatterns, featureFlags}
Discovery->>Report: Build extended report
Report->>Report: Include frameworks, testing, documentation, designSystem, crosscutting
Report-->>Discovery: Enhanced report with fwCount summary
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
Pull request overview
This PR implements Phase 1 Wave 1B of the AgentKit Forge evolution, building on Wave 1A (PR #11) to add enhanced discovery capabilities, render target gating, and the --only flag for selective sync operations.
Changes:
- Extended the template engine with conditional {{#if}} blocks and iteration {{#each}} blocks for more dynamic template rendering
- Added comprehensive framework, testing tool, documentation, design system, and cross-cutting concern detection to the discover command
- Introduced project.yaml spec file with validation for storing project-level metadata that feeds into generated AI tool configs
- Implemented renderTargets gating to selectively generate tool-specific outputs, with backward compatibility for missing settings
- Added --only flag to the sync command for regenerating specific targets (e.g. agentkit sync --only claude,cursor)
- Enhanced manifest-based stale file cleanup to automatically remove orphaned files from previous sync operations
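For illustration only (this fragment is not taken from the repo's actual templates), a template combining both new block types with variables that flattenProjectYaml produces (hasAuth, authProvider, authStrategy, integrations) could look like:

```
{{#if hasAuth}}
Authentication: {{authProvider}} ({{authStrategy}})
{{else}}
Authentication: not configured
{{/if}}

Integrations:
{{#each integrations}}
- {{.name}} ({{.purpose}}) [{{@index}}]
{{/each}}
```

Per the sync sequence above, conditionals are resolved first (innermost out), then {{#each}} blocks expand arrays, and finally simple {{var}} placeholders are substituted.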
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| .agentkit/spec/project.yaml | New spec file defining project metadata structure with ~60 optional fields for identity, stack, architecture, documentation, deployment, process, testing, integrations, and cross-cutting concerns |
| .agentkit/engines/node/src/sync.mjs | Major enhancements: template engine with conditionals/loops, flattenProjectYaml for converting nested YAML to flat template vars, renderTargets gating logic, --only flag support, manifest-based stale cleanup |
| .agentkit/engines/node/src/spec-validator.mjs | Added validateProjectYaml function with enum validation for 22+ project.yaml fields, integrated into validateSpec |
| .agentkit/engines/node/src/discover.mjs | Enhanced with 6 new detection categories: frameworks (40+ tools), testing tools (8 tools), documentation artifacts, design systems, and cross-cutting concerns (logging, auth, caching, etc.) |
| .agentkit/engines/node/src/cli.mjs | Added 'only' and 'dry-run' flags to sync command's VALID_FLAGS and help text |
Comments suppressed due to low confidence (1)
.agentkit/engines/node/src/sync.mjs:556
- When a scaffold-once file is skipped because it already exists (lines 553-556), the file is added to newManifestFiles with a hash computed from the template content (lines 548-550), not from the existing file on disk. This means the manifest will contain an incorrect hash for scaffold-once files. This could cause issues if the manifest hashes are later used for integrity checks or change detection. Consider either: (1) computing the hash from the existing file on disk for skipped scaffold files, (2) not adding scaffold-once files to the manifest at all, or (3) documenting this behavior if it's intentional.
// Compute content hash for manifest
const fileContent = readFileSync(srcFile);
const hash = createHash('sha256').update(fileContent).digest('hex').slice(0, 12);
newManifestFiles[manifestKey] = { hash };
// Scaffold-once: skip project-owned files that already exist
if (isScaffoldOnce(relPath) && existsSync(destFile)) {
skippedScaffold++;
continue;
}
function resolveConditionals(template, vars) {
  let result = template;
  // Process innermost #if blocks first (no nested #if inside)
  const ifRegex = /\{\{#if\s+([a-zA-Z_][a-zA-Z0-9_]*)\}\}((?:(?!\{\{#if\s)(?!\{\{\/if\}\})[\s\S])*?)\{\{\/if\}\}/g;
  let safety = 50; // prevent infinite loops on malformed templates
  while (ifRegex.test(result) && safety-- > 0) {
    result = result.replace(ifRegex, (_, varName, body) => {
      const isTruthy = evalTruthy(vars[varName]);
      // Split on {{else}} if present
      const elseParts = body.split('{{else}}');
      if (isTruthy) {
        return elseParts[0];
      } else {
        return elseParts.length > 1 ? elseParts[1] : '';
      }
    });
    ifRegex.lastIndex = 0;
  }
  return result;
}

/**
 * Resolves {{#each var}}...{{/each}} iteration blocks.
 * Inside the block, {{.}} refers to the current item (for string arrays).
 * For object arrays, {{.name}}, {{.purpose}} etc. access properties.
 */
function resolveEachBlocks(template, vars) {
  const eachRegex = /\{\{#each\s+([a-zA-Z_][a-zA-Z0-9_]*)\}\}([\s\S]*?)\{\{\/each\}\}/g;
  return template.replace(eachRegex, (_, varName, body) => {
    const arr = vars[varName];
    if (!Array.isArray(arr) || arr.length === 0) return '';
    return arr.map((item, index) => {
      let rendered = body;
      // Replace {{.}} with the item itself (for string arrays)
      if (typeof item === 'string') {
        rendered = rendered.split('{{.}}').join(sanitizeTemplateValue(item));
      } else if (typeof item === 'object' && item !== null) {
        // Replace {{.prop}} with item.prop
        rendered = rendered.replace(/\{\{\.([a-zA-Z_][a-zA-Z0-9_]*)\}\}/g, (__, prop) => {
          const val = item[prop];
          if (val === undefined || val === null) return '';
          return typeof val === 'string' ? sanitizeTemplateValue(val) : JSON.stringify(val);
        });
      }
      // Replace {{@index}} with current index
      rendered = rendered.split('{{@index}}').join(String(index));
      return rendered;
    }).join('');
  });
}
The new template engine functions (resolveConditionals, resolveEachBlocks, evalTruthy) lack test coverage. Since the codebase has comprehensive test coverage for other functions in sync.mjs, these new functions should also have unit tests to ensure correctness of the conditional and iteration logic, especially edge cases like nested conditionals, empty arrays, and the safety counter.
function flattenProjectYaml(project) {
  if (!project || typeof project !== 'object') return {};
  const vars = {};

  // Top-level scalars
  if (project.name) vars.projectName = project.name;
  if (project.description) vars.projectDescription = project.description;
  if (project.phase) vars.projectPhase = project.phase;

  // Stack
  const stack = project.stack || {};
  if (Array.isArray(stack.languages)) vars.stackLanguages = stack.languages.join(', ');
  if (stack.frameworks) {
    const fw = stack.frameworks;
    if (Array.isArray(fw.frontend)) vars.stackFrontendFrameworks = fw.frontend.join(', ');
    if (Array.isArray(fw.backend)) vars.stackBackendFrameworks = fw.backend.join(', ');
    if (Array.isArray(fw.css)) vars.stackCssFrameworks = fw.css.join(', ');
  }
  if (stack.orm) vars.stackOrm = String(stack.orm);
  if (Array.isArray(stack.database)) vars.stackDatabase = stack.database.join(', ');
  else if (stack.database) vars.stackDatabase = String(stack.database);
  if (stack.search) vars.stackSearch = String(stack.search);
  if (Array.isArray(stack.messaging)) vars.stackMessaging = stack.messaging.join(', ');
  else if (stack.messaging) vars.stackMessaging = String(stack.messaging);

  // Architecture
  const arch = project.architecture || {};
  if (arch.pattern) vars.architecturePattern = arch.pattern;
  if (arch.apiStyle) vars.architectureApiStyle = arch.apiStyle;
  vars.monorepo = !!arch.monorepo;
  vars.hasMonorepo = !!arch.monorepo;
  if (arch.monorepoTool) vars.monorepoTool = arch.monorepoTool;

  // Documentation
  const docs = project.documentation || {};
  vars.hasPrd = !!docs.hasPrd;
  if (docs.prdPath) vars.prdPath = docs.prdPath;
  vars.hasAdr = !!docs.hasAdr;
  if (docs.adrPath) vars.adrPath = docs.adrPath;
  vars.hasApiSpec = !!docs.hasApiSpec;
  if (docs.apiSpecPath) vars.apiSpecPath = docs.apiSpecPath;
  vars.hasTechnicalSpec = !!docs.hasTechnicalSpec;
  if (docs.technicalSpecPath) vars.technicalSpecPath = docs.technicalSpecPath;
  vars.hasDesignSystem = !!docs.hasDesignSystem;
  if (docs.designSystemPath) vars.designSystemPath = docs.designSystemPath;
  vars.hasStorybook = !!docs.storybook;
  if (docs.designTokensPath) vars.designTokensPath = docs.designTokensPath;

  // Deployment
  const deploy = project.deployment || {};
  if (deploy.cloudProvider) vars.cloudProvider = deploy.cloudProvider;
  vars.containerized = !!deploy.containerized;
  vars.hasContainerized = !!deploy.containerized;
  if (Array.isArray(deploy.environments)) vars.environments = deploy.environments.join(', ');
  if (deploy.iacTool) vars.iacTool = deploy.iacTool;

  // Process
  const proc = project.process || {};
  if (proc.branchStrategy) vars.branchStrategy = proc.branchStrategy;
  if (proc.commitConvention) vars.commitConvention = proc.commitConvention;
  if (proc.codeReview) vars.codeReview = proc.codeReview;
  if (proc.teamSize) vars.teamSize = proc.teamSize;

  // Testing
  const testing = project.testing || {};
  if (Array.isArray(testing.unit)) vars.testingUnit = testing.unit.join(', ');
  if (Array.isArray(testing.integration)) vars.testingIntegration = testing.integration.join(', ');
  if (Array.isArray(testing.e2e)) vars.testingE2e = testing.e2e.join(', ');
  if (testing.coverage !== undefined && testing.coverage !== null) vars.testingCoverage = String(testing.coverage);

  // Integrations (kept as array for {{#each}})
  if (Array.isArray(project.integrations)) {
    vars.integrations = project.integrations;
    vars.hasIntegrations = project.integrations.length > 0;
  }

  // Cross-cutting concerns
  const cc = project.crosscutting || {};
  flattenCrosscutting(cc, vars);

  return vars;
}

/**
 * Flattens the crosscutting section of project.yaml into template vars.
 */
function flattenCrosscutting(cc, vars) {
  // Logging
  const logging = cc.logging || {};
  if (logging.framework && logging.framework !== 'none') {
    vars.loggingFramework = logging.framework;
    vars.hasLogging = true;
  }
  vars.hasStructuredLogging = !!logging.structured;
  vars.hasCorrelationId = !!logging.correlationId;
  if (logging.level) vars.loggingLevel = logging.level;
  if (Array.isArray(logging.sink)) vars.loggingSinks = logging.sink.join(', ');

  // Error handling
  const errors = cc.errorHandling || {};
  if (errors.strategy && errors.strategy !== 'none') {
    vars.errorStrategy = errors.strategy;
    vars.hasErrorHandling = true;
  }
  vars.hasGlobalHandler = !!errors.globalHandler;
  vars.hasCustomExceptions = !!errors.customExceptions;

  // Authentication
  const auth = cc.authentication || {};
  if (auth.provider && auth.provider !== 'none') {
    vars.authProvider = auth.provider;
    vars.hasAuth = true;
  }
  if (auth.strategy) vars.authStrategy = auth.strategy;
  vars.hasRbac = !!auth.rbac;
  vars.hasMultiTenant = !!auth.multiTenant;

  // Caching
  const cache = cc.caching || {};
  if (cache.provider && cache.provider !== 'none') {
    vars.cachingProvider = cache.provider;
    vars.hasCaching = true;
  }
  if (Array.isArray(cache.patterns)) vars.cachingPatterns = cache.patterns.join(', ');
  vars.hasDistributedCache = !!cache.distributedCache;

  // API
  const api = cc.api || {};
  if (api.versioning && api.versioning !== 'none') {
    vars.apiVersioning = api.versioning;
    vars.hasApiVersioning = true;
  }
  if (api.pagination && api.pagination !== 'none') {
    vars.apiPagination = api.pagination;
    vars.hasApiPagination = true;
  }
  if (api.responseFormat) vars.apiResponseFormat = api.responseFormat;
  vars.hasRateLimiting = !!api.rateLimiting;

  // Database
  const db = cc.database || {};
  if (db.migrations && db.migrations !== 'none') {
    vars.dbMigrations = db.migrations;
    vars.hasDbMigrations = true;
  }
  vars.hasDbSeeding = !!db.seeding;
  if (db.transactionStrategy && db.transactionStrategy !== 'none') {
    vars.dbTransactionStrategy = db.transactionStrategy;
  }
  vars.hasConnectionPooling = !!db.connectionPooling;

  // Performance
  const perf = cc.performance || {};
  vars.hasLazyLoading = !!perf.lazyLoading;
  vars.hasImageOptimization = !!perf.imageOptimization;
  if (perf.bundleBudget) vars.bundleBudget = String(perf.bundleBudget);

  // Feature flags
  const ff = cc.featureFlags || {};
  if (ff.provider && ff.provider !== 'none') {
    vars.featureFlagProvider = ff.provider;
    vars.hasFeatureFlags = true;
  }

  // Environments
  const envs = cc.environments || {};
  if (Array.isArray(envs.naming)) vars.envNames = envs.naming.join(', ');
  if (envs.configStrategy && envs.configStrategy !== 'none') {
    vars.envConfigStrategy = envs.configStrategy;
  }
  if (envs.envFilePattern) vars.envFilePattern = envs.envFilePattern;
}
The flattenProjectYaml and flattenCrosscutting functions lack test coverage. Given that these functions perform complex transformations with many conditional branches, they should have unit tests to ensure proper handling of nested structures, missing fields, arrays, and edge cases.
function resolveRenderTargets(overlayTargets, flags) {
  // --only flag overrides everything
  if (flags?.only) {
    const onlyTargets = String(flags.only).split(',').map(t => t.trim()).filter(Boolean);
    return new Set(onlyTargets);
  }
  // Overlay renderTargets
  if (Array.isArray(overlayTargets) && overlayTargets.length > 0) {
    return new Set(overlayTargets);
  }
  // Default: generate everything (backward compatibility)
  return new Set(ALL_RENDER_TARGETS);
}
The resolveRenderTargets function lacks test coverage. This function has important logic for handling the --only flag and backward compatibility defaults, which should be tested to ensure proper behavior.
.agentkit/spec/project.yaml
Outdated
integrations: []
# - name: Azure AD B2C
#   purpose: authentication
# - name: PayFast
#   purpose: payments
The comments at lines 77-80 show an example of the integrations array structure, but the array is commented out. For clarity, consider either uncommenting the example or adding a note that these lines are examples only. The current format may confuse users about whether they should uncomment these lines or add their own entries.
.agentkit/engines/node/src/cli.mjs
Outdated
init              Initialize repo overlay from template
sync              Render all AI tool configs from spec + overlay
  --only <targets> Sync only specific targets (comma-separated)
  --dry-run        Show what would be generated without writing
The --dry-run flag is mentioned in the CLI help text but is not implemented in the sync command. Remove this documentation line or implement the dry-run functionality.
Suggested removal:
  --dry-run        Show what would be generated without writing
// Simple extraction: lines with package names in dependencies section
const matches = content.matchAll(/["']([a-zA-Z0-9_-]+)["']/g);
for (const m of matches) deps.add(m[1].toLowerCase());
The getPythonDeps function has overly broad regex matching in pyproject.toml that could capture non-dependency strings. The pattern /["']([a-zA-Z0-9_-]+)["']/g matches any quoted string in the file, not just dependencies. This could lead to false positives by matching project name, author, description, etc. Consider parsing the TOML structure more carefully or at least restricting matches to lines within a [tool.poetry.dependencies] or [project.dependencies] section.
Suggested replacement:

const lines = content.split(/\r?\n/);
let inPoetryDeps = false;
let inProjectSection = false;
let inProjectDepsArray = false;
let inProjectOptionalDepsSection = false;
for (const rawLine of lines) {
  const line = rawLine.trim();
  // Detect section headers like [tool.poetry.dependencies]
  const sectionMatch = line.match(/^\[([^\]]+)\]/);
  if (sectionMatch) {
    const sectionName = sectionMatch[1].trim();
    inPoetryDeps = sectionName === 'tool.poetry.dependencies';
    inProjectSection = sectionName === 'project';
    inProjectOptionalDepsSection = sectionName === 'project.optional-dependencies';
    inProjectDepsArray = false;
    continue;
  }
  // [tool.poetry.dependencies]
  if (inPoetryDeps) {
    // Lines look like: package_name = "^1.2.3"
    const depMatch = line.match(/^([A-Za-z0-9_-]+)\s*=/);
    if (depMatch) {
      deps.add(depMatch[1].toLowerCase());
    }
    continue;
  }
  // [project] dependencies = [...]
  if (inProjectSection) {
    if (!inProjectDepsArray) {
      // Start of dependencies array: dependencies = [
      if (/^dependencies\s*=\s*\[/.test(line)) {
        inProjectDepsArray = true;
        const arrayPart = line.slice(line.indexOf('['));
        const matches = arrayPart.matchAll(/["']([a-zA-Z0-9_-]+)["']/g);
        for (const m of matches) deps.add(m[1].toLowerCase());
        if (line.includes(']')) {
          inProjectDepsArray = false;
        }
      }
    } else {
      // Inside a multi-line dependencies array
      const matches = line.matchAll(/["']([a-zA-Z0-9_-]+)["']/g);
      for (const m of matches) deps.add(m[1].toLowerCase());
      if (line.includes(']')) {
        inProjectDepsArray = false;
      }
    }
    continue;
  }
  // [project.optional-dependencies]
  if (inProjectOptionalDepsSection) {
    // Lines look like: extra_name = ["pkg1", "pkg2"]
    const eqIndex = line.indexOf('=');
    if (eqIndex === -1) continue;
    const afterEq = line.slice(eqIndex + 1);
    if (!afterEq.includes('[')) continue;
    const matches = afterEq.matchAll(/["']([a-zA-Z0-9_-]+)["']/g);
    for (const m of matches) deps.add(m[1].toLowerCase());
    continue;
  }
}
      }
    });
    ifRegex.lastIndex = 0;
  }
The safety counter in resolveConditionals (line 74) prevents infinite loops on malformed templates, but if it triggers, the function silently returns a partially-processed template without any warning or error. This could lead to confusing output where some conditionals are resolved and others are left as-is. Consider logging a warning when the safety counter is exhausted to help developers debug template issues.
Suggested change:

    });
    ifRegex.lastIndex = 0;
  }
  // If safety was exhausted but unresolved conditionals remain, log a warning
  ifRegex.lastIndex = 0;
  if (safety <= 0 && ifRegex.test(result)) {
    console.warn(
      'resolveConditionals: safety limit reached while processing template. ' +
      'Template may contain malformed or deeply nested {{#if}} blocks; output may be partially rendered.'
    );
  }
  { name: 'nlog', label: 'NLog', csprojRefs: ['NLog'] },
],
authentication: [
  { name: 'azure-ad-b2c', label: 'Azure AD B2C', csprojRefs: ['Microsoft.Identity.Web'], deps: ['@azure/msal-browser', '@azure/msal-node', '@azure/msal-react'] },
The azure-ad-b2c detector (line 167) checks for the same csprojRefs as azure-ad (line 168), both looking for 'Microsoft.Identity.Web'. This means both azure-ad-b2c and azure-ad will be detected if Microsoft.Identity.Web is present in the project, which may not accurately distinguish between the two. Consider adding more specific detection criteria or prioritizing one over the other, or detecting both and letting the user clarify in project.yaml.
Suggested change:

  { name: 'azure-ad-b2c', label: 'Azure AD B2C', deps: ['@azure/msal-browser', '@azure/msal-node', '@azure/msal-react'] },
.agentkit/engines/node/src/sync.mjs
Outdated
// ---------------------------------------------------------------------------

/** All known render target names. */
const ALL_RENDER_TARGETS = ['claude', 'cursor', 'windsurf', 'copilot', 'gemini', 'codex', 'warp', 'cline', 'roo', 'ai', 'mcp'];
ALL_RENDER_TARGETS includes 'gemini', 'codex', 'warp', 'cline', and 'roo' but there are no corresponding sync functions for these targets in the code. If these targets are planned for future implementation, they should be removed from ALL_RENDER_TARGETS until ready, or the code should handle missing sync functions gracefully. Currently, if a user specifies --only gemini, the sync will complete but generate no output for that target without any warning.
.agentkit/engines/node/src/cli.mjs
Outdated
const VALID_FLAGS = {
  init: ['repoName', 'force', 'help'],
  sync: ['overlay', 'only', 'dry-run', 'help'],
The --dry-run flag is documented in the CLI help text but is not implemented in the sync function. The flag is declared as valid in VALID_FLAGS.sync but there's no code in runSync that checks for flags.dryRun to skip writing files. Either implement the dry-run functionality or remove it from the documentation and VALID_FLAGS until it's ready.
Suggested change:

  sync: ['overlay', 'only', 'help'],
|
@copilot open a new pull request to apply changes based on the comments in this thread
@JustAGhosT I've opened a new pull request, #15, to work on those changes. Once the pull request is ready, I'll request review from you. |
@copilot address:

Run node engines/node/src/cli.mjs validate
--- Spec Validation ---
--- Output Directories ---
--- JSON Files ---
--- Commands ---
--- Hooks ---
--- Generated Headers ---
--- Settings ---
--- Secret Scan ---
[agentkit:validate] FAILED: 2 error(s), 0 warning(s)
@JustAGhosT I've opened a new pull request, #16, to work on those changes. Once the pull request is ready, I'll request review from you. |
…equired JSON checks
…73 tests
Co-authored-by: JustAGhosT <5531814+JustAGhosT@users.noreply.github.com>
fix(validate): relax required JSON checks — remove mcp files
fix: Address Wave 1B PR review — remove dry-run, fix detectors, add missing test coverage
@copilot resolve the merge conflicts from feat/wave-1b-discovery-gating into main
@JustAGhosT I've opened a new pull request, #17, to work on those changes. Once the pull request is ready, I'll request review from you. |
Pull request overview
Copilot reviewed 56 out of 61 changed files in this pull request and generated 2 comments.
Files not reviewed (1)
- .agentkit/package-lock.json: Language not supported
Comments suppressed due to low confidence (1)
docs/README.md:1
- The Changelog quick link points to the FAQ file instead of the changelog. It should point to ./08_reference/03_changelog.md to match the actual changelog location.
.agentkit/engines/node/src/sync.mjs
Outdated
// Warn about unresolved placeholders (ignore block syntax remnants)
const unresolved = result.match(/\{\{(?!#|\/)([a-zA-Z_][a-zA-Z0-9_]*)\}\}/g);
The regex uses a negative lookahead (?!#|/) but doesn't capture the variable name correctly. The capture group ([a-zA-Z_][a-zA-Z0-9_]*) is inside the negative lookahead check, but the entire match will still include {{ and }}. For warning purposes this works, but the capture group is unnecessary since we're matching the full placeholder. Consider simplifying to /\{\{[a-zA-Z_][a-zA-Z0-9_]*\}\}/g and removing the unused capture group.
Suggested change:

const unresolved = result.match(/\{\{(?!#|\/)[a-zA-Z_][a-zA-Z0-9_]*\}\}/g);
if (inProjectSection) {
  if (!inProjectDepsArray) {
    if (/^dependencies\s*=\s*\[/.test(line)) {
      inProjectDepsArray = true;
      const arrayPart = line.slice(line.indexOf('['));
      for (const m of arrayPart.matchAll(/["']([a-zA-Z0-9_-]+)["']/g)) {
        deps.add(m[1].toLowerCase());
      }
      if (line.includes(']')) inProjectDepsArray = false;
    }
  } else {
    for (const m of line.matchAll(/["']([a-zA-Z0-9_-]+)["']/g)) {
      deps.add(m[1].toLowerCase());
    }
    if (line.includes(']')) inProjectDepsArray = false;
  }
  continue;
}
The pyproject.toml parsing logic for [project] dependencies has duplicated regex matching code on lines 425-427 and 431-433. Consider extracting the dependency extraction into a helper function to reduce duplication and improve maintainability.
…verage Co-authored-by: JustAGhosT <5531814+JustAGhosT@users.noreply.github.com>
fix: resolve merge conflicts between feat/wave-1b-discovery-gating and main
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.agentkit/engines/node/src/discover.mjs (1)
706-755: ⚠️ Potential issue | 🟠 Major
`formatMarkdown` silently drops all new detection fields.
frameworks, testing, documentation, designSystem, and crosscutting are never rendered when --output markdown is used. YAML and JSON outputs are complete; only the markdown path is missing these sections. Since the new detection work is central to this PR, markdown consumers will get a discovery report that looks like nothing new was added.
🔧 Proposed additions to `formatMarkdown`
  if (report.monorepo.detected) {
    lines.push(`## Monorepo`, ``, `Tools: ${report.monorepo.tools.join(', ')}`, ``);
  }

+ const fwEntries = Object.entries(report.frameworks).filter(([, v]) => v.length > 0);
+ if (fwEntries.length > 0) {
+   lines.push(`## Frameworks`, ``);
+   for (const [cat, names] of fwEntries) {
+     lines.push(`**${cat}:** ${names.join(', ')}`, ``);
+   }
+ }
+
+ if (report.testing.length > 0) {
+   lines.push(`## Testing`, ``, report.testing.map(t => `- ${t}`).join('\n'), ``);
+ }
+
+ if (report.documentation.length > 0) {
+   lines.push(`## Documentation Artifacts`, ``);
+   for (const d of report.documentation) {
+     lines.push(`- **${d.label}** (\`${d.path}\`)`);
+   }
+   lines.push(``);
+ }
+
+ if (report.designSystem.length > 0) {
+   lines.push(`## Design System`, ``, report.designSystem.map(d => `- ${d}`).join('\n'), ``);
+ }
+
+ const ccEntries = Object.entries(report.crosscutting).filter(([, v]) => Array.isArray(v) ? v.length > 0 : !!v);
+ if (ccEntries.length > 0) {
+   lines.push(`## Cross-Cutting Concerns`, ``);
+   for (const [concern, val] of ccEntries) {
+     const display = Array.isArray(val) ? val.join(', ') : val;
+     lines.push(`**${concern}:** ${display}`, ``);
+   }
+ }
+
  if (report.infrastructure.length > 0) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agentkit/engines/node/src/discover.mjs around lines 706 - 755, formatMarkdown currently omits newly added detection fields (frameworks, testing, documentation, designSystem, crosscutting) when rendering markdown; update the formatMarkdown function to include sections for these fields similar to how techStacks, infrastructure, and cicd are handled: add headers and list rendering for report.frameworks, report.testing, report.documentation, report.designSystem, and report.crosscutting (guarding empty arrays to skip empty sections), and ensure any arrays are mapped to `- ...` lines or comma-joined summaries as appropriate so the markdown output matches the YAML/JSON outputs.
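The guard-and-render pattern behind that proposed fix can be exercised standalone. A minimal sketch, assuming hypothetical report data shaped like the fields named above (this is a standalone illustration, not the real `discover.mjs` module):

```javascript
// Sketch of the "skip empty sections" rendering pattern from the proposed
// formatMarkdown additions. Field names mirror the report shape discussed
// above, but this is an illustration, not the actual module.
function renderSections(report) {
  const lines = [];
  const fwEntries = Object.entries(report.frameworks).filter(([, v]) => v.length > 0);
  if (fwEntries.length > 0) {
    lines.push('## Frameworks', '');
    for (const [cat, names] of fwEntries) {
      lines.push(`**${cat}:** ${names.join(', ')}`, '');
    }
  }
  if (report.testing.length > 0) {
    lines.push('## Testing', '', report.testing.map(t => `- ${t}`).join('\n'), '');
  }
  return lines.join('\n');
}

const md = renderSections({
  frameworks: { frontend: ['react'], backend: [] }, // empty category is skipped
  testing: ['vitest'],
});
console.log(md);
```

Note how the empty `backend` category produces no heading, so the markdown output stays in sync with the YAML/JSON outputs without emitting hollow sections.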
♻️ Duplicate comments (8)
docs/06_engineering/03_testing.md (2)
`3-3`: Same `pnpm -C agentkit` regeneration path issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/03_testing.md` at line 3, The file contains a duplicate regeneration comment "<!-- Regenerate: pnpm -C agentkit agentkit:sync -->"; remove the redundant comment so only a single regeneration directive remains (or consolidate any repeated directives into one). Locate the duplicate marker in the document and delete the extra instance, then run the regeneration command once if you need to refresh generated content to ensure the doc matches the source.
`9-9`: Same `**__TEMPLATE__**` MD050 issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/03_testing.md` at line 9, The line uses the literal marker "**__TEMPLATE__**" which triggers the MD050 lint rule; replace the nested emphasis/underscore placeholder with a valid, non-nested token such as bolding the word (e.g., **TEMPLATE**) or using inline code (e.g., `__TEMPLATE__`) and update any other occurrences of the exact string "**__TEMPLATE__**" in the docs/06_engineering/03_testing.md so the markdown linter no longer flags MD050.

docs/06_engineering/05_security.md (2)
`9-9`: Same `**__TEMPLATE__**` MD050 issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/05_security.md` at line 9, The line contains the placeholder "__TEMPLATE__" (wrapped with bold/italic markers) causing the MD050 template rule violation; replace "__TEMPLATE__" with the actual component/project name or remove the placeholder and its surrounding formatting so the sentence reads e.g. "Security guidelines and practices for <ActualName>." Update the token "__TEMPLATE__" wherever it appears (the bolded text in this file) to resolve the lint error.
`3-3`: Same `pnpm -C agentkit` regeneration path issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/05_security.md` at line 3, Remove the duplicate regeneration marker comment; locate the HTML comment string "<!-- Regenerate: pnpm -C agentkit agentkit:sync -->" in the file and keep a single occurrence (or update it to the correct command if it was mistaken), deleting the redundant duplicate so the regeneration instruction appears only once.

docs/01_product/04_personas.md (2)
`3-3`: Same `pnpm -C agentkit` regeneration path issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/04_personas.md` at line 3, The file contains a duplicate regeneration note "<!-- Regenerate: pnpm -C agentkit agentkit:sync -->" causing redundant comments; remove the duplicate comment so only a single regeneration instruction remains (search for the HTML comment string "<!-- Regenerate: pnpm -C agentkit agentkit:sync -->" in the document and delete any extra occurrences) and commit the cleaned markdown.
`9-9`: Same `**__TEMPLATE__**` MD050 issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/04_personas.md` at line 9, The markdown uses mixed code and emphasis markup for "__TEMPLATE__" (appearing as **__TEMPLATE__**), which triggers MD050; change the formatting to either plain bold (**TEMPLATE**) or inline code (`__TEMPLATE__`) consistently—locate the occurrence of __TEMPLATE__ in the Personas text and replace the combined **__TEMPLATE__** with one of those two clean formats so emphasis and code styles are not mixed.

docs/01_product/03_roadmap.md (1)
`3-3`: Same `pnpm -C agentkit` regeneration path issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/03_roadmap.md` at line 3, The file contains a duplicate regeneration comment "<!-- Regenerate: pnpm -C agentkit agentkit:sync -->"; remove the duplicate so there is only a single canonical regeneration instruction (or consolidate into one comment with clearer wording), ensuring any remaining comment accurately describes the command to run (pnpm -C agentkit agentkit:sync) and that no redundant copies remain.

docs/02_specs/03_api_spec.md (1)
`3-3`: Same `pnpm -C agentkit` regeneration path issue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02_specs/03_api_spec.md` at line 3, The markdown contains a duplicated regeneration instruction comment ("<!-- Regenerate: pnpm -C agentkit agentkit:sync -->") flagged by [duplicate_comment]; remove the duplicate so only one canonical regeneration comment remains (or consolidate into a single, correctly formatted comment), ensuring the remaining comment exactly matches that string so tooling picks it up; update any surrounding text if needed to avoid reintroducing duplicates.
🟡 Minor comments (25)
docs/05_operations/01_deployment.md-29-46 (1)
`29-46`: ⚠️ Potential issue | 🟡 Minor

HTML comment syntax inside bash fences will fail if copy-pasted.

Lines 31, 38, and 45 use `<!-- e.g. ... -->` as placeholder hints inside fenced bash code blocks. These render verbatim in the code block (not as HTML comments) and are not valid shell syntax: running the block would immediately error on `<!--`. Use `#` shell-comment lines for example hints instead:

🛠️ Proposed fix for bash fences
```diff
 # Build the application
-<!-- e.g. pnpm build -->
+# e.g. pnpm build

 # Run pre-deployment checks
-<!-- e.g. pnpm test -->
+# e.g. pnpm test

 # Deploy to target environment
-<!-- e.g. kubectl apply -f k8s/ -->
+# e.g. kubectl apply -f k8s/
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/05_operations/01_deployment.md` around lines 29 - 46, The bash fenced code blocks titled "Build the application", "Test", and "Deploy" contain HTML-style comments (`<!-- ... -->`) which are invalid shell syntax; replace those placeholder hints with shell comments using `#` (e.g., change `<!-- e.g. pnpm build -->` to `# e.g. pnpm build`) so the blocks can be copy-pasted and executed without syntax errors.

docs/05_operations/04_troubleshooting.md-13-22 (1)

`13-22`: ⚠️ Potential issue | 🟡 Minor

Same HTML-comment-in-bash-fence issue; mixed real-command + placeholder makes the block look runnable when it isn't.

Line 15 is a real executable command, but lines 18 and 21 use `<!-- ... -->` placeholder syntax. A reader copy-pasting the block to their terminal will get an error on `<!--`. Replace with `#` shell comments, and consider parameterising the hardcoded `localhost:3000` assumption.

🛠️ Proposed fix

```diff
 # Check application health
-curl -s http://localhost:3000/health | jq .
+curl -s http://localhost:${PORT:-3000}/health | jq .

 # Check logs for errors
-<!-- e.g. kubectl logs -l app=__TEMPLATE__ --tail=100 -->
+# e.g. kubectl logs -l app=__TEMPLATE__ --tail=100

 # Check resource utilisation
-<!-- e.g. kubectl top pods -l app=__TEMPLATE__ -->
+# e.g. kubectl top pods -l app=__TEMPLATE__
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/05_operations/04_troubleshooting.md` around lines 13 - 22, Replace the HTML comment placeholders inside the bash code fence with proper shell comments and make the hardcoded health URL configurable: change the curl line (`curl -s http://localhost:3000/health | jq .`) to use a PORT fallback variable (e.g. `${PORT:-3000}`) and replace the placeholder lines that use `<!-- ... -->` (the kubectl logs and kubectl top examples) with shell comments beginning with `#` so the entire block is safe to copy-paste into a terminal.

MIGRATIONS.md-59-59 (1)

`59-59`: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent regeneration command path.

The footer uses `pnpm -C agentkit ...`, but earlier references use `.agentkit`. Aligning prevents confusion and command failures.

📝 Suggested fix

```diff
-*This guide is maintained by AgentKit Forge. Run `pnpm -C agentkit agentkit:sync` to regenerate.*
+*This guide is maintained by AgentKit Forge. Run `pnpm -C .agentkit agentkit:sync` to regenerate.*
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@MIGRATIONS.md` at line 59, Update the footer command string to match earlier references by replacing the inconsistent "pnpm -C agentkit agentkit:sync" with "pnpm -C .agentkit agentkit:sync" in MIGRATIONS.md; locate the footer text containing the exact string "pnpm -C agentkit agentkit:sync" and change it to use the ".agentkit" directory (keeping the backticks/formatting intact) so all regeneration commands are consistent.

.github/instructions/docs.md-12-12 (1)
`12-12`: ⚠️ Potential issue | 🟡 Minor

The ADR code block contradicts the Markdown conventions it documents

Line 12 instructs "Use fenced code blocks with a language identifier" but the ADR template block at line 21 has no language tag — violating its own rule (MD040). Use `markdown` or `text` as the identifier.

🐛 Proposed fix
````diff
-```
+```markdown
 # ADR-NNNN: Title
````

Also applies to: 21-21
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/instructions/docs.md at line 12, The ADR template's fenced code block violates the documented rule (MD040) because it lacks a language identifier; update the ADR template block (the "ADR" template code block around line 21) to use a language tag such as "markdown" or "text" (e.g., change ``` to ```markdown) so the documentation and the example are consistent and pass linting.

docs/01_product/04_personas.md-24-24 (1)
`24-24`: ⚠️ Potential issue | 🟡 Minor

Duplicate `### Scenario` headings violate MD024 and create colliding URL anchors

Both persona sections use identical `### Scenario` headings. Beyond the linter warning, most Markdown renderers will deduplicate anchors (`#scenario` and `#scenario-1`), breaking any direct links to the second persona's scenario. Qualify each heading with the persona name.

✏️ Proposed fix (in template source)
```diff
-### Scenario
+### Scenario — Persona 1
 <!-- Describe a typical day / workflow for this persona. -->
 ...
-### Scenario
+### Scenario — Persona 2
```

Also applies to: 40-40
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/04_personas.md` at line 24, Replace the duplicate "### Scenario" headings so they are unique per persona: locate each "### Scenario" heading in the persona sections (the literal heading text "### Scenario") and append or prepend the persona name (e.g., "### Scenario — Product Manager" or "### Scenario: Data Scientist") to avoid MD024 and colliding anchors; update both occurrences mentioned in the review so each persona's scenario heading is qualified with the persona identifier.

docs/02_specs/03_api_spec.md-12-12 (1)
`12-12`: ⚠️ Potential issue | 🟡 Minor

`0.1.1` appears to be the AgentKit Forge version, not an API version placeholder

The `**Version:** 0.1.1` value on line 12 matches the generator version in line 1 (AgentKit Forge v0.1.1), suggesting it was interpolated from the tooling version rather than set as a meaningful default. A developer reading this template might mistakenly treat it as the API's semantic version. Replace with an explicit placeholder comment.

✏️ Proposed fix
```diff
-**Version:** 0.1.1
+**Version:** <!-- e.g. 1.0.0 -->
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02_specs/03_api_spec.md` at line 12, The "Version: 0.1.1" line is showing the generator/tool version instead of an API version placeholder; open the docs/02_specs/03_api_spec.md and replace the literal "0.1.1" after the "Version:" token with a clear placeholder (for example "Version: x.y.z" or "Version: <API_VERSION>") and add a brief inline comment if desired to indicate it's a template value so readers won't confuse it with the generator version.

docs/01_product/03_roadmap.md-9-9 (1)
`9-9`: ⚠️ Potential issue | 🟡 Minor

`**__TEMPLATE__**` triggers MD050 linter warnings across all generated files

Markdownlint sees `__TEMPLATE__` inside `**...**` as underscore-style bold (MD050 — Strong style, expected asterisk). This is a template-engine concern: the `__TEMPLATE__` variable is not being escaped for Markdown before output. The same pattern appears in `docs/06_engineering/03_testing.md`, `docs/06_engineering/05_security.md`, and `docs/01_product/04_personas.md`. If markdownlint runs as a CI gate it will fail on all four files.

The template source should escape the underscores or use a different placeholder format:
🐛 Proposed fix (in template source)
```diff
-Testing strategy and conventions for **__TEMPLATE__**.
+Testing strategy and conventions for **\_\_TEMPLATE\_\_**.
```

Or alternatively use a placeholder that doesn't contain underscores (e.g. `{{PROJECT_NAME}}`).
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/03_roadmap.md` at line 9, The MD050 linter is triggered because the template placeholder "__TEMPLATE__" is emitted verbatim inside bold markers; update the template source so placeholders are safe for Markdown by either escaping underscores (emit \_\_TEMPLATE\_\_) or replacing the placeholder with a non-underscore form (e.g., {{PROJECT_NAME}}), and then regenerate the affected docs (the templates that produce docs/01_product/03_roadmap.md, docs/06_engineering/03_testing.md, docs/06_engineering/05_security.md, docs/01_product/04_personas.md) to remove the MD050 warnings; ensure the change is made in the template variable definition and any rendering helper that inserts bold/strong formatting so future outputs do not contain raw underscores.docs/08_reference/03_changelog.md-3-3 (1)
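A minimal sketch of the escaping option discussed above, assuming a hypothetical `escapeMarkdown` helper in the sync pipeline (the function name and regex are illustrative, not existing project code):

```javascript
// Hypothetical helper: escape Markdown emphasis characters in a value
// before interpolating it into a generated .md file, so __TEMPLATE__
// renders literally instead of being parsed as underscore-style bold.
function escapeMarkdown(value) {
  return value.replace(/([\\`*_])/g, '\\$1');
}

// The escaped token no longer triggers MD050 inside **...**:
console.log(`Testing strategy and conventions for **${escapeMarkdown('__TEMPLATE__')}**.`);
```

Escaping at interpolation time keeps the placeholder format unchanged in templates while making every generated file lint-clean, which avoids touching each template individually.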
`3-3`: ⚠️ Potential issue | 🟡 Minor

Fix regeneration command: use `pnpm -C .agentkit` instead of `pnpm -C agentkit`

Line 2 declares the source as `.agentkit/spec`, but the regenerate command on line 3 uses `pnpm -C agentkit` (missing dot prefix). The correct form, shown in `.github/instructions/docs.md`, is `pnpm -C .agentkit agentkit:sync`. This same issue exists in `docs/01_product/03_roadmap.md`, `docs/06_engineering/03_testing.md`, and `docs/02_specs/03_api_spec.md`.

Proposed fix

```diff
-<!-- Regenerate: pnpm -C agentkit agentkit:sync -->
+<!-- Regenerate: pnpm -C .agentkit agentkit:sync -->
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/08_reference/03_changelog.md` at line 3, Update the regenerate command string to include the missing dot prefix so it targets the hidden .agentkit directory; replace occurrences of "pnpm -C agentkit agentkit:sync" with "pnpm -C .agentkit agentkit:sync" in the docs where the source is declared as ".agentkit/spec" (e.g., the string in docs/08_reference/03_changelog.md and the same command occurrences in docs/01_product/03_roadmap.md, docs/06_engineering/03_testing.md, and docs/02_specs/03_api_spec.md) so the command matches the reference in .github/instructions/docs.md.

docs/04_api/04_examples.md-13-29 (1)
`13-29`: ⚠️ Potential issue | 🟡 Minor

Avoid secret-scanner false positives in example auth headers.

Some scanners flag `Authorization: Bearer YOUR_TOKEN` as a secret. Using an environment variable placeholder reduces CI noise while keeping the example clear.

✅ Suggested fix
```diff
-curl -X GET https://api.example.com/v1/resource?limit=10 \
-  -H "Authorization: Bearer YOUR_TOKEN"
+curl -X GET https://api.example.com/v1/resource?limit=10 \
+  -H "Authorization: Bearer ${API_TOKEN}"

-curl -X POST https://api.example.com/v1/resource \
-  -H "Authorization: Bearer YOUR_TOKEN" \
+curl -X POST https://api.example.com/v1/resource \
+  -H "Authorization: Bearer ${API_TOKEN}" \

-curl -X PUT https://api.example.com/v1/resource/RESOURCE_ID \
-  -H "Authorization: Bearer YOUR_TOKEN" \
+curl -X PUT https://api.example.com/v1/resource/RESOURCE_ID \
+  -H "Authorization: Bearer ${API_TOKEN}" \

-curl -X DELETE https://api.example.com/v1/resource/RESOURCE_ID \
-  -H "Authorization: Bearer YOUR_TOKEN"
+curl -X DELETE https://api.example.com/v1/resource/RESOURCE_ID \
+  -H "Authorization: Bearer ${API_TOKEN}"
```

Also applies to: 34-39, 43-46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/04_api/04_examples.md` around lines 13 - 29, Replace the hardcoded "Authorization: Bearer YOUR_TOKEN" examples with an environment-variable placeholder to avoid secret-scanner false positives: update the Authorization header strings in the "Example 1: List Resources" and "Example 2: Create a Resource" blocks (and the other occurrences called out) to use an env var like $API_TOKEN or ${API_TOKEN} so the header reads e.g. Authorization: Bearer $API_TOKEN; update all identical header occurrences within these example sections.

CHANGELOG.md-13-30 (1)
`13-30`: ⚠️ Potential issue | 🟡 Minor

Add blank lines around headings to satisfy markdownlint (MD022).
This template currently violates the “blanks-around-headings” rule, which will fail markdownlint if enforced.
✅ Suggested fix
```markdown
## [Unreleased]

### Added

- Initial AgentKit Forge integration (v0.1.1)
- Multi-agent team framework with 10 teams
- 5-phase lifecycle orchestration model
- Support for Claude Code, Cursor, Windsurf, Copilot, and MCP/A2A
- Hook-based safety guardrails
- Automated quality gates and validation

### Changed

- (none yet)

### Fixed

- (none yet)

### Removed

- (none yet)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 13 - 30, Add blank lines before and after each Markdown heading to satisfy markdownlint MD022; specifically insert a blank line above and below "## [Unreleased]" and each subsection heading like "### Added", "### Changed", "### Fixed", and "### Removed" so there is an empty line separating headings from surrounding content.

docs/08_reference/04_contributing.md-22-25 (1)
`22-25`: ⚠️ Potential issue | 🟡 Minor

Use a consistent default branch name across contributor docs.

This doc says to branch from `develop`, while the root CONTRIBUTING.md instructs branching from `main`. Please align these to avoid conflicting guidance.
Verify each finding against the current code and only fix it if needed. In `@docs/08_reference/04_contributing.md` around lines 22 - 25, The contributing doc currently instructs creating a branch from `develop` which conflicts with the root CONTRIBUTING.md that uses `main`; update this file to use the same default branch name (`main`) by replacing the reference to `develop` with `main` (ensure the line "Create a feature branch from `develop`" becomes "Create a feature branch from `main`") and scan other contributor-facing docs to confirm all references consistently use `main`.

docs/06_engineering/02_coding_standards.md-32-38 (1)
`32-38`: ⚠️ Potential issue | 🟡 Minor

Add a language tag to the directory-structure code block (MD040).

Use `text` or `plaintext` to satisfy markdownlint.
````diff
-```
+```text
 src/
   modules/     # Feature modules
   shared/      # Shared utilities and types
   config/      # Configuration files
   __tests__/   # Test files
 ```
````
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/02_coding_standards.md` around lines 32 - 38, The code block showing the directory structure uses untagged triple backticks; update that block to add a language tag (e.g., ```text or ```plaintext) so markdownlint rule MD040 is satisfied — replace the opening ``` with ```text (or ```plaintext) for the block that contains "src/ modules/ shared/ config/ __tests__/".

CONTRIBUTING.md-46-52 (1)
`46-52`: ⚠️ Potential issue | 🟡 Minor

Add a fenced code language to satisfy markdownlint (MD040).

The commit template code block has no language tag, which will fail MD040 if markdownlint is enforced. Consider using `text` for the commit template.

Suggested fix
````diff
-```
+```text
 <type>(<scope>): <description>

 [optional body]

 [optional footer(s)]
 ```
````
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 46 - 52, Update the fenced code block that contains the commit message template (the block beginning with the text "<type>(<scope>): <description>") to include a language tag so markdownlint MD040 passes; change the opening fence from ``` to ```text and leave the rest of the block content unchanged (i.e., the commit template lines and closing ``` remain the same).

docs/06_engineering/04_git_workflow.md-23-26 (1)
`23-26`: ⚠️ Potential issue | 🟡 Minor

MD031 — code block inside list item needs surrounding blank lines.

Add a blank line before the opening `` ```bash `` to satisfy the linter.

🔧 Proposed fix
````diff
 1. **Create a branch** from `develop` (or `main` for hotfixes):
+
    ```bash
    git checkout -b feature/my-feature develop
    ```
````
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/04_git_workflow.md` around lines 23 - 26, The markdown list item contains a fenced code block without a blank line before the opening ```bash, triggering MD031; fix the issue in the list item that contains the `git checkout -b feature/my-feature develop` code by inserting a single blank line immediately before the opening triple-backtick fence (```bash) so the code block is separated from the preceding paragraph/list marker.

.agentkit/engines/node/src/discover.mjs-527-527 (1)
`527-527`: ⚠️ Potential issue | 🟡 Minor

`crosscutting.envConfig` is a string; all other `crosscutting` fields are `string[]` — type inconsistency.

The report shape initialized at line 527 has every `crosscutting` field as an array. Lines 639–643 add `envConfig` as a plain string (`'env-vars'` | `'config-files'`), or leave it `undefined`. Code that iterates `Object.entries(report.crosscutting)` (e.g., the proposed markdown fix above) must special-case it. Consider normalising:

🔧 Proposed normalisation

```diff
-    crosscutting: { logging: [], authentication: [], caching: [], errorHandling: [], apiPatterns: [], featureFlags: [] },
+    crosscutting: { logging: [], authentication: [], caching: [], errorHandling: [], apiPatterns: [], featureFlags: [], envConfig: [] },

 if (existsSync(resolve(projectRoot, '.env.example'))) {
-  report.crosscutting.envConfig = 'env-vars';
+  report.crosscutting.envConfig = ['env-vars'];
 } else if (existsSync(resolve(projectRoot, 'appsettings.json'))) {
-  report.crosscutting.envConfig = 'config-files';
+  report.crosscutting.envConfig = ['config-files'];
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agentkit/engines/node/src/discover.mjs at line 527, The crosscutting.envConfig field is a string while all other crosscutting fields are string[]; update the normalization so report.crosscutting.envConfig is always a string[]: either change the initial report shape in discover.mjs to initialize envConfig: [] and when assigning envConfig (the code that currently sets `'env-vars'` | `'config-files'`) wrap that value in an array, or ensure any consumer of report.crosscutting (e.g., code iterating Object.entries(report.crosscutting)) maps/concat envConfig into an array (convert undefined to [] and single string to [value]). Locate references to report.crosscutting and the envConfig assignment to apply this consistent array normalization.

docs/01_product/02_user_stories.md-30-30 (1)
`30-30`: ⚠️ Potential issue | 🟡 Minor

MD024 — duplicate `#### Acceptance Criteria` headings trigger lint warnings.

Both Story 1 and Story 2 use the same heading text. If MD024 is enforced in CI, disambiguate the headings in the template source, or add a per-file lint suppression.
🔧 Example disambiguation
```diff
-#### Acceptance Criteria
+#### Story 1: Acceptance Criteria
 ...
-#### Acceptance Criteria
+#### Story 2: Acceptance Criteria
```

Also applies to: 45-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/01_product/02_user_stories.md` at line 30, The duplicate "#### Acceptance Criteria" headings (seen in the Story 1 and Story 2 sections) trigger MD024; resolve by disambiguating those headings (e.g., rename to "#### Acceptance Criteria — Story 1" and "#### Acceptance Criteria — Story 2" or similar unique text) or add a per-file markdownlint suppression for MD024; update the two occurrences of the "#### Acceptance Criteria" heading in the document so each is unique or include the appropriate lint comment to suppress MD024.

AGENT_BACKLOG.md-76-134 (1)
`76-134`: ⚠️ Potential issue | 🟡 Minor

MD050 — bold style uses `**` but project lint config expects `__` (underscores).

Lines 76, 83, 86, 87, 91, 93, 94, 98, 133, and 134 all use `**bold**`. Since this is generated content, the fix belongs in the template source, not here.
Verify each finding against the current code and only fix it if needed. In `@AGENT_BACKLOG.md` around lines 76 - 134, The markdown linter (MD050) flags bold using **...**; update the template source to use underscore-based bold __...__ for all bolded tokens shown in the generated AGENT_BACKLOG.md (e.g. "Backlog", "Active Sprint", "Completed", "Blocked", "Priority Definitions", "Status Definitions", the phase progression heading and any inline bold phrases like the agenda rules) by replacing instances of "**...**" with "__...__" in the template that produces this file so regenerated content conforms to the project's MD050 rule.

AGENT_BACKLOG.md-129-131 (1)
`129-131`: ⚠️ Potential issue | 🟡 Minor

Missing language identifier on fenced code block (MD040).
The phase-progression flow block should declare a language to satisfy the linter and improve rendering.
🔧 Proposed fix
````diff
-```
+```text
 Discovery -> Planning -> Implementation -> Validation -> Ship
````
Verify each finding against the current code and only fix it if needed. In `@AGENT_BACKLOG.md` around lines 129 - 131, Add a language identifier to the fenced code block that currently contains "Discovery -> Planning -> Implementation -> Validation -> Ship" so the linter MD040 is satisfied and rendering improves; replace the triple-backtick fence with one that specifies a language (e.g., ```text) immediately before the line "Discovery -> Planning -> Implementation -> Validation -> Ship" and keep the closing triple-backticks unchanged.

docs/06_engineering/04_git_workflow.md-40-46 (1)
`40-46`: ⚠️ Potential issue | 🟡 Minor

MD040 — commit message format block is missing a language specifier.
🔧 Proposed fix
````diff
-```
+```text
 <type>(<scope>): <description>
````
Verify each finding against the current code and only fix it if needed. In `@docs/06_engineering/04_git_workflow.md` around lines 40 - 46, The fenced code block showing the commit message template is missing a language specifier; update the opening fence for the block that contains "<type>(<scope>): <description>" to use a language tag (e.g., change the opening "```" to "```text") so the block is recognized as plain text by MD linters and renderers; locate the commit message example in the docs/06_engineering/04_git_workflow.md content (the block containing "<type>(<scope>): <description>") and add the "text" language specifier to the opening fence.

AGENT_BACKLOG.md-3-3 (1)
`3-3`: ⚠️ Potential issue | 🟡 Minor

Inconsistent `Regenerate` path: `agentkit` vs `.agentkit`.

The docs/ files (including this one) embed `pnpm -C agentkit agentkit:sync` at line 3, while `.github/instructions/` files embed `pnpm -C .agentkit agentkit:sync`. The actual directory is `.agentkit/`, making the dot-prefixed path correct. The docs/ template is generating the wrong regenerate command.
Verify each finding against the current code and only fix it if needed. In `@AGENT_BACKLOG.md` at line 3, The regenerate command in the docs uses the wrong path string "pnpm -C agentkit agentkit:sync" (missing the leading dot); update the generator/template that emits this line so it uses the correct directory ".agentkit" by replacing that exact string with "pnpm -C .agentkit agentkit:sync" across generated docs (e.g., AGENT_BACKLOG.md) and any template or script that produces it to ensure consistency with the .github/instructions/ variants.docs/README.md-5-7 (1)
`5-7`: ⚠️ Potential issue | 🟡 Minor

`__TEMPLATE__` placeholder is unreplaced in heading and body text.

Line 34 (Conventions) states that `__TEMPLATE__` tokens "are replaced at sync time," but the generated file still contains the literal token in the `h1` (line 5) and the opening paragraph (line 7). The same issue is present in `docs/02_specs/01_functional_spec.md` line 11 and `docs/02_specs/02_technical_spec.md` line 11 (`**Project:** __TEMPLATE__`), which suggests the template processor either did not receive a project name or did not run.
Verify each finding against the current code and only fix it if needed. In `@docs/README.md` around lines 5 - 7, The heading and body still contain the literal placeholder "__TEMPLATE__" (appearing in the h1 and opening paragraph) instead of the project name; update occurrences of the "__TEMPLATE__" token in README.md, 01_functional_spec.md and 02_technical_spec.md (the h1/opening paragraph and the "**Project:** __TEMPLATE__" lines) to the actual project name or ensure the template renderer is run with the project-name variable supplied; if using a sync/build step, verify the template processor invocation and the variable (e.g., PROJECT_NAME) is passed so the token is replaced automatically.docs/04_api/03_authentication.md-17-19 (1)
`17-19`: ⚠️ Potential issue | 🟡 Minor

Fenced code block is missing a language identifier (MD040).

The block showing the `Authorization` header has no language tag, which violates markdownlint MD040. Adding `http` or `text` resolves the warning.

🔧 Proposed fix
````diff
-```
+```http
 Authorization: Bearer YOUR_ACCESS_TOKEN
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/04_api/03_authentication.md` around lines 17 - 19, The fenced code block showing the Authorization header is missing a language tag (MD040); update the block around the Authorization example (the `Authorization: Bearer YOUR_ACCESS_TOKEN` snippet) to include a language identifier such as http or text (e.g., `` ```http ``) so markdownlint stops complaining and the header renders with correct syntax highlighting.

docs/07_integrations/03_sdk.md-22-47 (1)

`22-47`: ⚠️ Potential issue | 🟡 Minor

HTML comment placeholders inside code blocks produce invalid syntax if copy-pasted.

`npm install <!-- @org/sdk -->`, `pip install <!-- org-sdk -->`, and `import { Client } from '<!-- @org/sdk -->';` are all syntactically invalid in their respective languages. A reader who copies these commands verbatim will get a shell error or a TypeScript parse failure. Prefer an explicit placeholder convention that signals "replace this" without producing broken syntax, e.g., `<org-sdk>` or `YOUR_SDK_PACKAGE`.

🔧 Suggested placeholder convention

```diff
-npm install <!-- `@org/sdk` -->
+npm install <@org/sdk>

-pnpm add <!-- `@org/sdk` -->
+pnpm add <@org/sdk>
```

```diff
-pip install <!-- org-sdk -->
+pip install <org-sdk>
```

```diff
-import { Client } from '<!-- `@org/sdk` -->';
+import { Client } from '<@org/sdk>';
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/07_integrations/03_sdk.md` around lines 22 - 47, Replace the HTML-comment package placeholders which break syntax—specifically the npm/pnpm lines using "<!-- `@org/sdk` -->", the pip line using "<!-- org-sdk -->", and the TypeScript import "import { Client } from '<!-- `@org/sdk` -->';"—with a safe, copy-pasteable placeholder (e.g., YOUR_SDK_PACKAGE or <org-sdk>) for each language; update the docs so shell installs read like npm install YOUR_SDK_PACKAGE / pip install YOUR_SDK_PACKAGE and the TypeScript import reads import { Client } from 'YOUR_SDK_PACKAGE' (or 'org-sdk') so examples are valid when copied.

docs/04_api/01_overview.md-25-31 (1)

`25-31`: ⚠️ Potential issue | 🟡 Minor

Gitleaks false positive on `Authorization: Bearer YOUR_TOKEN` may block CI.

`YOUR_TOKEN`, `YOUR_ID`, and `YOUR_SECRET` are clearly placeholder strings, not real credentials, so Gitleaks rule `curl-auth-header` is misfiring here. If the pipeline fails on high-severity secret-scan findings, add an inline suppression on the relevant line to avoid unnecessary CI failures.

🔧 Suggested suppression

```diff
 curl https://api.example.com/v1/resource \
-  -H "Authorization: Bearer YOUR_TOKEN"
+  -H "Authorization: Bearer YOUR_TOKEN" # gitleaks:allow
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/04_api/01_overview.md` around lines 25 - 31, Add an inline Gitleaks suppression on the curl Authorization header line that contains the placeholder "Authorization: Bearer YOUR_TOKEN" so the `curl-auth-header` rule is ignored for this documented example; update the docs block containing the POST token call and the GET resource curl command (the lines with "client_id", "client_secret" and the "Authorization: Bearer YOUR_TOKEN" placeholder) to include the appropriate gitleaks inline suppression comment for your repo's config so the placeholder strings are not treated as secrets.

docs/07_integrations/02_webhooks.md-42-44 (1)

`42-44`: ⚠️ Potential issue | 🟡 Minor

Add a language specifier to the fenced code block.

The code block lacks a language identifier, which triggers a markdownlint warning (MD040). Since this is a pseudocode formula, use a generic specifier like `text` or `plaintext`.

📝 Suggested fix

````diff
-```
+```text
 HMAC-SHA256(webhook_secret, raw_body) == X-Signature header
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/07_integrations/02_webhooks.md` around lines 42 - 44, The fenced code block containing the pseudocode expression "HMAC-SHA256(webhook_secret, raw_body) == X-Signature header" is missing a language specifier and triggers markdownlint MD040; update that fenced block to include a generic language tag such as `text` or `plaintext` (e.g., change ``` to ```text) so the block becomes a labeled plaintext code fence while leaving the expression itself unchanged.

🧹 Nitpick comments (8)

CONTRIBUTING.md (1)

`32-36`: Align strong emphasis style with markdownlint config (MD050).

markdownlint expects underscores for strong emphasis but the doc uses asterisks. Either switch to underscores or update the lint rule.

Also applies to: 54-56

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 32 - 36, The document uses asterisks for strong emphasis (e.g., "**Discovery**", "**Planning**", "**Implementation**", "**Validation**", "**Ship**") which conflicts with markdownlint MD050 expecting underscores; either replace the asterisks with underscores for all strong emphasis occurrences (including the similar block at the other location referenced for lines 54-56) so headings read _Discovery_, _Planning_, etc., or update the markdownlint rule to allow asterisks—pick one approach and apply it consistently across CONTRIBUTING.md to resolve the lint warning.
``` </details> </blockquote></details> <details> <summary>.agentkit/engines/node/src/discover.mjs (2)</summary><blockquote> `503-505`: **`fileExt` detection uses `depth=2`, which misses common component-tree paths.** `countFilesByExt(projectRoot, [d.fileExt], 2, 5)` only descends two levels. A path like `src/components/Button/Button.scss` (depth 3) is invisible to the detector. The default depth used elsewhere in the file is 4. <details> <summary>♻️ Proposed fix</summary> ```diff - if (countFilesByExt(projectRoot, [d.fileExt], 2, 5) > 0) matched = true; + if (countFilesByExt(projectRoot, [d.fileExt], 4, 5) > 0) matched = true; ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In @.agentkit/engines/node/src/discover.mjs around lines 503 - 505, The file extension detection can miss deeper component paths because countFilesByExt is called with depth=2; update the call where matched is set (the block that checks d.fileExt) to use the standard/default depth of 4 (i.e., call countFilesByExt(projectRoot, [d.fileExt], 4, 5)) so files like src/components/Button/Button.scss are detected; modify the invocation referencing matched and d.fileExt to pass depth=4 to countFilesByExt. ``` </details> --- `359-381`: **`getCargoContent`/`getGemfileContent`/`getPomContent` only read the root file — inconsistent with `getCsprojContent`.** `getCsprojContent` walks subdirectories up to depth 3, so NestJS/ASP.NET projects with nested `.csproj` files are handled. The three root-only readers mean frameworks detected via `cargoRefs` / `gemfileRefs` / `pomRefs` (e.g., `axum`, `rails`, `spring-boot`) will be missed in Cargo workspaces, Rails engines, or Maven multi-module projects. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In @.agentkit/engines/node/src/discover.mjs around lines 359 - 381, getCargoContent, getGemfileContent and getPomContent only read a single root file which misses nested files; change each to mirror getCsprojContent’s behavior by walking the projectRoot tree to a max depth of 3 and collecting contents of all matching filenames (Cargo.toml, Gemfile, pom.xml respectively), then return the combined content (or empty string if none). Locate and update the functions getCargoContent, getGemfileContent and getPomContent to perform a shallow recursive search (depth 3), read each found file with readFileSync, and concatenate/return their contents instead of only reading the root file. ``` </details> </blockquote></details> <details> <summary>.agentkit/engines/node/src/__tests__/spec-validator.test.mjs (1)</summary><blockquote> `323-329`: **`testing.coverage` upper-boundary at 101 is not explicitly tested.** The suite covers `0` ✓, `100` ✓, `150` (invalid), and `-1` (invalid). Adding `101` would pin the exact upper fence and protect against an off-by-one if the validator ever changes `> 100` to `>= 100`. <details> <summary>🔧 Suggested addition</summary> ```diff expect(validateProjectYaml({ testing: { coverage: 0 } }).errors).toEqual([]); expect(validateProjectYaml({ testing: { coverage: 100 } }).errors).toEqual([]); const { errors: neg } = validateProjectYaml({ testing: { coverage: -1 } }); expect(neg.some(e => e.includes('coverage'))).toBe(true); + +const { errors: over } = validateProjectYaml({ testing: { coverage: 101 } }); +expect(over.some(e => e.includes('coverage'))).toBe(true); ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In @.agentkit/engines/node/src/__tests__/spec-validator.test.mjs around lines 323 - 329, The test for testing.coverage is missing an explicit check for the first invalid value above the allowed upper bound; update the spec-validator.test (the 'validates testing.coverage boundary values' test) to assert that validateProjectYaml({ testing: { coverage: 101 } }) returns an error mentioning 'coverage' (similar to the existing -1 check), so add a case that captures errors from validateProjectYaml with coverage: 101 and expect that errors.some(e => e.includes('coverage')) is true; use the same pattern/variable naming as the existing neg case to keep consistency. ``` </details> </blockquote></details> <details> <summary>.agentkit/engines/node/src/sync.mjs (1)</summary><blockquote> `106-129`: **`{{#each}}` blocks don't support nesting.** The non-greedy `[\s\S]*?` in the regex means nested `{{#each}}` would match incorrectly (inner `{{/each}}` closes the outer block). This is fine for the current use case but worth noting if templates grow more complex. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In @.agentkit/engines/node/src/sync.mjs around lines 106 - 129, The current resolveEachBlocks function uses eachRegex (/\{\{#each\s+([a-zA-Z_][a-zA-Z0-9_]*)\}\}([\s\S]*?)\{\{\/each\}\}/g) which is non-greedy and cannot handle nested {{#each}} blocks; update resolveEachBlocks to parse template with a stack-based or iterative scanner: find each opening tag "{{#each varName}}", then walk forward counting nested "{{#each" and matching "{{/each}}" until the corresponding close is found, extract the inner body for that specific pair and process it (preserving the existing item/index replacement logic), and repeat until no top-level each blocks remain so nested loops are handled correctly without relying on the current non-greedy regex.
``` </details> </blockquote></details> <details> <summary>docs/07_integrations/02_webhooks.md (1)</summary><blockquote> `36-38`: **Minor grammar nit: "A `X-Signature`" → "An `X-Signature`".** "X" is pronounced with a vowel sound ("ex"), so the article should be "An". <details> <summary>📝 Suggested fix</summary> ```diff -- A `X-Signature` header is included for payload verification. +- An `X-Signature` header is included for payload verification. ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@docs/07_integrations/02_webhooks.md` around lines 36 - 38, Update the article before the header name from "A `X-Signature` header is included for payload verification." to "An `X-Signature` header is included for payload verification." — locate the sentence containing the `X-Signature` header text in the webhooks doc (the line starting with "A `X-Signature` header") and change the indefinite article to "An" to correct the grammar. ``` </details> </blockquote></details> <details> <summary>.agentkit/engines/node/src/spec-validator.mjs (1)</summary><blockquote> `200-207`: **`checkEnum` will throw if `enumName` is not in `PROJECT_ENUMS`.** If `PROJECT_ENUMS[enumName]` returns `undefined`, the `.includes()` call on Line 204 will throw a `TypeError`. All current call sites use valid keys, but a defensive guard would prevent future breakage. <details> <summary>🛡️ Suggested defensive fix</summary> ```diff function checkEnum(value, fieldPath, enumName) { if (value === null || value === undefined || value === '') return; const allowed = PROJECT_ENUMS[enumName]; + if (!allowed) return; // unknown enum — skip if (!allowed.includes(value)) { errors.push(`project.yaml: ${fieldPath} must be one of [${allowed.join(', ')}], got "${value}"`); } } ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In @.agentkit/engines/node/src/spec-validator.mjs around lines 200 - 207, The checkEnum function will throw if PROJECT_ENUMS[enumName] is undefined; change checkEnum to guard the lookup by verifying PROJECT_ENUMS[enumName] is an array (e.g., using Array.isArray) before calling .includes(), and if it's missing either return silently or push a clear validation error (use the existing errors array) indicating an unknown enum name; this touches function checkEnum and the PROJECT_ENUMS lookup so add the defensive guard around the allowed variable and handle the non-array case before calling allowed.includes(value). ``` </details> </blockquote></details> <details> <summary>.agentkit/engines/node/src/__tests__/sync.test.mjs (1)</summary><blockquote> `624-663`: **Consider testing invalid target names in `--only`.** The `resolveRenderTargets` tests don't cover the case where `--only` contains unrecognized target names (e.g., `--only foobar`). Currently the implementation silently accepts any string, so a typo like `--only claud` would produce no output for Claude without any warning. Consider whether validation against `ALL_RENDER_TARGETS` should be added (either here or in the implementation). <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In @.agentkit/engines/node/src/__tests__/sync.test.mjs around lines 624 - 663, The tests for resolveRenderTargets miss the case where the --only string contains unrecognized target names; update the implementation of resolveRenderTargets to validate parsed tokens against ALL_RENDER_TARGETS (i.e., after splitting and trimming the only string, filter tokens: token => ALL_RENDER_TARGETS.includes(token)) and decide whether to silently ignore invalid names or emit a warning; then add a unit test for resolveRenderTargets that passes an --only value with invalid names (e.g., "claude,foobar,,cursor") and asserts that the returned Set contains only the valid targets (['claude','cursor']) and that empty/invalid tokens are filtered out (and add a separate test if you choose to assert a warning/exception behavior). ``` </details> </blockquote></details> </blockquote></details> --- <details> <summary>ℹ️ Review info</summary> **Configuration used**: Organization UI **Review profile**: CHILL **Plan**: Pro <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between ce42591f0802d0662bdc17975681d1db35e6d5c3 and 42616e3d65381370115d74e8d0558fc7c8e4b0bb. 
</details> <details> <summary>⛔ Files ignored due to path filters (1)</summary> * `.agentkit/package-lock.json` is excluded by `!**/package-lock.json` </details> <details> <summary>📒 Files selected for processing (61)</summary> * `.agentkit/.gitignore` * `.agentkit/engines/node/src/__tests__/spec-validator.test.mjs` * `.agentkit/engines/node/src/__tests__/sync.test.mjs` * `.agentkit/engines/node/src/cli.mjs` * `.agentkit/engines/node/src/discover.mjs` * `.agentkit/engines/node/src/spec-validator.mjs` * `.agentkit/engines/node/src/sync.mjs` * `.agentkit/engines/node/src/validate.mjs` * `.agentkit/spec/project.yaml` * `.editorconfig` * `.github/ISSUE_TEMPLATE/bug_report.md` * `.github/PULL_REQUEST_TEMPLATE.md` * `.github/copilot-instructions.md` * `.github/instructions/README.md` * `.github/instructions/docs.md` * `.github/instructions/marketing.md` * `.github/instructions/rust.md` * `.gitignore` * `.markdownlint.json` * `.prettierrc` * `.vscode/extensions.json` * `.vscode/settings.json` * `AGENT_BACKLOG.md` * `CHANGELOG.md` * `CONTRIBUTING.md` * `MIGRATIONS.md` * `SECURITY.md` * `docs/01_product/01_prd.md` * `docs/01_product/02_user_stories.md` * `docs/01_product/03_roadmap.md` * `docs/01_product/04_personas.md` * `docs/02_specs/01_functional_spec.md` * `docs/02_specs/02_technical_spec.md` * `docs/02_specs/03_api_spec.md` * `docs/02_specs/04_data_models.md` * `docs/03_architecture/01_overview.md` * `docs/03_architecture/02_diagrams/.gitkeep` * `docs/03_architecture/03_decisions/0001-adopt-agentkit-forge.md` * `docs/04_api/01_overview.md` * `docs/04_api/02_endpoints.md` * `docs/04_api/03_authentication.md` * `docs/04_api/04_examples.md` * `docs/05_operations/01_deployment.md` * `docs/05_operations/02_monitoring.md` * `docs/05_operations/03_incident_response.md` * `docs/05_operations/04_troubleshooting.md` * `docs/06_engineering/01_setup.md` * `docs/06_engineering/02_coding_standards.md` * `docs/06_engineering/03_testing.md` * `docs/06_engineering/04_git_workflow.md` 
* `docs/06_engineering/05_security.md` * `docs/07_integrations/01_external_apis.md` * `docs/07_integrations/02_webhooks.md` * `docs/07_integrations/03_sdk.md` * `docs/08_reference/01_glossary.md` * `docs/08_reference/02_faq.md` * `docs/08_reference/03_changelog.md` * `docs/08_reference/04_contributing.md` * `docs/08_reference/ai_handoffs/.gitkeep` * `docs/README.md` * `docs/prd/.gitkeep` </details> <details> <summary>💤 Files with no reviewable changes (1)</summary> * .agentkit/engines/node/src/validate.mjs </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
```js
function resolveRenderTargets(overlayTargets, flags) {
  // --only flag overrides everything
  if (flags?.only) {
    const onlyTargets = String(flags.only).split(',').map(t => t.trim()).filter(Boolean);
    return new Set(onlyTargets);
  }
  // Overlay renderTargets
  if (Array.isArray(overlayTargets) && overlayTargets.length > 0) {
    return new Set(overlayTargets);
  }
  // Default: generate everything (backward compatibility)
  return new Set(ALL_RENDER_TARGETS);
}
```
🛠️ Refactor suggestion | 🟠 Major
`--only` flag accepts arbitrary target names without validation.
`resolveRenderTargets` does not validate that values in `--only` are members of `ALL_RENDER_TARGETS`. A typo like `--only claud` silently produces no matching output. Consider warning on unrecognized targets.
🛡️ Suggested validation

```diff
 function resolveRenderTargets(overlayTargets, flags) {
   // --only flag overrides everything
   if (flags?.only) {
     const onlyTargets = String(flags.only).split(',').map(t => t.trim()).filter(Boolean);
+    const unknown = onlyTargets.filter(t => !ALL_RENDER_TARGETS.includes(t));
+    if (unknown.length > 0) {
+      console.warn(`[agentkit:sync] Warning: unknown render target(s): ${unknown.join(', ')}. Valid: ${ALL_RENDER_TARGETS.join(', ')}`);
+    }
     return new Set(onlyTargets);
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
function resolveRenderTargets(overlayTargets, flags) {
  // --only flag overrides everything
  if (flags?.only) {
    const onlyTargets = String(flags.only).split(',').map(t => t.trim()).filter(Boolean);
    const unknown = onlyTargets.filter(t => !ALL_RENDER_TARGETS.includes(t));
    if (unknown.length > 0) {
      console.warn(`[agentkit:sync] Warning: unknown render target(s): ${unknown.join(', ')}. Valid: ${ALL_RENDER_TARGETS.join(', ')}`);
    }
    return new Set(onlyTargets);
  }
  // Overlay renderTargets
  if (Array.isArray(overlayTargets) && overlayTargets.length > 0) {
    return new Set(overlayTargets);
  }
  // Default: generate everything (backward compatibility)
  return new Set(ALL_RENDER_TARGETS);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agentkit/engines/node/src/sync.mjs around lines 650 - 662, The --only
parsing currently trusts flags.only blindly; update resolveRenderTargets to
validate the split-onlyTargets against the canonical ALL_RENDER_TARGETS: build a
Set of allowed names from ALL_RENDER_TARGETS, filter onlyTargets to produce
validTargets, and log any invalid/unrecognized entries (use the module logger if
available or console.warn) so typos like "claud" are reported; return a Set of
validTargets (and decide fallback behavior: if validTargets is empty either
return an empty Set or fall back to ALL_RENDER_TARGETS per existing
backward-compatibility needs) and ensure the function still returns a Set in all
paths.
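For readers following along outside the diff view, here is a minimal standalone sketch of the validated `--only` parsing discussed above. `ALL_RENDER_TARGETS` is an illustrative subset here, not the project's actual list:

```javascript
// Illustrative subset — the real list lives in sync.mjs.
const ALL_RENDER_TARGETS = ['claude', 'cursor', 'copilot', 'windsurf'];

function resolveRenderTargets(overlayTargets, flags) {
  // --only flag overrides everything
  if (flags?.only) {
    const onlyTargets = String(flags.only).split(',').map(t => t.trim()).filter(Boolean);
    const unknown = onlyTargets.filter(t => !ALL_RENDER_TARGETS.includes(t));
    if (unknown.length > 0) {
      console.warn(`[agentkit:sync] Warning: unknown render target(s): ${unknown.join(', ')}`);
    }
    return new Set(onlyTargets);
  }
  // Overlay renderTargets
  if (Array.isArray(overlayTargets) && overlayTargets.length > 0) {
    return new Set(overlayTargets);
  }
  // Default: generate everything (backward compatibility)
  return new Set(ALL_RENDER_TARGETS);
}

console.log([...resolveRenderTargets(null, { only: 'claude,cursor' })]); // ['claude', 'cursor']
console.log([...resolveRenderTargets(['copilot'], {})]);                 // ['copilot']
// A typo still passes through (producing no output downstream), but now warns on stderr:
console.log([...resolveRenderTargets(null, { only: 'claud' })]);         // ['claud']
```

Whether an unknown target should warn, throw, or fall back to the full set is a policy choice; the sketch follows the warn-and-continue behavior proposed in the review.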
Resolve merge conflicts in sync.mjs, spec-validator.test.mjs, and sync.test.mjs by taking the feature branch versions which have more comprehensive test coverage and better code structure. Co-Authored-By: Warp <agent@warp.dev>
- Fix regeneration command path: `pnpm -C agentkit` → `pnpm -C .agentkit` (35 files)
- Fix **__TEMPLATE__** → `__TEMPLATE__` to resolve MD050 lint warnings (11 files)
- Fix docs/README.md: correct navigation table paths and Quick Links to match actual directory layout
- Fix docs/README.md: replace unreplaced __TEMPLATE__ placeholder in heading
- Fix duplicate Scenario headings in personas (MD024)
- Fix duplicate Acceptance Criteria headings in user stories (MD024)
- Fix HTML comment placeholders inside bash code blocks in deployment guide
- Add language tags to fenced code blocks (MD040): ADR template, coding standards, git workflow, authentication, AGENT_BACKLOG, CONTRIBUTING
- Fix MD031: add blank line before code block inside list item in git workflow
- Fix CHANGELOG.md heading spacing (MD022)
- Fix default branch inconsistency: 'develop' → 'main' in contributing docs
- Fix API spec version placeholder (was using generator version 0.1.1)
- Fix secret-scanner false positives: YOUR_TOKEN → $API_TOKEN in API examples
- Fix path traversal vulnerability in sync.mjs stale file cleanup
- Add --only flag validation with warning for unrecognized render targets
- Fix regeneration command in root CONTRIBUTING.md

Co-Authored-By: Warp <agent@warp.dev>
Merge phase1b into main: resolve conflicts + address review comments
```js
// File was in previous sync but not in this one — it's orphaned
const orphanPath = resolve(projectRoot, prevFile);
// Path traversal protection: ensure orphan path stays within project root
if (!orphanPath.startsWith(resolvedRoot) && orphanPath !== resolve(projectRoot)) {
```
Path traversal protection logic has a flaw. The bare prefix test `orphanPath.startsWith(resolvedRoot)` is too loose: a sibling path such as `/project-evil/file` begins with `/project`, so it is treated as inside the root, and the extra `orphanPath !== resolve(projectRoot)` conjunct does nothing to close that gap.
The simpler and more correct approach is:
if (!orphanPath.startsWith(resolvedRoot + path.sep))
This ensures the orphan path is strictly within the project root directory — the trailing separator rules out both the root itself and prefix-sharing siblings.
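As a self-contained illustration of the separator-suffixed containment check (function and variable names here are hypothetical, chosen to mirror the snippet above):

```javascript
import path from 'node:path';

// Returns true only when candidate resolves strictly inside root:
// never the root itself, never a sibling that merely shares root as a
// string prefix (e.g. /project-evil vs /project), never an outside path.
function isInsideRoot(root, candidate) {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(candidate);
  return resolved.startsWith(resolvedRoot + path.sep);
}

console.log(isInsideRoot('/project', '/project/docs/a.md')); // true
console.log(isInsideRoot('/project', '/project'));           // false (root itself)
console.log(isInsideRoot('/project', '/project-evil/a.md')); // false (prefix sibling)
console.log(isInsideRoot('/project', '/etc/passwd'));        // false (outside root)
```

Note that `path.resolve` also normalizes `..` segments, so `resolve(projectRoot, '../etc/passwd')` collapses before the prefix test runs — the check and the resolution work together.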
```js
for (const m of afterEq.matchAll(/["']([a-zA-Z0-9_-]+)["']/g)) {
  deps.add(m[1].toLowerCase());
}
```
The Python dependency parsing in getPythonDeps() has a potential issue with the regex pattern used to extract package names. The pattern /["']([a-zA-Z0-9_-]+)["']/g only matches package names that consist solely of alphanumeric characters, underscores, and hyphens. However, this will miss packages with dots in their names (e.g., 'zope.interface', 'google.cloud.storage').
Python package names can contain dots as separators for namespaced packages. The regex should be updated to:
/["']([a-zA-Z0-9_.-]+)["']/g
Note: Adding the dot to the character class.
```js
for (const m of arrayPart.matchAll(/["']([a-zA-Z0-9_-]+)["']/g)) {
  deps.add(m[1].toLowerCase());
```
The same issue exists in these lines where the regex /["']([a-zA-Z0-9_-]+)["']/g is used to extract package names. Python packages can have dots in their names (e.g., 'google.cloud.storage', 'zope.interface'), which this pattern will not match correctly.
The regex should be updated to include the dot character:
/["']([a-zA-Z0-9_.-]+)["']/g
```js
for (const m of line.matchAll(/["']([a-zA-Z0-9_-]+)["']/g)) {
  deps.add(m[1].toLowerCase());
```
Same issue here - the regex pattern needs to include dots to match Python package names with namespaces.
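A quick sketch of the widened pattern these three comments propose — the character class gains a literal dot, so dotted (namespaced) package names survive extraction. The sample line is illustrative, not taken from the repo:

```javascript
// PEP 508 allows letters, digits, '.', '-', and '_' in package names,
// so the dot must be inside the character class.
const DEP_NAME = /["']([a-zA-Z0-9_.-]+)["']/g;

const line = `dependencies = ["requests", "zope.interface", "google-cloud-storage"]`;
const deps = new Set();
for (const m of line.matchAll(DEP_NAME)) {
  deps.add(m[1].toLowerCase()); // normalize case, dedupe via Set
}
console.log([...deps]); // ['requests', 'zope.interface', 'google-cloud-storage']
```

With the original pattern (no dot in the class), `zope.interface` would be captured as two fragments at best; with the widened class it comes through whole.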
Phase 1 Wave 1B: Discovery & Gating
Depends on: #11 (Wave 1A)
Changes
Test Results
All 164 tests pass.
Co-Authored-By: Warp <agent@warp.dev>
Summary by CodeRabbit
New Features
Documentation
Chores