feat: add repository-based skill suggestions #272
Conversation
⚠️ Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 1 minute and 33 seconds.
⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the
We recommend that you space out your commits to avoid hitting the rate limit.
🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID:
📒 Files selected for processing (7)
📝 Walkthrough
Adds a repo-aware skill suggestion flow: technology detection, catalog-based recommendation, and an optional guided install path.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
actor User
participant CLI as "agentsync CLI"
participant Service as "SuggestionService"
participant Detector as "FileSystemRepoDetector"
participant Catalog as "ResolvedSkillCatalog"
participant Registry as "installed registry (.agents/skills/registry.json)"
participant Installer as "install primitives / Provider"
User->>CLI: run `agentsync skill suggest [--json] [--install [--all]]`
CLI->>Service: suggest_with(project_root, detector?, provider?)
Service->>Detector: detect(project_root)
Detector-->>Service: detections
Service->>Catalog: load_catalog(provider?)
Catalog-->>Service: catalog rules
Service->>Service: recommend_skills(detections, catalog)
Service->>Registry: read_installed_skill_states()
Registry-->>Service: installed states
Service-->>CLI: SuggestResponse (JSON or human)
alt --install flag present
CLI->>CLI: ensure TTY (unless --all)
CLI->>User: prompt selection (Interactive) or proceed (--all)
User-->>CLI: selected IDs
CLI->>Service: install_selected_with(selected_ids, mode)
Service->>Registry: re-read registry
Service->>Installer: provider.resolve + install_fn per skill
Installer-->>Service: install results
Service-->>CLI: SuggestInstallJsonResponse (results, mode, selected_skill_ids)
end
CLI-->>User: output
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 14
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/design.md`:
- Around line 267-281: Update the design doc to match the implemented CLI
behavior for "agentsync skill suggest": replace references to the flags
"--interactive" and "--install-all" with the actual flags implemented
("--install" and "--install --all"), update the example usages to show
"agentsync skill suggest --install" and "agentsync skill suggest --install
--all", and adjust the rules text so that "--install" is described as the
TTY/interactive flow (only shows not-yet-installed suggestions) and "--install
--all" is described as the non-interactive/CI-friendly batch install path;
ensure all mentions of the install pipeline reuse note remain intact and that
"plain skill suggest" remains read-only.
- Around line 138-149: The design doc's TechnologyId enum is out of sync with
the implementation: update the design to match the code (or vice versa).
Specifically, reconcile the enum variants and serde renaming so they match the
implementation's TechnologyId (which currently uses the combined variant
NodeTypeScript and snake_case renames in src/skills/suggest.rs) or change the
implementation to use separate Node and TypeScript variants with kebab-case
serde renames as shown in the design; ensure the final document and the code
reference the same variant names (TechnologyId::Node, TechnologyId::TypeScript
or TechnologyId::NodeTypeScript) and the same serde rename rule.
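The enum/serde reconciliation above can be sketched as follows. This is an illustrative sketch only, assuming the combined-variant shape the comment says the implementation uses; the snake_case renaming is written out by hand so the sketch needs no external crates, and the real enum lives in src/skills/suggest.rs.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TechnologyId {
    NodeTypeScript,
    Rust,
    Docker,
}

impl TechnologyId {
    // snake_case keys, mirroring what #[serde(rename_all = "snake_case")] would emit
    fn as_catalog_key(self) -> &'static str {
        match self {
            TechnologyId::NodeTypeScript => "node_type_script",
            TechnologyId::Rust => "rust",
            TechnologyId::Docker => "docker",
        }
    }

    fn from_catalog_key(key: &str) -> Option<Self> {
        match key {
            "node_type_script" => Some(TechnologyId::NodeTypeScript),
            "rust" => Some(TechnologyId::Rust),
            "docker" => Some(TechnologyId::Docker),
            _ => None, // unknown technologies are skipped, not fatal
        }
    }
}

fn main() {
    // Whichever shape design and code settle on, a round-trip check keeps
    // the variant names and the rename rule in sync.
    for id in [TechnologyId::NodeTypeScript, TechnologyId::Rust, TechnologyId::Docker] {
        assert_eq!(TechnologyId::from_catalog_key(id.as_catalog_key()), Some(id));
    }
    println!("round-trip ok");
}
```

A round-trip assertion like this, kept next to the enum, is what makes a doc/code divergence fail loudly instead of silently.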
In `@openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/state.yaml`:
- Around line 11-12: The YAML contains duplicate keys named "next" with values
"verify" and "archive"; remove the duplication by either making "next" a
sequence (e.g., next: [verify, archive]) if both transitions are valid, or by
using distinct keys (e.g., next and subsequent_next or next_phase) so each value
is unique; update the single "next" entry used by the state machine (the
duplicated "next" keys) to reflect the intended workflow (keep only the final
desired transition or convert to a list) so parsers won’t fail.
In `@src/commands/skill.rs`:
- Around line 193-215: The recommendation-install path currently passes raw
catalog skill IDs into install_selected_with (via SuggestInstallProvider and the
closure) which allows unvalidated IDs and trusts recommendation.installed;
update this path to call the same validation and on-disk installed-state check
used by run_install/run_update/run_uninstall before invoking
install_selected_with: for each selected_skill_id from
prompt_for_recommended_skills, run validate_skill_id(skill_id) and compute the
canonical install path under the project's .agents/skills directory to determine
real installed state (instead of using recommendation.installed); only pass
validated IDs (or a sanitized/normalized form) into install_selected_with and
ensure the closure receives the validated skill_id and target_root derived from
the filesystem layout to prevent path traversal and state divergence.
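The validation-plus-canonical-path guard described above might look like the following. This is a hypothetical sketch: `validate_skill_id` and `skill_install_dir` are stand-ins for whatever the real helpers in src/commands/skill.rs are named, and the allowed-character rule is an assumption.

```rust
use std::path::{Path, PathBuf};

// Hypothetical validator: only lowercase ASCII, digits, and hyphens, so an
// id can never contain a path separator or "..".
fn validate_skill_id(skill_id: &str) -> Result<(), String> {
    let ok = !skill_id.is_empty()
        && skill_id
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-');
    if ok {
        Ok(())
    } else {
        Err(format!("invalid skill id: {skill_id:?}"))
    }
}

// Canonical install path under .agents/skills; rejecting separators and dots
// above is what makes path traversal through the id impossible.
fn skill_install_dir(project_root: &Path, skill_id: &str) -> Result<PathBuf, String> {
    validate_skill_id(skill_id)?;
    Ok(project_root.join(".agents").join("skills").join(skill_id))
}

fn main() {
    let root = Path::new("/repo");
    assert!(skill_install_dir(root, "rust-async-patterns").is_ok());
    assert!(skill_install_dir(root, "../../etc/passwd").is_err());
    println!("guards ok");
}
```

Checking the filesystem at the resulting path, rather than trusting `recommendation.installed`, then gives the on-disk installed state the comment asks for.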
In `@src/skills/catalog.rs`:
- Around line 273-276: The current construction of detections_by_technology from
detections uses a BTreeMap<(detection.technology) -> detection> which overwrites
earlier TechnologyDetection entries with the same technology, dropping prior
evidence; update the aggregation so each TechnologyId maps to a collection
(e.g., BTreeMap<TechnologyId, Vec<&TechnologyDetection>>) or otherwise append to
a Vec for existing keys when building detections_by_technology, ensuring you
iterate detections and push detections with the same detection.technology into
the vector instead of replacing them; if the one-per-technology behavior is
intentional, add a clear comment near detections_by_technology and
TechnologyDetection explaining that duplicates are expected to be dropped.
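The append-instead-of-overwrite aggregation can be sketched like this, with `String` standing in for `TechnologyId` so the example is self-contained:

```rust
use std::collections::BTreeMap;

#[derive(Debug)]
struct TechnologyDetection {
    technology: String, // stand-in for TechnologyId
    evidence: String,
}

// Append to a Vec per technology instead of overwriting, so multiple
// detections for the same technology keep all their evidence.
fn group_detections(
    detections: &[TechnologyDetection],
) -> BTreeMap<&str, Vec<&TechnologyDetection>> {
    let mut by_technology: BTreeMap<&str, Vec<&TechnologyDetection>> = BTreeMap::new();
    for detection in detections {
        by_technology
            .entry(detection.technology.as_str())
            .or_default()
            .push(detection);
    }
    by_technology
}

fn main() {
    let detections = vec![
        TechnologyDetection { technology: "rust".into(), evidence: "Cargo.toml".into() },
        TechnologyDetection { technology: "rust".into(), evidence: "src/main.rs".into() },
        TechnologyDetection { technology: "docker".into(), evidence: "Dockerfile".into() },
    ];
    let grouped = group_detections(&detections);
    assert_eq!(grouped["rust"].len(), 2); // both pieces of evidence survive
    println!("grouped {} technologies", grouped.len());
}
```

`entry(...).or_default().push(...)` is the idiomatic way to build a multimap over std's `BTreeMap` without a first-insert special case.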
- Around line 186-188: The get_skill method uses linear search (iter().find())
which will be O(n) as the catalog grows; replace the internal storage for
Catalog (and ProviderBackedCatalog) from Vec<CatalogSkillMetadata> to a
HashMap<String, CatalogSkillMetadata> keyed by skill_id (or maintain both Vec
and HashMap if ordering is required), update constructors/initializers to
populate the map, and change get_skill to perform a direct lookup (e.g.,
self.skills_map.get(skill_id)) so lookups become O(1); update any code that
mutates or iterates skills to use the new map (or iterate the retained Vec) and
ensure types/signatures referencing skills are adjusted accordingly.
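The "keep the Vec for ordering, add a map for lookup" option can be sketched as below; field and type names are illustrative stand-ins for the real catalog types:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone)]
struct CatalogSkillMetadata {
    skill_id: String,
    title: String,
}

// Ordered Vec for iteration plus an index keyed by skill_id, so get_skill
// is a direct O(1) lookup instead of a linear scan.
struct Catalog {
    skills: Vec<CatalogSkillMetadata>,
    index: HashMap<String, usize>, // skill_id -> position in `skills`
}

impl Catalog {
    fn new(skills: Vec<CatalogSkillMetadata>) -> Self {
        let index = skills
            .iter()
            .enumerate()
            .map(|(i, s)| (s.skill_id.clone(), i))
            .collect();
        Self { skills, index }
    }

    fn get_skill(&self, skill_id: &str) -> Option<&CatalogSkillMetadata> {
        self.index.get(skill_id).map(|&i| &self.skills[i])
    }
}

fn main() {
    let catalog = Catalog::new(vec![
        CatalogSkillMetadata { skill_id: "docker-expert".into(), title: "Docker".into() },
        CatalogSkillMetadata { skill_id: "seo".into(), title: "SEO".into() },
    ]);
    assert_eq!(catalog.get_skill("seo").map(|s| s.title.as_str()), Some("SEO"));
    assert!(catalog.get_skill("missing").is_none());
    println!("lookup ok");
}
```

Storing indices rather than clones keeps a single owner for each `CatalogSkillMetadata`; the trade-off is that the index must be rebuilt if the Vec is reordered.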
- Around line 229-247: The mapping over metadata.rules currently drops unknown
technologies via TechnologyId::from_catalog_key but treats invalid
min_confidence as fatal via DetectionConfidence::from_catalog_key; change this
to skip any rule that ends up with an empty technologies vector instead of
producing a CatalogRule with no techs: after collecting technologies in the
closure for the rules iterator, if technologies.is_empty() log a warning
(including rule.skill_id or reason_template) and return None from the closure so
the rule is skipped, otherwise parse min_confidence with
DetectionConfidence::from_catalog_key and construct the CatalogRule as before;
ensure the iterator still collects into Option<Vec<_>> or adapt to collect only
valid rules.
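The skip-empty-rule normalization reads naturally as a `filter_map`. A minimal sketch, with simplified stand-in types for the raw and normalized rules:

```rust
struct RawRule {
    skill_id: String,
    technologies: Vec<String>,
}

struct CatalogRule {
    skill_id: String,
    technologies: Vec<String>,
}

// Stand-in for TechnologyId::from_catalog_key.
fn known_technology(key: &str) -> Option<String> {
    matches!(key, "rust" | "docker" | "node").then(|| key.to_string())
}

// Unknown technologies are dropped per entry; a rule whose technology list
// ends up empty is skipped with a warning instead of matching nothing.
fn normalize_rules(raw: &[RawRule]) -> Vec<CatalogRule> {
    raw.iter()
        .filter_map(|rule| {
            let technologies: Vec<String> = rule
                .technologies
                .iter()
                .filter_map(|k| known_technology(k))
                .collect();
            if technologies.is_empty() {
                eprintln!("warning: rule {} has no known technologies, skipping", rule.skill_id);
                return None;
            }
            Some(CatalogRule { skill_id: rule.skill_id.clone(), technologies })
        })
        .collect()
}

fn main() {
    let raw = vec![
        RawRule { skill_id: "docker-expert".into(), technologies: vec!["docker".into()] },
        RawRule { skill_id: "cobol-helper".into(), technologies: vec!["cobol".into()] },
    ];
    let rules = normalize_rules(&raw);
    assert_eq!(rules.len(), 1); // the all-unknown rule was skipped
    println!("kept {} rule(s)", rules.len());
}
```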
In `@src/skills/provider.rs`:
- Around line 20-41: The structs ProviderCatalogMetadata, ProviderCatalogSkill,
and ProviderCatalogRule need serde deserialization to support future
provider-backed catalog loading; add the Deserialize derive to each (e.g.,
#[derive(Debug, Clone, Deserialize)]) and import serde::Deserialize at the top
of the file so these types can be deserialized from provider responses without a
breaking change later.
In `@src/skills/suggest.rs`:
- Around line 418-422: Replace the Debug formatting for detection.confidence
with a human-readable label by adding an as_human_label() method to the
DetectionConfidence enum (mirroring TechnologyId's approach) and then use that
in the formatter call; specifically implement
DetectionConfidence::as_human_label() to return "Low"/"Medium"/"High" (or
appropriate strings) and update the lines.push(format!(...)) call that currently
uses {:?} for detection.confidence to use detection.confidence.as_human_label()
instead.
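The `as_human_label()` addition is small; a sketch assuming the three-level confidence enum:

```rust
#[derive(Debug, Clone, Copy)]
enum DetectionConfidence {
    Low,
    Medium,
    High,
}

impl DetectionConfidence {
    // Human-readable label for CLI output, instead of Debug's "{:?}".
    fn as_human_label(self) -> &'static str {
        match self {
            DetectionConfidence::Low => "Low",
            DetectionConfidence::Medium => "Medium",
            DetectionConfidence::High => "High",
        }
    }
}

fn main() {
    let confidence = DetectionConfidence::High;
    let line = format!("Rust (confidence: {})", confidence.as_human_label());
    assert_eq!(line, "Rust (confidence: High)");
    println!("{line}");
}
```

Keeping display strings out of `Debug` also means a later variant rename cannot silently change user-facing output.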
- Around line 119-137: The add_match function currently sorts
matched_technologies and does a linear dedupe on reasons on every call; remove
the in-line sort and avoid repeated linear dedupe by (1) removing the
matched_technologies.sort() from add_match and instead sort matched_technologies
once in a new finalize or accessor method (e.g., sort_matched_technologies or
when building the output), and (2) replace the linear containment check for
reasons with a HashSet<String> used for deduplication (keep the Vec<String> for
stable ordering if needed, pushing to it only when the HashSet insert returns
true). Update struct fields (add a reasons_set or similar) and references in
add_match to use these symbols (add_match, matched_technologies, reasons) so the
behaviour stays the same but avoids repeated sorts and O(n) reason checks.
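The set-backed dedupe plus deferred sort can be sketched like this; the struct shape and `finalize` name are hypothetical, but the pattern matches what the comment asks for:

```rust
use std::collections::HashSet;

// Vec keeps stable reason order, HashSet makes the duplicate check O(1),
// and matched technologies are sorted once at finalize time rather than on
// every add_match call.
#[derive(Default)]
struct SkillMatch {
    matched_technologies: Vec<String>,
    reasons: Vec<String>,
    seen_reasons: HashSet<String>,
}

impl SkillMatch {
    fn add_match(&mut self, technology: &str, reason: &str) {
        self.matched_technologies.push(technology.to_string());
        if self.seen_reasons.insert(reason.to_string()) {
            self.reasons.push(reason.to_string());
        }
    }

    fn finalize(mut self) -> (Vec<String>, Vec<String>) {
        self.matched_technologies.sort();
        self.matched_technologies.dedup();
        (self.matched_technologies, self.reasons)
    }
}

fn main() {
    let mut m = SkillMatch::default();
    m.add_match("rust", "Cargo.toml found");
    m.add_match("docker", "Dockerfile found");
    m.add_match("rust", "Cargo.toml found"); // duplicate reason, kept once
    let (technologies, reasons) = m.finalize();
    assert_eq!(technologies, vec!["docker".to_string(), "rust".to_string()]);
    assert_eq!(reasons.len(), 2);
    println!("ok");
}
```

`HashSet::insert` returning `bool` is what lets the Vec push and the membership check share one operation.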
- Around line 321-345: The loop over response.recommendations currently fails
fast when provider.resolve or install_fn errors, losing info about prior
successes; change the loop in the function that builds results so that each call
to provider.resolve(&recommendation.skill_id) and install_fn(...) is wrapped to
catch errors, push a SuggestInstallResult with a new
SuggestInstallStatus::Failed (include an error message/string) for that skill,
and continue to the next recommendation instead of returning early; update the
SuggestInstallStatus enum to include Failed and ensure SuggestInstallResult
captures the error detail so the caller receives a complete per-skill result
list (reference response.recommendations, selected_skill_ids, provider.resolve,
install_fn, SuggestInstallResult, SuggestInstallStatus).
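The collect-per-skill-outcomes loop can be sketched as follows; `install_one` is a stand-in for the `provider.resolve` + `install_fn` pair, and the result shape is a simplified guess at the real `SuggestInstallResult`:

```rust
#[derive(Debug, PartialEq)]
enum SuggestInstallStatus {
    Installed,
    Failed,
}

#[derive(Debug)]
struct SuggestInstallResult {
    skill_id: String,
    status: SuggestInstallStatus,
    error: Option<String>,
}

// Each failure is recorded and the loop continues, so the caller gets one
// result per selected skill instead of an early abort.
fn install_all(
    selected: &[&str],
    install_one: impl Fn(&str) -> Result<(), String>,
) -> Vec<SuggestInstallResult> {
    selected
        .iter()
        .map(|skill_id| match install_one(skill_id) {
            Ok(()) => SuggestInstallResult {
                skill_id: skill_id.to_string(),
                status: SuggestInstallStatus::Installed,
                error: None,
            },
            Err(message) => SuggestInstallResult {
                skill_id: skill_id.to_string(),
                status: SuggestInstallStatus::Failed,
                error: Some(message),
            },
        })
        .collect()
}

fn main() {
    let results = install_all(&["seo", "broken", "makefile"], |id| {
        if id == "broken" { Err("download failed".into()) } else { Ok(()) }
    });
    assert_eq!(results.len(), 3); // one result per skill, failure included
    assert_eq!(results[1].status, SuggestInstallStatus::Failed);
    println!("ok");
}
```

With this shape, `--install --all` always returns a complete result list, and partial side effects are at least fully described to the caller.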
- Around line 565-576: The dedupe_preserve_order function currently clones each
skill_id twice; fix by cloning once per iteration: for each skill_id, let s =
skill_id.clone(); if seen.insert(s.clone()) { unique.push(s); } — this way you
only clone once for the value stored in the set and move the original clone into
unique; alternatively replace BTreeSet+Vec with an IndexSet<String> (from
indexmap) to get order-preserving deduplication with a single insert per
element.
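A variant that goes one step further than the suggested fix: borrowing `&str` keys in the set means each kept element is cloned exactly once, and duplicates are never cloned at all.

```rust
use std::collections::BTreeSet;

// Order-preserving dedupe with one clone per kept element. The set borrows
// from the input slice, so only elements pushed to `unique` are allocated.
fn dedupe_preserve_order(skill_ids: &[String]) -> Vec<String> {
    let mut seen: BTreeSet<&str> = BTreeSet::new();
    let mut unique = Vec::new();
    for skill_id in skill_ids {
        if seen.insert(skill_id.as_str()) {
            unique.push(skill_id.clone()); // clone only when kept
        }
    }
    unique
}

fn main() {
    let ids = vec!["seo".to_string(), "makefile".to_string(), "seo".to_string()];
    assert_eq!(dedupe_preserve_order(&ids), vec!["seo".to_string(), "makefile".to_string()]);
    println!("deduped");
}
```

`indexmap::IndexSet` would do the same in one structure, at the cost of a dependency.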
In `@tests/unit/suggest_install.rs`:
- Around line 16-18: The test calls SuggestionService::suggest() which uses the
real SkillsShProvider; make the suite hermetic by switching to
SuggestionService::suggest_with(...) and inject your LocalSkillProvider
(LocalSkillProvider::new(root, &["rust-async-patterns","docker-expert"])) and,
if necessary, a stub detector so the recommendation catalog and detection are
fully controlled by the test rather than the production SkillsShProvider; update
the other occurrences noted (the blocks around the other asserts) to use
suggest_with and the same injected providers/stubs so the tests remain
deterministic.
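The injection seam the comment relies on can be sketched generically. All names here are illustrative, not the real agentsync types: the point is that `suggest_with` accepts a trait object, so a test passes a local stub where production passes the skills.sh-backed provider.

```rust
trait SkillProvider {
    fn catalog_skill_ids(&self) -> Vec<String>;
}

struct LocalStubProvider {
    ids: Vec<String>,
}

impl SkillProvider for LocalStubProvider {
    fn catalog_skill_ids(&self) -> Vec<String> {
        self.ids.clone()
    }
}

// Real logic would run detection and rule matching; here we only surface
// the injected catalog to show the seam.
fn suggest_with(provider: &dyn SkillProvider) -> Vec<String> {
    provider.catalog_skill_ids()
}

fn main() {
    let provider = LocalStubProvider {
        ids: vec!["rust-async-patterns".into(), "docker-expert".into()],
    };
    let suggestions = suggest_with(&provider);
    assert_eq!(suggestions.len(), 2); // fully controlled by the test
    println!("hermetic: {} suggestions", suggestions.len());
}
```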
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3205032d-bd50-49b9-a640-67e8a3a97d1d
📒 Files selected for processing (26)
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/design.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/exploration.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/proposal.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/specs/skill-recommendations/spec.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/state.yaml
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/tasks.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/verify-report.md
openspec/specs/skill-recommendations/spec.md
src/commands/skill.rs
src/skills/catalog.rs
src/skills/detect.rs
src/skills/mod.rs
src/skills/provider.rs
src/skills/registry.rs
src/skills/suggest.rs
tests/all_tests.rs
tests/contracts/test_skill_suggest_output.rs
tests/integration/mod.rs
tests/integration/skill_suggest.rs
tests/test_skill_suggest_output.rs
tests/unit/mod.rs
tests/unit/suggest_catalog.rs
tests/unit/suggest_detector.rs
tests/unit/suggest_install.rs
website/docs/src/content/docs/guides/skills.mdx
website/docs/src/content/docs/reference/cli.mdx
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/skills/suggest.rs (1)
201-220: ⚠️ Potential issue | 🟠 Major — Return per-skill failures instead of aborting the entire install batch.
`install_selected_with()` mutates `.agents/skills` as it goes, but the first `provider.resolve()` or `install_fn()` error aborts the whole response. In `--install --all`, that leaves callers with partial side effects and no result object describing what already succeeded. The result model needs a failed/error path so the loop can keep collecting outcomes for the remaining selected skills.
Also applies to: 304-386
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/skills/suggest.rs` around lines 201 - 220, Add a failure path to the result model and make install_selected_with() collect per-skill outcomes instead of aborting on first error: extend SuggestInstallStatus with a Failed variant (or add an Option<String> error_message field onto SuggestInstallResult) and ensure SuggestInstallJsonResponse.results holds a result for every selected skill; then update install_selected_with() to catch errors from provider.resolve() and the install_fn() call, push a SuggestInstallResult with status Failed (and include the error message) for that skill, and continue the loop so later skills are attempted and all outcomes are returned.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/design.md`:
- Around line 281-344: The archived JSON contract is out of sync with the
serializer in src/skills/suggest.rs—remove or update the documented fields that
the code doesn't emit (project_root, catalog, root_relative_paths, suggestions
array shape, per-suggestion title/summary/installed_version/catalog_source, and
install failures) or change the serializer to produce the documented fields;
specifically, either update the design.md JSON examples to match the actual
output keys produced by the functions in src/skills/suggest.rs (e.g., the
detection/suggestion structs and serializer logic) or modify the serializer
routines that build the suggestion/detection JSON to include the missing
properties and install-phase failures, ensuring names and nesting exactly match
the archived contract.
- Around line 370-374: Remove the stale unchecked todo "- [ ] Delta specs for
this change have not yet been written in
`openspec/changes/2026-03-30-autodetect-skill-suggestions/specs/`;" from the
archived design so the artifact no longer appears incomplete, leaving the
remaining Open Questions and confirmed items intact; simply delete that list
item in the markdown (the line starting with "- [ ] Delta specs...") within the
document.
In `@src/skills/catalog.rs`:
- Around line 219-277: Provider-backed catalogs that normalize to zero usable
rules currently still return Some(catalog) in
ProviderBackedCatalog::from_metadata, causing load_catalog to prefer an empty
provider catalog over the embedded fallback; change the behavior so empty
normalized catalogs are rejected: either make
ProviderBackedCatalog::from_metadata return None when the resulting rules vector
is empty (after the normalization/filter_map that produces CatalogRule entries)
or update load_catalog to verify the constructed catalog has non-empty rules
(e.g., check catalog.rules.is_empty()) and fall back to
EmbeddedSkillCatalog::default() when empty; refer to
ProviderBackedCatalog::from_metadata, the rules normalization block that builds
Vec<CatalogRule>, and load_catalog for where to add the guard.
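The empty-catalog guard reduces to a small `Option` pattern. A sketch with `Vec<String>` standing in for the real `Vec<CatalogRule>` and simplified function signatures:

```rust
struct Catalog {
    rules: Vec<String>, // stand-in for Vec<CatalogRule>
}

// Reject provider catalogs that normalize to zero usable rules, so
// load_catalog falls back to the embedded defaults instead of preferring
// an empty provider catalog.
fn from_metadata(normalized_rules: Vec<String>) -> Option<Catalog> {
    if normalized_rules.is_empty() {
        return None;
    }
    Some(Catalog { rules: normalized_rules })
}

fn load_catalog(provider_rules: Vec<String>, embedded: Catalog) -> Catalog {
    from_metadata(provider_rules).unwrap_or(embedded)
}

fn main() {
    let embedded = Catalog { rules: vec!["embedded-rule".into()] };
    let chosen = load_catalog(Vec::new(), embedded);
    assert_eq!(chosen.rules, vec!["embedded-rule".to_string()]);
    println!("fell back to embedded catalog");
}
```

Putting the check in `from_metadata` (returning `None`) keeps `load_catalog` a plain `unwrap_or`, rather than re-checking emptiness at every call site.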
---
Duplicate comments:
In `@src/skills/suggest.rs`:
- Around line 201-220: Add a failure path to the result model and make
install_selected_with() collect per-skill outcomes instead of aborting on first
error: extend SuggestInstallStatus with a Failed variant (or add an
Option<String> error_message field onto SuggestInstallResult) and ensure
SuggestInstallJsonResponse.results holds a result for every selected skill; then
update install_selected_with() to catch errors from provider.resolve() and the
install_fn() call, push a SuggestInstallResult with status Failed (and include
the error message) for that skill, and continue the loop so later skills are
attempted and all outcomes are returned.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 93706174-43d0-45b5-bab1-0e43200fe832
📒 Files selected for processing (7)
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/design.md
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/state.yaml
src/skills/catalog.rs
src/skills/provider.rs
src/skills/suggest.rs
tests/unit/suggest_catalog.rs
tests/unit/suggest_install.rs
Actionable comments posted: 12
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/sonarcloud.yml:
- Around line 54-70: The step-level if conditions use the secrets context
(secrets.CODECOV_TOKEN) which is not allowed; update the job to expose
CODECOV_TOKEN as an env variable (e.g., set env: CODECOV_TOKEN: ${{
secrets.CODECOV_TOKEN }} at the job level) and then change the two step if
checks in the steps named "Upload coverage to Codecov" and "Upload coverage to
Codecov (tokenless)" to use the env context (if: env.CODECOV_TOKEN != '' and if:
env.CODECOV_TOKEN == '' respectively); ensure the CODECOV_TOKEN value is still
passed into the step env for the tokened run.
- Around line 72-76: The SonarCloud step ("Analyze with SonarCloud" using
SonarSource/sonarqube-scan-action@v7) is referencing secrets directly in the
step-level conditional which causes the same secrets/context issue; move the
SONAR_TOKEN into the job-level env (set SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
on the job) and have the step consume the job env (or reference ${{
env.SONAR_TOKEN }}), removing direct secrets usage from the step-level context
so the if condition and env access work correctly.
- Line 56: The workflow uses mutable tags for third-party actions
(codecov/codecov-action@v5 and SonarSource/sonarqube-scan-action@v7); replace
these two action references with their corresponding full commit SHAs (pin to
the exact commit SHA for codecov/codecov-action where it's used twice and for
SonarSource/sonarqube-scan-action) so the workflow no longer depends on mutable
tags, update both occurrences of codecov/codecov-action@v5 to the full SHA and
replace SonarSource/sonarqube-scan-action@v7 with its full SHA, then run the
workflow to confirm the pinned SHAs work.
In `@src/skills/catalog.rs`:
- Line 249: The call to
DetectionConfidence::from_catalog_key(&rule.min_confidence)? currently
propagates errors silently; change it to parse with a match or map_err so that
when parsing fails you log a warning (similar to the existing technology warning
block that inspects rule.technologies) indicating which rule (e.g., rule.name or
rule.key) and the invalid min_confidence value were invalid, then return None to
skip the rule; specifically update the code around the min_confidence binding
(the use of DetectionConfidence::from_catalog_key) to emit a process/log warning
on Err and behave consistently with the technology validation logic.
In `@src/skills/suggest.rs`:
- Around line 335-378: The loop repeatedly calls read_installed_skill_states
causing N file reads; move the call to
read_installed_skill_states(&target_root.join("registry.json")) outside the for
loop into a single mutable variable (e.g., mutable installed_state) before
iterating over response.recommendations, check
installed_state.get(&recommendation.skill_id).is_some_and(|s| s.installed) as
before, and when an install succeeds in the branch that calls
install_fn(&recommendation.skill_id, &resolved.download_url, &target_root)
update the in-memory installed_state to mark that skill as installed (so
subsequent iterations see it) before pushing the SuggestInstallResult; keep
provider.resolve, install_fn, and SuggestInstallResult usage unchanged.
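Hoisting the registry read and keeping the in-memory state current can be sketched as below; the signatures are simplified stand-ins (a `HashMap<String, bool>` instead of the real registry states, a status string instead of `SuggestInstallResult`):

```rust
use std::collections::HashMap;

// Read the registry once, then keep the in-memory map current as installs
// succeed, so later iterations see earlier results without re-reading
// registry.json on every pass.
fn install_batch(
    selected: &[&str],
    mut installed_state: HashMap<String, bool>, // skill_id -> installed
    install_fn: impl Fn(&str) -> Result<(), String>,
) -> Vec<(String, &'static str)> {
    let mut results = Vec::new();
    for skill_id in selected {
        if installed_state.get(*skill_id).copied().unwrap_or(false) {
            results.push((skill_id.to_string(), "already_installed"));
            continue;
        }
        match install_fn(skill_id) {
            Ok(()) => {
                // Mark installed in memory so subsequent iterations see it.
                installed_state.insert(skill_id.to_string(), true);
                results.push((skill_id.to_string(), "installed"));
            }
            Err(_) => results.push((skill_id.to_string(), "failed")),
        }
    }
    results
}

fn main() {
    // Duplicate selection: the second occurrence sees the updated state.
    let results = install_batch(&["seo", "seo"], HashMap::new(), |_| Ok(()));
    assert_eq!(results[0].1, "installed");
    assert_eq!(results[1].1, "already_installed");
    println!("ok");
}
```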
In `@tests/e2e/fixtures/repos/ai-adoption/.codex/config.toml`:
- Line 1: Update the test fixture so it uses a real model name instead of the
non-existent "gpt-5"; locate the configuration entry that sets model = "gpt-5"
in the .codex/config.toml for the ai-adoption fixture and change it to a valid
model string such as "gpt-4" or "gpt-4o" to keep fixtures realistic and
consistent.
In `@tests/e2e/lib/assert.sh`:
- Around line 27-30: The helper assert_symlink_exists currently only checks for
a symlink node and can pass for dangling links; update the function
assert_symlink_exists to check both that the path is a symlink (test -L) and
that its target exists (test -e) and fail if either check fails, so dangling
symlinks are treated as failures in the tests that call assert_symlink_exists.
In `@tests/e2e/lib/fixtures.sh`:
- Around line 87-90: The tmp TTY input file created as input_file via mktemp can
leak if script -qec fails because set -e exits before rm -f runs; ensure
input_file is always removed by registering a trap cleanup (or wrapping the
script invocation in a conditional) that removes "$input_file" on EXIT or ERR,
then create input_file with mktemp, run script -qec "$command" /dev/null <
"$input_file", and rely on the trap to rm -f "$input_file" so cleanup occurs
whether script succeeds or fails.
In `@tests/e2e/run.sh`:
- Around line 31-34: The loop is using unquoted expansion of E2E_SCENARIOS which
allows word splitting and glob expansion; change the for-loop to iterate over
the variable safely (e.g., quote E2E_SCENARIOS or use the same safe parsing
approach as the default case) so that resolve_scenario is called only with
intact scenario strings and SCENARIOS+=(...) receives the correct values;
specifically, update the loop that references E2E_SCENARIOS, resolve_scenario,
and SCENARIOS to use a quoted expansion or read/safe-splitting method.
In `@tests/e2e/scenarios/01-init-blank.sh`:
- Around line 18-27: The test asserts an [agents.opencode] block was written but
never verifies agentsync apply produced OpenCode output; after calling agentsync
apply (in the same scenario), add an assertion that the expected OpenCode
artifact exists—e.g., add an assert_symlink_exists (or assert_file_exists) for
the OpenCode output filename (reference symbols: "[agents.opencode]", agentsync
apply, and assert_symlink_exists) so the apply path for OpenCode is covered.
In `@tests/e2e/scenarios/02-init-adoption.sh`:
- Around line 35-49: The test removes AGENTS.md but never verifies it was
restored; add an assertion to re-check the root agent instructions file by
calling assert_symlink_exists for "AGENTS.md" (same helper used for other files)
after running agentsync apply and alongside the other assert_symlink_exists
checks (e.g., next to the checks for CLAUDE.md, GEMINI.md, OPENCODE.md) so the
scenario fails if the root AGENTS.md is not recreated.
In `@tests/e2e/scenarios/04-suggest-install-guided.sh`:
- Around line 11-23: Extract the long duplicated skill list into a shared
fixture function and replace the inlined list passed to prepare_skill_sources in
suggest-install scripts with a single call; create a helper (e.g., define
function default_skill_list or prepare_default_skill_sources) in
tests/e2e/lib/fixtures.sh that invokes prepare_skill_sources with
SKILL_SOURCE_ROOT and the canonical list (accessibility, best-practices,
core-web-vitals, docker-expert, frontend-design, github-actions, makefile,
performance, pinned-tag, rust-async-patterns, seo), then update
tests/e2e/scenarios/04-suggest-install-guided.sh (and sibling suggest-install
scenarios) to call that new helper instead of repeating the list.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6d29b128-ee68-4015-b472-16a9445b0f18
📒 Files selected for processing (54)
.github/workflows/sonarcloud.yml
codecov.yml
openspec/changes/archive/2026-03-30-autodetect-skill-suggestions/design.md
sonar-project.properties
src/skills/catalog.rs
src/skills/suggest.rs
tests/e2e/Dockerfile.e2e
tests/e2e/docker-compose.yml
tests/e2e/fixtures/repos/ai-adoption/.claude/commands/review.md
tests/e2e/fixtures/repos/ai-adoption/.claude/skills/debugging/SKILL.md
tests/e2e/fixtures/repos/ai-adoption/.codex/config.toml
tests/e2e/fixtures/repos/ai-adoption/.codex/skills/format-skill/SKILL.md
tests/e2e/fixtures/repos/ai-adoption/.cursor/mcp.json
tests/e2e/fixtures/repos/ai-adoption/.cursor/skills/cursor-skill/SKILL.md
tests/e2e/fixtures/repos/ai-adoption/.gemini/commands/analyze.md
tests/e2e/fixtures/repos/ai-adoption/.gemini/skills/review-skill/SKILL.md
tests/e2e/fixtures/repos/ai-adoption/.github/copilot-instructions.md
tests/e2e/fixtures/repos/ai-adoption/.opencode/command/fix.md
tests/e2e/fixtures/repos/ai-adoption/.opencode/skills/opencode-skill/SKILL.md
tests/e2e/fixtures/repos/ai-adoption/GEMINI.md
tests/e2e/fixtures/repos/ai-adoption/OPENCODE.md
tests/e2e/fixtures/repos/blank-repo/README.md
tests/e2e/fixtures/repos/mega-monorepo/.github/workflows/ci.yml
tests/e2e/fixtures/repos/mega-monorepo/Makefile
tests/e2e/fixtures/repos/mega-monorepo/README.md
tests/e2e/fixtures/repos/mega-monorepo/apps/docs/astro.config.mjs
tests/e2e/fixtures/repos/mega-monorepo/apps/web/package.json
tests/e2e/fixtures/repos/mega-monorepo/apps/web/tsconfig.json
tests/e2e/fixtures/repos/mega-monorepo/crates/cli/Cargo.toml
tests/e2e/fixtures/repos/mega-monorepo/docker-compose.yml
tests/e2e/fixtures/repos/mega-monorepo/pnpm-workspace.yaml
tests/e2e/fixtures/repos/mega-monorepo/services/python-api/pyproject.toml
tests/e2e/fixtures/repos/mixed-stack-a/.github/workflows/ci.yml
tests/e2e/fixtures/repos/mixed-stack-a/Cargo.toml
tests/e2e/fixtures/repos/mixed-stack-a/Dockerfile
tests/e2e/fixtures/repos/mixed-stack-a/Makefile
tests/e2e/fixtures/repos/mixed-stack-a/README.md
tests/e2e/fixtures/repos/mixed-stack-a/astro.config.mjs
tests/e2e/fixtures/repos/mixed-stack-a/package.json
tests/e2e/fixtures/repos/mixed-stack-a/pyproject.toml
tests/e2e/fixtures/repos/mixed-stack-a/tsconfig.json
tests/e2e/lib/assert.sh
tests/e2e/lib/fixtures.sh
tests/e2e/run.sh
tests/e2e/scenarios/01-init-blank.sh
tests/e2e/scenarios/02-init-adoption.sh
tests/e2e/scenarios/03-suggest-mixed-stack.sh
tests/e2e/scenarios/04-suggest-install-guided.sh
tests/e2e/scenarios/05-suggest-install-all.sh
tests/e2e/scenarios/06-suggest-mega-monorepo.sh
tests/e2e/test_suite.sh
tests/integration/skill_suggest.rs
tests/unit/suggest_catalog.rs
tests/unit/suggest_install.rs
💤 Files with no reviewable changes (1)
- tests/e2e/test_suite.sh
♻️ Duplicate comments (1)
.github/workflows/sonarcloud.yml (1)
55-55: ⚠️ Potential issue | 🔴 Critical — Invalid `secrets` context in step-level `if` conditions (workflow parse/runtime failure).
Line 55, Line 65, and Line 73 use `secrets.*` directly in `if`. In this scope, that context is rejected by actionlint, so these guards are unreliable/broken.
🐛 Proposed fix
 jobs:
   coverage-and-analysis:
     name: Coverage, Codecov, and SonarCloud
     runs-on: ubuntu-latest
+    env:
+      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+      SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
     steps:
@@
       - name: Upload coverage to Codecov
-        if: ${{ secrets.CODECOV_TOKEN != '' }}
+        if: ${{ env.CODECOV_TOKEN != '' }}
@@
       - name: Upload coverage to Codecov (tokenless)
-        if: ${{ secrets.CODECOV_TOKEN == '' }}
+        if: ${{ env.CODECOV_TOKEN == '' }}
@@
       - name: Analyze with SonarCloud
-        if: ${{ secrets.SONAR_TOKEN != '' && (github.event_name != 'pull_request' || !github.event.pull_request.head.repo.fork) }}
+        if: ${{ env.SONAR_TOKEN != '' && (github.event_name != 'pull_request' || !github.event.pull_request.head.repo.fork) }}
#!/bin/bash
# Read-only verification:
# 1) confirm current guarded lines
# 2) run actionlint to validate context usage
sed -n '50,80p' .github/workflows/sonarcloud.yml
echo "-----"
actionlint .github/workflows/sonarcloud.yml
Also applies to: 65-65, 73-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/sonarcloud.yml at line 55, The step-level conditionals currently use the secrets context directly (e.g., the if using ${{ secrets.CODECOV_TOKEN != '' }} at the three step locations), which actionlint/runner rejects; fix by exposing the secret as an environment variable at the job (or workflow) level (e.g., set CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} in env) and change each offending step if to check that env variable (e.g., if: env.CODECOV_TOKEN != '') so the secret is not referenced directly in the step-level if; update all three occurrences (the if conditions referencing secrets at the listed steps) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/sonarcloud.yml:
- Line 55: The step-level conditionals currently use the secrets context
directly (e.g., the if using ${{ secrets.CODECOV_TOKEN != '' }} at the three
step locations), which actionlint/runner rejects; fix by exposing the secret as
an environment variable at the job (or workflow) level (e.g., set CODECOV_TOKEN:
${{ secrets.CODECOV_TOKEN }} in env) and change each offending step if to check
that env variable (e.g., if: env.CODECOV_TOKEN != '') so the secret is not
referenced directly in the step-level if; update all three occurrences (the if
conditions referencing secrets at the listed steps) accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 379ee3a6-1526-4602-8164-93a22ec2dd49
📒 Files selected for processing (1)
.github/workflows/sonarcloud.yml
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/skills/catalog.rs`:
- Around line 411-433: The current alias check is inverted: it looks up
aliases.get(&definition.local_skill_id) and compares the provider id, which
fails to reject overlays that reuse an existing provider_skill_id with a
different local_skill_id; change the guard to search the aliases map for an
existing entry whose value equals definition.provider_skill_id and whose key !=
definition.local_skill_id, and bail with a message that the provider_skill_id is
already owned by a different local alias; keep the existing duplicate-provider
check using normalized.contains_key(&definition.provider_skill_id) and then
insert aliases.insert(definition.local_skill_id.clone(),
definition.provider_skill_id.clone()) as before.
In `@src/skills/suggest.rs`:
- Around line 345-349: The check currently uses the stale
recommendation.installed flag which can block reinstalls even after
registry.json is reread; update the install-gate in the block that references
recommendation.installed and installed_state so it only relies on
installed_state (i.e., remove the recommendation.installed check and keep the
installed_state.get(&recommendation.skill_id).is_some_and(|state|
state.installed) logic) so suggest_with()/install_selected_with() uses the
refreshed registry snapshot and the up-to-date installed_state map as the sole
determinant of AlreadyInstalled.
In `@tests/contracts/test_skill_suggest_output.rs`:
- Around line 164-166: The test uses a magic number in the assertion
assert!(astro_recommendations.len() >= 5) which makes the test brittle; replace
the literal with a named constant (e.g., MIN_ASTRO_RECOMMENDATIONS) or add an
inline comment documenting why 5 is expected based on the current embedded
catalog, then update the assertion to use that constant (or keep the comment
next to astro_recommendations) so future catalog changes are clearer; reference
the astro_recommendations variable and the assertion line so you modify that
exact check in the test.
In `@tests/e2e/lib/assert.sh`:
- Line 36: The assert_file_contains helper uses grep -F "$expected" which treats
dash-prefixed patterns (e.g., "-n") as options; update the call in
assert_file_contains to pass the pattern safely by adding a "--" separator
(change grep -F "$expected" "$path" to grep -F -- "$expected" "$path") so
leading hyphens are not interpreted as options.
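The effect of the missing `--` can be demonstrated in isolation (the file name and sample line below are hypothetical):

```shell
# A pattern with a leading hyphen, like the "-n" case flagged above.
pattern="-n"
printf -- '-n appears in this line\n' > sample.txt

# Without "--", grep would parse "-n" as its line-number flag.
# With "--", option parsing stops and the pattern is matched literally.
if grep -F -- "$pattern" sample.txt > /dev/null; then
  echo "found"
fi
```

The same `--` guard is worth applying to any helper that forwards caller-supplied strings as command arguments.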
In `@tests/e2e/lib/fixtures.sh`:
- Around line 85-91: The for-loop in tests/e2e/lib/fixtures.sh declares a local
variable attempt and iterates with "for attempt in $(seq 1 20)" but never uses
it; replace the unused loop variable with a discard name (e.g., "_" ) and remove
or change the "local attempt" declaration accordingly so the loop reads "for _
in $(seq 1 20)" (or declare "local _" if you want locality), keeping the curl
check and sleep logic in the body unchanged.
In `@tests/e2e/scenarios/05-suggest-install-all.sh`:
- Line 24: The current assertion using assert_json_expr on
"install-all-repeat.json" uses the jq filter that checks all(.status ==
"already_installed") which vacuously succeeds for an empty .results array;
update the assertion in tests/e2e/scenarios/05-suggest-install-all.sh (the
assert_json_expr call for "install-all-repeat.json") to require that .results is
non-empty and that every entry has status "already_installed" (i.e., combine a
check that .results length > 0 with the existing all(...) check) so the
repeat-run test fails if no recommendations are returned.
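A guard of that shape can be sketched with `jq`; the payload below is a simulated stand-in for the scenario's real output, not the actual file contents:

```shell
# Simulated repeat-run output with one already-installed result.
cat > install-all-repeat.json <<'JSON'
{"results":[{"skill_id":"example-skill","status":"already_installed"}]}
JSON

# Require a non-empty results array AND a uniform status, so an empty
# array no longer passes vacuously.
jq -e '(.results | length > 0) and all(.results[]; .status == "already_installed")' \
  install-all-repeat.json > /dev/null && echo "repeat run ok"
```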
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8c145a48-1f67-444b-828f-34bbf82606a1
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (27)
.github/workflows/sonarcloud.yml
npm/agentsync/tsconfig.json
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/design.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/exploration.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/proposal.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/specs/skill-recommendations/spec.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/state.yaml
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/tasks.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/verify-report.md
openspec/specs/skill-recommendations/spec.md
src/skills/catalog.rs
src/skills/catalog.v1.toml
src/skills/provider.rs
src/skills/suggest.rs
tests/contracts/test_skill_suggest_output.rs
tests/e2e/fixtures/repos/ai-adoption/.codex/config.toml
tests/e2e/lib/assert.sh
tests/e2e/lib/fixtures.sh
tests/e2e/run.sh
tests/e2e/scenarios/01-init-blank.sh
tests/e2e/scenarios/02-init-adoption.sh
tests/e2e/scenarios/04-suggest-install-guided.sh
tests/e2e/scenarios/05-suggest-install-all.sh
tests/integration/skill_suggest.rs
tests/unit/suggest_catalog.rs
website/docs/src/content/docs/guides/skills.mdx
website/docs/src/content/docs/reference/cli.mdx
Welcome to Codecov 🎉
Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.
Thanks for integrating Codecov - We've got you covered ☂️
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_update_output.rs (1)
150-159: ⚠️ Potential issue | 🟡 Minor
Remove the redundant re-install in the no-version-bump test.
At line 150, this second `skill install` call is unrelated to the no-version-bump assertion and can introduce unrelated failures/noise in this contract test.
Suggested cleanup

```diff
-    println!(
-        "[TEST DEBUG] command: {} skill install noroll-skill --source {} --json",
-        agentsync_bin(),
-        install_src.to_str().unwrap()
-    );
-    let output = Command::new(agentsync_bin())
-        .current_dir(&project_root)
-        .arg("skill")
-        .arg("install")
-        .arg("noroll-skill")
-        .arg("--source")
-        .arg(install_src.to_str().unwrap())
-        .arg("--json")
-        .output()
-        .expect("Failed to run agentsync install");
-    if !output.status.success() {
-        eprintln!(
-            "[INSTALL STDERR] {}",
-            String::from_utf8_lossy(&output.stderr)
-        );
-        eprintln!(
-            "[INSTALL STDOUT] {}",
-            String::from_utf8_lossy(&output.stdout)
-        );
-    }
+    // Baseline install already completed above; proceed directly to update attempt.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_update_output.rs` around lines 150 - 159, Remove the redundant second invocation of Command::new(agentsync_bin()).arg("skill").arg("install").arg("noroll-skill")... that appears at the start of this snippet and is unrelated to the no-version-bump assertion; instead leave only the initial install (or the call that sets up the state required by the test) and delete this extra /output() execution so the test asserts only the no-version-bump behavior without introducing unrelated install side-effects.
♻️ Duplicate comments (1)
tests/e2e/lib/fixtures.sh (1)
105-115: ⚠️ Potential issue | 🟡 Minor
The temp TTY input file is still leaked on the success path.
After line 109 succeeds, lines 111-115 restore or clear the `EXIT` trap before it fires, and there is no explicit `rm -f "$input_file"`. Every passing interactive scenario leaves its `mktemp` file behind.
🧹 Proposed fix

```diff
     input_file=$(mktemp)
     old_trap=$(trap -p EXIT || true)
     trap 'rm -f "$input_file"' EXIT
     printf '%b' "$input_text" > "$input_file"
-    script -qec "$command" /dev/null < "$input_file"
+    local status=0
+    script -qec "$command" /dev/null < "$input_file" || status=$?
+    rm -f "$input_file"
     if [ -n "$old_trap" ]; then
         eval "$old_trap"
     else
         trap - EXIT
     fi
+    return "$status"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/lib/fixtures.sh` around lines 105 - 115, The temporary TTY file "$input_file" is not removed on the success path because the EXIT trap is restored/cleared before it fires; modify the cleanup so the file is explicitly removed after the script invocation and before restoring the previous trap: after calling script -qec "$command" ... ensure you run rm -f "$input_file" (or call the existing trap handler) and then restore the saved trap stored in old_trap (or reset with trap - EXIT) so no mktemp file is leaked; update the block around input_file, old_trap, trap, and the script invocation accordingly.
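Stripped of the `script`-based TTY plumbing, the always-clean-up pattern the fix aims for reduces to a sketch like this (the helper name is hypothetical, and the real fixture additionally saves and restores an outer EXIT trap):

```shell
run_with_input() {
  # Feed stdin to a command from a temp file and remove the file on
  # every exit path, success or failure.
  local cmd=$1 input_text=$2
  local input_file status=0
  input_file=$(mktemp)
  printf '%b' "$input_text" > "$input_file"
  "$cmd" < "$input_file" || status=$?
  rm -f "$input_file"   # explicit cleanup before returning
  return "$status"
}

run_with_input cat 'hello\n'
```

Capturing the exit status before `rm -f` preserves the wrapped command's result while guaranteeing the temp file never outlives the call.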
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/sonarcloud.yml:
- Around line 76-78: The SonarCloud GitHub Actions step "Analyze with
SonarCloud" using
SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 is
missing its required inputs and environment mapping; add a with: block that
supplies the args (e.g., -Dsonar.projectBaseDir=.) and an env: block that sets
SONAR_TOKEN: ${{ env.SONAR_TOKEN }} so the action receives the token and the
projectBaseDir argument.
In `@npm/agentsync/tsconfig.json`:
- Around line 17-18: Update the tsconfig JSON to stop suppressing the
deprecation and use the non-deprecated resolver: replace the "moduleResolution":
"node" setting with "moduleResolution": "node10" and remove the
"ignoreDeprecations": "6.0" entry so TypeScript 6+ uses the supported module
resolution; target the properties "moduleResolution" and "ignoreDeprecations" in
the tsconfig to make this change.
In
`@openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/verify-report.md`:
- Around line 8-34: The markdown violates markdownlint: ensure there is a blank
line before and after each heading and table and surrounding fenced code blocks
(notably around the "Completeness" and "Build & Tests Execution" headings and
the code fences showing cargo/pnpm output), and fix any heading-level increments
so headings nest correctly; update the "Completeness" table and the code fences
by inserting a blank line above the "### Completeness" heading, a blank line
between the heading and the table, a blank line before the fenced blocks and one
after, and apply the same blank-line pattern to the other flagged sections
(lines referenced in the review) so lint rules pass.
In `@src/skills/suggest.rs`:
- Around line 513-578: The render_human method in SuggestResponse is too large
and repeats branching patterns; extract the detection, recommendation,
selection, and result blocks into private helper methods (e.g.,
render_detections(&self) -> Vec<String>, render_recommendations(&self) ->
Vec<String>, render_selection(&self) -> Vec<String>, render_results(&self) ->
Vec<String>) and call them from render_human to build the lines Vec instead of
inlining the logic. Move the loops/formatting for self.suggest.detections,
self.suggest.recommendations, self.selected_skill_ids, and self.results into
those helpers, keep the summary and install mode formatting in render_human, and
ensure each helper returns already-formatted lines so render_human simply
extends lines with their output.
- Around line 309-415: The install_selected_with function is too complex; split
it into small helpers: (1) extract a
validate_selected_skill_ids(selected_skill_ids, response) that checks selection
against recommendation_map (uses recommendation.skill_id), (2) move registry
handling into load_installed_state(registry_path) and
save_installed_state(registry_path, installed_state) to encapsulate
create_dir_all, registry.json path logic and read_installed_skill_states, (3)
extract perform_install_for_recommendation(recommendation, provider,
target_root, install_fn, installed_state) which resolves via provider.resolve
and calls install_fn, updates installed_state and returns a
SuggestInstallResult, and (4) extract patch_suggest_with_results(suggest_mut,
results) to set recommendation.installed and recompute
suggest.summary.installable_count; then rewrite install_selected_with to call
these helpers in sequence and return the same SuggestInstallJsonResponse.
- Around line 448-502: The render_human method is doing three responsibilities
(detections, recommendations, summary) and is too complex; extract the detection
formatting, recommendation formatting, and summary generation into small helper
functions to reduce complexity and improve readability. Create private helper
methods named render_detections(&self) -> Vec<String> (or String),
render_recommendations(&self) -> Vec<String> (or String), and
render_summary(&self) -> String that encapsulate the respective branches and
loops currently inside render_human (use the same logic for evidence formatting,
installed/version handling, reasons, and summary fields), then simplify
render_human to call these helpers, concatenate their returned lines, and join
them with "\n". Ensure you reference and reuse existing symbols like
self.detections, self.recommendations, recommendation.installed_version,
recommendation.reasons, and self.summary.detected_count when moving logic into
the helpers.
- Around line 152-156: annotate_installed_state currently only updates
SkillSuggestion when Some(InstalledSkillState) is passed, leaving stale values
when None is passed; update the method so that in the None branch you explicitly
clear the installed metadata on the SkillSuggestion instance (set self.installed
= false and reset self.installed_version to the empty/None equivalent),
referencing the annotate_installed_state function and the SkillSuggestion fields
installed and installed_version and the InstalledSkillState type.
In `@tests/e2e/lib/fixtures.sh`:
- Around line 85-89: The readiness loop that checks "$provider_url" with curl
inside the for loop should include connection and overall timeouts to avoid a
single probe hanging the 20-iteration loop; update the curl invocation in the
loop that references "$provider_url" (inside the for _ in $(seq 1 20) loop) to
add options such as --connect-timeout (e.g. 2) and --max-time or -m (e.g. 5) so
each probe fails fast and the loop respects the intended ceiling.
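A readiness probe with both fixtures.sh fixes applied (discarded loop variable plus per-probe timeouts) might look like the following; the function name and attempt/delay limits are illustrative:

```shell
wait_for_url() {
  # Poll $1 until it answers, up to $2 attempts with $3 seconds between tries.
  local provider_url=$1 attempts=$2 delay=$3
  for _ in $(seq 1 "$attempts"); do
    # --connect-timeout / --max-time keep a dead host from hanging one probe,
    # so the loop's overall ceiling stays roughly attempts * (timeout + delay).
    if curl -fsS --connect-timeout 2 --max-time 5 "$provider_url" > /dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}
```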
In `@tests/unit/suggest_catalog.rs`:
- Around line 95-119: The test should also assert that the aliased
recommendation id can be resolved by the provider to mirror the install path:
after finding custom (in
canonical_provider_skill_ids_use_local_aliases_in_recommendations) call the
provider's resolve path used during install (Provider::resolve or the
provider-specific resolve method) with recommendation.skill_id and assert it
returns a matching provider skill id/definition (i.e., not None) so the lookup
that occurs in src/skills/suggest.rs actually succeeds; this ensures the local
alias ("custom-rust") maps to the provider id (e.g., "acme/skills/rust-custom")
during install-time resolution.
In `@tests/unit/suggest_detector.rs`:
- Around line 95-125: The test
skips_unreadable_nested_directories_without_failing_detection currently sets
unreadable permissions on unreadable_dir and restores them only after calling
FileSystemRepoDetector.detect, which can leave permissions changed if detect
panics; wrap the permission-restoration in a scope guard (e.g., use the
scopeguard crate or a small RAII struct) that captures unreadable_dir and
original_permissions and restores them in Drop so permissions are always reset
even on panic, leaving the rest of the test logic (TempDir, fs::set_permissions,
calls to FileSystemRepoDetector.detect and assertions) unchanged.
---
Outside diff comments:
In `@tests/test_update_output.rs`:
- Around line 150-159: Remove the redundant second invocation of
Command::new(agentsync_bin()).arg("skill").arg("install").arg("noroll-skill")...
that appears at the start of this snippet and is unrelated to the
no-version-bump assertion; instead leave only the initial install (or the call
that sets up the state required by the test) and delete this extra /output()
execution so the test asserts only the no-version-bump behavior without
introducing unrelated install side-effects.
---
Duplicate comments:
In `@tests/e2e/lib/fixtures.sh`:
- Around line 105-115: The temporary TTY file "$input_file" is not removed on
the success path because the EXIT trap is restored/cleared before it fires;
modify the cleanup so the file is explicitly removed after the script invocation
and before restoring the previous trap: after calling script -qec "$command" ...
ensure you run rm -f "$input_file" (or call the existing trap handler) and then
restore the saved trap stored in old_trap (or reset with trap - EXIT) so no
mktemp file is leaked; update the block around input_file, old_trap, trap, and
the script invocation accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b59e669c-1267-49e4-9310-4dc05475897d
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (32)
.github/workflows/ci.yml
.github/workflows/sonarcloud.yml
Makefile
npm/agentsync/tsconfig.json
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/design.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/exploration.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/proposal.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/specs/skill-recommendations/spec.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/state.yaml
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/tasks.md
openspec/changes/archive/2026-03-31-hybrid-skill-catalog-metadata/verify-report.md
openspec/specs/skill-recommendations/spec.md
src/skills/catalog.rs
src/skills/catalog.v1.toml
src/skills/detect.rs
src/skills/provider.rs
src/skills/suggest.rs
tests/contracts/test_skill_suggest_output.rs
tests/e2e/fixtures/repos/ai-adoption/.codex/config.toml
tests/e2e/lib/assert.sh
tests/e2e/lib/fixtures.sh
tests/e2e/run.sh
tests/e2e/scenarios/01-init-blank.sh
tests/e2e/scenarios/02-init-adoption.sh
tests/e2e/scenarios/04-suggest-install-guided.sh
tests/e2e/scenarios/05-suggest-install-all.sh
tests/integration/skill_suggest.rs
tests/test_update_output.rs
tests/unit/suggest_catalog.rs
tests/unit/suggest_detector.rs
website/docs/src/content/docs/guides/skills.mdx
website/docs/src/content/docs/reference/cli.mdx
This pull request introduces a new autodetect skill suggestion feature for AgentSync, enabling the CLI to recommend relevant skills for a repository based on detected technologies. The implementation is designed in two phases: an initial read-only suggestion flow and a subsequent guided installation flow, both reusing existing install/update primitives. The architecture cleanly separates detection, recommendation, and installation logic, and introduces a catalog abstraction to allow for future provider-backed metadata. Extensive testing strategies and clear CLI/JSON contracts are defined to ensure robust and predictable behavior.
Skill Suggestion Pipeline and Architecture:
- The suggestion flow is implemented under the `skill` command, split into three modules: repository technology detection (detect.rs), catalog-backed recommendation policy (catalog.rs), and install orchestration that reuses existing install/update/registry logic.
- A `SkillCatalog` abstraction ships with an embedded catalog for deterministic, offline metadata, allowing for future provider-backed sources without changing CLI contracts.

CLI and Output Contracts:
- Adds the `agentsync skill suggest` CLI entry point with support for human-readable and JSON output, with phase 2 extending to interactive and install-all flows.

Testing and Migration:
Exploration and Rationale:
https://linear.app/dallay/issue/DALLAY-216/prd-autodetect-repository-technologies-and-suggest-opinionated-skills
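For scripting against the JSON output contract described above, the response pairs naturally with `jq`. The payload below is a simulated stand-in whose field names are inferred from this summary (`suggest`, `recommendations`, `skill_id`, `installed`), so treat the exact shape as an assumption rather than the CLI's verbatim schema:

```shell
# Simulated `agentsync skill suggest --json` payload.
cat > suggest.json <<'JSON'
{"suggest":{"recommendations":[
  {"skill_id":"rust-basics","installed":false},
  {"skill_id":"astro-content","installed":true}
]}}
JSON

# List only the recommended skills that are not yet installed.
jq -r '.suggest.recommendations[] | select(.installed | not) | .skill_id' suggest.json
```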