
fix(manual): harden inference reliability and editable fields #77

Merged
github-actions[bot] merged 1 commit into main from fix/manual-inference-hardening on Feb 24, 2026

Conversation


@anand-testcompare anand-testcompare commented Feb 24, 2026

Summary

  • harden manual inference LLM parsing so non-canonical outputs are normalized instead of failing schema validation
  • stop auto-filling both connector ends from a single mention, add lightning/lightening normalization, and cap lightning paths to USB 2.0 + no video
  • make reset/default manual connectors Unknown instead of prefilled USB-C
  • improve manual-entry editability for printed markings:
    • wattage supports suffixes/lists/ranges like 240W and 60, 100, 240W
    • generation inference now handles broader markers and mixed strings
    • add USB generation datalist suggestions in the form
  • remove raw LLM schema failure text from user-facing notes (now logged server-side)
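The wattage-marking behavior described above (suffixes, lists, and ranges like 240W and 60, 100, 240W) can be sketched roughly as follows. This is a hypothetical helper, not the PR's actual code; the name parseWattageMarking and the "take the maximum listed value" rule are illustrative assumptions:

```typescript
// Hypothetical sketch of suffix/list/range wattage parsing, e.g. "240W" or
// "60, 100, 240W". Names are illustrative, not the PR's actual identifiers.
function parseWattageMarking(input: string): number | null {
  // Grab every numeric token; ignore unit suffixes and separators.
  const tokens = input.match(/\d+(?:\.\d+)?/g);
  if (!tokens) {
    return null;
  }
  // A cable marked "60, 100, 240W" supports up to its highest listed wattage.
  return Math.max(...tokens.map(Number));
}

console.log(parseWattageMarking("240W")); // 240
console.log(parseWattageMarking("60, 100, 240W")); // 240
console.log(parseWattageMarking("no digits")); // null
```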

Validation

  • bun x ultracite check
  • bun test packages/backend/convex/manualInferenceLogic.test.ts packages/backend/convex/manualInference.integration.test.ts apps/web/src/lib/mappers.test.ts apps/web/src/lib/capability.test.ts
  • PUBLIC_CONVEX_URL=https://example.convex.cloud bun test packages/backend/convex/shopify.ingest.integration.test.ts
  • bun test packages/shopify-cable-source/src/source.integration.test.ts
  • bun run build
  • bun build apps/tui/src/index.tsx --compile --outfile apps/tui/dist/cable-intel-tui
  • ./apps/tui/dist/cable-intel-tui --help

Notes

  • validated UI interaction locally with d3k/browser automation for manual-entry flow and wattage input handling.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added datalist for USB generation markings with expanded options and improved placeholders.
    • Enhanced wattage input to support ranges and units display.
  • Bug Fixes

    • Improved LLM inference to use best-effort extraction with confidence scoring and uncertainty tracking.
    • Added Lightning connector ceiling logic to appropriately cap data speeds and disable video support.
    • Updated default connector values to "Unknown" for better accuracy.
  • Tests

    • Added comprehensive tests for parsing wattage ranges and USB generation markings.
    • Added tests for LLM output normalization and inference logic validation.

@vercel

vercel bot commented Feb 24, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: cable-intel-web — Status: Ready — Actions: Preview, Comment — Updated (UTC): Feb 24, 2026 6:46pm


@github-actions github-actions bot enabled auto-merge (squash) February 24, 2026 18:45
@coderabbitai

coderabbitai bot commented Feb 24, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ad0725 and 7c6229f.

📒 Files selected for processing (9)
  • apps/web/src/components/markings-form.svelte
  • apps/web/src/lib/capability.test.ts
  • apps/web/src/lib/capability.ts
  • apps/web/src/lib/mappers.test.ts
  • apps/web/src/lib/types.ts
  • packages/backend/convex/manualInference.integration.test.ts
  • packages/backend/convex/manualInference.ts
  • packages/backend/convex/manualInferenceLogic.test.ts
  • packages/backend/convex/manualInferenceLogic.ts

Walkthrough

This pull request enhances the cable markings system with UI improvements, data-driven parsing logic, expanded test coverage, and a refactored backend inference pipeline. Key updates include USB generation datalist support in the markings form, refactored capability parsing with regex-based token extraction, changed default connectors from "USB-C" to "Unknown," and significant backend changes introducing LLM-specific schemas, normalization helpers, Lightning ceiling detection, and improved manual inference processing.

Changes

Cohort / File(s) — Summary

  • Markings Form UI — apps/web/src/components/markings-form.svelte
    Updated wattage input placeholders with units and range notation; introduced a datalist for USB generation markings with expanded generation options, wired via the list attribute.
  • Capability Parsing — apps/web/src/lib/capability.ts, apps/web/src/lib/capability.test.ts
    Refactored numeric parsing to use regex-based token extraction (NUMERIC_TOKEN_REGEX, GBPS_REGEX, GENERATION_GBPS_HINTS); parsePositiveNumber now derives the maximum numeric token; inferMaxGbpsFromGeneration combines textual hints with explicit Gbps values; added comprehensive unit tests for both functions.
  • Type Defaults — apps/web/src/lib/types.ts, packages/backend/convex/manualInferenceLogic.ts
    Changed DEFAULT_MARKINGS_DRAFT and DEFAULT_MANUAL_DRAFT to set connectorFrom and connectorTo to "Unknown" instead of "USB-C", updating runtime defaults across web and backend.
  • Mappers Test Coverage — apps/web/src/lib/mappers.test.ts
    Added tests for buildProfileFromMarkings verifying wattage range parsing (extracts the maximum from comma-separated values) and handling of multiple USB generation markings (selects the highest speed).
  • Manual Inference Backend — packages/backend/convex/manualInference.ts, packages/backend/convex/manualInference.integration.test.ts, packages/backend/convex/manualInferenceLogic.ts, packages/backend/convex/manualInferenceLogic.test.ts
    Overhauled the backend inference system with an updated LLM prompt using best-effort extraction; introduced normalization helpers (normalizeConnectorLabel, normalizeVideoSupportLabel, normalizeFollowUpCategoryLabel); added LLM-specific schemas with preprocessing; implemented Lightning ceiling detection to cap the data class and disable video; expanded CONNECTOR_MATCHERS to recognize the "lightening" alias; added maxRetries to LLM calls; restructured follow-up category prioritization and question prompts; extended test coverage for connector detection, Lightning normalization, and flexible LLM output handling.
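The generation-marking inference described in the Capability Parsing cohort could be sketched like this. The name inferMaxGbpsFromGeneration comes from the PR, but the hint table, regexes, and logic here are illustrative assumptions, not the real implementation:

```typescript
// Illustrative hint table; the real GENERATION_GBPS_HINTS in the PR may differ.
const GENERATION_GBPS_HINTS: Array<[RegExp, number]> = [
  [/gen\s*2\s*x\s*2/i, 20],
  [/gen\s*2/i, 10],
  [/gen\s*1/i, 5],
];

// Illustrative stand-in for the PR's GBPS_REGEX: an explicit "N Gbps" value.
const GBPS_REGEX = /(\d+(?:\.\d+)?)\s*gbps/i;

// Combine textual generation hints with any explicit Gbps value, returning
// the highest speed implied by a mixed marking string.
function inferMaxGbpsFromGeneration(marking: string): number | null {
  const candidates: number[] = [];
  for (const [pattern, gbps] of GENERATION_GBPS_HINTS) {
    if (pattern.test(marking)) {
      candidates.push(gbps);
    }
  }
  const explicit = marking.match(GBPS_REGEX);
  if (explicit) {
    candidates.push(Number(explicit[1]));
  }
  return candidates.length > 0 ? Math.max(...candidates) : null;
}
```

Taking the maximum over all matched hints is what lets a string like "USB 3.2 Gen 2x2" resolve to 20 Gbps even though the looser "Gen 2" pattern also matches it.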

Sequence Diagram(s)

sequenceDiagram
    participant User as User Input
    participant Deterministic as Deterministic Parser
    participant Normalize as Normalizer
    participant LLM as LLM Inference
    participant Merge as Signal Merger
    participant Output as Final Result

    User->>Deterministic: Submit markings text
    Deterministic->>Deterministic: Extract signals (connector, power, data)
    Deterministic->>Normalize: Pass extracted signals
    Normalize->>Normalize: Apply Lightning ceiling<br/>(cap data, disable video)
    Normalize->>LLM: Send normalized context
    LLM->>LLM: Preprocess input via<br/>coerceLlmText
    LLM->>LLM: Extract draft patch<br/>with normalized schemas
    LLM-->>Merge: Return LLM results<br/>+ confidence score
    Deterministic-->>Merge: Deterministic signals
    Merge->>Merge: Combine patches<br/>Filter power if Lightning
    Merge->>Output: Return merged inference
    Output-->>User: Final cable profile
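The "Apply Lightning ceiling" step in the diagram above could look roughly like this. The helper name normalizeConnectorLabel comes from the PR; the DraftPatch shape, applyLightningCeiling, and the 0.48 Gbps USB 2.0 figure are illustrative assumptions:

```typescript
// Sketch of the Lightning handling in the diagram above; implementation
// details are assumptions, only normalizeConnectorLabel is named in the PR.
function normalizeConnectorLabel(label: string): string {
  const trimmed = label.trim().toLowerCase();
  // The PR adds "lightening" as a recognized misspelling of Lightning.
  if (trimmed === "lightning" || trimmed === "lightening") {
    return "Lightning";
  }
  if (trimmed === "usb-c" || trimmed === "usb c" || trimmed === "type-c") {
    return "USB-C";
  }
  return "Unknown";
}

// Hypothetical draft-patch shape for illustration only.
interface DraftPatch {
  connectorFrom: string;
  connectorTo: string;
  maxGbps?: number;
  videoSupport?: boolean;
}

// Lightning ceiling: cap data at USB 2.0 speeds (0.48 Gbps) and disable video.
function applyLightningCeiling(patch: DraftPatch): DraftPatch {
  const hasLightning =
    patch.connectorFrom === "Lightning" || patch.connectorTo === "Lightning";
  if (!hasLightning) {
    return patch;
  }
  return {
    ...patch,
    maxGbps: Math.min(patch.maxGbps ?? 0.48, 0.48),
    videoSupport: false,
  };
}
```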

Possibly related PRs

  • cable-intel#76: Modifies manual inference follow-up question thresholds and priorities alongside related test updates, directly extending the inference logic refactored here.
  • cable-intel#73: Updates the same manual inference workflow, parsing logic, schemas, and related tests, representing parallel work on the cable intelligence pipeline.

Poem

🐰 A hop through schemas and regexes bright,
Lightning ceilings shine with normalized might,
Datalists sprouting from form inputs free,
LLM whispers merge with logic so spry,
Unknown defaults replacing USB-C's sigh! ✨


@github-actions github-actions bot merged commit 4849117 into main Feb 24, 2026
4 of 5 checks passed
@anand-testcompare anand-testcompare deleted the fix/manual-inference-hardening branch February 24, 2026 22:04