
feat: add Prompt API integration for built-in AI support #323

Merged
NakataCode merged 9 commits into master from prompt-api-poc
Mar 24, 2026

Conversation

@NakataCode
Contributor

🚀 Enhanced AI Chat with Interactive Viewers and Improved UX

📋 Summary

This PR significantly enhances the AI chat interface with interactive JSON/code viewers, improved context awareness, and better user experience. The AI assistant now has full page visibility and provides rich, interactive responses.


✨ Features Added

🎨 Interactive Content Display

  • Interactive JSON Viewer

    • Expandable/collapsible tree structure with ▼/▶ triangles
    • Syntax-highlighted values (strings, numbers, booleans, null)
    • Formatted with proper indentation and structure
  • Enhanced Code Blocks

    • Rendered as styled DOM elements (not plain <pre><code>)
    • Language badges (xml, javascript, etc.) displayed in top-right corner
    • Line-by-line rendering for better readability
    • Fixed strikethrough/crossed-out text rendering issues
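As a rough illustration of the value highlighting described above, a viewer like this typically maps each parsed JSON value to a styling class. The function name and class names below are assumptions for the sketch, not taken from the PR:

```javascript
// Hypothetical helper: map a parsed JSON value to the CSS class a
// syntax-highlighting JSON viewer might apply. Names are illustrative.
function classifyJsonValue(value) {
    if (value === null) {
        return 'json-null';
    }
    switch (typeof value) {
        case 'string': return 'json-string';
        case 'number': return 'json-number';
        case 'boolean': return 'json-boolean';
        default: return Array.isArray(value) ? 'json-array' : 'json-object';
    }
}
```

Objects and arrays would then get the expand/collapse triangle treatment, while leaf values are rendered with their class applied.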

🧠 AI Context & Intelligence

  • Full Page Awareness

    • Expanded context includes full bindings data (not just previews)
    • Expanded context includes full aggregations data
    • Control selection automatically updates AI context
  • Smart Session Management

    • Session auto-initializes when Gemini Nano model is ready
    • No manual session creation needed

💎 UX Improvements

  • Token Counter

    • Always visible with current usage (tokens used / quota)
    • Shows percentage used
    • Warning state at 90%+ usage with red background
    • Automatic warning message at 70% usage
  • Custom Confirmation Dialog

    • In-panel modal for "Clear History" action (not browser popup)
    • Centered in AI chat with backdrop overlay
    • Smooth slide-in animation
    • Can dismiss by clicking overlay or Cancel button
  • Context Indicator

    • Shows currently selected control type and ID
    • Clear button (×) to remove context
    • Updates automatically on control selection
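The token-counter thresholds described above (warning message at 70%, red warning state at 90%+) could be modeled with a small helper like this. The function name and return values are illustrative, not the PR's actual API:

```javascript
// Sketch of the threshold logic only; names are assumptions.
function tokenUsageState(used, quota) {
    var ratio = quota > 0 ? used / quota : 0;
    if (ratio >= 0.9) {
        return 'critical'; // red background on the counter
    }
    if (ratio >= 0.7) {
        return 'warning'; // automatic warning message shown
    }
    return 'ok';
}
```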

@NakataCode NakataCode requested a review from kgogov February 4, 2026 15:30
@kgogov
Contributor

kgogov commented Feb 11, 2026

Hello @dobrinyonkov,

Together with @NakataCode, we reviewed the functionality.

Overall, the AI functionality works quite well, but we noticed a few things that I will outline below:

  • Before downloading the AI model, the text fields as well as the “SEND” button should be disabled.
  • Idea: we could add a “Copy response” button after the response is generated.
  • Idea: we could add a “Copy” button in the top-right corner of the code snippets (HTML, XML, JavaScript, CSS, JSON etc.).
  • Maybe: We could consider renaming the “AI” tab to something more descriptive.
  • We could improve the message loading experience by adding a clearer busy/loading indicator state.
  • Important: please verify exactly which model is being downloaded when running in the Edge environment and adjust the message if necessary.

Additionally, I performed a full review of all changes and attached an AI analysis generated with Claude Code using the Opus 4.5 model. There are some interesting findings that are worth reviewing to determine whether they are relevant.

CODE_REVIEW_ANALYSIS.md
CODE_REVIEW_PLAN.md

That’s all from me for now.

(Screenshots attached: 2026-02-11 at 15:12:05, 15:11:46, and 15:29:21)

@NakataCode
Contributor Author

57 new unit tests added across 3 spec files.

ChatStorageManager.spec.js (7 tests) covers storage key generation and URL sanitization, including null/undefined inputs, empty strings, special character handling, and complex URL patterns.

AISessionManager.spec.js (21 tests) covers constructor initialization, message handler registration, system prompt generation (framework version, theme, loaded libraries), JSON truncation with circular reference handling, prompt formatting with control context, and message array building.

AIChat.spec.js (29 tests) covers constructor defaults, ARIA compliance on all interactive elements and dialogs, XSS prevention via HTML escaping, markdown parsing (bold, italic, inline code, links, line breaks), JSON and code viewer rendering, dialog focus management (focus trap, ESC key, focus restoration), debounced rendering behavior, and event listener wiring.
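The XSS prevention these tests cover is conventionally done by escaping HTML-significant characters before inserting text into the DOM. The helper below is a standard sketch of that approach; the function name is hypothetical and may not match the PR's actual implementation:

```javascript
// Common HTML-escaping sketch; the ampersand must be replaced first
// so already-escaped entities are not double-mangled.
function escapeHtml(text) {
    return String(text)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}
```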

@NakataCode
Contributor Author

Regarding "verify exactly which model is being downloaded when running in the Edge environment and adjust the message if necessary":

The 'Gemini Nano is ready' message is hardcoded in the UI5 Inspector and simply indicates that the browser's built-in AI model has been downloaded and is ready; it doesn't reflect which model is actually running underneath. From what I have looked up, the difference in responses between Chrome and Edge suggests that each browser handles the model download differently and may supply a different underlying model, which would explain the inconsistent behavior. From the code side, there doesn't seem to be a way to determine which model is actually being used.

@NakataCode NakataCode changed the title Prompt api poc feat: add Prompt API integration for built-in AI support Feb 18, 2026
@kgogov
Contributor

kgogov commented Mar 4, 2026

Code review

Found 3 issues:

  1. Streaming chunks are accumulated instead of replaced. The Chrome Prompt API's promptStreaming() yields cumulative text with each chunk (the full response so far, not a delta). Using fullResponse += chunk causes every prior chunk to be re-appended, producing garbled output like "HelloHello worldHello world today". The fix is fullResponse = chunk.

    let fullResponse = '';
    // Process stream
    for await (const chunk of stream) {
        fullResponse += chunk;
        this._debouncedRender(fullResponse);
    }

  2. Port disconnect during active streaming causes a permanent UI hang. The onDisconnect handler sets _isConnected = false but never rejects the in-flight resolveChunk/rejectChunk promise used by the promptStreaming async generator. When the MV3 service worker terminates mid-stream (which Chrome does on idle), the async iterator stalls indefinitely and the UI is permanently stuck with the send button disabled and no recovery path.

    // Handle disconnect
    this._port.onDisconnect.addListener(() => {
        this._isConnected = false;
        this._hasActiveSession = false;
        this._port = null;
    });

  3. sendButton is never re-enabled after a successful model download. sendButton.disabled = true is set at the start of _handleDownloadModel but the success path only resets input.disabled = false, leaving the send button permanently disabled until the user types in the input field (which triggers the input event listener that re-evaluates button state).

    await this._sessionManager.downloadModel((progress) => {
        const percent = Math.round(progress * 100);
        this._renderModelStatus('downloading', progress, `Downloading: ${percent}%`);
    });
    this._renderModelStatus('ready', 1, 'Model ready!');
    // Initialize session after download
    await this._initializeSession();
    // Re-enable UI after successful download
    input.disabled = false;

🤖 Generated with Claude Code

If this code review was useful, please react with 👍. Otherwise, react with 👎.

@kgogov
Contributor

kgogov commented Mar 10, 2026

Following up on my previous review — I took a closer look at each finding to verify which ones are actionable:

1. Streaming chunks (fullResponse += chunk) — Not a bug

After further investigation, the Chrome Prompt API's promptStreaming() originally returned cumulative text (Chrome 131 Origin Trial), but was changed to return delta chunks in later versions. The current code fullResponse += chunk is correct for delta-based streaming — confirmed by our own testing (the screenshots show clean, non-garbled responses; cumulative chunks with += would produce visibly duplicated text). No change needed here, though a clarifying comment in the code wouldn't hurt.
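The delta-vs-cumulative distinction can be demonstrated with a toy stream; the generator and helper below are illustrative, not the extension's code. With delta chunks, accumulating via += produces exactly the clean output we observed in testing:

```javascript
// Toy async generator emitting delta chunks, as the current
// Prompt API's promptStreaming() does. Names are illustrative.
async function* deltaStream() {
    yield 'Hello';
    yield ' world';
    yield ' today';
}

// Accumulate deltas into the full response, as AIChat.js does.
async function collect(stream) {
    let fullResponse = '';
    for await (const chunk of stream) {
        fullResponse += chunk; // correct for delta chunks
    }
    return fullResponse;
}
```

If the API still returned cumulative text, the same loop would produce "HelloHello worldHello world today", which is exactly the garbling the original finding predicted and which we did not observe.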

2. Port disconnect during streaming — Valid, low risk

This one holds up. If the port disconnects while the async generator is awaiting a chunk, the pending promise is never settled and the UI hangs permanently (_isStreaming stays true, send button disabled, no error shown). The practical risk is low since Chrome won't terminate an active service worker mid-stream, but it's good defensive coding. Suggested fix — trigger the error handler in onDisconnect:

// In AISessionManager.js _connect()
this._port.onDisconnect.addListener(() => {
    this._isConnected = false;
    this._hasActiveSession = false;
    this._port = null;

    // Reject any in-flight streaming promise to prevent UI hang
    var errorHandler = this._messageHandlers['error'];
    if (errorHandler) {
        errorHandler({ message: 'Connection to background script lost. Please try again.' });
    }
});

This rejects the generator's pending promise on disconnect, which unblocks the for await loop in AIChat.js and lets the catch block reset _isStreaming and show an error message. When no stream is active, the handler is already cleaned up so the if check safely no-ops.
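A minimal model of why this unblocks the consumer: the generator awaits an externally settled promise, and rejecting that promise on disconnect propagates out of the for await loop into the caller's catch block. All names here are illustrative stand-ins for the extension's internals:

```javascript
// Sketch only: models the resolveChunk/rejectChunk handshake.
function makeStream() {
    let resolveChunk, rejectChunk;
    const next = () => new Promise((res, rej) => {
        resolveChunk = res;
        rejectChunk = rej;
    });
    async function* stream() {
        while (true) {
            yield await next(); // stalls here until settled
        }
    }
    return {
        stream: stream(),
        push: (chunk) => resolveChunk(chunk),
        // Models onDisconnect invoking the error handler:
        disconnect: () => rejectChunk(new Error('Connection to background script lost.')),
    };
}

async function consume(s) {
    try {
        for await (const chunk of s.stream) {
            // render chunk...
        }
    } catch (e) {
        // Models the catch block that resets _isStreaming and shows the error.
        return 'recovered: ' + e.message;
    }
}
```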

3. sendButton not re-enabled after download — Valid, cosmetic

This one is real but minor. The success path in _handleDownloadModel re-enables input but not sendButton. It self-corrects on the first keystroke via the input event listener, but for consistency:

// Re-enable UI after successful download
input.disabled = false;
sendButton.disabled = !input.value.trim().length;

@NakataCode NakataCode merged commit 3fbde9d into master Mar 24, 2026
3 checks passed
@NakataCode NakataCode deleted the prompt-api-poc branch March 24, 2026 12:53
@github-actions
Contributor

🎉 This PR is included in version 1.9.0 🎉

The release is available on GitHub.

Your semantic-release bot 📦🚀
