feat: improve AI pipeline robustness, add OpenRouter and Grok support#54

Open
rixabhh wants to merge 1 commit into main from feat/ai-output-quality-improvements-14913244531174148368

Conversation

rixabhh (Owner) commented Mar 28, 2026

What

Improved the AI analysis output quality and robustness by implementing stricter JSON schemas, retry logic, and fallback responses. Added support for new AI providers (OpenRouter, Grok, Anthropic, Gemini) with provider-specific client-side API key validation.

Why

This makes the analysis output significantly better for users, adding actionable coaching advice and growth areas while ensuring the application handles LLM non-determinism and timeouts gracefully. The addition of new BYOK providers expands options for privacy-conscious users.

Changes

  • functions/api/analyze.js — Added robust callLLM logic with AbortController timeouts, structured JSON schema validation, and provider-specific prompting and response parsing.
  • dashboard.html — Updated UI to display newly parsed fields: growth_areas and coaching_advice.
  • static/js/app.js — Enhanced client-side validation to correctly identify API key formats for each provider before attempting calls.
  • static/js/dashboard.js — Mapped the new JSON fields to the UI components.
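
The key-format checks mentioned for static/js/app.js can be sketched roughly like this. The prefixes below follow each vendor's publicly documented key format, but the exact patterns used in the PR may be stricter or looser:

```javascript
// Hypothetical sketch of the client-side key-format checks; the real
// patterns in static/js/app.js may differ from these.
const KEY_PATTERNS = {
  openai: /^sk-[A-Za-z0-9_-]{20,}$/,        // OpenAI keys start with "sk-"
  openrouter: /^sk-or-[A-Za-z0-9_-]{20,}$/, // OpenRouter keys start with "sk-or-"
  anthropic: /^sk-ant-[A-Za-z0-9_-]{20,}$/, // Anthropic keys start with "sk-ant-"
  gemini: /^AIza[A-Za-z0-9_-]{30,}$/,       // Google API keys start with "AIza"
  grok: /^xai-[A-Za-z0-9_-]{20,}$/,         // x.ai keys start with "xai-"
};

function looksLikeValidKey(provider, key) {
  const pattern = KEY_PATTERNS[provider];
  return Boolean(pattern && pattern.test((key || '').trim()));
}
```

Failing fast here avoids burning a round trip on a key that cannot possibly work for the selected provider.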

Prompt Changes

Created a PROVIDER_SYSTEM_PROMPTS object to tailor system instructions to each LLM's preferred format (e.g., Anthropic XML tags). Enforced a strict JSON output schema and added a retry mechanism that appends a critical strict-JSON warning if the first attempt fails.
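
A minimal sketch of that retry flow follows. Only the PROVIDER_SYSTEM_PROMPTS name comes from the PR; the prompt strings, the two-attempt limit, and the callOnce shape are assumptions:

```javascript
// Hypothetical per-provider system prompts; the real table in
// functions/api/analyze.js tailors these to each LLM's preferred format.
const PROVIDER_SYSTEM_PROMPTS = {
  openai: 'You are an analysis engine. Respond with a single JSON object only.',
  anthropic: 'Wrap your answer in <analysis> tags containing a single JSON object.',
  gemini: 'Return exactly one JSON object: no prose, no markdown fences.',
};

const STRICT_JSON_WARNING =
  '\n\nCRITICAL: Your previous reply was not valid JSON. Respond with raw JSON only.';

// callOnce(systemPrompt, userPrompt) -> Promise<string> is the raw LLM call.
async function generateWithRetry(callOnce, provider, userPrompt, maxAttempts = 2) {
  let prompt = userPrompt;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await callOnce(PROVIDER_SYSTEM_PROMPTS[provider], prompt);
    try {
      return JSON.parse(raw); // parsed successfully: done
    } catch {
      prompt = userPrompt + STRICT_JSON_WARNING; // retry with the warning appended
    }
  }
  return null; // caller falls back to the canned report
}
```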

New Providers

  • OpenRouter: Drop-in support for OpenAI-compatible endpoint with necessary headers.
  • Anthropic: Direct integration with Claude-3 using x-api-key.
  • Gemini: Native integration with gemini-1.5-flash endpoint.
  • Grok: Added x.ai endpoint support.
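
The four integrations differ mainly in endpoint and auth header. A rough sketch of the request shapes (the endpoints are the vendors' public ones; the model and version values are illustrative, not taken from the diff):

```javascript
// Sketch of per-provider request shapes; header names follow each vendor's
// documented convention, but the exact values used in the PR are assumed.
function buildRequest(provider, apiKey) {
  switch (provider) {
    case 'openrouter': // OpenAI-compatible chat completions endpoint
      return {
        url: 'https://openrouter.ai/api/v1/chat/completions',
        headers: { Authorization: `Bearer ${apiKey}` },
      };
    case 'anthropic': // uses x-api-key plus a required version header
      return {
        url: 'https://api.anthropic.com/v1/messages',
        headers: { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01' },
      };
    case 'gemini': // key is passed as a query parameter, not a header
      return {
        url: `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`,
        headers: {},
      };
    case 'grok': // x.ai exposes an OpenAI-compatible endpoint
      return {
        url: 'https://api.x.ai/v1/chat/completions',
        headers: { Authorization: `Bearer ${apiKey}` },
      };
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```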

Error Handling

Added comprehensive try/catch logic around LLM calls with setTimeout-driven AbortControllers to prevent hanging requests. Masked API keys in error logs to maintain privacy standards. Added a fallback JSON report that is returned if all generation attempts fail.
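
The timeout-plus-fallback pattern can be sketched as follows. The fallback field names match the dashboard's new fields; the 30-second budget and the doFetch shape are assumptions:

```javascript
// Sketch of the timeout guard and fallback report; the real callLLM in
// functions/api/analyze.js wraps an actual fetch to the chosen provider.
const FALLBACK_REPORT = {
  summary: 'Analysis unavailable; please retry.',
  growth_areas: [],
  coaching_advice: [],
};

async function callWithTimeout(doFetch, timeoutMs = 30000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await doFetch(controller.signal); // rejects if the timer fires first
  } catch (err) {
    // Log only the error message, never any part of the API key.
    console.error(`LLM call failed: ${err.message}`);
    return FALLBACK_REPORT;
  } finally {
    clearTimeout(timeoutId);
  }
}
```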


PR created automatically by Jules for task 14913244531174148368 started by @rixabhh

- Rewrite analysis prompts for deeper emotional insight
- Add robust JSON schema validation and retry logic for LLM responses
- Add OpenRouter, Grok, and Anthropic provider integrations
- Add client-side validation for specific AI provider API key formats
- Render new coaching advice and growth area insights on the dashboard
- Handle malformed LLM JSON responses and timeouts gracefully
@google-labs-jules

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

} catch (err) {
    clearTimeout(timeoutId);
    let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
    console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);

Check failure

Code scanning / CodeQL: Clear-text logging of sensitive information (High)

This logs sensitive data returned by an access to apiKey as clear text.

Copilot Autofix


In general, to fix clear-text logging of sensitive information, remove the sensitive data from log messages or replace it with a non-sensitive placeholder. If you need to distinguish different calls for debugging, use non-secret identifiers (e.g., a generated request ID) instead of secret material.

In this specific case, the problematic behavior is in the catch block of callLLM, where safeKey is derived from the potentially sensitive apiKey and interpolated into a console.error message. The safest change that preserves existing functionality is to stop including any portion of the key in the log. We can remove the safeKey variable and change the log message to not reference the key at all, retaining currentProvider and the error message. This change is confined to the catch block around lines 187–192 in functions/api/analyze.js and requires no new imports or helper methods.

Concretely:

  • Delete the line that computes safeKey.
  • Update the console.error call to exclude the key, e.g., console.error(`LLM call failed for ${currentProvider}: ${err.message}`);.

Suggested changeset 1
functions/api/analyze.js

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/functions/api/analyze.js b/functions/api/analyze.js
--- a/functions/api/analyze.js
+++ b/functions/api/analyze.js
@@ -186,8 +186,7 @@
                 return parsed;
             } catch (err) {
                 clearTimeout(timeoutId);
-                let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
-                console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);
+                console.error(`LLM call failed for ${currentProvider}: ${err.message}`);
                 return null;
             }
         };
EOF
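
If per-call traceability still matters after the key is dropped from the log, the non-secret identifier the autofix mentions could look like this (makeRequestId and logFailure are hypothetical helpers, not part of the patch):

```javascript
// Hypothetical sketch: tag each failure with a random request ID instead of
// any fragment of the API key, as the autofix note suggests.
const makeRequestId = () =>
  `req_${Date.now().toString(36)}_${Math.random().toString(36).slice(2, 8)}`;

function logFailure(provider, err) {
  const requestId = makeRequestId();
  console.error(`[${requestId}] LLM call failed for ${provider}: ${err.message}`);
  return requestId; // surface to the client so users can quote it in bug reports
}
```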