feat: improve AI pipeline robustness, add OpenRouter and Grok support #54
Open — rixabhh wants to merge 1 commit into `main` from `feat/ai-output-quality-improvements-14913244531174148368`
@@ -0,0 +1,13 @@

Title: Add Follow-Up Conversational Interface to Insights

## Why this feature matters
Currently, the AI analysis is a static, one-time read. Users often have questions about specific red flags, want elaboration on their coaching advice, or want to ask about specific context in their chat data. Making the report interactive would significantly increase user retention and session time.

## Rough implementation approach
- Add a new "Ask The Algorithm" chat input box below the Deep Insights section.
- Create a new backend endpoint `/api/followup` that accepts the original `stats`, the initial `report`, and the user's `question`.
- Pass these as context to the LLM (using the same provider abstraction logic) and stream the response back to the UI.
- Ensure the prompt maintains the persona chosen by the user (Playful, Balanced, or Direct).

## User benefit
Lets users treat their chat analysis as a personalized relationship coach rather than a one-off report. It deepens emotional engagement and makes the product significantly more shareable and valuable.
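The `/api/followup` endpoint sketched above could look roughly like the following. This is an illustrative sketch only: the handler shape assumes the Cloudflare Pages Functions convention suggested by `functions/api/analyze.js`, and the helper name `buildFollowupMessages` is hypothetical, not from the PR.

```javascript
// Hypothetical sketch of functions/api/followup.js.
// Builds the LLM context from the original stats, the initial report,
// and the user's follow-up question, preserving the chosen persona.
function buildFollowupMessages(stats, report, question, persona) {
  return [
    {
      role: "system",
      content:
        `You are "The Algorithm", a chat-analysis coach. ` +
        `Answer in a ${persona} tone. Ground every claim in the ` +
        `stats and report below; do not invent data.`,
    },
    { role: "user", content: `Stats:\n${JSON.stringify(stats)}` },
    { role: "user", content: `Initial report:\n${report}` },
    { role: "user", content: `Follow-up question: ${question}` },
  ];
}

// In the real Pages Function this would be exported as onRequestPost;
// kept un-exported here so the sketch is self-contained.
async function onRequestPost({ request }) {
  const { stats, report, question, persona = "Balanced" } =
    await request.json();
  if (!question) {
    return new Response(JSON.stringify({ error: "question is required" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }
  const messages = buildFollowupMessages(stats, report, question, persona);
  // ...pass `messages` through the existing provider abstraction
  // (e.g. callLLM) and stream the completion back to the UI...
  return new Response(JSON.stringify({ messages }), {
    headers: { "Content-Type": "application/json" },
  });
}
```

Keeping the message-building step as a pure function makes it easy to unit-test the persona and grounding instructions without a live provider call.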
@@ -0,0 +1,13 @@

Title: Add Mistral AI Support for Privacy-Focused Processing

## Why this provider matters
The Algorithm's entire branding and architecture centers on being "paranoid-level privacy-first". Mistral is a European AI alternative known for strong open-weight models and better privacy alignment than OpenAI or Google. Supporting it naturally fits the product's ethos and gives users another bring-your-own-key (BYOK) option.

## Rough implementation approach
- Add `mistral` to the `providers` list in the UI dropdown (`index.html` or the settings modal).
- Implement basic client-side API key validation (Mistral keys are typically opaque alphanumeric strings).
- Add a new block in the `callLLM` function in `functions/api/analyze.js` that makes a POST request to `https://api.mistral.ai/v1/chat/completions`.
- Ensure JSON parsing is handled gracefully, since Mistral's output formatting may differ slightly from other providers.

## User benefit
Enhances trust among privacy-conscious users and developers. Provides access to fast, cost-effective models like `mistral-small` or `mistral-large` for analysis without data entering US-based corporate LLM pipelines.
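The Mistral branch described above might be sketched as two small helpers: a loose client-side key sanity check and a request builder for the chat-completions endpoint. The endpoint URL is Mistral's public API; the function names, the key regex, and the default model are assumptions for illustration, not code from the PR.

```javascript
// Loose sanity check for a Mistral API key. Mistral keys are opaque
// alphanumeric strings, so this only catches obviously malformed input;
// it does not guarantee the key is valid.
function looksLikeMistralKey(key) {
  return typeof key === "string" && /^[A-Za-z0-9]{20,}$/.test(key.trim());
}

// Builds the fetch URL and init for a Mistral chat-completions call.
// The default model name is an assumption; swap in whatever the UI selects.
function buildMistralRequest(apiKey, messages, model = "mistral-small-latest") {
  return {
    url: "https://api.mistral.ai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages,
        // Request a JSON object response to reduce parsing failures,
        // per the "handled gracefully" note above.
        response_format: { type: "json_object" },
      }),
    },
  };
}
```

Separating request construction from the actual `fetch` keeps the provider branch in `callLLM` small and makes the body easy to test offline.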
@@ -0,0 +1,5 @@

## 2024-05-24 — Initial Setup
**Discovery:** Need to set up scribe journal and improve prompt structure for different providers.
**Provider:** All
**Impact:** Will allow better error handling, consistent JSON responses, and more engaging outputs.
**Pattern:** Provide explicit JSON schemas and format instructions per provider.
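The per-provider pattern noted in the journal entry could be as simple as a lookup of format instructions keyed by provider name. This is a minimal sketch; the instruction strings and the `FORMAT_INSTRUCTIONS` / `formatInstructionFor` names are illustrative, not from the repository.

```javascript
// Illustrative only: per-provider JSON format instructions. Different
// providers have different tendencies (e.g. wrapping JSON in markdown
// fences), so each gets a tailored reminder.
const FORMAT_INSTRUCTIONS = {
  openai: "Respond with a single JSON object matching the schema; no markdown fences.",
  gemini: "Return raw JSON only. Do not wrap the output in markdown code fences.",
  mistral: "Output valid JSON conforming exactly to the schema below.",
};

function formatInstructionFor(provider) {
  // Fall back to a generic instruction for providers not listed above.
  return FORMAT_INSTRUCTIONS[provider] || "Respond with valid JSON only.";
}
```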
**Check failure — Code scanning / CodeQL:** Clear-text logging of sensitive information (High)

*Copilot Autofix (AI, 2 days ago):*
In general, to fix clear-text logging of sensitive information, remove the sensitive data from log messages or replace it with a non-sensitive placeholder. If you need to distinguish different calls for debugging, use non-secret identifiers (e.g., a generated request ID) instead of secret material.
In this specific case, the problematic behavior is in the `catch` block of `callLLM`, where `safeKey` is derived from the potentially sensitive `apiKey` and interpolated into a `console.error` message. The safest change that preserves existing functionality is to stop including any portion of the key in the log. We can remove the `safeKey` variable and change the log message to not reference the key at all, retaining `currentProvider` and the error message. This change is confined to the `catch` block around lines 187–192 in `functions/api/analyze.js` and requires no new imports or helper methods. Concretely:
- Remove the `safeKey` variable.
- Update the `console.error` call to exclude the key, e.g., ``console.error(`LLM call failed for ${currentProvider}: ${err.message}`);``
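The fixed `catch` block might look like the sketch below. The surrounding `callLLM` body is assumed from the CodeQL finding (the real function in `functions/api/analyze.js` takes different arguments); only the logging pattern is the point.

```javascript
// Sketch of the autofix: log only non-secret context on failure.
// `doFetch` stands in for the real provider call so the sketch is
// self-contained; the actual callLLM signature differs.
async function callLLM(currentProvider, apiKey, payload, doFetch) {
  try {
    return await doFetch(apiKey, payload);
  } catch (err) {
    // Before the fix, a `safeKey` derived from apiKey was interpolated
    // here, leaking key material into logs. After the fix, only the
    // provider name and error message are logged.
    console.error(`LLM call failed for ${currentProvider}: ${err.message}`);
    throw err;
  }
}
```

If per-request traceability is still needed, a generated request ID (not derived from the key) can be logged instead, as the general advice above suggests.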