13 changes: 13 additions & 0 deletions .github/issues/feature_request.md
@@ -0,0 +1,13 @@
Title: Add Follow-Up Conversational Interface to Insights

## Why this feature matters
Currently, the AI analysis is a static, one-time read. Users often have questions about specific red flags, want elaboration on their coaching advice, or want to ask follow-up questions about specific context in their chat data. Making the report interactive would significantly increase user retention and session time.

## Rough implementation approach
- Add a new "Ask The Algorithm" chat input box below the Deep Insights section.
- Create a new backend endpoint `/api/followup` that accepts the original `stats`, the initial `report`, and the user's `question`.
- Pass these as context to the LLM (using the same provider abstraction logic) and stream the response back to the UI.
- Ensure the prompt maintains the persona chosen by the user (Playful, Balanced, or Direct).
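
The bullets above can be sketched as a Cloudflare Pages Function. This is a minimal, hypothetical sketch: the handler name, `buildFollowupPrompt` helper, and the streaming wiring are assumptions, not the final implementation.

```javascript
// Hypothetical sketch of the proposed functions/api/followup.js.
// The prompt-builder and streaming details are assumptions to refine.

// Pure helper: assemble the context the LLM needs to answer a follow-up.
function buildFollowupPrompt(stats, report, question) {
  return [
    'Answer the user question using ONLY this context.',
    `## Statistics\n${JSON.stringify(stats)}`,
    `## Original report\n${JSON.stringify(report)}`,
    `## Question\n${question}`
  ].join('\n\n');
}

// Pages Function handler (shape assumed to mirror analyze.js).
async function onRequestPost({ request, env }) {
  const { stats, report, question, tone } = await request.json();

  // Reuse the persona chosen for the original analysis (Playful, Balanced, or Direct).
  const systemPrompt = `You are 'The Algorithm'. Keep the ${tone} persona. Be concise.`;

  // Workers AI returns a ReadableStream when stream: true is set.
  const stream = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: buildFollowupPrompt(stats, report, question) }
    ],
    stream: true
  });

  return new Response(stream, {
    headers: { 'content-type': 'text/event-stream' }
  });
}
```

For the BYOK providers, the same prompt builder would feed the existing provider abstraction instead of `env.AI`.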

## User benefit
Allows users to treat their chat analysis as a personalized relationship coach rather than just a one-off report. It deepens emotional engagement and makes the product significantly more shareable and valuable.
13 changes: 13 additions & 0 deletions .github/issues/provider_integration.md
@@ -0,0 +1,13 @@
Title: Add Mistral AI Support for Privacy-Focused Processing

## Why this provider matters
The Algorithm's entire branding and architecture center on being "paranoid-level privacy-first". Mistral is a European AI alternative known for strong open-weight models and better privacy alignment than OpenAI or Google. Supporting it naturally fits the product's ethos and gives users another bring-your-own-key (BYOK) option.

## Rough implementation approach
- Add `mistral` to the `providers` list in the UI dropdown (`index.html` or settings modal).
- Implement basic client-side API key validation (e.g., a simple length and alphanumeric sanity check, since Mistral keys have no well-known prefix).
- Add a new block in `functions/api/analyze.js`'s `callLLM` function to make a POST request to `https://api.mistral.ai/v1/chat/completions`.
- Ensure JSON parsing is handled gracefully, since Mistral models may wrap JSON in markdown fences or add preamble text.
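
A hedged sketch of what the new branch in `callLLM` could look like. The endpoint URL comes from the bullet above; the model name (`mistral-small-latest`) and the exact response shape mirror the OpenAI-compatible pattern and should be verified against Mistral's API docs.

```javascript
// Hypothetical sketch of a `mistral` branch for callLLM in
// functions/api/analyze.js. Model name and response shape are assumptions.

// Defensive extraction: pull the outermost JSON object even if the model
// wraps it in fences or prose (per the bullet above).
function extractJson(text) {
  const match = text.match(/\{[\s\S]*\}/);
  return match ? JSON.parse(match[0]) : null;
}

async function callMistral(apiKey, sysPrompt, usrPrompt, signal) {
  const resp = await fetch('https://api.mistral.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'mistral-small-latest', // assumed model id; confirm against docs
      messages: [
        { role: 'system', content: sysPrompt },
        { role: 'user', content: usrPrompt }
      ]
    }),
    signal
  });
  const resData = await resp.json();
  if (resData.choices && resData.choices.length > 0) {
    return extractJson(resData.choices[0].message.content);
  }
  return null;
}
```

Because Mistral's chat endpoint is OpenAI-compatible, this branch could alternatively be folded into the existing OpenAI/Groq/Grok URL switch rather than duplicated.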

## User benefit
Enhances trust among privacy-conscious users and developers. Provides access to fast, cost-effective models like `mistral-small` or `mistral-large` for analysis without data entering US-based corporate LLM pipelines.
5 changes: 5 additions & 0 deletions .jules/scribe.md
@@ -0,0 +1,5 @@
## 2024-05-24 — Initial Setup
**Discovery:** Need to set up scribe journal and improve prompt structure for different providers.
**Provider:** All
**Impact:** Will allow better error handling, consistent JSON responses, and more engaging outputs.
**Pattern:** Provide explicit JSON schemas and format instructions per provider.
14 changes: 12 additions & 2 deletions dashboard.html
@@ -136,16 +136,26 @@ <h4 class="mb-3 font-bold text-xs uppercase letter-spacing-05 flex align-center
<p id="ai-insight-shift" class="text-sm line-height-16"></p>
</div>
</div>
<div class="grid gap-6" style="grid-template-columns:1fr 1fr">
<div class="grid gap-6 mb-8" style="grid-template-columns:1fr 1fr">
<div class="flag-box flag-box--red">
<h4 class="mb-4 font-black">🚩 RED FLAGS</h4>
<ul id="ai-insight-red-flags" class="text-sm"></ul>
</div>
<div class="flag-box flag-box--green">
<h4 class="mb-4 font-black">🚩 GREEN FLAGS</h4>
<h4 class="mb-4 font-black">✅ GREEN FLAGS</h4>
<ul id="ai-insight-green-flags" class="text-sm"></ul>
</div>
</div>
<div class="grid gap-6 mb-8" style="grid-template-columns:1fr 1fr">
<div class="card p-6 bg-white">
<h4 class="mb-3 font-bold text-xs uppercase letter-spacing-05 flex align-center gap-2"><span class="text-xl">🌱</span> GROWTH AREAS</h4>
<ul id="ai-insight-growth" class="text-sm"></ul>
</div>
<div class="card p-6 bg-white">
<h4 class="mb-3 font-bold text-xs uppercase letter-spacing-05 flex align-center gap-2"><span class="text-xl">🗣️</span> COACHING ADVICE</h4>
<p id="ai-insight-coaching" class="text-sm line-height-16"></p>
</div>
</div>
<div class="brutal-verdict-box">
<h4 class="mb-4 text-xs font-bold uppercase letter-spacing-2 color-yellow">⚖️ THE FINAL WORD</h4>
<blockquote id="ai-insight-verdict">"..."</blockquote>
208 changes: 183 additions & 25 deletions functions/api/analyze.js
@@ -16,35 +16,192 @@
await env.KV_RATELIMIT.put(limitKey, (count + 1).toString(), { expirationTtl: 3600 });
}

// 2. AI GENERATION
let report = null;
const systemPrompt = "You are 'The Algorithm', a brutally honest relationship analyst. Return ONLY a JSON object: { relationship_persona, compatibility_score, ai_insight: { dynamic_title, reality_check, recent_shift, red_flags: [], green_flags: [], brutal_verdict } }.";
// 2. AI GENERATION SETTINGS
const ANALYSIS_SCHEMA = {
"relationship_persona": "string (creative title)",
"compatibility_score": "integer 1-100",
"ai_insight": {
"dynamic_title": "string (short headline)",
"reality_check": "string (1-2 sentences)",
"recent_shift": "string (1-2 sentences)",
"red_flags": ["string"],
"green_flags": ["string"],
"brutal_verdict": "string (short impactful summary)",
"coaching_advice": "string (2-3 actionable sentences)",
"growth_areas": ["string"]
}
};

const PROVIDER_SYSTEM_PROMPTS = {
"openai": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.`,
"anthropic": `<role>You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.</role>`,
"gemini": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights. 1. Be direct. 2. Follow schema exactly.`,
"openrouter": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.`,
"cloudflare": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.`,
"grok": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.`,
"groq": `You are an expert relationship analyst and communication coach. You provide brutally honest but helpful insights.`
};

const ANALYSIS_PROMPT_TEMPLATE = `
Analyze these anonymous conversation statistics and provide deep behavioral insights. Tone: ${tone}.

## Statistics
{stats_json}

## Relationship Context
- Person A: ${my_name}
- Person B: ${partner_name}

## Required Output
Respond ONLY with valid JSON matching this exact schema. No preamble, no explanation, no markdown blocks.

${JSON.stringify(ANALYSIS_SCHEMA, null, 2)}
`;

let userPrompt = `Analyze chat: ${my_name} & ${partner_name}. Tone: ${tone}. Stats: ${JSON.stringify(stats)}.`;
let statsJson = JSON.stringify(stats);
if (compare_data) {
userPrompt = `COMPARE two chats for ${my_name}. Chat A: ${compare_data.nameA} vs Chat B: ${compare_data.nameB}. Stats A: ${JSON.stringify(compare_data.a)}. Stats B: ${JSON.stringify(compare_data.b)}. Be direct.`;
statsJson = `COMPARE two chats for ${my_name}. Chat A: ${compare_data.nameA} vs Chat B: ${compare_data.nameB}. Stats A: ${JSON.stringify(compare_data.a)}. Stats B: ${JSON.stringify(compare_data.b)}.`;
}

if (provider === 'cloudflare' && env.AI) {
const aiResult = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
messages: [{ role: 'system', content: systemPrompt }, { role: 'user', content: userPrompt }]
});
const match = aiResult.response.match(/\{[\s\S]*\}/);
if (match) report = JSON.parse(match[0]);
} else if (api_key && (provider === 'openai' || provider === 'groq')) {
const url = provider === 'openai' ? 'https://api.openai.com/v1/chat/completions' : 'https://api.groq.com/openai/v1/chat/completions';
const model = provider === 'openai' ? 'gpt-4o-mini' : 'llama-3.1-70b-versatile';
const resp = await fetch(url, {
method: 'POST',
headers: { 'Authorization': `Bearer ${api_key}`, 'Content-Type': 'application/json' },
body: JSON.stringify({ model, messages: [{ role: 'system', content: systemPrompt }, { role: 'user', content: userPrompt }], response_format: { type: "json_object" } })
});
const resData = await resp.json();
if (resData.choices) report = JSON.parse(resData.choices[0].message.content);
const systemPrompt = PROVIDER_SYSTEM_PROMPTS[provider] || PROVIDER_SYSTEM_PROMPTS["openai"];
const userPrompt = ANALYSIS_PROMPT_TEMPLATE.replace('{stats_json}', statsJson);

let report = null;

// Validation Function
const validateAnalysisResponse = (response) => {
if (!response) return false;
const requiredKeys = ['relationship_persona', 'compatibility_score', 'ai_insight'];
const hasOuter = requiredKeys.every(key => key in response);
if (!hasOuter) return false;
const requiredInsightKeys = ['dynamic_title', 'reality_check', 'recent_shift', 'red_flags', 'green_flags', 'brutal_verdict', 'coaching_advice', 'growth_areas'];
const hasInner = requiredInsightKeys.every(key => key in response.ai_insight);
return hasInner;
};

// LLM Caller
const callLLM = async (currentProvider, apiKey, sysPrompt, usrPrompt, strict = false) => {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000);

let finalUserPrompt = usrPrompt;
if (strict) {
finalUserPrompt += "\n\nCRITICAL: You MUST return ONLY valid JSON. No markdown backticks, no text before or after.";
}

try {
let parsed = null;

if (currentProvider === 'cloudflare' && env.AI) {
const aiResult = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
messages: [{ role: 'system', content: sysPrompt }, { role: 'user', content: finalUserPrompt }]
});
const match = aiResult.response.match(/\{[\s\S]*\}/);
if (match) parsed = JSON.parse(match[0]);
} else if (currentProvider === 'anthropic' && apiKey) {
const resp = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'x-api-key': apiKey,
'anthropic-version': '2023-06-01',
'content-type': 'application/json'
},
body: JSON.stringify({
model: 'claude-3-haiku-20240307',
max_tokens: 1000,
system: sysPrompt,
messages: [{ role: 'user', content: finalUserPrompt }]
}),
signal: controller.signal
});
const resData = await resp.json();
if (resData.content && resData.content.length > 0) {
const text = resData.content[0].text;
const match = text.match(/\{[\s\S]*\}/);
if (match) parsed = JSON.parse(match[0]);
}
} else if (currentProvider === 'gemini' && apiKey) {
const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`;
const resp = await fetch(url, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
systemInstruction: { parts: [{ text: sysPrompt }] },
contents: [{ parts: [{ text: finalUserPrompt }] }],
generationConfig: { responseMimeType: "application/json" }
}),
signal: controller.signal
});
const resData = await resp.json();
if (resData.candidates && resData.candidates.length > 0) {
const text = resData.candidates[0].content.parts[0].text;
const match = text.match(/\{[\s\S]*\}/);
if (match) parsed = JSON.parse(match[0]);
}
} else if (apiKey && (currentProvider === 'openai' || currentProvider === 'grok' || currentProvider === 'openrouter' || currentProvider === 'groq')) {
let url, model;
if (currentProvider === 'openai') {
url = 'https://api.openai.com/v1/chat/completions';
model = 'gpt-4o-mini';
} else if (currentProvider === 'openrouter') {
url = 'https://openrouter.ai/api/v1/chat/completions';
model = 'openai/gpt-4o-mini';
} else if (currentProvider === 'grok') {
url = 'https://api.x.ai/v1/chat/completions';
model = 'grok-beta';
} else {
url = 'https://api.groq.com/openai/v1/chat/completions';
model = 'llama-3.1-70b-versatile';
}

const headers = { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' };
if (currentProvider === 'openrouter') {
headers['HTTP-Referer'] = 'https://thealgorithm.reports';
headers['X-Title'] = 'The Algorithm';
}

const body = {
model,
messages: [{ role: 'system', content: sysPrompt }, { role: 'user', content: finalUserPrompt }]
};

if (currentProvider !== 'openrouter') {
body.response_format = { type: "json_object" };
}

const resp = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(body),
signal: controller.signal
});
const resData = await resp.json();
if (resData.choices && resData.choices.length > 0) {
const text = resData.choices[0].message.content;
const match = text.match(/\{[\s\S]*\}/);
if (match) parsed = JSON.parse(match[0]);
}
}
clearTimeout(timeoutId);
return parsed;
} catch (err) {
clearTimeout(timeoutId);
let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);

Check failure: Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This logs sensitive data returned by an access to apiKey as clear text.

Copilot Autofix (AI, 2 days ago)

In general, to fix clear-text logging of sensitive information, remove the sensitive data from log messages or replace it with a non-sensitive placeholder. If you need to distinguish different calls for debugging, use non-secret identifiers (e.g., a generated request ID) instead of secret material.

In this specific case, the problematic behavior is in the catch block of callLLM, where safeKey is derived from the potentially sensitive apiKey and interpolated into a console.error message. The safest change that preserves existing functionality is to stop including any portion of the key in the log. We can remove the safeKey variable and change the log message to not reference the key at all, retaining currentProvider and the error message. This change is confined to the catch block around lines 187–192 in functions/api/analyze.js and requires no new imports or helper methods.

Concretely:

  • Delete the line that computes safeKey.
  • Update the console.error call to exclude the key, e.g., console.error(`LLM call failed for ${currentProvider}: ${err.message}`);

Suggested changeset 1
functions/api/analyze.js

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/functions/api/analyze.js b/functions/api/analyze.js
--- a/functions/api/analyze.js
+++ b/functions/api/analyze.js
@@ -186,8 +186,7 @@
                 return parsed;
             } catch (err) {
                 clearTimeout(timeoutId);
-                let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
-                console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);
+                console.error(`LLM call failed for ${currentProvider}: ${err.message}`);
                 return null;
             }
         };
EOF
return null;
}
};

// 3. EXECUTE & VALIDATE
report = await callLLM(provider, api_key, systemPrompt, userPrompt, false);

if (!validateAnalysisResponse(report)) {
// Retry once with stricter prompt
report = await callLLM(provider, api_key, systemPrompt, userPrompt, true);
}

// 3. FALLBACK
if (!report) {
// 4. FALLBACK
if (!validateAnalysisResponse(report)) {
report = {
relationship_persona: "Vibe Explorer",
compatibility_score: 80,
@@ -54,7 +211,9 @@
recent_shift: "The energy is stable.",
red_flags: ["Limited data for deep read"],
green_flags: ["Active check-ins"],
brutal_verdict: "It's a vibe."
brutal_verdict: "It's a vibe.",
coaching_advice: "Keep communicating openly and building trust.",
growth_areas: ["More frequent deep conversations"]
}
};
}
@@ -65,4 +224,3 @@
return new Response(JSON.stringify({ error: e.message }), { status: 500 });
}
}
