Add OpenAI-based sentiment analysis for crypto posts #7
Conversation
Walkthrough

Adds a new sentiment-analysis module at src/analyzeSentiment.ts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Caller as Caller
    participant Module as analyzeMultiplePosts
    participant Client as OpenAI Client (gpt-4o-mini)
    participant Parser as JSON Parser/Validator
    rect rgb(230,245,240)
        Note right of Module: prepare default client (no clientOverride)
    end
    Caller->>Module: analyzeMultiplePosts(posts[])
    loop per post (serial)
        Module->>Client: create chat completion with JSON-only system prompt
        Client-->>Module: model response (content)
        Module->>Parser: attempt JSON parse & validate sentiment
        alt valid sentiment
            Parser-->>Module: { sentiment }
            Module->>Caller: accumulate SentimentResult (post + sentiment)
        else invalid / parse error
            Parser-->>Module: error
            Module->>Client: retry (up to 3) with feedback about prior failure
            alt still invalid after 3
                Module-->>Caller: log error for this post and continue
            end
        end
    end
    Module-->>Caller: return SentimentResult[]
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (7)
src/analyzeSentiment.ts (7)
Lines 5-6: Tighten the OpenAI client typing (avoid `unknown`/`any`).

Stronger types will catch request/response shape issues at compile time.
```diff
-// Minimal type for the shape we use from the OpenAI client so we avoid `any`.
-type OpenAILike = { chat: { completions: { create(opts: unknown): Promise<any> } } };
+// Minimal, type-safe surface we use from the OpenAI client.
+type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
+type ChatChoice = { message: { content?: string | null } };
+type ChatCompletion = { choices: ChatChoice[] };
+type ChatCreateOpts = {
+  model: string;
+  messages: ChatMessage[];
+  temperature?: number;
+  max_tokens?: number;
+  response_format?: { type: "json_object" | "text" };
+};
+type OpenAILike = { chat: { completions: { create(opts: ChatCreateOpts): Promise<ChatCompletion> } } };
```
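One practical payoff of a narrow surface like this (a sketch, not part of the PR): tests can inject a hand-rolled fake through the existing `clientOverride` parameter. The `fakeClient` name below is illustrative only.

```typescript
// Hypothetical test fake satisfying the OpenAILike surface sketched above.
const fakeClient: OpenAILike = {
  chat: {
    completions: {
      create: async () => ({
        choices: [{ message: { content: '{"sentiment":"BULLISH"}' } }],
      }),
    },
  },
};

// const results = await analyzeMultiplePosts(["BTC to the moon"], fakeClient);
```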
Lines 8-14: Memoize client and trim API key to avoid repeated instantiation and false negatives.

Prevents building a new client per call and rejects blank/whitespace keys.
```diff
-function getClient(clientOverride?: OpenAILike): OpenAILike {
-  if (clientOverride) return clientOverride;
-  if (process.env.OPENAI_API_KEY) {
-    return (new OpenAI({ apiKey: process.env.OPENAI_API_KEY }) as unknown) as OpenAILike;
-  }
-  throw new Error("OPENAI_API_KEY is not set. Set it in the environment or pass a clientOverride.");
-}
+let cachedClient: OpenAILike | null = null;
+function getClient(clientOverride?: OpenAILike): OpenAILike {
+  if (clientOverride) return clientOverride;
+  if (cachedClient) return cachedClient;
+  const key = process.env.OPENAI_API_KEY?.trim();
+  if (key) {
+    cachedClient = (new OpenAI({
+      apiKey: key,
+      // maxRetries: 2, // uncomment if supported in your SDK version
+      // timeout: 15_000,
+    }) as unknown) as OpenAILike;
+    return cachedClient;
+  }
+  throw new Error("OPENAI_API_KEY is not set. Set it in the environment or pass a clientOverride.");
+}
```
Lines 16-21: Extract allowed sentiments into a typed constant and reuse the union.

Removes magic strings and centralizes validation.
```diff
-// Sentiment analysis result type
-export type SentimentResult = {
-  post: string;
-  sentiment: "BULLISH" | "BEARISH" | "NEUTRAL";
-};
+export type Sentiment = "BULLISH" | "BEARISH" | "NEUTRAL";
+const ALLOWED_SENTIMENTS = ["BULLISH", "BEARISH", "NEUTRAL"] as const satisfies ReadonlyArray<Sentiment>;
+export type SentimentResult = { post: string; sentiment: Sentiment };
```

```diff
-    if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) {
+    if (!ALLOWED_SENTIMENTS.includes(raw as Sentiment)) {
       console.error("Invalid sentiment from model:", parsedAny.sentiment, "for post:", post);
       continue;
     }
```

```diff
-    results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" });
+    results.push({ post, sentiment: raw as Sentiment });
```

Also applies to: 56-56, 60-60
Lines 23-27: Create the client once per call, not once per post.

Minor perf/overhead improvement and cleaner error semantics.
```diff
-export async function analyzeMultiplePosts(posts: string[], clientOverride?: OpenAILike): Promise<SentimentResult[]> {
-  const results: SentimentResult[] = [];
+export async function analyzeMultiplePosts(posts: string[], clientOverride?: OpenAILike): Promise<SentimentResult[]> {
+  const results: SentimentResult[] = [];
+  const usedClient = getClient(clientOverride);
   for (const post of posts) {
     try {
-      const usedClient = clientOverride ?? getClient();
+      // use memoized/injected client
```
Lines 57-63: Sanitize logs to avoid leaking full post contents.

Log a short snippet to reduce accidental PII/secret exposure in logs.
- console.error("Invalid sentiment from model:", parsedAny.sentiment, "for post:", post); + const snippet = post.length > 120 ? post.slice(0, 120) + "…" : post; + console.error("Invalid sentiment from model:", parsedAny.sentiment, "for post:", snippet);- console.error("Error analyzing post:", post, e); + const snippet = post.length > 120 ? post.slice(0, 120) + "…" : post; + console.error("Error analyzing post:", snippet, e);
Lines 70-74: Broaden fenced-block detection (handle ``` without a language tag).

Models sometimes omit the language identifier.
```diff
-  const code = /```json\s*([\s\S]*?)```/i.exec(content);
+  const code = /```(?:json)?\s*([\s\S]*?)```/i.exec(content);
```
Lines 23-66: Optional: add bounded concurrency to speed up batches while respecting rate limits.

Process posts in parallel with a small concurrency cap (e.g., 3–5). This keeps API pressure controlled and improves throughput on large arrays.
I can provide a drop-in helper (no deps) like:
```typescript
async function mapWithConcurrency<T, R>(items: T[], limit: number, fn: (t: T, i: number) => Promise<R>): Promise<R[]> {
  const results: R[] = Array(items.length);
  let i = 0;
  const workers = Array(Math.min(limit, items.length)).fill(0).map(async () => {
    while (i < items.length) {
      const idx = i++;
      results[idx] = await fn(items[idx], idx);
    }
  });
  await Promise.all(workers);
  return results;
}
```

Then replace the for-loop with a call using limit = 4 and reuse `usedClient`.
Would you like me to wire this into the function?
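For illustration only, the call site could look roughly like this; `analyzePost` is a hypothetical wrapper around the existing per-post retry logic, not a function in the PR:

```typescript
// Hedged sketch: bounded-concurrency wiring (limit of 4, per the suggestion above).
// `analyzePost` is assumed to encapsulate the current per-post prompt/retry/validate logic.
const results = await mapWithConcurrency(posts, 4, (post) => analyzePost(usedClient, post));
```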
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/analyzeSentiment.ts (1 hunks)
🔇 Additional comments (1)
src/analyzeSentiment.ts (1)
Lines 22-23: API shape and normalization look good.

Clear return type and uppercase normalization align with the DB contract.
src/analyzeSentiment.ts (Outdated)
```typescript
const response = await usedClient.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: prompt }],
});
```
🛠️ Refactor suggestion
Use JSON mode and deterministic params to reduce parse failures and cost.
For classification, JSON mode + low temperature keeps outputs tight and valid.
```diff
-const response = await usedClient.chat.completions.create({
-  model: "gpt-4o-mini",
-  messages: [{ role: "user", content: prompt }],
-});
+const response = await usedClient.chat.completions.create({
+  model: "gpt-4o-mini",
+  temperature: 0,
+  max_tokens: 10,
+  response_format: { type: "json_object" },
+  messages: [{ role: "user", content: prompt }],
+});
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/analyzeSentiment.ts around lines 45 to 48, the chat completion call uses
freeform text output which increases parse failures and cost; change the request
to use JSON mode and deterministic sampling by setting temperature: 0 (and
top_p: 0 if supported) and request a strict JSON response (use the SDK's
json/schema or response_format/json option with a defined JSON schema for the
classification fields you need—e.g., sentiment,label,confidence). Ensure you
validate and parse the returned JSON schema-compliant object rather than
freeform text.
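As a rough illustration of that advice (a sketch under assumptions, not the PR's code — the `classifyOnce` name and exact prompt wording are invented here):

```typescript
import OpenAI from "openai";

// Sketch: JSON mode + deterministic sampling, then explicit validation of the parsed object.
async function classifyOnce(client: OpenAI, post: string): Promise<"BULLISH" | "BEARISH" | "NEUTRAL"> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    temperature: 0,
    max_tokens: 10,
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: 'Reply with JSON only, e.g. {"sentiment":"BULLISH"} (BULLISH, BEARISH, or NEUTRAL).' },
      { role: "user", content: post },
    ],
  });
  // Validate the parsed object instead of trusting freeform text.
  const parsed = JSON.parse(response.choices[0]?.message?.content ?? "{}") as { sentiment?: string };
  const raw = (parsed.sentiment ?? "").toUpperCase().trim();
  if (raw !== "BULLISH" && raw !== "BEARISH" && raw !== "NEUTRAL") {
    throw new Error(`Unexpected sentiment: ${parsed.sentiment}`);
  }
  return raw;
}
```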
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/analyzeSentiment.ts (1)
Lines 49-52: Use JSON mode + deterministic params to slash parse errors and cost.

Request strict JSON and keep sampling deterministic.
```diff
-    const response = await usedClient.chat.completions.create({
-      model: "gpt-4o-mini",
-      messages,
-    });
+    const response = await usedClient.chat.completions.create({
+      model: "gpt-4o-mini",
+      temperature: 0,
+      max_tokens: 10,
+      response_format: { type: "json_object" },
+      messages,
+    });
```
🧹 Nitpick comments (6)
src/analyzeSentiment.ts (6)
Lines 11-16: Don't reselect the OpenAI client inside the loop.

Pick the client once to avoid per-iteration branching.
```diff
 export async function analyzeMultiplePosts(posts: string[], clientOverride?: OpenAI): Promise<SentimentResult[]> {
-  const results: SentimentResult[] = [];
-  for (const post of posts) {
+  const results: SentimentResult[] = [];
+  const usedClient = clientOverride ?? openai;
+  for (const post of posts) {
     try {
-      const usedClient = clientOverride ?? openai;
```
Lines 63-69: Optional: tolerate fenced code blocks when not using JSON mode.

If the model ever returns ```json … ``` fences, `JSON.parse` fails. Extract the JSON slice first.

Add a helper near the top of the file:
```typescript
function extractJsonBlock(input: string): string {
  const fence = input.match(/```(?:json)?\s*([\s\S]*?)\s*```/i);
  if (fence) return fence[1];
  const obj = input.match(/\{[\s\S]*\}/);
  return obj ? obj[0] : input;
}
```

And update the parse line:
```diff
-      const parsedAny = JSON.parse(lastResponse) as { sentiment?: string };
+      const parsedAny = JSON.parse(extractJsonBlock(lastResponse)) as { sentiment?: string };
```
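A quick check of the intended behavior (illustrative values, assuming the helper above is in scope): fenced and bare JSON should both parse.

```typescript
const fenced = '```json\n{ "sentiment": "BULLISH" }\n```';
console.log(JSON.parse(extractJsonBlock(fenced)).sentiment);                        // "BULLISH"
console.log(JSON.parse(extractJsonBlock('{ "sentiment": "NEUTRAL" }')).sentiment);  // "NEUTRAL"
```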
Lines 81-83: Reduce PII/large payload leakage in logs.

Avoid logging full post content on errors; log length + preview or a hash.
- console.error("Error analyzing post:", post, e); + const preview = post.slice(0, 120) + (post.length > 120 ? "…" : ""); + console.error("Error analyzing post:", { length: post.length, preview }, e);
Lines 10-16: Parallelize with a small concurrency limit.

The current loop is strictly sequential; a bounded pool accelerates multi-post runs while respecting rate limits.
Sketch (keeping order):
```diff
-export async function analyzeMultiplePosts(posts: string[], clientOverride?: OpenAI): Promise<SentimentResult[]> {
+export async function analyzeMultiplePosts(
+  posts: string[],
+  clientOverride?: OpenAI,
+  concurrency = 3
+): Promise<SentimentResult[]> {
   const results: SentimentResult[] = [];
   const usedClient = clientOverride ?? openai;
-  for (const post of posts) {
-    try {
-      // existing per-post logic...
-    } catch (e) { /* ... */ }
-  }
-  return results;
+  let i = 0;
+  const out: SentimentResult[] = new Array(posts.length);
+  const worker = async () => {
+    while (true) {
+      const idx = i++;
+      if (idx >= posts.length) break;
+      const post = posts[idx];
+      try {
+        // existing per-post logic, but push into `out[idx] = { post, sentiment }`
+      } catch (e) {
+        console.error("Error analyzing post:", { idx, length: post.length }, e);
+      }
+    }
+  };
+  await Promise.all(Array.from({ length: Math.min(concurrency, posts.length) }, worker));
+  // compact results (drop undefined entries)
+  return out.filter((x): x is SentimentResult => Boolean(x));
 }
```

Also applies to: 33-52, 88-98
Lines 17-28: System prompt is fine; JSON mode can simplify it.

With `response_format: { type: "json_object" }`, you can shorten the instruction block and rely on enforcement.
Lines 88-98: Example runner: consider moving to examples/ or gating behind NODE_ENV.

Prevents accidental import/export in production bundles while keeping local testing easy.
```diff
-export async function runExample() {
+export async function runExample() { // … }
```

And only export in dev:
```typescript
if (process.env.NODE_ENV !== "production") {
  // export or register CLI hook here
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (2)
package.json (1 hunks)
src/analyzeSentiment.ts (1 hunks)
🔇 Additional comments (1)
src/analyzeSentiment.ts (1)
Lines 5-8: Single source of truth for sentiment type.

Avoid hardcoding the union in multiple places—derive it from one definition:
```diff
-export type SentimentResult = {
-  post: string;
-  sentiment: "BULLISH" | "BEARISH" | "NEUTRAL";
-};
+const ALLOWED_SENTIMENTS = ["BULLISH","BEARISH","NEUTRAL"] as const;
+type UpperSentiment = typeof ALLOWED_SENTIMENTS[number];
+export type SentimentResult = { post: string; sentiment: UpperSentiment };
```

If your Prisma schema defines an enum named `Sentiment`, import and use that instead:

```typescript
import type { Sentiment as PrismaSentiment } from "@prisma/client";
export type SentimentResult = { post: string; sentiment: PrismaSentiment };
```

Please confirm whether a `Sentiment` enum exists in your Prisma schema; if not, stick with the constant approach above.
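A related pattern worth noting (a sketch, not from this PR): deriving both the type and a runtime guard from the same constant, so validation and typing cannot drift apart. The `isSentiment` name is illustrative.

```typescript
const ALLOWED_SENTIMENTS = ["BULLISH", "BEARISH", "NEUTRAL"] as const;
type Sentiment = (typeof ALLOWED_SENTIMENTS)[number];

// Runtime guard derived from the same constant as the type.
function isSentiment(value: string): value is Sentiment {
  return (ALLOWED_SENTIMENTS as readonly string[]).includes(value);
}

// if (isSentiment(raw)) results.push({ post, sentiment: raw });
```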
```diff
@@ -4,6 +4,7 @@
     "node": ">=18.0.0"
   },
   "dependencies": {
+    "@prisma/client": "^6.15.0",
```
💡 Verification agent
🧩 Analysis chain
Prisma client added, but verify CLI/generation to avoid postinstall failures.
If the repo doesn't already include `prisma` (CLI) and a generate step, installing `@prisma/client` can fail or ship an ungenerated client.

Run this to verify the presence of a Prisma schema, the CLI, and a generate hook:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== package.json prisma devDependency =="
cat package.json | jq -r '.devDependencies.prisma // "MISSING"'
echo "== package.json scripts.postinstall =="
cat package.json | jq -r '.scripts.postinstall // "MISSING"'
echo "== prisma schema files =="
fd -H -a -t f -g "schema.prisma" || true
echo "== imports of @prisma/client =="
rg -n --no-heading '@prisma/client' -S || true
```
Length of output: 421
Add Prisma CLI, schema, and generate hook
The repo includes `@prisma/client` but no `prisma` CLI in devDependencies, no `schema.prisma`, and no generation step—this will ship an ungenerated client or fail on install. Add `prisma` to devDependencies, include a `prisma:generate` script, and invoke it (e.g., via `postinstall`) after schema creation.
🤖 Prompt for AI Agents
In package.json around line 7, the project lists "@prisma/client" but is missing
the Prisma CLI, schema, and a generate step; add "prisma" to devDependencies,
add a "prisma:generate" npm script that runs "prisma generate", and add a
postinstall hook to invoke that script (so the client is generated after
install); ensure you also add a schema.prisma file under a prisma/ directory (or
point PRISMA_SCHEMA_PATH if different) so the generate step has a schema to
read.
src/analyzeSentiment.ts (Outdated)
while (attempts < maxAttempts) { | ||
attempts++; | ||
|
||
const messages: Array<{ role: "system" | "user"; content: string }> = [ | ||
{ role: "system", content: systemPrompt }, | ||
{ role: "user", content: post } | ||
]; | ||
|
||
// Add feedback about previous failed attempt | ||
if (attempts > 1) { | ||
messages.push({ | ||
role: "user", | ||
content: `Your previous response was invalid: "${lastResponse}". Please provide only valid JSON with the exact format specified.` | ||
}); | ||
} | ||
|
||
const response = await usedClient.chat.completions.create({ | ||
model: "gpt-4o-mini", | ||
messages, | ||
}); | ||
|
||
const content = response.choices?.[0]?.message?.content; | ||
if (!content) { | ||
lastResponse = "No content returned"; | ||
continue; | ||
} | ||
|
||
lastResponse = content.trim(); | ||
|
||
try { | ||
const parsedAny = JSON.parse(lastResponse) as { sentiment?: string }; | ||
const raw = (parsedAny.sentiment ?? '').toString().toUpperCase().trim(); | ||
|
||
if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) { | ||
lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`; | ||
continue; | ||
} | ||
|
||
results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" }); | ||
break; // Success, exit retry loop | ||
} catch (parseError) { | ||
if (attempts === maxAttempts) { | ||
throw new Error(`Failed to get valid JSON after ${maxAttempts} attempts. Last response: ${lastResponse}`); | ||
} | ||
// Continue to next attempt with feedback | ||
} | ||
} |
Silent drops on max attempts (no-content/invalid-sentiment paths).

When `content` is empty or the sentiment is invalid for all attempts, the loop exits without pushing a result or throwing—the post is lost silently.
```diff
- let lastResponse = "";
- let attempts = 0;
- const maxAttempts = 3;
+ let lastResponse = "";
+ let attempts = 0;
+ const maxAttempts = 3;
+ let success = false;
@@
- if (!content) {
- lastResponse = "No content returned";
- continue;
- }
+ if (!content) {
+ lastResponse = "No content returned";
+ if (attempts === maxAttempts) {
+ throw new Error(`No content returned after ${maxAttempts} attempts.`);
+ }
+ continue;
+ }
@@
- if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) {
- lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`;
- continue;
- }
+ if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) {
+ lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`;
+ if (attempts === maxAttempts) {
+ throw new Error(`Invalid sentiment after ${maxAttempts} attempts. Last response: ${lastResponse}`);
+ }
+ continue;
+ }
@@
- results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" });
+ results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" });
+ success = true;
break; // Success, exit retry loop
} catch (parseError) {
if (attempts === maxAttempts) {
throw new Error(`Failed to get valid JSON after ${maxAttempts} attempts. Last response: ${lastResponse}`);
}
// Continue to next attempt with feedback
}
}
+ // Defensive: if we somehow exited without success or throw, raise.
+ if (!success) {
+ throw new Error(`Exhausted attempts without success. Last response: ${lastResponse}`);
+ }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
while (attempts < maxAttempts) { | |
attempts++; | |
const messages: Array<{ role: "system" | "user"; content: string }> = [ | |
{ role: "system", content: systemPrompt }, | |
{ role: "user", content: post } | |
]; | |
// Add feedback about previous failed attempt | |
if (attempts > 1) { | |
messages.push({ | |
role: "user", | |
content: `Your previous response was invalid: "${lastResponse}". Please provide only valid JSON with the exact format specified.` | |
}); | |
} | |
const response = await usedClient.chat.completions.create({ | |
model: "gpt-4o-mini", | |
messages, | |
}); | |
const content = response.choices?.[0]?.message?.content; | |
if (!content) { | |
lastResponse = "No content returned"; | |
continue; | |
} | |
lastResponse = content.trim(); | |
try { | |
const parsedAny = JSON.parse(lastResponse) as { sentiment?: string }; | |
const raw = (parsedAny.sentiment ?? '').toString().toUpperCase().trim(); | |
if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) { | |
lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`; | |
continue; | |
} | |
results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" }); | |
break; // Success, exit retry loop | |
} catch (parseError) { | |
if (attempts === maxAttempts) { | |
throw new Error(`Failed to get valid JSON after ${maxAttempts} attempts. Last response: ${lastResponse}`); | |
} | |
// Continue to next attempt with feedback | |
} | |
} | |
let lastResponse = ""; | |
let attempts = 0; | |
const maxAttempts = 3; | |
let success = false; | |
while (attempts < maxAttempts) { | |
attempts++; | |
const messages: Array<{ role: "system" | "user"; content: string }> = [ | |
{ role: "system", content: systemPrompt }, | |
{ role: "user", content: post } | |
]; | |
// Add feedback about previous failed attempt | |
if (attempts > 1) { | |
messages.push({ | |
role: "user", | |
content: `Your previous response was invalid: "${lastResponse}". Please provide only valid JSON with the exact format specified.` | |
}); | |
} | |
const response = await usedClient.chat.completions.create({ | |
model: "gpt-4o-mini", | |
messages, | |
}); | |
const content = response.choices?.[0]?.message?.content; | |
if (!content) { | |
lastResponse = "No content returned"; | |
if (attempts === maxAttempts) { | |
throw new Error(`No content returned after ${maxAttempts} attempts.`); | |
} | |
continue; | |
} | |
lastResponse = content.trim(); | |
try { | |
const parsedAny = JSON.parse(lastResponse) as { sentiment?: string }; | |
const raw = (parsedAny.sentiment ?? "").toString().toUpperCase().trim(); | |
if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) { | |
lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`; | |
if (attempts === maxAttempts) { | |
throw new Error(`Invalid sentiment after ${maxAttempts} attempts. Last response: ${lastResponse}`); | |
} | |
continue; | |
} | |
results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" }); | |
success = true; | |
break; // Success, exit retry loop | |
} catch (parseError) { | |
if (attempts === maxAttempts) { | |
throw new Error(`Failed to get valid JSON after ${maxAttempts} attempts. Last response: ${lastResponse}`); | |
} | |
// Continue to next attempt with feedback | |
} | |
} | |
// Defensive: if we somehow exited without success or throw, raise. | |
if (!success) { | |
throw new Error(`Exhausted attempts without success. Last response: ${lastResponse}`); | |
} |
🤖 Prompt for AI Agents
In src/analyzeSentiment.ts around lines 33 to 79, the retry loop can finish
without adding a result when content is empty or sentiment stays invalid; to
avoid silently dropping the post, after the while-loop (or inside the catch when
attempts === maxAttempts) ensure you handle the failure: log a warning including
lastResponse and push a fallback result such as results.push({ post, sentiment:
"NEUTRAL" }) (or alternatively throw a descriptive error if you want the caller
to handle it); make sure the fallback uses one of the allowed sentiment literals
and that you include lastResponse in the log message for debugging.
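A minimal sketch of that fallback path (placed after the retry loop; it assumes `success`, `results`, `post`, and `lastResponse` from the surrounding function and is not the committed code):

```typescript
// Fallback instead of throwing — keeps results 1:1 with posts and logs the failure for debugging.
if (!success) {
  console.warn("Falling back to NEUTRAL after retries. Last response:", lastResponse.slice(0, 200));
  results.push({ post, sentiment: "NEUTRAL" });
}
```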
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/analyzeSentiment.ts (1)
Lines 29-81: Prevent silent drops: fail deterministically on the final attempt (and assert success).

If content is empty or sentiment stays invalid across retries, the loop exits without pushing a result or throwing—the post is lost silently. Add a success flag and throw on the last attempt for no-content/invalid-sentiment, then assert after the loop.
```diff
   let lastResponse = "";
   let attempts = 0;
   const maxAttempts = 3;
+  let success = false;

   while (attempts < maxAttempts) {
     attempts++;
@@
     const content = response.choices?.[0]?.message?.content;
     if (!content) {
       lastResponse = "No content returned";
-      continue;
+      if (attempts === maxAttempts) {
+        throw new Error(`No content returned after ${maxAttempts} attempts.`);
+      }
+      continue;
     }

     lastResponse = content.trim();

     try {
       const parsedAny = JSON.parse(lastResponse) as { sentiment?: string };
       const raw = (parsedAny.sentiment ?? '').toString().toUpperCase().trim();

       if (!["BULLISH", "BEARISH", "NEUTRAL"].includes(raw)) {
         lastResponse = `Invalid sentiment: ${parsedAny.sentiment}`;
-        continue;
+        if (attempts === maxAttempts) {
+          throw new Error(`Invalid sentiment after ${maxAttempts} attempts. Last response: ${lastResponse}`);
+        }
+        continue;
       }

-      results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" });
+      results.push({ post, sentiment: raw as "BULLISH" | "BEARISH" | "NEUTRAL" });
+      success = true;
       break; // Success, exit retry loop
     } catch (parseError) {
       if (attempts === maxAttempts) {
         throw new Error(`Failed to get valid JSON after ${maxAttempts} attempts. Last response: ${lastResponse}`);
       }
       // Continue to next attempt with feedback
     }
   }
+  if (!success) {
+    throw new Error(`Exhausted attempts without success. Last response: ${lastResponse}`);
+  }
```

Optional alternative: instead of throwing after the loop, push a fallback to preserve 1:1 results (log the error):
- results.push({ post, sentiment: "NEUTRAL" as const })
🧹 Nitpick comments (2)
src/analyzeSentiment.ts (2)
Lines 49-54: Make generations deterministic and cheaper.

Classification doesn't need creativity. Set temperature: 0 and shrink max_tokens.
```diff
   const response = await openai.chat.completions.create({
     model: "gpt-4o-mini",
     messages,
     response_format: { type: "json_object" },
-    max_tokens: 100
+    temperature: 0,
+    max_tokens: 10
   });
```
Lines 41-47: Trim feedback to the model to avoid echoing huge payloads.

Large lastResponse echoes can inflate cost and hit token limits.
if (attempts > 1) { - messages.push({ - role: "user", - content: `Your previous response was invalid: "${lastResponse}". Please provide only valid JSON with the exact format specified.` - }); + const prevSnippet = lastResponse.length > 500 ? lastResponse.slice(0, 500) + "…" : lastResponse; + messages.push({ + role: "user", + content: `Your previous response was invalid: "${prevSnippet}". Please provide only valid JSON with the exact format specified.` + }); }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/analyzeSentiment.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/analyzeSentiment.ts (1)
src/classifyWithOpenAI.ts (2)
categorizePost (33-74), runCategorization (77-81)
🔇 Additional comments (1)
src/analyzeSentiment.ts (1)
Lines 5-9: Verify Prisma enum import.

No `schema.prisma` or Prisma enum import was detected. Please locate your Prisma schema (e.g. `prisma/schema.prisma`), confirm the `enum Sentiment { … }` and its generated export in `@prisma/client` (either `import { Sentiment } from "@prisma/client"` or via `Prisma.$Enums.Sentiment`), then replace the duplicated string union in `SentimentResult` with that enum.
```typescript
// Example runner
export async function runExample() {
  const posts = [
    "Bitcoin is going to skyrocket after the halving!",
    "Ethereum might drop below $1000 soon, risky market.",
    "The market seems calm today, no major moves."
  ];

  const results = await analyzeMultiplePosts(posts);
  console.log("Sentiment Analysis Results:", JSON.stringify(results, null, 2));
}
runExample()
```
🛠️ Refactor suggestion
Remove side-effectful execution on import.
Calling runExample() at module load will fire network calls during test/imports. Move to a dedicated CLI or guard with a module check.
```diff
 export async function runExample() {
@@
   console.log("Sentiment Analysis Results:", JSON.stringify(results, null, 2));
 }
-runExample()
```
If you want a guarded runner here, I can wire an ESM-safe check or create src/cli/run-sentiment.ts and a package.json bin entry.
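As a rough sketch of such an ESM-safe guard (not part of the PR; the exact check may need adjusting for your module setup):

```typescript
import { pathToFileURL } from "node:url";

// Only run the example when this file is executed directly, not when it is imported.
if (process.argv[1] && import.meta.url === pathToFileURL(process.argv[1]).href) {
  runExample().catch((err) => {
    console.error(err);
    process.exit(1);
  });
}
```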
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
// Example runner
export async function runExample() {
  const posts = [
    "Bitcoin is going to skyrocket after the halving!",
    "Ethereum might drop below $1000 soon, risky market.",
    "The market seems calm today, no major moves."
  ];

  const results = await analyzeMultiplePosts(posts);
  console.log("Sentiment Analysis Results:", JSON.stringify(results, null, 2));
}
```
🤖 Prompt for AI Agents
In src/analyzeSentiment.ts around lines 90 to 101, the module currently calls
runExample() at import time which causes side-effectful network calls during
tests and imports; remove the top-level runExample() invocation and either
export runExample for manual invocation or move the example runner to a
dedicated CLI file (e.g., src/cli/run-sentiment.ts) or wrap the call behind a
safe ESM runtime guard (only call runExample when executed as a script). Ensure
the module exports functions without executing them on import and, if adding a
CLI file, import runExample there and invoke it from that entrypoint only.
--run-sentiment.

Summary by CodeRabbit
- New Features
- Reliability
- Configuration
- Chores