
feat: Improve OpenAI LLM adapter and types #7

Merged

dev-dami merged 1 commit into main from qirrelcontext on Dec 15, 2025

Conversation

@dev-dami
Owner

@dev-dami dev-dami commented Dec 15, 2025

  • Refactor BaseLLMAdapter to include text, tokens, and entities in input data.
  • Update OpenAI adapter to use ReturnType for timeout ID.
  • Handle unexpected OpenAI API response structures more robustly.
  • Remove unnecessary prefix from requestId in QirrelContext.

Summary by CodeRabbit

  • Bug Fixes

    • Enhanced error handling for OpenAI responses with missing content; now properly throws an error instead of silently logging a warning.
  • Changes

    • Modified request ID generation format in generated contexts.
    • Updated context data structure to explicitly include text, tokens, and entities in API responses.


- Refactor BaseLLMAdapter to include text, tokens, and entities in input
  data.
- Update OpenAI adapter to use ReturnType<typeof setTimeout> for timeout
  ID.
- Handle unexpected OpenAI API response structures more robustly.
- Remove unnecessary prefix from requestId in QirrelContext.
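The timeout-ID change in the commit message above is about cross-runtime typing. A minimal sketch of why it matters (the function and variable names here are hypothetical, not the adapter's actual code):

```typescript
// In Node, setTimeout returns a NodeJS.Timeout object; in browsers and
// some edge runtimes it returns a number. Hard-coding either type breaks
// compilation on the other runtime; ReturnType<typeof setTimeout>
// resolves to whichever setTimeout signature is in scope.
function startAbortTimer(ms: number) {
  const controller = new AbortController();
  const timeoutId: ReturnType<typeof setTimeout> = setTimeout(
    () => controller.abort(),
    ms,
  );
  return { controller, timeoutId };
}

const { controller, timeoutId } = startAbortTimer(1000);
clearTimeout(timeoutId); // cancel before the timer fires
```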
@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

Walkthrough

The changes modify LLM integration handling and context creation: returning structured context data with explicit fields (text, tokens, entities), converting missing content responses from warnings to fatal errors in the OpenAI adapter, adjusting type definitions for cross-runtime compatibility, and simplifying request ID generation from prefixed to plain UUID format.

Changes

Cohort / File(s) Summary
LLM Integration
src/llms/base.ts, src/llms/openai.ts
Modified generateWithContext to explicitly include text, tokens, and entities in returned context data. Updated OpenAI adapter to throw fatal errors on missing response content (replacing console.warn with console.error) and changed content extraction from optional chaining to non-optional access. Aligned timeoutId type with cross-runtime setTimeout signature.
Type Definitions
src/types/index.ts
Simplified meta.requestId generation in createQirrelContext from prefixed UUID ('req_' + uuidv4()) to plain UUID (uuidv4()).

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Attention areas:
    • Content extraction change from optional (data.choices[0]?.message?.content || "") to non-optional (data.choices[0].message.content) may affect error handling downstream if content is ever undefined
    • Error handling conversion from warning to fatal exception in OpenAI adapter changes control flow—verify callers handle thrown errors appropriately
    • Explicit inclusion of text, tokens, and entities in returned context data should be validated against consumers of generateWithContext
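The first attention area can be shown in isolation. With optional chaining, a malformed response silently degrades to an empty string; with non-optional access, it throws a `TypeError` at the access site — which is why the adapter now guards before accessing content. The response shape below is illustrative only:

```typescript
// A response missing the expected choices entry (made-up shape).
const malformed: any = { choices: [] };

// Old style: optional chaining plus fallback yields "" silently.
const oldStyle: string = malformed.choices[0]?.message?.content || "";

// New style: non-optional access throws if the element is missing.
let threwTypeError = false;
try {
  const content: string = malformed.choices[0].message.content;
  void content;
} catch (e) {
  threwTypeError = e instanceof TypeError;
}
```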

Possibly related PRs

Poem

🐰 With tokens bright and context clear,
The OpenAI adapter's code draws near,
No warnings whispered, errors ring true,
UUID stripped simple—fresh and new! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: Improve OpenAI LLM adapter and types' accurately reflects the main changes across all modified files, covering improvements to the OpenAI adapter and type definitions.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/types/index.ts (1)

53-76: Inconsistent requestId format across codebase requires unification

The change to createQirrelContext now generates plain UUIDs for meta.requestId, but src/core/pipeline.ts (line 90) still generates IDs with the 'req_' prefix. This creates two different ID formats in the same codebase depending on which code path is used.

Additionally, documentation examples (docs/usage/basic.md) still reference the old 'req_'-prefixed format.

To resolve:

  • Update src/core/pipeline.ts:90 to use the same UUID format (either call createQirrelContext or use uuidv4() directly)
  • Update documentation examples to reflect the new format
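One way to keep the two code paths from drifting again is a single shared helper; this is only a sketch of that suggestion — the helper name is invented, and it assumes Node's `crypto.randomUUID` is an acceptable stand-in for `uuidv4()`:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical shared helper: both createQirrelContext and the
// pipeline would call this instead of formatting request IDs
// independently, so there is exactly one place defining the format.
function newRequestId(): string {
  return randomUUID(); // plain UUID, matching the new format
}
```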
🧹 Nitpick comments (2)
src/llms/base.ts (1)

49-72: Context enrichment and defaults look good

Ensuring text, tokens, and entities are always present (with sane defaults) while still spreading input.data and attaching llmResponse is sound. One minor behavioral note: because ...input.data comes after the explicit fields, any pre-existing text/tokens/entities on input.data will override the computed ones—if you ever derive or normalize these before returning, you may want to flip the order to make the derived values authoritative.

src/llms/openai.ts (1)

38-103: OpenAI timeout typing and stricter content handling mostly solid, with a few edge‑case notes

  • Using ReturnType<typeof setTimeout> for timeoutId is a good cross‑runtime typing improvement.
  • Treating missing content as a hard error improves correctness, but note that if (!data.choices?.[0]?.message?.content) will also treat an empty string as “no content”. If you ever need to allow empty completions, consider checking for == null instead of general falsiness.
  • console.error('…', data) is useful for debugging but will log the full OpenAI response, including user prompts/completions. If logs are persisted or exported, you may want to redact or truncate payload fields for privacy/compliance.
  • Optional cleanup nicety: moving the clearTimeout into a finally block would avoid the timer firing and calling abort() on requests that already failed for other reasons.
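The last three notes can be sketched together. This is not the adapter's actual code — the names and response shape are assumptions — just one shape that accepts empty strings as valid content, throws on truly missing content with a redacted log, and clears the timer in `finally`:

```typescript
type ChatResponse = {
  choices?: { message?: { content?: string | null } }[];
};

// Accepts "" as valid content; only null/undefined count as missing.
function extractContent(data: ChatResponse): string {
  const content = data.choices?.[0]?.message?.content;
  if (content == null) {
    // Log a redacted summary rather than the full payload, so prompts
    // and completions are not persisted in logs.
    console.error("OpenAI response missing content", {
      choiceCount: data.choices?.length ?? 0,
    });
    throw new Error("Unexpected OpenAI API response: missing content");
  }
  return content;
}

// Timer cleanup in finally runs whether fetch succeeds, times out, or
// fails for an unrelated reason, so the timer never fires and calls
// abort() on a request that is already dead.
async function fetchCompletion(url: string, ms: number): Promise<string> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return extractContent((await res.json()) as ChatResponse);
  } finally {
    clearTimeout(timeoutId);
  }
}
```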
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 97c0ed6 and fb9bbfd.

📒 Files selected for processing (3)
  • src/llms/base.ts (1 hunks)
  • src/llms/openai.ts (2 hunks)
  • src/types/index.ts (1 hunks)

@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

@dev-dami dev-dami merged commit d6dfc0e into main Dec 15, 2025
3 checks passed
@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

UTG Post-Process Complete

No new issues were detected in the generated code and all check runs have completed. The unit test generation process has completed successfully.

@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

Creating a PR to put the unit tests in...

The changes have been created in this pull request: View PR

@coderabbitai coderabbitai bot mentioned this pull request Dec 16, 2025
