Refactor: Introduce QirrelContext and update core functionality#4
Conversation
Replaces the `IntentResult` interface with the new `QirrelContext` for a more structured representation of processing data. This includes namespaced fields for meta, memory, and LLM information, along with the original data. Updates all relevant components, API functions, and documentation to reflect this change. The `IntentResult` interface and related type definitions have been removed.
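Based on the fields named in this description and in the review below, the new context shape can be sketched roughly as follows; the interface bodies here are illustrative, inferred from the discussion, not the library's actual definitions:

```typescript
// Illustrative sketch of QirrelContext, inferred from this PR's discussion.
// Field names under meta/llm follow the review comments, not a published spec.
interface Token { type: string; value: string }
interface Entity { type: string; value: string; start: number; end: number }

interface QirrelContext {
  meta: { requestId: string; timestamp: number; source?: string };
  memory: { shortTerm?: unknown; longTerm?: unknown };
  llm: { model: string; safety: { allowTools: boolean } };
  data?: { text: string; tokens: Token[]; entities: Entity[] };
}

const ctx: QirrelContext = {
  meta: { requestId: "req_demo", timestamp: Date.now(), source: "cli" },
  memory: {},
  llm: { model: "default", safety: { allowTools: false } },
  data: { text: "hello", tokens: [], entities: [] },
};

console.log(ctx.data?.entities ?? []); // []
```

Since `data` is optional, every consumer has to guard on `input.data` before reading text, tokens, or entities — a pattern the review comments below return to repeatedly.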
Walkthrough

This PR replaces IntentResult with a canonical QirrelContext across types, API signatures, pipeline, LLM adapters/components, processors, docs, tests, and package metadata. Data (text/tokens/entities) is now nested under `data`.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
README.MD (3)
24-32: Docs example: prefer `?? []` so output is consistently an array (not `undefined`).

```diff
- console.log(result.data?.entities);
+ console.log(result.data?.entities ?? []);
```

36-44: Same doc stability tweak for snippet consistency.

```diff
- console.log(result.data?.entities);
+ console.log(result.data?.entities ?? []);
```

48-53: Same doc stability tweak for snippet consistency.

```diff
- console.log(result.data?.entities);
+ console.log(result.data?.entities ?? []);
```

src/llms/llm-processor.ts (2)
5-23: Don't return adapter context "as-is" unless you enforce it contains required fields.

If an adapter accidentally returns a partial object at runtime, later processors may crash. Safer: merge onto `input` (and keep `data` if omitted).

```diff
 return async (input: QirrelContext): Promise<QirrelContext> => {
   try {
     // Use the adapter to process the full context
     const result = await options.adapter.generateWithIntentResult(
       input,
       options.promptTemplate,
       options.config,
     );
-    // Return the result as is (should be QirrelContext)
-    return result;
+    // Merge defensively in case an adapter returns a partial shape at runtime
+    return {
+      ...input,
+      ...result,
+      meta: { ...input.meta, ...result.meta },
+      memory: { ...input.memory, ...result.memory },
+      llm: { ...input.llm, ...result.llm },
+      data: result.data ?? input.data,
+    };
   } catch (error) {
     console.warn("LLM processor failed:", error);
     return input;
   }
 };
```
79-96: Ensure `input.data.entities` is initialized before pushing.

Even if types say it exists, runtime objects can still violate it (especially if adapters construct `data` manually).

```diff
   if (parsed.entities && Array.isArray(parsed.entities)) {
+    input.data.entities ??= [];
     for (const entity of parsed.entities) {
       if (
         entity.type &&
         entity.value &&
         typeof entity.start === "number" &&
         typeof entity.end === "number"
       ) {
         input.data.entities.push({
           type: entity.type,
           value: entity.value,
           start: entity.start,
           end: entity.end,
         });
       }
     }
   }
```
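For clarity, the `??=` operator used in that suggestion assigns only when the left-hand side is null or undefined, so re-running it never clobbers an existing array:

```typescript
// Demonstrates the nullish-assignment semantics relied on in the suggestion above.
const data: { entities?: string[] } = {};
data.entities ??= [];        // entities was undefined → becomes []
data.entities.push("email");
data.entities ??= [];        // already set → left untouched
console.log(data.entities);  // ["email"]
```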
src/llms/gemini.ts (1)

31-39: Avoid using `eval()` for dynamic imports.

Using `eval("import(...)")` to dynamically load the Google SDK is a security concern and will break bundlers, tree-shaking, and static analysis tools. Standard dynamic imports work fine without `eval`.

```diff
- const genAIModule = await eval("import('@google/generative-ai')");
- GoogleGenerativeAI = genAIModule.GoogleGenerativeAI;
+ const { GoogleGenerativeAI: GAI } = await import('@google/generative-ai');
+ GoogleGenerativeAI = GAI;
```

Additionally, the error message on line 37 assumes the Bun runtime. Consider a more generic message:

```diff
- "Google Generative AI SDK not installed. Run: bun add @google/generative-ai"
+ "Google Generative AI SDK not installed. Install it with your package manager (e.g., npm install @google/generative-ai)"
```
src/llms/base.ts (1)

43-72: Remove unused LLM response variable or apply it to the context.

The method generates an LLM response at line 61 but immediately discards it, returning the input unchanged. The comment at line 63 claims "potentially updated data," but no such update occurs. Neither `GenericLLMAdapter` nor `GeminiLLMAdapter` overrides this method, so this base implementation is the source of the problem. Either the response should be applied to `input.data` before returning, or the `generate` call should be removed if the intent is merely to validate that the prompt template fills correctly.

docs/usage/advanced.md (2)
256-270: Inconsistent type: `IntentResult` should be `QirrelContext`.

This function still references `IntentResult`, which has been removed in this PR. Update to use `QirrelContext` and access data via `result.data`:

```diff
-function analyzeTextComplexity(result: IntentResult) {
+function analyzeTextComplexity(result: QirrelContext) {
+  if (!result.data) return { wordCount: 0, avgWordLength: 0, punctuationDensity: 0 };
+
   // Calculate average word length
-  const words = result.tokens.filter(t => t.type === 'word');
+  const words = result.data.tokens.filter(t => t.type === 'word');
   const avgWordLength = words.reduce((sum, token) => sum + token.value.length, 0) / words.length;
 
   // Find punctuation patterns
-  const punctTokens = result.tokens.filter(t => t.type === 'punct');
+  const punctTokens = result.data.tokens.filter(t => t.type === 'punct');
 
   return {
     wordCount: words.length,
     avgWordLength,
-    punctuationDensity: punctTokens.length / result.tokens.length
+    punctuationDensity: punctTokens.length / result.data.tokens.length
   };
 }
```

301-309: Inconsistent type: `IntentResult` should be `QirrelContext`.

This processor still uses `IntentResult`, which conflicts with the `PipelineComponent` type definition that now expects `QirrelContext`:

```diff
-const resilientProcessor: PipelineComponent = async (input: IntentResult): Promise<IntentResult> => {
+const resilientProcessor: PipelineComponent = async (input: QirrelContext): Promise<QirrelContext> => {
   try {
     return await extract(input);
   } catch (extractError) {
```
🧹 Nitpick comments (9)
src/llms/llm-processor.ts (1)
31-38: Good guard on missing `data`; consider safer prompt quoting.

```diff
- const enhancedPrompt = `${promptTemplate}\n\nText: "${input.data.text}"\n\nPlease extract entities in JSON format with structure: {entities: [{type: string, value: string, start: number, end: number}]}`;
+ const enhancedPrompt = `${promptTemplate}\n\nText: ${JSON.stringify(input.data.text)}\n\nPlease extract entities in JSON format with structure: {entities: [{type: string, value: string, start: number, end: number}]}`;
```

src/llms/types.ts (1)
24-24: Method name `generateWithIntentResult` is now misleading.

The method accepts and returns `QirrelContext`, but the name still references `IntentResult`. Consider renaming to `generateWithContext` or similar to align with the new type.

```diff
- generateWithIntentResult(input: QirrelContext, promptTemplate: string, options?: Partial<LLMConfig>): Promise<QirrelContext>;
+ generateWithContext(input: QirrelContext, promptTemplate: string, options?: Partial<LLMConfig>): Promise<QirrelContext>;
```

src/llms/generic.ts (1)
1-1: Unused import.

`QirrelContext` is imported but not used anywhere in this file. The class only implements `generate()` and related helper methods without referencing `QirrelContext`.

```diff
-import type { QirrelContext } from "../types";
```

src/llms/gemini.ts (1)
1-1: Unused import.

`QirrelContext` is imported but not referenced in this file. The class only implements `generate()` and `makeAPICall()` without using `QirrelContext` directly.

```diff
-import type { QirrelContext } from "../types";
```

src/llms/fallback.ts (1)
72-87: Method signature correctly updated to use QirrelContext.

The fallback behavior is preserved: on error, the method returns the input unchanged. The implementation is correct.

Optional refactor: consider renaming `generateWithIntentResult` to something like `generateWithContext` in a future PR to better reflect the new QirrelContext-based API.

docs/integrations/llm.md (1)
83-103: Avoid instantiating `Pipeline` inside the processor function.

Creating a new `Pipeline` instance on every processor invocation (line 85) is inefficient and contradicts the best practice documented in docs/usage/advanced.md (lines 121-123), which states "Pipeline construction is more expensive than processing, so reuse instances when possible."

Consider refactoring to accept the LLM adapter as a parameter or use a closure pattern:

```diff
-const llmEnhancedProcessor: PipelineComponent = async (input: QirrelContext): Promise<QirrelContext> => {
-  if (input.data) {
-    const pipeline = new Pipeline('./config-with-llm.yaml');
-    const llmAdapter = pipeline.getLLMAdapter();
+// Factory function to create processor with injected adapter
+function createLLMEnhancedProcessor(llmAdapter: LLMAdapter | undefined): PipelineComponent {
+  return async (input: QirrelContext): Promise<QirrelContext> => {
+    if (input.data && llmAdapter) {
+      const analysis = await llmAdapter.analyzeText(`Analyze this text for sentiment: ${input.data.text}`);
 
-    if (llmAdapter) {
-      // Use LLM to analyze sentiment, categorize content, etc.
-      const analysis = await llmAdapter.analyzeText(`Analyze this text for sentiment: ${input.data.text}`);
-
-      // Add LLM-derived entities or metadata to the result
       input.data.entities.push({
         type: 'llm_analysis',
         value: analysis.sentiment || 'neutral',
         start: 0,
         end: input.data.text.length
       });
     }
-  }
-  return input;
-};
+    return input;
+  };
+}
+
+// Usage:
+const pipeline = new Pipeline('./config-with-llm.yaml');
+const llmEnhancedProcessor = createLLMEnhancedProcessor(pipeline.getLLMAdapter());
+pipeline.addLLMProcessor(llmEnhancedProcessor);
```

src/types/index.ts (1)
33-42: Consider making `meta`, `memory`, and `llm` optional or providing factory defaults.

With the current interface, creating a `QirrelContext` requires initializing several nested required fields (`meta.requestId`, `meta.timestamp`, `llm.model`, `llm.safety.allowTools`), which increases boilerplate for simple processing use cases.

Consider one of these approaches:

- Make context fields optional with a factory function:

```ts
export interface QirrelContext {
  meta?: MetaContext;
  memory?: MemoryContext;
  llm?: LLMContext;
  data?: { ... };
}

export function createQirrelContext(data: QirrelContext['data']): QirrelContext {
  return {
    meta: { requestId: crypto.randomUUID(), timestamp: Date.now() },
    memory: {},
    llm: { model: 'default', safety: { allowTools: false } },
    data
  };
}
```
- Provide default values in existing factory/initialization code to reduce caller burden.
This would maintain backward compatibility while simplifying common usage patterns.
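If the factory approach above were adopted, call sites could shrink to something like the following; `createQirrelContext` here mirrors the hypothetical suggestion, not shipped code:

```typescript
// Hypothetical usage of the suggested factory; randomUUID is Node's crypto module.
import { randomUUID } from "node:crypto";

type ContextData = { text: string; tokens: unknown[]; entities: unknown[] };

function createQirrelContext(data: ContextData) {
  return {
    meta: { requestId: randomUUID(), timestamp: Date.now() },
    memory: {},
    llm: { model: "default", safety: { allowTools: false } },
    data,
  };
}

// Callers supply only the data payload; the namespaced fields get sane defaults.
const ctx = createQirrelContext({ text: "hi", tokens: [], entities: [] });
console.log(ctx.llm.safety.allowTools); // false — tools stay opt-in by default
```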
src/core/pipeline.ts (1)
48-65: Consider the timing of async LLM adapter initialization.

The LLM adapter is initialized asynchronously in the constructor, which means `process()` could be called before the adapter is ready. If this is intentional (graceful degradation), consider documenting this behavior. Otherwise, consider making the initialization synchronous or exposing an `async initialize()` method.

src/processors/extract.ts (1)
96-96: Replace `any[]` with `Entity[]` for stronger typing.

The helper functions use `any[]` for the entities array. Consider using the proper `Entity[]` type for better type safety.

Apply this diff to improve type safety:

```diff
-function extractEmails(inputData: { text: string; entities: any[] }): void {
+function extractEmails(inputData: { text: string; entities: Entity[] }): void {
```

```diff
-function extractPhones(inputData: { text: string; entities: any[] }): void {
+function extractPhones(inputData: { text: string; entities: Entity[] }): void {
```

```diff
-function extractUrls(inputData: { text: string; entities: any[] }): void {
+function extractUrls(inputData: { text: string; entities: Entity[] }): void {
```

```diff
-function extractNumbers(inputData: { text: string; entities: any[] }): void {
+function extractNumbers(inputData: { text: string; entities: Entity[] }): void {
```

You'll need to import the Entity type:

```ts
import type { PipelineComponent } from "../core/types";
import { QirrelContext, Entity } from "../types";
```

Also applies to: 118-118, 184-184, 205-205
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (27)
- CHANGELOG.md (2 hunks)
- README.MD (3 hunks)
- docs/api.md (4 hunks)
- docs/examples.md (1 hunks)
- docs/integrations/llm.md (1 hunks)
- docs/usage/advanced.md (2 hunks)
- docs/usage/basic.md (1 hunks)
- docs/walkthrough.md (5 hunks)
- package.json (2 hunks)
- src/api/index.ts (1 hunks)
- src/core/pipeline.ts (2 hunks)
- src/core/types.ts (1 hunks)
- src/index.ts (1 hunks)
- src/llms/base.ts (2 hunks)
- src/llms/fallback.ts (2 hunks)
- src/llms/gemini.ts (1 hunks)
- src/llms/generic.ts (4 hunks)
- src/llms/llm-components.ts (7 hunks)
- src/llms/llm-processor.ts (3 hunks)
- src/llms/types.ts (2 hunks)
- src/processors/advclean.ts (2 hunks)
- src/processors/clean.ts (1 hunks)
- src/processors/extract.ts (9 hunks)
- src/processors/normalize.ts (1 hunks)
- src/processors/segment.ts (1 hunks)
- src/types/index.ts (1 hunks)
- tests/extraction.test.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (14)
src/api/index.ts (2)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/core/types.ts (2)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/processors/normalize.ts (3)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/llms/types.ts (2)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/llms/base.ts (2)
- src/types/index.ts (1): QirrelContext (33-42)
- src/llms/types.ts (1): LLMConfig (3-10)

src/processors/clean.ts (3)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/processors/segment.ts (3)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/processors/advclean.ts (4)
- src/processors/index.ts (1): advClean (5-5)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

tests/extraction.test.ts (2)
- src/types/index.ts (1): QirrelContext (33-42)
- src/processors/extract.ts (5): extract (6-42), extractEmailsOnly (44-55), extractPhonesOnly (57-68), extractUrlsOnly (70-81), extractNumbersOnly (83-94)

src/processors/extract.ts (4)
- src/processors/index.ts (2): extract (2-2), extractPhonesOnly (2-2)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/llms/fallback.ts (2)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/types/index.ts (2)
- src/index.ts (2): QirrelContext (6-6), Entity (6-6)
- src/core/Tokenizer.ts (1): Token (9-14)

src/llms/llm-components.ts (2)
- src/index.ts (1): QirrelContext (6-6)
- src/types/index.ts (1): QirrelContext (33-42)

src/llms/llm-processor.ts (3)
- src/llms/types.ts (1): LLMProcessorOptions (27-31)
- src/core/types.ts (1): PipelineComponent (4-6)
- src/types/index.ts (1): QirrelContext (33-42)
🔇 Additional comments (30)
package.json (1)
3-3: Version bump to 0.2.0 matches the public API break (IntentResult → QirrelContext).

src/llms/llm-processor.ts (1)

1-3: Type import swap to `QirrelContext` is clean.

docs/walkthrough.md (1)
9-11: Walkthrough updates correctly reflect QirrelContext as the canonical context.

Also applies to: 52-55, 92-99

src/index.ts (1)

1-9: Public exports updated appropriately (IntentResult no longer exposed).

src/api/index.ts (1)
4-9: LGTM! The function signature correctly reflects the migration to `QirrelContext` as the return type, maintaining consistency with the pipeline's updated return type.

src/processors/advclean.ts (1)
10-15: LGTM! The guard check for `result.data` correctly handles the optional nature of the `data` field in `QirrelContext`. The in-place mutation pattern is appropriate for pipeline components.

src/core/types.ts (1)
1-6: LGTM! Type signature updated correctly. The `PipelineComponent` type has been properly updated to use `QirrelContext` for both input and output. The import path is correct and the signature maintains consistency.

docs/examples.md (1)
186-221: Documentation accurately reflects the new QirrelContext structure. The custom processor example correctly demonstrates:

- Importing the `QirrelContext` type
- Using the guard `if (input.data)` to handle the optional data field
- Accessing tokens and entities through `input.data`

docs/usage/basic.md (1)
34-74: Documentation accurately describes the new QirrelContext structure. The example clearly shows the new canonical context with namespaced fields (meta, memory, llm) and the nested data structure containing text, tokens, and entities.
src/processors/segment.ts (1)
8-71: Segment processor correctly updated to use QirrelContext. The implementation properly:

- Guards against missing `input.data`
- Accesses text through `input.data.text`
- Pushes sentence entities to `input.data.entities`
- Returns the modified input context
The sentence boundary detection logic (advancing past whitespace after punctuation marks) and the handling of remaining text without ending punctuation are both correct.
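The boundary behavior described above (advance past whitespace after terminal punctuation, then emit any trailing text as a final sentence) can be sketched like this; it is an approximation of the logic under review, not the project's actual code:

```typescript
// Rough sketch of the described sentence segmentation (illustrative only).
function segmentSentences(text: string): { value: string; start: number; end: number }[] {
  const sentences: { value: string; start: number; end: number }[] = [];
  let start = 0;
  for (let i = 0; i < text.length; i++) {
    if (".!?".includes(text[i])) {
      let next = i + 1;
      // advance past whitespace following the punctuation mark
      while (next < text.length && /\s/.test(text[next])) next++;
      sentences.push({ value: text.slice(start, i + 1), start, end: i + 1 });
      start = next;
      i = next - 1;
    }
  }
  if (start < text.length) {
    // remaining text without ending punctuation still forms a sentence
    sentences.push({ value: text.slice(start), start, end: text.length });
  }
  return sentences;
}

console.log(segmentSentences("Hi there. Bye"));
```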
src/llms/llm-components.ts (5)
13-35: Summarizer component correctly updated. The implementation properly guards against missing data, accesses text through `input.data.text`, and creates a summary entity with the appropriate length from the LLM response.
42-66: Sentiment analyzer correctly updated. The guard pattern is applied correctly, and the sentiment entity spans the entire input text (using `input.data.text.length` for the end position), which is appropriate for document-level sentiment.
74-105: Intent classifier correctly updated. The implementation correctly uses QirrelContext, includes the data guard, and creates intent entities spanning the full text.

112-139: Topic classifier correctly updated. Multiple topics are correctly extracted and each is added as a separate entity spanning the input text.

146-163: Text enhancer correctly updated. The component appropriately modifies `input.data.text` directly with the enhanced version from the LLM, which aligns with its purpose of improving the text while preserving meaning.

tests/extraction.test.ts (7)
7-24: Test input correctly structured as QirrelContext. The test properly constructs a QirrelContext object with all required fields (meta, memory, llm) and the data payload. The placeholder values for meta/memory/llm are appropriate for unit testing.

29-47: Assertions safely handle optional data field. The use of optional chaining (`result.data?.entities`) throughout the assertions properly guards against undefined data, and the null coalescing operator provides safe fallback values for filtering operations.
51-72: Empty entity test correctly validates the new structure. The test confirms that when no entities are extractable, the result correctly returns an empty entities array within the data structure.

77-107: Email extraction test properly updated. The test validates selective email extraction using the new QirrelContext structure and safely accesses the extracted entity with appropriate guards.

112-136: Phone extraction test correctly structured. The test properly validates phone number extraction with the new context structure and uses safe array filtering.

141-166: URL extraction test correctly updated. The test validates multiple URL extraction and uses safe array operations with null coalescing for the fallback case.

171-198: Number extraction test handles various formats correctly. The test validates extraction of different number formats (decimal, integer, scientific notation) using the new QirrelContext structure with appropriate safety checks.
CHANGELOG.md (1)
8-27: LGTM! The changelog accurately documents the breaking change from `IntentResult` to `QirrelContext`, along with all related updates to processors, adapters, and documentation. The categorization into Added/Changed/Removed sections is clear and follows the Keep a Changelog format.

docs/usage/advanced.md (2)
14-38: LGTM! The custom processor example correctly uses `QirrelContext`, includes the `input.data` guard, and properly accesses `input.data.tokens` and `input.data.entities`.

149-168: LGTM! The hashtag extractor example is correctly updated to use `QirrelContext` with proper null guarding.

src/processors/clean.ts (1)
4-16: LGTM! The processor correctly migrates to `QirrelContext`, includes proper null guarding for `input.data`, and maintains the original filtering logic for tokens and entities.

src/core/pipeline.ts (1)
73-110: LGTM! Context initialization is well-structured. The new QirrelContext structure properly separates meta, memory, and llm concerns from data. The progressive context building pattern (initialContext → contextWithText → resultContext) is clear and maintainable.
src/processors/normalize.ts (1)
4-22: LGTM! Proper data guard and clean refactoring. The addition of the `if (input.data)` guard prevents runtime errors, and the nested property access correctly aligns with the new QirrelContext structure.

docs/api.md (1)
1-260: LGTM! Comprehensive documentation update. The documentation accurately reflects the new QirrelContext-based API. The new context interfaces (MetaContext, MemoryContext, LLMContext) are well-documented with clear structure definitions.
src/processors/extract.ts (1)
6-94: LGTM! Consistent guard pattern across all extraction functions. All extraction functions properly guard against missing `input.data` and correctly delegate to helper functions. The refactoring maintains error handling while adapting to the new QirrelContext structure.
package.json
```json
    "uuid": "^11.0.3",
    "validator": "^13.15.23"
  },
  "files": [
    "dist/**/*",
    "README.MD",
    "default.yaml",
    "LICENSE"
  ],
  "devDependencies": {
    "@types/emoji-regex": "^9.2.2",
    "@types/jest": "^30.0.0",
    "@types/js-yaml": "^4.0.9",
    "@types/validator": "^13.15.10",
    "@types/uuid": "^10.0.0",
    "jest": "^30.2.0",
    "ts-jest": "^29.4.6"
```
🧩 Analysis chain
🌐 Web query:
uuid npm package version 11 typescript types bundled
💡 Result:
Short answer: Yes — uuid@11 (and later 11.x) ships its own TypeScript types (types are bundled). Remove @types/uuid from your deps and use the built-in types. [1][2][3]
Sources:
- uuid npm README (v11.x) — TypeScript support included / "remove @types/uuid". [1]
- uuid v11.0.0 release notes (port to TypeScript). [2]
- @types/uuid package notes stating uuid provides its own types (stub). [3]
🌐 Web query:
uuid v11 @types/uuid conflict maintainers recommendation
💡 Result:
Short answer: The uuid maintainers recommend removing @types/uuid when upgrading to uuid@11+ because uuid ships its own TypeScript types — leaving @types/uuid installed can cause type conflicts. [1][2]
What to do
- Uninstall @types/uuid (npm rm @types/uuid / yarn remove @types/uuid). [1]
- Keep using the named imports from uuid (e.g. import { v4 } from 'uuid'). [2]
- If you must stay on the older typings (temporary), pin uuid to a pre-11 version; otherwise prefer the built-in types. [1][2]
Sources
- npm uuid package notes (uuid@11 — “TypeScript support is now included (remove @types/uuid)”). [npm]. [1]
- uuid GitHub README / changelog (notes about TypeScript being included and options changes in v11). [2]
🌐 Web query:
uuid@11.0.3 npm registry typescript definitions
💡 Result:
Short answer: uuid@11.0.3 includes TypeScript definitions — you do NOT need @types/uuid. Install uuid and import the named exports (e.g. import { v4 as uuidv4 } from 'uuid'). [1][2][3]
Sources:
- uuid package README (notes that "TypeScript support is now included"). [1]
- Release notes for v11.0.3 (shows types exported/typing fixes). [2]
- @types/uuid stub entry: "uuid provides its own types definitions" / remove @types/uuid. [3]
Remove @types/uuid from devDependencies — uuid@11.0.3 includes its own TypeScript types.
The uuid package v11+ bundles TypeScript definitions natively. The @types/uuid package is unnecessary and can cause type conflicts. Remove it:
```diff
   "devDependencies": {
-    "@types/uuid": "^10.0.0",
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
    "uuid": "^11.0.3",
    "validator": "^13.15.23"
  },
  "devDependencies": {
    "@types/emoji-regex": "^9.2.2",
    "@types/jest": "^30.0.0",
    "@types/js-yaml": "^4.0.9",
    "@types/validator": "^13.15.10",
    "jest": "^30.2.0",
    "ts-jest": "^29.4.6"
```
🤖 Prompt for AI Agents
In package.json around lines 45 to 55, remove the unnecessary "@types/uuid"
entry from devDependencies because uuid@11.0.3 bundles its own TypeScript types;
edit the devDependencies block to delete the "@types/uuid": "^10.0.0" line and
run npm/yarn install (or update lockfile) to ensure dependencies are synced and
to avoid type conflicts.
```json
  "files": [
    "dist/**/*",
    "README.MD",
    "default.yaml",
    "LICENSE"
  ]
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Search for README files in the repository
find . -iname "README*" -type f 2>/dev/null | head -20
```

Repository: dev-dami/Qirrel

Length of output: 70
🌐 Web query:
npm README.md casing case-insensitive handling package.json files
💡 Result:
Short answer: npm itself follows the OS/filesystem for filenames (so package.json / README.md casing matters only on case-sensitive systems). To avoid problems, use consistent canonical casing in your repo (package.json, README.md), run CI/build on a case-sensitive Linux runner, and add tooling that detects mismatched casing locally.
Recommendations & tools:
- Rely on consistent filenames (always use package.json and README.md exactly).
- Catch issues in development/CI by enforcing case-sensitive checks (run on Linux CI). [1][2][3]
- Use libraries to detect/normalize true filesystem case or to do case-insensitive lookups when needed: true-case-path, detect-file (options to perform nocase checks). [3][4]
- For module imports/assets, use plugins that enforce exact-case resolution (CaseSensitivePathsPlugin, caseify, make-express-case-sensitive) to surface mismatches before deployment. [2][5][6]
- Be aware git records case-sensitive names, but working trees on case-insensitive OSes can hide conflicts; check core.ignorecase and consider repository policies or pre-commit checks. [7]
Sources:
[1] Case Sensitive Paths / Webpack plugin documentation.
[2] caseify / case-sensitive modules discussion.
[3] true-case-path package (returns true filesystem casing).
[4] detect-file (nocase option for detection).
[5] make-express-case-sensitive (enforce case in Express static).
[6] case-sensitive-paths-webpack-plugin (enforce path case).
[7] Git / core.ignorecase discussion (Stack Overflow) and is-fs-case-sensitive repo.
If you want, I can: (a) show example scripts/pre-commit hooks to detect mismatched casing, (b) add a CI step for Linux case checks, or (c) scan a repo for potential casing problems and suggest fixes. Which would you like?
Standardize README filename to lowercase README.md for npm convention. The current entry README.MD matches the actual file in the repository, but deviates from the npm/Node.js standard of lowercase README.md. While case-insensitive filesystems (Windows, macOS) will find it, case-sensitive systems (Linux CI) may fail to locate it if tools expect the canonical lowercase format. Update both the filename in the repo and the package.json entry to README.md.
🤖 Prompt for AI Agents
In package.json around lines 57 to 62, the README filename is listed as
"README.MD" but should use the canonical lowercase "README.md"; rename the
repository file from README.MD to README.md (git mv README.MD README.md) and
update the "files" array entry in package.json to "README.md", then commit the
rename so case-sensitive CI and npm tooling will find the README; also scan for
and update any other references to README.MD in the repo (docs, CI, scripts) to
the lowercase name.
This commit updates various dependencies to their latest versions, including @types/express, @types/node, express, and uuid. It also bumps the package version from 0.2.0 to 0.2.1.
Warning: Review the following alerts detected in dependencies. According to your organization's Security Policy, it is recommended to resolve "Warn" alerts.
Update the `requestId` generation to use `slice` for consistency. Add comments to `MetaContext`, `MemoryContext`, and `LLMContext` to clarify their purpose and properties.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/types/index.ts (1)
10-16: Clarify or create the AGENTS.md specification reference. This issue was previously flagged. The comment references an "AGENTS.md specification" that does not exist in the repository. Either update the comment to provide inline documentation about the MetaContext interface requirements, or create and commit the AGENTS.md file.
🧹 Nitpick comments (1)
src/types/index.ts (1)
18-22: Consider more specific types for memory fields. The `unknown` type for `shortTerm` and `longTerm` provides maximum flexibility but sacrifices type safety. If the structure of memory data is known or can be parameterized, consider using more specific types or generics to improve type checking and developer experience.
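One way such a parameterization could look — a sketch only; the type parameters and the `ChatTurn` shape are hypothetical, not part of the codebase:

```typescript
// Hypothetical generic MemoryContext replacing the `unknown` fields.
interface MemoryContext<S = unknown, L = unknown> {
  shortTerm?: S;
  longTerm?: L;
}

// Callers that know their memory shape get type checking back:
type ChatTurn = { role: "user" | "assistant"; text: string };
const memory: MemoryContext<ChatTurn[], Record<string, string>> = {
  shortTerm: [{ role: "user", text: "hello" }],
  longTerm: { userName: "Dami" },
};
console.log(memory.shortTerm?.[0].text); // hello
```

Defaulting both parameters to `unknown` keeps existing call sites compiling unchanged.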
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/core/pipeline.ts (2 hunks)
- src/types/index.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/types/index.ts (1)
- src/core/Tokenizer.ts (1): Token (9-14)

src/core/pipeline.ts (1)
- src/types/index.ts (1): QirrelContext (33-42)
🔇 Additional comments (5)
src/types/index.ts (2)
24-31: LGTM with minor documentation suggestion. The LLMContext interface is well-structured. The temperature comment helpfully documents the expected range (0.0-1.0). Consider adding a comment for the `redactions` field to clarify whether these are regex patterns, literal strings, or other formats.
33-42: LGTM! Well-structured context interface. The `QirrelContext` interface effectively composes the three context namespaces and appropriately makes the `data` field optional to support contexts at different processing stages. The structure clearly separates concerns between operational metadata, memory state, LLM configuration, and processing data.

src/core/pipeline.ts (3)
77-77: Good fix on the deprecated `.substr()` method. The previous review comment about using the deprecated `.substr()` method has been addressed correctly with `.slice(2, 7)`.
92-101: LGTM! Clear tokenization and context initialization. The tokenization flow and context creation is straightforward. The spread operator appropriately copies the initial context, and the `data` field is properly populated with text, tokens, and an empty entities array.

One minor consideration: if `tokenizer.tokenize()` can throw exceptions, you may want to add error handling to provide better diagnostics.
103-109: LGTM! Clean pipeline execution pattern. The pipeline correctly threads the `QirrelContext` through each component, collecting the transformations. The pattern is clear and maintains type safety through the TypeScript type system.
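The threading pattern praised here can be sketched as a simple loop that feeds each component the previous component's output; the names below are hypothetical, not the pipeline's actual API:

```typescript
// Hypothetical sketch of threading a context through async pipeline components.
type Ctx = { data?: { text: string } };
type Component = (input: Ctx) => Promise<Ctx>;

async function runPipeline(ctx: Ctx, components: Component[]): Promise<Ctx> {
  let current = ctx;
  for (const component of components) {
    // each component receives the previous component's output
    current = await component(current);
  }
  return current;
}

const upper: Component = async (c) =>
  c.data ? { ...c, data: { ...c.data, text: c.data.text.toUpperCase() } } : c;

runPipeline({ data: { text: "hi" } }, [upper]).then((r) => console.log(r.data?.text)); // HI
```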
```ts
meta: {
  requestId: 'req_' + Date.now().toString(36) + Math.random().toString(36).slice(2, 7),
  timestamp: Date.now(),
  source: 'cli'
```
Hardcoded source value doesn't reflect actual usage context.
The source field is hardcoded as 'cli', but the Pipeline class can be instantiated and used in various contexts (HTTP servers, workers, CLI tools, etc.). This defeats the purpose of having a source field in MetaContext.
Consider accepting an optional source parameter in the process() method or setting it based on how the Pipeline is configured during construction.
Apply this diff to accept source as a parameter:
```diff
-  public async process(text: string): Promise<QirrelContext> {
+  public async process(text: string, source?: "http" | "cli" | "worker"): Promise<QirrelContext> {
     // Create initial context with empty data
     const initialContext: QirrelContext = {
       meta: {
         requestId: 'req_' + Date.now().toString(36) + Math.random().toString(36).slice(2, 7),
         timestamp: Date.now(),
-        source: 'cli'
+        source: source || 'cli'
       },
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/core/pipeline.ts around line 79, the MetaContext.source is hardcoded to
'cli'; change the pipeline to accept a source parameter instead. Add an optional
source argument to Pipeline.process (or to the constructor if you prefer
configuration-time setting), pass that value into MetaContext.source (defaulting
to a sensible value such as 'cli' or 'unknown' when not provided), and update
all call sites to forward the correct source where available. Ensure
types/interfaces are updated to mark source optional and update tests/usage to
reflect the new signature.
```ts
safety: {
  allowTools: true
}
```
Consider security implications of defaulting allowTools to true.
The safety.allowTools field defaults to true, which means tool usage is enabled by default unless explicitly disabled in the configuration. Depending on what tools are available in your LLM integration, this could pose security risks.
Consider whether a more conservative default of false would be appropriate, requiring explicit opt-in for tool usage.
🤖 Prompt for AI Agents
In src/core/pipeline.ts around lines 86 to 88, the safety.allowTools default is
set to true which enables tool usage by default; change the default to false so
tools are opt-in, update any related type/interface defaults and configuration
docs/tests to reflect the new conservative default, and ensure code paths that
expect tools enabled handle the explicit opt-in (e.g., check for allowTools ===
true before initializing or invoking tools).
Replaces the `IntentResult` interface with the new `QirrelContext` for a more structured representation of processing data. This includes namespaced fields for meta, memory, and LLM information, along with the original data.

Updates all relevant components, API functions, and documentation to reflect this change. The `IntentResult` interface and related type definitions have been removed.