feat: Convert LangChain implementation to new AIProvider interface #942

Conversation
```diff
 {
   "extends": "./tsconfig.json",
-  "include": ["src/**/*", "**/*.test.ts", "**/*.spec.ts"]
+  "include": ["/**/*.ts"],
```
Bug: ESLint Include Pattern Uses Absolute Path

The include pattern `["/**/*.ts"]` in `tsconfig.eslint.json` uses a leading slash, which makes it an absolute path from the filesystem root. This prevents ESLint from finding any TypeScript files within the package, since include patterns are resolved relative to the directory containing the tsconfig.
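A likely fix is to drop the leading slash so the glob stays relative to the tsconfig's directory; a minimal sketch, assuming all files to lint live under the package root:

```json
{
  "extends": "./tsconfig.json",
  "include": ["**/*.ts"]
}
```

The exact globs depend on the package layout; the previous `["src/**/*", "**/*.test.ts", "**/*.spec.ts"]` list may still be the intended scope.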
```ts
if (langChainResponse?.response_metadata?.tokenUsage) {
  const { tokenUsage } = langChainResponse.response_metadata;
  usage = {
    total: tokenUsage.totalTokens || 0,
```
Just a thought: should this be promptTokens + completionTokens instead? I don't know whether totalTokens would ever differ from prompt + completion, but if it ever did, I could see users being confused.
I am trying to find the conversation where this came up before. I believe @kinyoklion mentioned it. We used total tokens because some models may use additional tokens not accounted for by the prompt / completion tokens alone.
Yeah, that is the thought. We originally didn't know that prompt + completion = total would be a guarantee, and a mismatch seems even more likely if we eventually care about thinking tokens.
For example, total may become prompt + thinking + completion, in which case our total would be incorrect if we computed it as just prompt + completion.
Though some providers currently aren't charging for thinking tokens, so they exclude them from their total calculation. But I could see that change either way.
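To make the accounting in this thread concrete, here is a minimal sketch of the trade-off being discussed. The `promptTokens`/`completionTokens`/`totalTokens` field names follow the LangChain `tokenUsage` metadata quoted in the diff above; the fallback logic is an illustration, not the PR's actual code:

```ts
interface TokenUsage {
  promptTokens?: number;
  completionTokens?: number;
  totalTokens?: number;
}

// Prefer the provider-reported total: if a model bills tokens beyond
// prompt + completion (e.g. thinking tokens), totalTokens may
// legitimately exceed the sum. Summing is only a fallback here, and is
// an assumption for providers that omit totalTokens.
function resolveTotalTokens(usage: TokenUsage): number {
  if (typeof usage.totalTokens === 'number') {
    return usage.totalTokens;
  }
  return (usage.promptTokens ?? 0) + (usage.completionTokens ?? 0);
}
```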
Merged commit 2143219 into jb/sdk-1454/ai-provider-langchain.
Note
Refactors the LangChain integration to an AIProvider-based provider with a new invoke flow and metrics, removes TrackedChat, and updates tests and configs.
- New `AIProvider` subclass (`LangChainProvider`) with constructor-held `BaseChatModel`, `invokeModel`, and `getChatModel`.
- `create(aiConfig)` and `createLangChainModel` factory using `initChatModel` and `mapProvider`.
- `createTokenUsage`/tracking helpers, with `createAIMetrics` returning `{ success, usage }`.
- `convertMessagesToLangChain` and provider mapping (e.g., `gemini` → `google-genai`).
- Removes `LangChainTrackedChat` and its exports; `index.ts` now exports only `LangChainProvider`.
- Tests: use `@langchain/core/messages`; mock `langchain/chat_models/universal`.
- Mocks `createAIMetrics` and `mapProvider`; adjusts expectations and formatting.
- Jest config: update `transform`, broaden `testMatch`, simplify coverage settings, and add `moduleFileExtensions`.

Written by Cursor Bugbot for commit fe29f8f. This will update automatically on new commits.
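As rough orientation for the summary above, here is a hedged sketch of the `mapProvider` mapping and the `createAIMetrics` return shape. Only the names, the `gemini` → `google-genai` mapping, and the `{ success, usage }` shape come from this page; the function body, the pass-through default, and the `usage` field names are assumptions:

```ts
// Shape returned by createAIMetrics per the summary; the field names
// inside usage are assumed, not taken from the PR.
interface AIMetrics {
  success: boolean;
  usage?: { total: number; input: number; output: number };
}

// Maps an AI Config provider name to the provider id that LangChain's
// initChatModel expects.
function mapProvider(provider: string): string {
  switch (provider.toLowerCase()) {
    case 'gemini':
      return 'google-genai'; // mapping given in the summary
    default:
      return provider; // assumption: other names pass through unchanged
  }
}
```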