Remove caching logic from lib/
#779
base: main
Conversation
🦋 Changeset detected. Latest commit: 1683cb9. The changes in this PR will be included in the next version bump.
PR Summary
This PR removes all LLM caching logic from the library level, centralizing caching at the observe level instead. The changes involve removing the entire caching infrastructure including cache classes and related functionality from LLM clients.
- Removes the entire caching infrastructure (`BaseCache`, `ActionCache`, `LLMCache`) and all cache-related files from `lib/cache/`
- Removes caching logic from all LLM clients (OpenAI, Anthropic, Google, Groq, Cerebras, AISDK), including constructor params and cache operations
- Removes cache cleanup code from `StagehandPage` extract/observe operations
- Removes the `enableCaching` parameter from the `LLMProvider` constructor while maintaining the flag at the Stagehand class level
- Preserves core LLM client functionality for API calls, response handling, and error management
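For illustration, a minimal sketch of what a client constructor can look like after the removal, assuming it now only receives a logger, model name, and client options; the names and types below are stand-ins, not the actual Stagehand source:

```ts
// Sketch only: names and types are assumptions, not the actual Stagehand code.
type LogLine = { category?: string; message: string };
type ClientOptions = Record<string, unknown>;

class ExampleClient {
  public modelName: string;
  public clientOptions: ClientOptions;
  private logger: (message: LogLine) => void;

  constructor({
    logger,
    modelName,
    clientOptions = {},
  }: {
    logger: (message: LogLine) => void;
    modelName: string;
    clientOptions?: ClientOptions;
  }) {
    // No `cache` or `enableCaching` parameters anymore: the Stagehand class
    // still exposes its own enableCaching flag, but it is no longer threaded
    // through LLMProvider into the individual clients.
    this.logger = logger;
    this.modelName = modelName;
    this.clientOptions = clientOptions;
  }
}
```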
13 file(s) reviewed, 3 comment(s) | Greptile
    this.modelName = modelName;
    this.clientOptions = clientOptions;
style: Redundant assignment of `modelName` since it's already set in the super constructor
Suggested change:
-    this.modelName = modelName;
     this.clientOptions = clientOptions;
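For context, a simplified sketch of the inheritance pattern behind this suggestion, assuming the base `LLMClient` constructor already stores `modelName` (class shapes here are simplified stand-ins, not the actual source):

```ts
// Illustration of why the subclass assignment is redundant.
abstract class LLMClientSketch {
  public modelName: string;

  constructor(modelName: string) {
    this.modelName = modelName; // assigned once, in the base constructor
  }
}

class OpenAIClientSketch extends LLMClientSketch {
  public clientOptions: Record<string, unknown>;

  constructor(modelName: string, clientOptions: Record<string, unknown> = {}) {
    super(modelName); // modelName is handled by the super constructor
    this.clientOptions = clientOptions; // only the subclass-specific field remains
  }
}
```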
@@ -121,7 +112,7 @@ export class OpenAIClient extends LLMClient {
       throw new StagehandError("Temperature is not supported for o1 models");
     }

-    const { image, requestId, ...optionsWithoutImageAndRequestId } = options;
+    const { requestId, ...optionsWithoutImageAndRequestId } = options;
logic: `image` was removed from the destructuring, but the rest-variable name `optionsWithoutImageAndRequestId` still references it
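One way to resolve the mismatch is to rename the rest variable as well; a small sketch, with the option shape assumed rather than taken from the real client:

```ts
// Sketch of the rename implied by the comment above; the option type is assumed.
type ChatOptions = { requestId: string; [key: string]: unknown };

function splitOptions(options: ChatOptions) {
  // `image` is no longer destructured, so the rest-variable name shouldn't mention it.
  const { requestId, ...optionsWithoutRequestId } = options;
  return { requestId, optionsWithoutRequestId };
}
```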
  public clientOptions: ClientOptions;

  constructor({
    enableCaching = false,
    cache,
    modelName,
    clientOptions,
  }: {
    logger: (message: LogLine) => void;
style: The `logger` parameter is required in the constructor type but is not used
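Two obvious resolutions: drop `logger` from the constructor type, or keep it and actually use it. A sketch of the latter, with names and types assumed rather than taken from the actual code:

```ts
// Sketch: store the required logger so the type requirement is justified.
type LogLine = { category?: string; message: string };

class ClientWithLogger {
  private logger: (message: LogLine) => void;
  public modelName: string;

  constructor({
    logger,
    modelName,
  }: {
    logger: (message: LogLine) => void;
    modelName: string;
  }) {
    this.logger = logger; // stored and used, not just declared in the type
    this.modelName = modelName;
    this.logger({ message: `initialized client for ${modelName}` });
  }
}
```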
why
This needs a refactor -- caching should be done at the `observe` level, not at the LLM inference level. In order to do the refactor though, we should clean up this logic.

what changed
Removed existing LLM caching logic
test plan
Evals + running custom client examples
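For reference, a rough sketch of what observe-level caching could look like once the library-level cache is gone; every name below is hypothetical and not part of this PR:

```ts
// Hypothetical observe-level cache: key on the instruction plus a page
// fingerprint and reuse earlier observe results instead of re-calling the LLM.
type ObserveResult = { selector: string; description: string };

class ObserveCache {
  private store = new Map<string, ObserveResult[]>();

  async getOrCompute(
    instruction: string,
    pageFingerprint: string,
    compute: () => Promise<ObserveResult[]>,
  ): Promise<ObserveResult[]> {
    const key = `${pageFingerprint}::${instruction}`;
    const hit = this.store.get(key);
    if (hit) return hit; // cache hit: skip the LLM call entirely
    const result = await compute();
    this.store.set(key, result);
    return result;
  }
}
```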