
Remove caching logic from lib/ #779

Open

kamath wants to merge 2 commits into main

Conversation

@kamath kamath (Member) commented May 28, 2025

why

This needs a refactor -- caching should be done at the observe level, not at the LLM inference level. Before we can do that refactor, though, we should clean up the existing caching logic.

what changed

Removed existing LLM caching logic

test plan

Evals + running custom client examples
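
To make the intended direction concrete, below is a rough sketch of the difference between the two caching layers. All names and signatures here are hypothetical illustrations, not the planned implementation:

```typescript
// Hypothetical sketch of the two caching layers -- not the planned implementation.
type ObserveResult = { selector: string; description: string };

// LLM-inference-level caching (what this PR removes): every client call checks a
// cache keyed on the raw prompt before hitting the model.
async function createChatCompletionWithCache(
  cache: Map<string, string>,
  prompt: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return hit;
  const response = await callModel(prompt);
  cache.set(prompt, response);
  return response;
}

// Observe-level caching (the direction described under "why"): cache the final
// observe() results keyed on the instruction plus page state, so the LLM clients
// stay simple and cache-free.
async function observeWithCache(
  cache: Map<string, ObserveResult[]>,
  cacheKey: string,
  observe: () => Promise<ObserveResult[]>,
): Promise<ObserveResult[]> {
  const hit = cache.get(cacheKey);
  if (hit !== undefined) return hit;
  const results = await observe();
  cache.set(cacheKey, results);
  return results;
}
```

The point of the refactor is that once caching wraps observe() results, the individual LLM clients no longer need any cache plumbing of their own.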


changeset-bot bot commented May 28, 2025

🦋 Changeset detected

Latest commit: 1683cb9

The changes in this PR will be included in the next version bump.

@kamath kamath requested a review from seanmcguire12 May 28, 2025 23:21
@kamath kamath marked this pull request as ready for review May 29, 2025 00:54
@greptile-apps greptile-apps bot left a comment

PR Summary

This PR removes all LLM caching logic from the library level so that caching can be centralized at the observe level in a follow-up. The changes remove the entire caching infrastructure, including cache classes and related functionality, from the LLM clients.

  • Removes entire caching infrastructure (BaseCache, ActionCache, LLMCache) and all cache-related files from lib/cache/
  • Removes caching logic from all LLM clients (OpenAI, Anthropic, Google, Groq, Cerebras, AISDK) including constructor params and cache operations
  • Removes cache cleanup code from StagehandPage extract/observe operations
  • Removes enableCaching parameter from LLMProvider constructor while maintaining flag at Stagehand class level
  • Preserves core LLM client functionality for API calls, response handling and error management

13 file(s) reviewed, 3 comment(s)
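
As an aside on the fourth bullet above, here is a minimal before/after sketch of what removing enableCaching from the provider constructor looks like. The signatures are hypothetical, not the actual Stagehand source:

```typescript
// Hypothetical before/after sketch of the constructor change described in the
// fourth bullet -- signatures are illustrative, not the actual Stagehand source.
type LogLine = { message: string; level?: number };

// Before: the provider received the caching flag directly.
class LLMProviderBefore {
  public enableCaching: boolean;

  constructor(logger: (line: LogLine) => void, enableCaching: boolean) {
    this.enableCaching = enableCaching;
    logger({ message: "provider created with caching flag", level: 2 });
  }
}

// After: the provider no longer knows about caching; the flag stays on the
// Stagehand class, where a later refactor can apply it at the observe level.
class LLMProviderAfter {
  constructor(logger: (line: LogLine) => void) {
    logger({ message: "provider created (no caching concerns)", level: 2 });
  }
}
```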

Comment on lines 39 to 40
this.modelName = modelName;
this.clientOptions = clientOptions;
style: Redundant assignment of modelName since it's already set in super constructor

Suggested change:

-  this.modelName = modelName;
-  this.clientOptions = clientOptions;
+  this.clientOptions = clientOptions;
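
For context on why the assignment is redundant: assuming the base LLMClient constructor already stores modelName (a hypothetical sketch, not the actual source), the subclass only needs to assign its own fields:

```typescript
// Illustrative class shapes only -- not the actual Stagehand source.
abstract class LLMClient {
  public modelName: string;

  constructor(modelName: string) {
    this.modelName = modelName; // already stored by the base class
  }
}

class ExampleClient extends LLMClient {
  public clientOptions: Record<string, unknown>;

  constructor(modelName: string, clientOptions: Record<string, unknown>) {
    super(modelName);
    // this.modelName = modelName;  // redundant -- super() already set it
    this.clientOptions = clientOptions; // only the subclass-specific field needs assigning
  }
}
```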

@@ -121,7 +112,7 @@ export class OpenAIClient extends LLMClient {
       throw new StagehandError("Temperature is not supported for o1 models");
     }

-    const { image, requestId, ...optionsWithoutImageAndRequestId } = options;
+    const { requestId, ...optionsWithoutImageAndRequestId } = options;
logic: The variable name optionsWithoutImageAndRequestId still refers to image, even though image was removed from the destructuring
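
One illustrative fix (not a change made in this PR) would be to rename the rest variable so it no longer mentions image:

```typescript
// Illustrative rename only (not part of this PR): since `image` is no longer
// pulled out of options, the rest variable can drop "Image" from its name.
const options = { requestId: "req_123", temperature: 0 }; // example shape
const { requestId, ...optionsWithoutRequestId } = options;
console.log(requestId, optionsWithoutRequestId);
```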

  public clientOptions: ClientOptions;

  constructor({
    enableCaching = false,
    cache,
    modelName,
    clientOptions,
  }: {
    logger: (message: LogLine) => void;
style: The logger parameter is required in the constructor type but not used
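
Two illustrative ways to resolve this, sketched with hypothetical types rather than the actual Stagehand definitions: either accept the logger and use it, or drop it from the parameter type entirely:

```typescript
// Illustrative sketch with hypothetical types -- not the actual Stagehand source.
type LogLine = { category?: string; message: string; level?: number };

class ExampleClient {
  public clientOptions: Record<string, unknown>;

  constructor({
    logger,
    modelName,
    clientOptions,
  }: {
    // Option A: keep logger in the type, but actually destructure and call it.
    logger: (message: LogLine) => void;
    modelName: string;
    clientOptions: Record<string, unknown>;
  }) {
    this.clientOptions = clientOptions;
    logger({ category: "llm", message: `initialized client for ${modelName}`, level: 1 });
  }
}

// Option B: if the client never logs, remove `logger` from the constructor's
// parameter type instead of leaving it required but unused.
```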
