
Conversation


@LRL2-ModelCloud LRL2-ModelCloud commented Aug 19, 2025

Fixes #1686

@Qubitium

Notes: Some models, likely due to broken logic, do not fall back to the model config's use_cache during inference when the argument is not passed. Normally, model.config.use_cache should be used as the default for inference unless overridden at call time.
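A minimal sketch of the fallback pattern described above (not the actual PR diff; `DummyConfig` and `DummyModel` are stand-ins for illustration, assuming the usual transformers convention where `use_cache=None` means "defer to the config"):

```python
class DummyConfig:
    """Stand-in for a transformers PretrainedConfig."""
    def __init__(self, use_cache=True):
        self.use_cache = use_cache


class DummyModel:
    """Stand-in model showing the config fallback for use_cache."""
    def __init__(self, config):
        self.config = config

    def forward(self, use_cache=None):
        # Correct behavior: when the caller omits use_cache, fall back to
        # the value on the model config instead of treating None as False.
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return use_cache


model = DummyModel(DummyConfig(use_cache=True))
print(model.forward())                  # omitted -> falls back to config: True
print(model.forward(use_cache=False))   # explicit argument overrides config: False
```

The key detail is the explicit `is not None` check: a plain `use_cache or self.config.use_cache` would silently discard an intentional `use_cache=False` override.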

@Qubitium Qubitium merged commit 558449b into ModelCloud:main Aug 19, 2025
1 check passed
@Qubitium Qubitium changed the title some models require explicitly specifying the value of use_cache Model config.use_cache not correctly used during inference for some models Aug 19, 2025
