Conversation
@comhar comhar commented Sep 22, 2025

fixes #14

LiteLLM ModelResponse objects have id and created fields that are generated dynamically. Even when we use cachy to cache the LLM response, these dynamic fields create diffs, which makes code review more challenging.

This PR patches ModelResponseBase so that id and created always have fixed values.

@patch
def __init__(self: ModelResponseBase, id=None, created=None, *args, **kwargs):
    # Ignore the dynamically generated values and pin both fields.
    self._orig___init__(id='chatcmpl-xxx', created=1000000000, *args, **kwargs)

@patch
def __setattr__(self: ModelResponseBase, name, value):
    # Intercept any later writes so the fields stay fixed.
    if name == 'id': value = 'chatcmpl-xxx'
    elif name == 'created': value = 1000000000
    self._orig___setattr__(name, value)

@comhar comhar self-assigned this Sep 22, 2025
@comhar comhar added the enhancement New feature or request label Sep 22, 2025
comhar commented Sep 22, 2025
ModelResponse fields like id and created are included in the LLM prompt. Because this PR changes the values of those fields, we're seeing new items added to cachy. If we end up merging this PR, we should create a follow-up PR that resets the cache and removes the stale cache items. The easiest way to do this would be to delete cachy.jsonl and re-run the notebook.

comhar commented Sep 22, 2025

@RensDimmendaal here's an alternative approach to making LiteLLM ModelResponse objects deterministic. wdyt? Would you mind trying it out on your machine and confirming that no diff is generated on your end?

comhar commented Sep 22, 2025


Actually hold off on the review for now, looks like litellm.types.utils.time.time is not as isolated as I first thought 😞

@comhar comhar removed the request for review from RensDimmendaal September 22, 2025 11:58
@comhar comhar marked this pull request as draft September 22, 2025 11:58
@comhar comhar force-pushed the feat/14-make-modelresponse-objects-deterministic branch from f9013bd to ee8a043 on September 22, 2025 13:49
@comhar comhar marked this pull request as ready for review September 22, 2025 13:51
comhar commented Sep 22, 2025

@RensDimmendaal I think I found a clean enough solution now that seems to do the job.

@patch
def __init__(self: ModelResponseBase, id=None, created=None, *args, **kwargs):
    # Ignore the dynamically generated values and pin both fields.
    self._orig___init__(id='chatcmpl-xxx', created=1000000000, *args, **kwargs)

@patch
def __setattr__(self: ModelResponseBase, name, value):
    # Intercept any later writes so the fields stay fixed.
    if name == 'id': value = 'chatcmpl-xxx'
    elif name == 'created': value = 1000000000
    self._orig___setattr__(name, value)
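For anyone trying this out: here's a minimal, self-contained sketch of the same `__setattr__` interception idea, using plain monkey-patching on a hypothetical stand-in class instead of fastcore's `@patch` on litellm's ModelResponseBase, so it runs without either dependency. `FakeResponse` is an assumption standing in for the real class.

```python
import time

# Stand-in for litellm's ModelResponseBase: every construction
# produces a fresh id and timestamp, which is what creates cache diffs.
class FakeResponse:
    def __init__(self, id=None, created=None):
        self.id = id or f"chatcmpl-{time.time_ns()}"
        self.created = created or int(time.time())

# Intercept attribute writes: any assignment to `id` or `created`
# is rewritten to a fixed value, so instances serialize identically
# across runs regardless of what the constructor tries to set.
_orig_setattr = FakeResponse.__setattr__

def _fixed_setattr(self, name, value):
    if name == 'id': value = 'chatcmpl-xxx'
    elif name == 'created': value = 1000000000
    _orig_setattr(self, name, value)

FakeResponse.__setattr__ = _fixed_setattr

r1, r2 = FakeResponse(), FakeResponse()
print(r1.id, r1.created)  # chatcmpl-xxx 1000000000
assert (r1.id, r1.created) == (r2.id, r2.created)
```

Because the interception happens in `__setattr__`, it also covers later mutation (e.g. `r1.id = 'something-else'` still ends up as `'chatcmpl-xxx'`), which is why patching only `__init__` isn't enough on its own.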

@jph00 jph00 merged commit 12990ea into main Sep 22, 2025
jph00 commented Sep 22, 2025

Well spotted! :)

Labels: enhancement

Merging this pull request closed the issue: make ModelResponse objects deterministic