Make cache persistent #13

@alexanderkoller

Description

Superopenai currently caches the results of LLM calls only in memory. Is there a way to save the cache on disk for subsequent runs of the program? I don't usually have multiple identical LLM calls within one run of the program, but duplication across multiple program runs is frequent.
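In the meantime, a workaround along these lines might cover the cross-run case. This is a minimal sketch, not superopenai's actual internals: it assumes the cached value is picklable and wraps the LLM call in a stdlib `shelve` store under a hypothetical `llm_cache.db` path, keyed by a hash of the request parameters.

```python
import hashlib
import json
import shelve

CACHE_PATH = "llm_cache.db"  # hypothetical on-disk location

def cache_key(model: str, messages: list) -> str:
    # Serialize the request deterministically and hash it into a stable key,
    # so identical calls across program runs map to the same cache entry.
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_call(model: str, messages: list, call_fn):
    # Look the request up in the on-disk cache; on a miss, perform the real
    # call via call_fn and persist the result for subsequent runs.
    key = cache_key(model, messages)
    with shelve.open(CACHE_PATH) as cache:
        if key in cache:
            return cache[key]
        result = call_fn(model, messages)
        cache[key] = result
        return result
```

Since `shelve` writes through to disk on every store, the cache survives process restarts without any explicit save/load step, which seems to match the "duplication across multiple program runs" case described above.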
