Currently, all AI upstream services are simulated using this fake server method. #13340

@shreemaan-abhishek

Description

Currently, all AI upstream services are simulated using this fake server method.
I'm worried that the gap between the fake server's behavior and a real LLM request is too large.

Should we introduce a dedicated container for an LLM fake server?

Originally posted by @membphis in #13307 (comment)
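To make the proposal concrete, a dedicated fake-LLM container could run a small standalone HTTP service that speaks an OpenAI-compatible chat-completions shape, which tests point their AI upstream at. The sketch below is a hypothetical illustration only (the path `/v1/chat/completions` and the canned response fields are assumptions, not APISIX's actual test fixtures); it starts the fake server on an ephemeral port and issues one request against it.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakeLLMHandler(BaseHTTPRequestHandler):
    """Serve a canned OpenAI-style chat completion for any POST request."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # Echo the requested model back and return a fixed completion so
        # proxy tests get a stable, predictable response body.
        body = json.dumps({
            "id": "chatcmpl-fake",
            "object": "chat.completion",
            "model": request.get("model", "fake-model"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "fake response"},
                "finish_reason": "stop",
            }],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep test output quiet


def start_fake_llm():
    """Start the fake server on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", 0), FakeLLMHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]


if __name__ == "__main__":
    server, port = start_fake_llm()
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        data=json.dumps({"model": "gpt-test"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    resp = json.loads(urllib.request.urlopen(req).read())
    print(resp["choices"][0]["message"]["content"])
    server.shutdown()
```

Packaging something like this in its own container image (rather than spinning it up inline in each test) would keep the fake's behavior versioned and shared across test suites, and would make it easier to evolve the fake toward real LLM behaviors such as streaming responses.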
