Does llama.cpp support multi-node, multi-GPU deployment? #11865

@Tian14267

Description

I have two machines with 8 × A800 GPUs each (2 × 8 × A800 in total), and I want to deploy a GGUF model across both machines.
Does llama.cpp support multi-node, multi-GPU deployment?
If so, how can I do this?
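
For reference, llama.cpp ships an RPC backend (see `examples/rpc` in the repo) that lets a main host offload layers to `rpc-server` processes running on other machines. Below is a minimal sketch, assuming a CUDA build with `GGML_RPC` enabled; the IP addresses and port (50052) are placeholders for your own network, and exact flags may differ across llama.cpp versions:

```sh
# On every node: build llama.cpp with CUDA and the RPC backend enabled
cmake -B build -DGGML_CUDA=ON -DGGML_RPC=ON
cmake --build build --config Release

# On each worker node: start an RPC server.
# To dedicate one server per GPU, pin devices with CUDA_VISIBLE_DEVICES
# and run several rpc-server processes on different ports.
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main node: list the workers and offload all layers (-ngl 99).
# 192.168.1.2 / 192.168.1.3 are placeholder worker addresses.
./build/bin/llama-cli -m model.gguf -ngl 99 \
    --rpc 192.168.1.2:50052,192.168.1.3:50052 \
    -p "Hello"
```

One caveat: the RPC backend moves tensor data over the network, so throughput is bounded by the interconnect between nodes. Within a single machine, plain multi-GPU offload (`-ngl` plus `--tensor-split`) will generally be faster than RPC.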
