This repository was archived by the owner on Jul 4, 2025. It is now read-only.

epic: llama.cpp should support SYCL for Intel-based CPUs #1252

@dan-menlo

Description

Goal

  • llama.cpp should support SYCL for Intel-based CPUs

Tasklist

  • What should the hardware heuristic be for opting into LLVM?
  • Should the Installer support this?
  • This may require cortex engines to be able to define a "default" llama.cpp version (if two or more exist)
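One possible shape for the hardware heuristic above is to detect the CPU vendor and prefer the SYCL build only on Intel hardware. The sketch below is purely illustrative: the variant names `"sycl"` and `"default"`, and the function names, are assumptions for discussion, not actual cortex engine identifiers.

```python
# Hypothetical sketch of a hardware heuristic for picking a llama.cpp
# build variant. Variant names ("sycl", "default") are illustrative only.
import platform


def cpu_vendor() -> str:
    """Best-effort CPU vendor detection: /proc/cpuinfo on Linux,
    falling back to platform.processor() elsewhere."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("vendor_id"):
                    return line.split(":", 1)[1].strip()
    except OSError:
        pass
    return platform.processor() or ""


def pick_llamacpp_variant() -> str:
    """Prefer the SYCL build on Intel CPUs, otherwise fall back to
    a "default" build (relevant when two or more versions exist)."""
    vendor = cpu_vendor()
    if "GenuineIntel" in vendor or "Intel" in vendor:
        return "sycl"
    return "default"
```

A real implementation would likely also need to probe for a working SYCL runtime (e.g. an installed oneAPI toolchain), not just the CPU vendor, before the installer commits to that variant.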
