ggml-qnn: bug free and update comments according to the refined ggml-backend-subsystem (#217)
Showing 4 changed files with 24 additions and 50 deletions.
0d05e7e
The existing "Backend Sched" feature is intended for scenarios where a backend needs to use/operate on device (CPU/GPU) memory directly, for example the Intel SYCL backend.
Any existing or new backend that only needs to use/operate on system memory can follow the style in ggml-qnn.cpp, in line with the proposed refinement of the ggml backend subsystem (although that PR was not accepted by the maintainer of the ggml backend subsystem).
In fact, the existing "Backend Sched" feature is heavily used throughout llama.cpp for various complex scenarios.
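
For context, a minimal sketch of how a compute graph is typically dispatched through the "Backend Sched" feature when a device backend manages its own (non-system) memory. This is illustrative only and not taken from the commit; exact `ggml_backend_sched_new` parameters vary between ggml versions, and the helper `run_graph_with_sched` is a hypothetical name.

```c
// Illustrative sketch, assuming the public ggml-backend API; exact
// signatures differ across ggml versions.
#include "ggml.h"
#include "ggml-backend.h"

static void run_graph_with_sched(struct ggml_cgraph * graph,
                                 ggml_backend_t device_backend,  // e.g. a SYCL/GPU backend
                                 ggml_backend_t cpu_backend) {
    // Backends are listed in priority order; the CPU backend is the fallback.
    ggml_backend_t backends[2] = { device_backend, cpu_backend };

    // The scheduler splits the graph across backends and copies tensors
    // between device buffers and host buffers as needed.
    ggml_backend_sched_t sched =
        ggml_backend_sched_new(backends, /* bufts */ NULL, /* n_backends */ 2,
                               /* graph_size */ GGML_DEFAULT_GRAPH_SIZE,
                               /* parallel   */ false);

    ggml_backend_sched_graph_compute(sched, graph);
    ggml_backend_sched_free(sched);
}
```

A backend that only touches system memory (the ggml-qnn.cpp style described above) can skip this scheduling layer, since no device/host tensor copies are required.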