Summary
One of the advantages of using WasmEdge as an LLM inference runtime is that WasmEdge is portable across different CPUs and GPUs, so it is important to extend that portability to more chips.
ARM NPUs are a popular class of AI accelerator that WasmEdge should support.
Details
Support running LLM inference with WasmEdge on ARM NPUs. A sketch of how inference flows through the current WASI-NN API follows below.
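For context, this is roughly how a Wasm guest runs LLM inference through WasmEdge's WASI-NN plugin today. This is a minimal sketch, assuming the `wasmedge-wasi-nn` Rust crate as used in the LlamaEdge examples, with a model preloaded under the name `default` via `--nn-preload`; NPU support would mean wiring a new backend or device target behind this same API.

```rust
// Minimal WASI-NN inference sketch (compile for wasm32-wasi).
// Assumes the host was started with something like:
//   wasmedge --nn-preload default:GGML:AUTO:model.gguf app.wasm
// so that the "default" graph is available from the plugin's cache.
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Load the preloaded GGML (llama.cpp) model. AUTO lets the plugin
    // pick the best available device; an NPU backend would plug in here.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the preloaded model");

    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create an execution context");

    // The GGML backend takes the prompt as a flat byte tensor at input 0.
    let prompt = "Once upon a time";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set the prompt");

    // Run inference and read the generated text back from output 0.
    ctx.compute().expect("inference failed");
    let mut output = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut output).expect("failed to read output");
    println!("{}", String::from_utf8_lossy(&output[..n]));
}
```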
Appendix
No response
I think we can start with the Rockchip RK3588 SoC, which has become popular recently. It supports up to 32 GB of memory, which is enough for LLM inference. There are also many SBC (single-board computer) products available for testing, such as the Radxa ROCK 5B/5C and the Orange Pi 5 Plus.
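One nice property of this plan is that guest code should barely change, since WASI-NN already hides the device behind an execution target. The sketch below is hypothetical: an NPU target does not exist in the wasi-nn spec or the `wasmedge-wasi-nn` crate today, and RK3588 support would presumably be implemented host-side in the plugin (e.g., against Rockchip's RKNN SDK).

```rust
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding};

fn main() {
    // Today a guest asks for AUTO (or CPU/GPU/TPU). If an NPU target were
    // added to the spec and crate (a hypothetical ExecutionTarget::NPU),
    // this one argument would be the only guest-visible change; device
    // selection and the RKNN integration would live in the host plugin.
    let _graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the preloaded model");
}
```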