Hi team,
Apologies in advance if this is "too noisy" with so many issues — I'm just excited about the active community around edge deployment on Qualcomm chips.
I found some code referencing the 8397/8797, which have multiple NPUs totaling 320 TOPS (INT8, dense). So we can imagine running larger models on these chipsets.
I have a few questions about this:
- To speed up prefill, can we use two or more NPUs in the prefill stage, similar to tensor parallelism on GPUs? If so, how?
- For MoE models (low compute, lower bandwidth, strong quality), can we run them with ExecuTorch / the QNN SDK?
- Is LoRA / multi-LoRA supported?
- Can weights be streamed from SSD for larger models?
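To clarify what I mean by the first question, here is a toy sketch of tensor-parallel prefill in plain NumPy — no ExecuTorch/QNN APIs, purely illustrative of the column-parallel weight split I have in mind:

```python
# Toy column-parallel split for prefill: one MLP weight matrix is
# split column-wise across two "NPUs" (modeled as two separate
# matmuls), and the partial outputs are concatenated.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 8, 16, 32

x = rng.standard_normal((seq_len, d_model))   # prompt activations
w = rng.standard_normal((d_model, d_ff))      # one MLP weight matrix

# Single-device reference result.
ref = x @ w

# Column-parallel split: each device holds half the output columns.
w0, w1 = np.split(w, 2, axis=1)
out0 = x @ w0   # would run on NPU 0
out1 = x @ w1   # would run on NPU 1
out = np.concatenate([out0, out1], axis=1)

assert np.allclose(out, ref)
```

The question is whether the QNN backend can express this kind of split so both NPUs work on the same prompt concurrently.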
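To make the MoE point concrete (why per-token compute stays low while total parameter count grows), a toy top-1 router in plain NumPy — nothing QNN-specific, just the idea that each token only touches one expert's weights:

```python
# Toy top-1 MoE layer: n_experts expert matrices exist, but each
# token runs through exactly one, so per-token compute is roughly
# 1/n_experts of the equivalent dense layer.
import numpy as np

rng = np.random.default_rng(0)
tokens, d_model, n_experts = 4, 8, 4

x = rng.standard_normal((tokens, d_model))
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

# Route each token to its highest-scoring expert.
logits = x @ router_w
choice = logits.argmax(axis=1)

out = np.empty_like(x)
for t in range(tokens):
    # Only one expert's weights are read per token.
    out[t] = x[t] @ experts[choice[t]]
```

On-device this is attractive because only the selected expert's weights need to move through memory per token, which is why I'm asking whether ExecuTorch/QNN can support it.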
Looking forward to your insights.
cc @cccclai @cbilgin @abhinaykukkadapu @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic