
I want to use the NPU with Phi4 mini for inference. #404

Open

Description

@qihui-liu

My PC has a Qualcomm processor, and I need to run Phi4 mini for inference; using QNN (the NPU) is a mandatory requirement.
I found an example that runs Phi4 mini on the CPU, but what I need is the NPU.
Can you provide an example?
I did find a parameter (provider) that accepts "QNN". I forced provider to "qnn", but the model still appears to run on the CPU, so the setting seems to have no effect.
Do I need to convert the model again?

cancellationToken.ThrowIfCancellationRequested();
var config = new Config(modelDir);
// Originally the provider came from a parameter:
//if (!string.IsNullOrEmpty(provider))
//{
//    config.AppendProvider(provider);
//}
// Forcing QNN here, but inference still appears to run on the CPU.
config.AppendProvider("qnn");
chatClient = new OnnxRuntimeGenAIChatClient(config, true, options);
cancellationToken.ThrowIfCancellationRequested();
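
For reference, a minimal sketch of configuring the provider more explicitly. This assumes the Microsoft.ML.OnnxRuntimeGenAI Config API exposes ClearProviders, AppendProvider, and SetProviderOption, that modelDir contains a model actually exported for QNN (a CPU-exported model will not run on the NPU just by switching providers), and that "QnnHtp.dll" is the correct backend library name; all of those are assumptions, not confirmed behavior:

// Hedged sketch: explicitly select the QNN execution provider.
// Assumes the model in modelDir was converted/exported for QNN;
// if it was exported for CPU, it must be re-converted first.
using Microsoft.ML.OnnxRuntimeGenAI;

var config = new Config(modelDir);
config.ClearProviders();          // drop any providers listed in genai_config.json
config.AppendProvider("qnn");     // request the QNN (NPU) execution provider
// "QnnHtp.dll" targets the Hexagon HTP backend; the option name and
// DLL name here are assumptions based on the QNN EP in ONNX Runtime.
config.SetProviderOption("qnn", "backend_path", "QnnHtp.dll");

using var model = new Model(config);

If the provider request is silently ignored, checking the "provider_options" section of the model's genai_config.json may show which execution provider the model was exported for.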
