Gemma 3n E4B IT cannot run with LlmInference.Backend.GPU on Pixel 7 #2

@nmrenyi

Description

It doesn't work with the GPU backend. The app gets stuck on the LLM loading page and then crashes:

private val mediaPipeLanguageModelOptions: LlmInferenceOptions =
    LlmInferenceOptions.builder().setModelPath(
        baseFolder + GEMMA_MODEL
    ).setPreferredBackend(LlmInference.Backend.GPU).setMaxTokens(4096).build()

With the CPU backend, however, it works normally:

private val mediaPipeLanguageModelOptions: LlmInferenceOptions =
    LlmInferenceOptions.builder().setModelPath(
        baseFolder + GEMMA_MODEL
    ).setPreferredBackend(LlmInference.Backend.CPU).setMaxTokens(4096).build()
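
Since the same options work on CPU but not GPU, one workaround is to attempt engine creation with the GPU backend and fall back to CPU if it fails. This is a minimal sketch, assuming the MediaPipe `tasks-genai` API shown above (`LlmInferenceOptions`, `LlmInference.createFromOptions`); the `createLlmWithFallback` helper is hypothetical, and a hard crash in native GPU code may not be catchable this way:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInference.LlmInferenceOptions

// Hypothetical helper: try the preferred backends in order, returning the
// first engine that initializes. If GPU creation throws (rather than
// crashing the process), we retry with CPU.
fun createLlmWithFallback(context: Context, modelPath: String): LlmInference {
    val backends = listOf(LlmInference.Backend.GPU, LlmInference.Backend.CPU)
    for (backend in backends) {
        try {
            val options = LlmInferenceOptions.builder()
                .setModelPath(modelPath)
                .setPreferredBackend(backend)
                .setMaxTokens(4096)
                .build()
            return LlmInference.createFromOptions(context, options)
        } catch (e: Exception) {
            // Backend failed to initialize; try the next one.
        }
    }
    error("No backend could load the model at $modelPath")
}
```

Note that if the GPU failure manifests as a native crash rather than a Java/Kotlin exception, the only robust option is to select `LlmInference.Backend.CPU` up front on affected devices.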

Metadata

Labels

wontfix — This will not be worked on
