Genai LLMInference: Failed to load GPU model with the error "Failed to build program executable - Out of host memoryPass" #5406
Labels
- platform:android (Issues with Android as Platform)
- stat:awaiting response (Waiting for user response)
- task:LLM inference (Issues related to MediaPipe LLM Inference Gen AI setup)
- type:bug (Bug in the Source Code of MediaPipe Solution)
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)?
None
OS Platform and Distribution
Android 14
Mobile device if the issue happens on mobile device
Android Mobile device
Browser and version if the issue happens on browser
No response
Programming Language and version
Kotlin
MediaPipe version
0.10.14
Bazel version
No response
Solution
LLMInference
Android Studio, NDK, SDK versions (if issue is related to building in Android environment)
No response
Xcode & Tulsi version (if issue is related to building for iOS)
No response
Describe the actual behavior
Initialization of LlmInference fails to load GPU models with the latest Maven package (0.10.14), reporting the error "Failed to build program executable - Out of host memoryPass".
Describe the expected behaviour
The app should initialize LLMInference with the GPU model and retrieve information successfully.
Standalone code/steps you may have used to try to get what you need
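No repro code was attached; for reference, here is a minimal initialization sketch that exercises this path on 0.10.14, following the MediaPipe LLM Inference documentation. The model path, prompt, and generation parameters are illustrative assumptions, not the reporter's actual values.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun initLlm(context: Context) {
    // Assumed location of the converted GPU model (the path used in the
    // MediaPipe docs); substitute wherever the model was actually pushed.
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.bin")
        .setMaxTokens(512)
        .setTopK(40)
        .setTemperature(0.8f)
        .setRandomSeed(0)
        .build()

    // On 0.10.14 this call is where initialization reportedly fails with
    // "Failed to build program executable - Out of host memoryPass".
    val llmInference = LlmInference.createFromOptions(context, options)

    val response = llmInference.generateResponse("What is MediaPipe?")
    println(response)
}
```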
Other info / Complete Logs