ADFA-3016 | Queue model loading while inference engine initializes #991
Conversation
📝 Walkthrough

Release Notes

Features

- Stores the selected model path temporarily if the engine is not ready, processing it upon successful initialization.

Implementation Details
| Cohort / File(s) | Summary |
|---|---|
| Engine initialization state management `app/src/main/java/com/itsaky/androidide/agent/viewmodel/AiSettingsViewModel.kt` | Added `pendingModelUri` field to queue model load requests when the engine is not ready. Modified `loadModelFromUri` to detect the uninitialized/initializing engine state and defer the request. After engine initialization, queued model URIs are processed automatically. |
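The queue-and-drain flow in the summary can be sketched in plain Kotlin. This is a minimal model, not the PR's code: the real `AiSettingsViewModel` is an Android `ViewModel` exposing `StateFlow`s, while the plain `var` fields, the `loaded` list, and the `Idle`/`Loaded` states here are illustrative assumptions. The names `EngineState`, `ModelLoadingState`, `pendingModelUri`, and `loadModelFromUri` follow the summary.

```kotlin
sealed class EngineState {
    object Uninitialized : EngineState()
    object Initializing : EngineState()
    object Ready : EngineState()
    data class Error(val message: String) : EngineState()
}

sealed class ModelLoadingState {
    object Idle : ModelLoadingState()
    object Loading : ModelLoadingState()
    object Loaded : ModelLoadingState()
}

class ModelLoader {
    var engineState: EngineState = EngineState.Uninitialized
        private set
    var modelLoadingState: ModelLoadingState = ModelLoadingState.Idle
        private set
    private var pendingModelUri: String? = null

    // Records actual load calls, purely for illustration.
    val loaded = mutableListOf<String>()

    fun loadModelFromUri(path: String) {
        // Engine not ready yet: queue the path and surface a loading state.
        if (engineState is EngineState.Uninitialized ||
            engineState is EngineState.Initializing
        ) {
            pendingModelUri = path // last write wins
            modelLoadingState = ModelLoadingState.Loading
            return
        }
        doLoad(path)
    }

    fun onEngineInitialized(success: Boolean) {
        if (success) {
            engineState = EngineState.Ready
            // Drain the queued request, if any.
            pendingModelUri?.let { queued ->
                pendingModelUri = null
                doLoad(queued)
            }
        } else {
            engineState = EngineState.Error("engine init failed")
        }
    }

    private fun doLoad(path: String) {
        loaded += path
        modelLoadingState = ModelLoadingState.Loaded
    }
}
```

Selecting a model before initialization completes leaves the request parked in `pendingModelUri`; the successful `onEngineInitialized` callback then performs the deferred load exactly once.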
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~20 minutes
Poem
🐰 A model waits in queue so neat,
While engines start their init beat,
No rush, no crash—just poised and cool,
When ready comes, we load our fuel! 🚀
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately describes the main change: introducing a queuing mechanism for model loading during engine initialization. |
| Description check | ✅ Passed | The description is well-related to the changeset, explaining the queuing mechanism, its purpose, implementation details, and the race condition it addresses. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%. |
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/main/java/com/itsaky/androidide/agent/viewmodel/AiSettingsViewModel.kt (1)
70-82: ⚠️ Potential issue | 🟠 Major

Engine-error path leaves `_modelLoadingState` stuck in `Loading` with `pendingModelUri` un-cleared.

When `success == false` (lines 79-82), `_engineState` is updated to `Error`, but:

- `pendingModelUri` is never nulled out.
- `_modelLoadingState`, which was set to `Loading` at line 95 when the path was queued, is never updated to an error state.

The result is a permanent loading spinner with no error message for the model-load operation, even though the engine has definitively failed. Additionally, a stale `pendingModelUri` could interfere if initialization is ever retried.

🐛 Proposed fix — clear pending state in the error branch

```diff
 } else {
     _engineState.value = EngineState.Error("Failed to load inference library. Please ensure it's installed.")
+    pendingModelUri?.let {
+        pendingModelUri = null
+        _modelLoadingState.value = ModelLoadingState.Error("Inference engine initialization failed.")
+    }
     Log.e("AiSettingsViewModel", "LLM Inference Engine initialization failed.")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/com/itsaky/androidide/agent/viewmodel/AiSettingsViewModel.kt` around lines 70 - 82, The engine-error branch in AiSettingsViewModel currently sets _engineState to EngineState.Error but fails to clear pendingModelUri and update _modelLoadingState, leaving the UI stuck in Loading; modify the failure branch (where success == false) to set pendingModelUri = null and update _modelLoadingState to a terminal error state (e.g., ModelLoadingState.Error with an explanatory message) alongside the existing _engineState update and Log.e call so the UI shows an error and the queued path is cleared; reference the symbols _engineState, pendingModelUri, _modelLoadingState, EngineState.Error, ModelLoadingState (or whatever concrete error enum/class is used) and the AiSettingsViewModel initialization/error handling block to locate the change.
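The behaviour of the proposed fix can be shown with a self-contained sketch. Plain `var` fields stand in for the ViewModel's `StateFlow`s, and the class name `InitResultHandler`, the callback parameter, and the message strings are illustrative; only `EngineState`, `ModelLoadingState`, and `pendingModelUri` come from the review above.

```kotlin
sealed class EngineState {
    object Ready : EngineState()
    data class Error(val message: String) : EngineState()
}

sealed class ModelLoadingState {
    object Loading : ModelLoadingState()
    object Loaded : ModelLoadingState()
    data class Error(val message: String) : ModelLoadingState()
}

class InitResultHandler {
    var engineState: EngineState? = null
    // A model was queued during init, so the UI currently shows Loading.
    var modelLoadingState: ModelLoadingState = ModelLoadingState.Loading
    var pendingModelUri: String? = "content://picked/model"

    fun onInitFinished(success: Boolean, load: (String) -> Unit) {
        if (success) {
            engineState = EngineState.Ready
            pendingModelUri?.let { queued ->
                pendingModelUri = null
                load(queued) // drain the queue
            }
        } else {
            engineState = EngineState.Error("Failed to load inference library.")
            // The fix: clear the queued URI and move the model-loading state
            // to a terminal error, so the UI does not spin forever.
            pendingModelUri?.let {
                pendingModelUri = null
                modelLoadingState =
                    ModelLoadingState.Error("Inference engine initialization failed.")
            }
        }
    }
}
```

With the fix in place, a failed initialization leaves no queued URI behind and the loading state ends in an error rather than remaining `Loading` indefinitely.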
🧹 Nitpick comments (1)
app/src/main/java/com/itsaky/androidide/agent/viewmodel/AiSettingsViewModel.kt (1)
91-97: `ModelLoadingState.Loading` at line 95 is slightly misleading; also note silent last-write-wins for multiple rapid selections.

Two minor observations on the queuing guard:

- Setting `_modelLoadingState.value = ModelLoadingState.Loading` (line 95) when the model is only queued (not yet being copied/loaded) may cause the UI to show a loading indicator that begins before any real I/O starts. Consider whether a distinct `Queued` state, or a guarded state (e.g., not setting `Loading` until the deferred `loadModelFromUri` actually starts), is more accurate, though the current behaviour is tolerable.
- If the user selects a second (or third) model URI while the engine is still initialising, `pendingModelUri` is silently overwritten (line 94). Only the last selection is processed; earlier ones are discarded without any notification. This is a reasonable "last write wins" policy, but is worth a brief comment for future maintainers.

📝 Suggested documentation comment

```diff
 if (currentState is EngineState.Uninitialized || currentState is EngineState.Initializing) {
+    // Only the most recent selection is retained; earlier calls while initializing are discarded.
     pendingModelUri = path
     _modelLoadingState.value = ModelLoadingState.Loading
     return
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/com/itsaky/androidide/agent/viewmodel/AiSettingsViewModel.kt` around lines 91 - 97, The current guard sets _modelLoadingState.value = ModelLoadingState.Loading and overwrites pendingModelUri when the engine is still initializing; change this by either (a) introducing a distinct queued state (e.g., ModelLoadingState.Queued) or deferring setting ModelLoadingState.Loading until loadModelFromUri actually begins, and add a short comment on the last-write-wins behavior; specifically update the logic around _engineState checks (EngineState.Uninitialized / EngineState.Initializing), pendingModelUri, and where _modelLoadingState is assigned so that the UI only shows Loading when loadModelFromUri starts, and document that subsequent selections while initializing overwrite pendingModelUri intentionally.
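The first suggestion (a distinct queued state) can be sketched as follows. `ModelLoadingState.Queued` is hypothetical and does not exist in the PR, which reuses `Loading`; the `SelectionGuard` class and `engineReady` flag are likewise illustrative stand-ins for the ViewModel's engine-state check.

```kotlin
sealed class ModelLoadingState {
    object Idle : ModelLoadingState()
    object Queued : ModelLoadingState() // hypothetical state, not in the PR
    object Loading : ModelLoadingState()
}

class SelectionGuard {
    var engineReady = false
    var pendingModelUri: String? = null
    var modelLoadingState: ModelLoadingState = ModelLoadingState.Idle

    fun onModelSelected(path: String) {
        if (!engineReady) {
            // Last write wins: a later selection silently replaces an earlier one.
            pendingModelUri = path
            // Queued (not Loading) tells the UI no I/O has started yet.
            modelLoadingState = ModelLoadingState.Queued
            return
        }
        modelLoadingState = ModelLoadingState.Loading
    }
}
```

This lets the UI distinguish "waiting for the engine" from "actually copying/loading the model", while keeping the same last-write-wins queue.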
Description

This PR introduces a queuing mechanism for model loading in the `AiSettingsViewModel`. If a user selects a model URI while the LLM inference engine is still initializing (which often happens on slower devices due to process death during the file picker intent), the path is temporarily stored in `pendingModelUri`. Once the engine completes its initialization successfully, the queued model is automatically loaded.

Details

Logic-related changes:

- `_modelLoadingState` transitions to `Loading` and `pendingModelUri` is set if `_engineState` is `Uninitialized` or `Initializing`.
- After successful initialization, `loadModelFromUri` is called with the queued path, and `pendingModelUri` is cleared.

Before changes
Screen.Recording.2026-02-18.at.1.32.36.PM.mov
After changes
Screen.Recording.2026-02-18.at.3.12.06.PM.mov
Ticket
ADFA-3016
Observation
This fix resolves the race condition caused by Android's aggressive memory management, where the ViewModel is recreated and the engine begins initialization at the exact moment the file picker returns the selected model URI.