feat(core): in-process mistral.rs local runtime for qwen3.5-1b (#46)
Merged
Conversation
- Introduced a new Local Model panel for managing the local AI runtime, including status monitoring and download options.
- Updated settings navigation to include the 'local-model' route.
- Enhanced the AIPanel and Home components to integrate local model management features.
- Improved error handling in Tauri commands for local AI operations.
- Updated subproject reference in skills.
- Improved error handling mechanisms in the Local Model panel to provide clearer feedback during AI runtime operations.
- Updated relevant components to ensure consistent error reporting and user notifications.
- Refactored code to streamline error management processes across local model features.
- Added CoreRunMode enum to support in-process and child-process execution modes.
- Updated CoreProcessHandle to manage tasks and child processes based on the selected run mode.
- Enhanced process spawning logic to differentiate between in-process server and dedicated core binary execution.
- Improved error handling for process readiness checks in both run modes.
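A minimal sketch of what the run-mode split described above might look like. `CoreRunMode` is named in the commit, but the variant names, the `from_flag` helper, and its flag values are assumptions for illustration, not the PR's actual implementation:

```rust
/// Hypothetical shape of the run-mode enum named in the commit message.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CoreRunMode {
    /// Run the core server on a task inside the host (Tauri) process.
    InProcess,
    /// Spawn the dedicated core binary as a separate child process.
    ChildProcess,
}

impl CoreRunMode {
    /// Pick a mode from an environment-style flag; default to in-process.
    /// (Flag name and values are assumptions.)
    fn from_flag(flag: Option<&str>) -> Self {
        match flag {
            Some("child") => CoreRunMode::ChildProcess,
            _ => CoreRunMode::InProcess,
        }
    }
}

fn main() {
    // In-process is the default; the child-process path must be opted into.
    assert_eq!(CoreRunMode::from_flag(None), CoreRunMode::InProcess);
    assert_eq!(CoreRunMode::from_flag(Some("child")), CoreRunMode::ChildProcess);
    println!("default mode: {:?}", CoreRunMode::from_flag(None));
}
```

Keeping the choice in a single enum lets the spawning logic and the readiness checks branch on one value rather than scattering mode checks across the handle.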
- Removed unused HashMap import from runtime.rs to enhance code clarity.
- Updated runtime module exports to only include necessary components, simplifying the interface for consumers.
- Changed a variable name in llm_generator.rs for clarity without altering functionality.
Summary
Adds an in-process local AI runtime to `rust-core` using `mistral.rs`, with `qwen3.5-1b` defaults.

Problem
Local AI previously depended on spawning a dedicated `rust-core` binary for background execution.

Solution
- Added a `local_ai` configuration schema and module in `rust-core`.
- Introduced `LocalAiService` to:
  - build GGUF models via `mistralrs::GgufModelBuilder`
  - keep the loaded `Model` resident in memory
- Exposed Tauri commands: `openhuman.local_ai_status`, `openhuman.local_ai_download`, `openhuman.local_ai_summarize`, `openhuman.local_ai_suggest_questions`.

Testing
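A minimal sketch of how a service like this might keep model state resident and report status behind the listed commands. Only the command identifiers and the `LocalAiService` name come from the PR; every other name is an assumption, and the `String` model handle is a stand-in for the real `mistralrs` model built with `GgufModelBuilder`:

```rust
use std::sync::{Arc, Mutex};

/// Hypothetical status values surfaced to the UI panel.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq)]
enum LocalAiStatus {
    NotDownloaded,
    Downloading,
    Ready,
}

/// Stand-in for the service; the real one would hold a resident
/// mistralrs `Model` rather than a `String` handle.
struct LocalAiService {
    status: Arc<Mutex<LocalAiStatus>>,
    model: Arc<Mutex<Option<String>>>,
}

impl LocalAiService {
    fn new() -> Self {
        LocalAiService {
            status: Arc::new(Mutex::new(LocalAiStatus::NotDownloaded)),
            model: Arc::new(Mutex::new(None)),
        }
    }

    /// The kind of query a command like `openhuman.local_ai_status` could answer.
    fn status(&self) -> LocalAiStatus {
        self.status.lock().unwrap().clone()
    }

    /// After a download completes, record the weights and keep them
    /// resident so later summarize/suggest requests reuse the same model.
    fn finish_download(&self, weights: String) {
        *self.model.lock().unwrap() = Some(weights);
        *self.status.lock().unwrap() = LocalAiStatus::Ready;
    }
}

fn main() {
    let svc = LocalAiService::new();
    assert_eq!(svc.status(), LocalAiStatus::NotDownloaded);
    svc.finish_download("qwen3.5-1b.gguf".to_string());
    assert_eq!(svc.status(), LocalAiStatus::Ready);
    println!("status: {:?}", svc.status());
}
```

Sharing the service behind `Arc<Mutex<...>>` matches the in-process design: the same resident model can serve Tauri commands without respawning a child binary per request.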
- `yarn -s compile`
- `cargo check --manifest-path src-tauri/Cargo.toml`
- `cargo fmt --all`
- `cargo check -p openhuman-core`
- `cargo check -p OpenHuman`
- `yarn -s tsc --noEmit`
- Pre-push hooks (`yarn format:check`, `yarn lint`, `yarn compile`) ran during push.

Impact
Breaking Changes
Related
`qwen3.5-1b` GGUF download/configuration.