Neural Companion is a local desktop AI companion shell for realtime chat, speech, avatar output, visual replies, and addon-driven workflows.
The project is designed as a box of LEGO pieces:
- chat providers are addons
- TTS backends are addons
- avatar engines are addons
- vision/sensory sources are addons
- workspace tabs and tools can be extended over time
The app runs locally on Windows and is built around a PySide6 desktop UI.
See the Neural Companion Manual for installation, first-run, avatar, TTS, PocketTTS, MuseTalk, addon, and troubleshooting guidance.
This repository is an early public release candidate. It is usable, but still an experimental local AI/avatar application. Expect sharp edges around GPU setup, third-party model installs, external avatar engines, and local device configuration.
- Local or API chat providers: LM Studio, OpenAI, xAI/Grok, Claude, and addon providers.
- TTS backends: Chatterbox, Gemini TTS Preview, PocketTTS, and addon backends.
- Avatar engines: MuseTalk, VSeeFace, VaM, or no-avatar mode.
- Dockable Qt workspace with system shaping, runtime console/chat, preview panels, and addon tabs.
- MuseTalk preview and avatar-pack support.
- VaM bridge support through VMC and file-bridge flows.
- Vision and sensory supervisors for screen, webcam, clipboard, heart-rate, and visual replies.
- Presets, Dry Run-generated performance profiles, tutorials, hotkeys, and chat replay tools.
Recommended baseline:
- Windows
- Python 3.11
- FFmpeg on PATH, or the installer-bundled FFmpeg tools
- a local or API chat provider
- NVIDIA CUDA GPU for MuseTalk
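The baseline above can be sanity-checked before installing. This is a minimal sketch, not part of the installer; `preflight` is a hypothetical helper name, and it only checks the two items that are easy to verify programmatically (Python version and FFmpeg on PATH):

```python
import shutil
import sys

def preflight() -> list[str]:
    """Return a list of baseline issues; an empty list means the basics look OK."""
    issues = []
    # Python 3.11 is the recommended interpreter for this project.
    if sys.version_info[:2] != (3, 11):
        found = f"{sys.version_info.major}.{sys.version_info.minor}"
        issues.append(f"Python 3.11 recommended, found {found}")
    # FFmpeg must be resolvable on PATH (or use the installer-bundled tools).
    if shutil.which("ffmpeg") is None:
        issues.append("FFmpeg not found on PATH")
    return issues

if __name__ == "__main__":
    for issue in preflight():
        print("WARNING:", issue)
```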
Useful external tools:
- LM Studio for local LLMs
- VSeeFace for VRM-style avatar output
- VaM plus the Neural Companion bridge/plugin for VaM output
- MuseTalk model weights if using MuseTalk
For the detailed public install guide, see docs/install.md.
Graphical installer:

Double-click:

```
INSTALL_NEURAL_COMPANION.bat
```

or run:

```
py install_neural_companion_gui.py
```

For the main app:

```
py install_neural_companion.py --main --non-interactive
```

For a fuller install:

```
py install_neural_companion.py --all
```

If Python 3.11 is not your default Python:

```
py install_neural_companion.py --python-exe "C:\Path\To\Python311\python.exe"
```

Optional installs:

```
py install_neural_companion.py --musetalk --non-interactive
py install_neural_companion.py --pockettts --non-interactive
py install_neural_companion.py --avatar-packs --non-interactive
```

The graphical installer preselects the default Echo and Eon MuseTalk avatar packs. The command-line `--all` target installs the main, MuseTalk, and PocketTTS runtimes; add `--avatar-packs` if you also want the default avatar packs from the separate avatar-pack release.
To launch, run:

```
run_neural_companion.bat
```

or directly:

```
py qt_app.py
```

The legacy fallback UI is kept for diagnostics and can be launched with:

```
py qt_app.py --legacy-ui
```

The simplest first run is:
- Start LM Studio and load a chat model.
- Start Neural Companion.
- Select `LM Studio` as Chat Provider.
- Select `None` as Avatar Engine.
- Select a TTS backend.
- Press `Initialize System`.
- Use push-to-talk or type input to verify chat and speech.
Once that path works, enable MuseTalk, VSeeFace, VaM, visual replies, or sensory addons one at a time.
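If you want to confirm LM Studio is reachable before wiring it into the app, you can talk to its local server directly. LM Studio exposes an OpenAI-compatible endpoint, by default at `http://localhost:1234/v1`; the sketch below is a standalone check, not Neural Companion code, and the model name is a placeholder:

```python
import json
import urllib.request

# LM Studio's default local server address; adjust if you changed the port.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(user_text: str, model: str = "local-model") -> dict:
    """Build an OpenAI-compatible chat payload for the local LM Studio server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "temperature": 0.7,
    }

def send_chat(user_text: str) -> str:
    """POST the payload and return the assistant reply (requires LM Studio running)."""
    data = json.dumps(build_chat_request(user_text)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If `send_chat("hello")` returns text, the provider side of the first-run checklist is working.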
The public repo does not ship voice samples. If you want Chatterbox or another backend to clone a reference voice, place your own `.wav` files under `voices/`. Only use voice files you have the right to use.
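A quick way to verify your reference files are valid WAV audio is the standard-library `wave` module. This is an illustrative sketch (the app does not ship this helper); it lists each `.wav` under a directory with its basic properties:

```python
import wave
from pathlib import Path

def list_reference_voices(voices_dir: str = "voices") -> list[dict]:
    """Scan a folder for .wav files and report basic audio properties."""
    results = []
    for wav_path in sorted(Path(voices_dir).glob("*.wav")):
        # wave.open raises if the file is not a valid uncompressed WAV.
        with wave.open(str(wav_path), "rb") as wf:
            results.append({
                "name": wav_path.name,
                "channels": wf.getnchannels(),
                "sample_rate": wf.getframerate(),
                "seconds": wf.getnframes() / wf.getframerate(),
            })
    return results
```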
MuseTalk avatar packs belong in `avatar_packs/<pack_id>/`.
Large avatar packs and frame caches are intentionally not stored in the main repository. Demo packs live in the separate NeuralCompanion-AvatarPacks repository. See docs/avatar_packs.md and docs/release_asset_policy.md.
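Given the one-directory-per-pack layout above, locally installed packs can be enumerated by listing subdirectories. A minimal sketch, assuming only the documented `avatar_packs/<pack_id>/` convention; `discover_avatar_packs` is not an actual app function:

```python
from pathlib import Path

def discover_avatar_packs(root: str = "avatar_packs") -> list[str]:
    """Return pack IDs found as subdirectories of the avatar-pack root."""
    base = Path(root)
    if not base.is_dir():
        return []
    # Each pack is a directory named by its pack_id; loose files are ignored.
    return sorted(p.name for p in base.iterdir() if p.is_dir())
```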
Generated files are ignored by Git. Important generated locations include:
- `runtime/`
- `MuseTalk/runtime/`
- `avatar_packs/`
- `voices/`
Diagnostic file logs are off by default. Enable only when debugging:

```
$env:NC_MUSETALK_WORKER_LOG = "1"
$env:NC_MUSETALK_PREVIEW_LOG = "1"
```

Most runtime capabilities are implemented as addons under `addons/`.
Useful docs:
- docs/addon_quickstart.md
- docs/templates/README.md
- docs/chat_provider_addons.md
- docs/vision_source_addons.md
- docs/vision_supervisor_addons.md
- docs/visual_reply_addons.md
- docs/addon_state_and_presets.md
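To give a feel for the addon model, here is a purely hypothetical skeleton. The class name, `addon_id` attribute, and `synthesize` method are illustrative assumptions, not Neural Companion's actual interface; the real registration API and required methods are documented in docs/addon_quickstart.md and the templates under docs/templates/:

```python
# Hypothetical TTS addon skeleton — names here are illustrative, not the real API.
class ExampleTTSAddon:
    """A minimal text-to-speech addon shape: advertise an ID, synthesize text to bytes."""

    addon_id = "example_tts"

    def synthesize(self, text: str) -> bytes:
        # A real backend would return rendered audio; this stub just echoes the text.
        return text.encode("utf-8")
```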
The main repository should not contain local runtime outputs, model weights, avatar packs, voice samples, generated images, logs, or local virtual environments.
See:
- CONTRIBUTING.md
- docs/avatar_packs.md
- docs/release_checklist.md
- docs/release_asset_policy.md
- docs/third_party_and_assets.md
- docs/troubleshooting.md
- docs/known_limitations.md
Neural Companion is released under the MIT License. See LICENSE.
Bundled third-party components may carry their own licenses. MuseTalk is included under its upstream MIT license in MuseTalk/LICENSE.
You are responsible for complying with the terms of any external model, provider, voice, avatar, or generated asset you use with the app.
- Setup is still Windows/Python-heavy.
- MuseTalk requires separate model weights and benefits strongly from CUDA.
- Some integrations require external applications or plugins.
- The Designer-backed `main.ui` shell is the default; the legacy Qt shell remains as a temporary fallback.
- Public demo assets are intentionally not bundled in the main repo.
The project is intended to grow through community feedback, addon development, and shared workflows. Join the setup/help Discord here:
https://discord.gg/UqnwX46rcK