Rakile/NeuralCompanion
Neural Companion

Neural Companion is a local desktop AI companion shell for real-time chat, speech, avatar output, visual replies, and addon-driven workflows.

The project is designed as a box of LEGO pieces:

  • chat providers are addons
  • TTS backends are addons
  • avatar engines are addons
  • vision/sensory sources are addons
  • workspace tabs and tools can be extended over time

The app runs locally on Windows and is built around a PySide6 desktop UI.

Manual

See the Neural Companion Manual for installation, first-run, avatar, TTS, PocketTTS, MuseTalk, addon, and troubleshooting guidance.

Status

This repository is an early public release candidate. It is usable, but still an experimental local AI/avatar application. Expect sharp edges around GPU setup, third-party model installs, external avatar engines, and local device configuration.

Highlights

  • Local or API chat providers: LM Studio, OpenAI, xAI/Grok, Claude, and addon providers.
  • TTS backends: Chatterbox, Gemini TTS Preview, PocketTTS, and addon backends.
  • Avatar engines: MuseTalk, VSeeFace, VaM, or no-avatar mode.
  • Dockable Qt workspace with system shaping, runtime console/chat, preview panels, and addon tabs.
  • MuseTalk preview and avatar-pack support.
  • VaM bridge support through VMC and file-bridge flows.
  • Vision and sensory supervisors for screen, webcam, clipboard, heart-rate, and visual replies.
  • Presets, Dry Run-generated performance profiles, tutorials, hotkeys, and chat replay tools.

Requirements

Recommended baseline:

  • Windows
  • Python 3.11
  • FFmpeg on PATH, or the installer-bundled FFmpeg tools
  • a local or API chat provider
  • NVIDIA CUDA GPU for MuseTalk
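Before installing, it can help to verify the baseline on your machine. The sketch below is illustrative and not part of the installer; it only checks the requirements listed above that can be probed from Python.

```python
# Sketch: verify the recommended baseline before installing.
# The individual checks are assumptions drawn from the requirements list,
# not Neural Companion's own preflight code.
import shutil
import sys

def check_baseline() -> dict:
    """Return a report of which baseline requirements are satisfied."""
    return {
        "windows": sys.platform == "win32",
        "python_3_11": sys.version_info[:2] == (3, 11),
        # FFmpeg must be discoverable on PATH (or use the bundled tools)
        "ffmpeg_on_path": shutil.which("ffmpeg") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_baseline().items():
        print(f"{name}: {'OK' if ok else 'missing'}")
```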

Useful external tools:

  • LM Studio for local LLMs
  • VSeeFace for VRM-style avatar output
  • VaM plus the Neural Companion bridge/plugin for VaM output
  • MuseTalk model weights if using MuseTalk

Install

For the detailed public install guide, see docs/install.md.

Graphical installer:

Double-click:

INSTALL_NEURAL_COMPANION.bat

or run:

py install_neural_companion_gui.py

For the main app:

py install_neural_companion.py --main --non-interactive

For a fuller install:

py install_neural_companion.py --all

If Python 3.11 is not your default Python:

py install_neural_companion.py --python-exe "C:\Path\To\Python311\python.exe"

Optional installs:

py install_neural_companion.py --musetalk --non-interactive
py install_neural_companion.py --pockettts --non-interactive
py install_neural_companion.py --avatar-packs --non-interactive

The graphical installer selects the default Echo and Eon MuseTalk avatar packs by default. The command-line --all target installs the main, MuseTalk, and PocketTTS runtimes; add --avatar-packs if you also want the default avatar packs from the separate avatar-pack release.

Run

run_neural_companion.bat

Or directly:

py qt_app.py

The legacy fallback UI is kept for diagnostics and can be launched with:

py qt_app.py --legacy-ui

First Run

The simplest first run is:

  1. Start LM Studio and load a chat model.
  2. Start Neural Companion.
  3. Select LM Studio as Chat Provider.
  4. Select None as Avatar Engine.
  5. Select a TTS backend.
  6. Press Initialize System.
  7. Use push-to-talk or type input to verify chat and speech.

Once that path works, enable MuseTalk, VSeeFace, VaM, visual replies, or sensory addons one at a time.
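To sanity-check step 1 independently of the app, you can talk to LM Studio's local server directly. LM Studio exposes an OpenAI-compatible endpoint; `http://localhost:1234/v1` is its usual default listen address, but confirm the port in your LM Studio settings. This is an illustrative sketch, not Neural Companion's provider code.

```python
# Sketch: a minimal chat round-trip against LM Studio's
# OpenAI-compatible local server (stdlib only).
import json
import urllib.request

# Assumed default; adjust to match your LM Studio configuration.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(user_text: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": [{"role": "user", "content": user_text}],
        "temperature": 0.7,
    }

def send_chat(user_text: str) -> str:
    """POST the payload and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(user_text)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires LM Studio running with a model loaded):
#   print(send_chat("Say hello in one sentence."))
```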

Voice References

The public repo does not ship voice samples.

If you want Chatterbox or another backend to clone a reference voice, place your own .wav files under:

voices/

Only use voice files you have the right to use.

Avatar Packs

MuseTalk avatar packs belong in:

avatar_packs/<pack_id>/

Large avatar packs and frame caches are intentionally not stored in the main repository. Demo packs live in the separate NeuralCompanion-AvatarPacks repository. See docs/avatar_packs.md and docs/release_asset_policy.md.

Runtime Data

Generated files are ignored by Git. Important generated locations include:

  • runtime/
  • MuseTalk/runtime/
  • avatar_packs/
  • voices/

Diagnostic file logs are off by default. Enable only when debugging:

$env:NC_MUSETALK_WORKER_LOG = "1"
$env:NC_MUSETALK_PREVIEW_LOG = "1"
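The variable names above come from this README; how the app parses them internally is not shown here. A process reading such flags might treat them like this (the accepted truthy spellings are an assumption):

```python
# Sketch: reading diagnostic log flags of the kind shown above.
# Treating "1"/"true"/"yes"/"on" as enabled is an assumed convention.
import os

def log_enabled(var: str) -> bool:
    """Return True if the environment flag is set to a truthy value."""
    return os.environ.get(var, "0").strip().lower() in {"1", "true", "yes", "on"}

if __name__ == "__main__":
    for var in ("NC_MUSETALK_WORKER_LOG", "NC_MUSETALK_PREVIEW_LOG"):
        print(var, "enabled" if log_enabled(var) else "disabled")
```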

Addons

Most runtime capabilities are implemented as addons under addons/.

Addon documentation lives in the docs/ directory.

Repository Hygiene

The main repository should not contain local runtime outputs, model weights, avatar packs, voice samples, generated images, logs, or local virtual environments.

See docs/release_asset_policy.md.

Licensing

Neural Companion is released under the MIT License. See LICENSE.

Bundled third-party components may carry their own licenses. MuseTalk is included under its upstream MIT license in MuseTalk/LICENSE.

You are responsible for complying with the terms of any external model, provider, voice, avatar, or generated asset you use with the app.

Current Limitations

  • Setup is still Windows/Python-heavy.
  • MuseTalk requires separate model weights and benefits strongly from CUDA.
  • Some integrations require external applications or plugins.
  • The Designer-backed main.ui shell is the default; the legacy Qt shell remains as a temporary fallback.
  • Public demo assets are intentionally not bundled in the main repo.

Community

The project is intended to grow through community feedback, addon development, and shared workflows. Join the setup/help Discord here:

https://discord.gg/UqnwX46rcK
