Introduction

vavrek edited this page Apr 12, 2023 · 12 revisions

Welcome to Open Assistant

Make Your Own Minds

Open Assistant is a private, offline, open source personal assistant system able to perform operating system tasks through voice commands.

Our mission is to develop a free, decentralized, open source artificial intelligence toolkit that enables anyone in the world to create their own unique assistants, regardless of machine or environment, and ultimately an "AI OS" able to run multiple virtual "minds."

Open Assistant is entirely private. Your voice never leaves your machine. Everything runs entirely locally. You are in complete control. Open Assistant is free software: you have total freedom to download, view, copy, and change the code.

This is a young functional prototype with a grand long-term vision. Help shape our system at these critical early stages!

Open Assistant Vision

Open Assistant System Map

Our first step towards open source artificial intelligence is a VUI, or a “voice user interface.”

The voice interface must function offline to guarantee total privacy, and it must let the most common operating system tasks be performed quickly with simple spoken commands. Conversational elements are helpful as well, for testing and entertainment purposes.

At the core of Open Assistant are "boot mind" and "root mind," which are primarily hard-coded with retrieval-based natural language processing to enable offline operation, increased accuracy, total privacy, and enhanced speed. Any spoken phrase not found within this common command set should eventually trigger more advanced stand-alone intent-based speech analysis.
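The retrieval layer described above can be illustrated with a minimal sketch, assuming a hard-coded table of phrases. The command table, function names, and fallback hook here are hypothetical, not Open Assistant's actual code:

```python
# Sketch of retrieval-based command matching with a fallback to
# intent-based analysis. COMMANDS and handle_phrase are illustrative
# placeholders, not the project's real implementation.

COMMANDS = {
    "open browser": lambda: print("launching browser"),
    "what time is it": lambda: print("reading the clock"),
}

def handle_phrase(phrase: str) -> str:
    """Look the phrase up in the hard-coded command set first;
    fall back to stand-alone intent analysis when nothing matches."""
    action = COMMANDS.get(phrase.strip().lower())
    if action is not None:
        action()
        return "retrieval"
    # Placeholder for the more advanced intent-based analyzer.
    return "intent"
```

Exact-match retrieval is what makes the common command set fast, accurate, and fully offline; only unmatched phrases pay the cost of deeper analysis.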

Boot mind is a "power button," also known as a "wake word" or trigger phrase. This initial mind must remain continuously aware prior to launch and recognize when it is being spoken to. Upon hearing a trigger phrase such as "Open Assistant," boot mind launches "root mind" and a default "user mind."
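A minimal sketch of that boot-mind behavior, assuming transcribed utterances arrive as a stream of strings (the wake phrase and launch labels are illustrative, and a real system would feed this from an offline speech recognizer):

```python
# Illustrative boot-mind loop: stay continuously aware of incoming
# utterances, and hand off to root mind plus a default user mind
# once the trigger phrase is heard. Hypothetical sketch only.

WAKE_PHRASE = "open assistant"

def boot_mind(transcripts):
    """Consume transcribed utterances until the wake phrase appears,
    then report which minds would be launched."""
    launched = []
    for text in transcripts:
        if WAKE_PHRASE in text.lower():
            launched.append("root mind")
            launched.append("user mind")
            break  # hand-off complete; boot mind's job is done
    return launched
```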

Root mind is the second layer of this "multiple level" concept and would be configured to enable situational awareness as well as sensory perception, such as sight, location, or temperature.

Alongside boot and root mind exist multiple "user minds": various character entities with specialized abilities, voices, and behaviors.

User minds have the capacity to respond to external vocal commands, physical gestures, and ultimately thoughts (BCI).

User minds would react to the environment through internal communication with root mind, learn over time, perform specialized functions, demonstrate dynamic personalities, and act as particular individuals.
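One way to picture a user mind is as a data structure pairing a character identity with its abilities, consulting root mind for environmental context. This is a hypothetical sketch; the class, fields, and context keys are all assumptions, not the project's design:

```python
# Hypothetical model of a "user mind": a named character entity with
# its own voice and abilities that reacts to sensory context supplied
# by root mind. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class UserMind:
    name: str
    voice: str
    abilities: list = field(default_factory=list)

    def respond(self, command: str, context: dict) -> str:
        # A real user mind would act on rich sensory input from root
        # mind; here we just weave the context into a reply.
        location = context.get("location", "unknown")
        return f"{self.name} ({self.voice}) handles '{command}' at {location}"
```

Modeling minds as swappable entities like this is what would let several distinct "characters" share one boot/root foundation.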

Once these three levels are fully developed and able to intercommunicate, it would be possible to effectively simulate “sentience,” or “self awareness.”

Virtual Minds?

Imagine having access to a limitless number of "virtual minds" always available to answer questions or engage in interactive conversations. Imagine being able to "clone" yourself or your loved ones in digital form. Imagine controlling machines and online activity with your voice, gestures, or thoughts alone.

Our practical, educational, and entertainment opportunities are vast.

AI and interface advancements will benefit many, granting the elderly, children, and people with disabilities access to technology like never before. Everyone will be liberated from screen tapping, keyboards, mice, and other uncomfortable physical demands.

Our future is wide open for creativity and discovery within this growing field. “Minds” shall become the new “apps.” Our machines will increasingly learn how to adapt to us, rather than us adapting to them.

We are bound to enjoy a new era of prosperity and liberation as a result.