Natasha for Home

NatOS - Libraries

If at first you don't succeed, we have a lot in common.

Scheduling async bash commands with one line. Literally.

from ActionsA import scheduler

scheduler.message(seconds=7, command='afplay imagine.mp3')
scheduler.message(minutes=5, command='python &')
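Under the hood, a one-line scheduler like this can be sketched with the standard library alone. The names below are illustrative, not the actual ActionsA internals (the real scheduler queues tasks through Redis and TaskTiger instead of keeping them in-process):

```python
import shlex
import subprocess
import threading


def schedule_command(command, seconds=0, minutes=0, hours=0, days=0):
    """Run a shell command asynchronously after the given delay.

    Illustrative stand-in for scheduler.message(command=...); the real
    ActionsA scheduler persists tasks via Redis/TaskTiger so they survive
    restarts, which this in-memory version does not.
    """
    delay = seconds + 60 * minutes + 3600 * hours + 86400 * days
    timer = threading.Timer(delay, subprocess.Popen, args=(shlex.split(command),))
    timer.daemon = True  # do not block interpreter exit
    timer.start()
    return timer
```

Because the timer thread is a daemon, pending commands die with the calling process — one reason a persistent queue backend is the better design for reminders hours or days away.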

Voice messages too...

from ActionsA import scheduler

scheduler.message(seconds=4, text='Nitesh is brilliant...')
scheduler.message(minutes=2, text='Or is he?')
scheduler.message(hours=6, text='Only time will tell', player='afplay')
scheduler.message(days=7, text='He is!', player='mpg123')

The default player is omxplayer, which runs well on the Raspberry Pi; on other operating systems a player must be specified.

NLTK: smarter with sentence similarity

Installation

git clone
cd NatOS
sudo make

If you are running this on a Mac or any other Unix, please make sure your redis-server and queue-server are running. Open a terminal and run


cd NatOS
python &

On a Raspberry Pi, for some reason, even after adding queue-server to init.d, it still has to be started manually.

Do not forget to run redis-server and queue-server.

Technical Documentation

Uses Redis and TaskTiger as the backend.

During make, redis-server and a queue for "tasks" are added as daemons for seamless interaction. (A task = when to play plus the text to be played.)
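Conceptually, each task pairs a due time with the text to be played. A minimal in-memory sketch of that queue model, assuming the real backend stores the same pairs in Redis via TaskTiger:

```python
import heapq
import time


class TaskQueue:
    """Minimal sketch of the 'tasks' queue: (when to play, text to be played).

    The real NatOS backend persists these in Redis through TaskTiger so they
    survive restarts; this in-memory version only illustrates the model.
    """

    def __init__(self):
        self._heap = []

    def add(self, delay_seconds, text):
        # Store (absolute due time, text); the heap keeps the earliest first.
        heapq.heappush(self._heap, (time.time() + delay_seconds, text))

    def pop_due(self, now=None):
        """Return all texts whose play time has arrived, earliest first."""
        now = time.time() if now is None else now
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due
```

A worker process would poll `pop_due()` and hand each text to the configured player — the role the queue-server daemon plays in NatOS.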

Text-to-speech audio is played through omxplayer on the Raspberry Pi.

-- more coming soon --

Comments, criticism, and contributions are welcome.

Designed and tested for a Raspberry Pi running Raspbian Jessie.


Commands it can understand

Ok Nat, -- Greetings --

OK Nat, Call me -- Khal Drogo? --

Ok Nat, What time is it?

Ok Nat, How's the weather?
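Casual commands like these map naturally onto a keyword router. A hypothetical sketch of the dispatch — the actual matching logic lives in the Core directory and may differ:

```python
import datetime


def handle_command(utterance):
    """Route a transcribed 'Ok Nat' command to a response.

    Hypothetical sketch; the real Core code handles these interactions
    and the weather branch would call an actual weather service.
    """
    text = utterance.lower().rstrip("?!. ")
    if "what time is it" in text:
        return datetime.datetime.now().strftime("It is %H:%M")
    if "weather" in text:
        return "Let me check the weather..."  # would query a weather API
    if "call me" in text:
        # Remember the user's chosen name from the tail of the command.
        return "Nice to meet you, " + text.split("call me", 1)[1].strip().title()
    return "Hello!"  # default greeting for "Ok Nat" alone
```

Keyword matching is brittle; the sentence-similarity work mentioned above (NLTK) is the natural upgrade path for fuzzier phrasings.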

Things to be added

  1. Wake word engine (completed, thanks to Snowboy)
  2. Image Recognition
  3. Asynchronous process management for wait_on(s)

P.S. A good internet connection is of utmost importance.


Technical Documentation

Overview of code

The only thing one has to run is the main entry script, which in turn starts the wake-word listener; the wake word is "Ok Nat."

When the wake word is detected, snowboydecoder immediately calls Mothership, which records the rest of the utterance and processes it.
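The detect-then-hand-off flow can be sketched as a callback pipeline. `FakeDetector` below is a stand-in stub for illustration; the real code wires snowboydecoder's detected callback to Mothership and works on live audio, not strings:

```python
heard = []  # records what Mothership received, for demonstration


class FakeDetector:
    """Stand-in for the Snowboy hotword detector, illustration only."""

    def __init__(self, hotword="ok nat"):
        self.hotword = hotword

    def listen(self, utterances, detected_callback):
        """Scan utterances; on the hotword, hand the remainder over."""
        for utterance in utterances:
            if utterance.lower().startswith(self.hotword):
                # Everything after the wake word is the actual command,
                # which Mothership records and processes in the real system.
                detected_callback(utterance[len(self.hotword):].lstrip(", "))


def mothership(command):
    """Placeholder for the Mothership handler that processes a command."""
    heard.append(command)
```

The key design point survives the simplification: the detector stays dumb and always-on, and all understanding happens downstream in Mothership.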

Processing Commands

  1. Core: common functions and casual natural-language interactions (weather, time, hi-hello) are handled by the Core directory. Thoughtful contributions to casual NLI are welcome.

  2. Advanced Actions

I. Standard Modules: reminders, scheduling, and monitoring are handled by the Actions Architecture, our module-based approach.

II. Custom Modules: any hacker or organization can build their own module and push it to the main repository, which will allow any NatOS user to download and use it. Any custom module that follows the Actions Architecture documentation (see here - work in progress) will be integrated on its own, and all of that module's functionality will be available directly to the NatOS main voice interface. It is going to be that simple.
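One way such pluggable modules could hook into the main voice interface is a simple intent registry. This is a hypothetical sketch, not the actual Actions Architecture API (whose documentation is still a work in progress):

```python
# Hypothetical module registry; the real Actions Architecture spec is WIP.
MODULES = {}


def register(intent):
    """Decorator letting a custom module claim a voice intent keyword."""
    def wrap(handler):
        MODULES[intent] = handler
        return handler
    return wrap


@register("remind")
def reminder_module(utterance):
    """Example custom module: acknowledges a reminder request."""
    return "Reminder set: " + utterance


def dispatch(utterance):
    """Hand the utterance to the first module whose intent word matches."""
    for intent, handler in MODULES.items():
        if intent in utterance.lower():
            return handler(utterance)
    return None  # fall through to Core's casual NLI handling
```

Because registration is just a decorator, downloading a third-party module and importing it would be enough to expose its commands to the voice interface — the "it is going to be that simple" goal above.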