A Markov chain-based text generation library and MegaHAL-style chatbot

README

COBE stands for Code of Business Ethics. Cobe is a conversation
simulator, a database-backed port of MegaHAL.

It has been inspired by the success of Hailo:
http://blogs.perl.org/users/aevar_arnfjor_bjarmason/2010/01/hailo-a-perl-rewrite-of-megahal.html

Our goals are similar: an on-disk data store for lower memory usage,
better support for Unicode, and general stability.

You can read about the original MegaHAL here:
http://megahal.alioth.debian.org/How.html

In short, it uses Markov modeling to generate text responses after
learning from input text.
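
For illustration only (a toy sketch, not Cobe's actual code), a Markov
model learns which tokens tend to follow each context and then samples
from those observations to generate new text:

  import random
  from collections import defaultdict

  def learn(tokens, order=2):
      # map each context of `order` tokens to the tokens seen after it
      model = defaultdict(list)
      for i in range(len(tokens) - order):
          model[tuple(tokens[i:i + order])].append(tokens[i + order])
      return model

  def generate(model, seed, length=20):
      # extend the seed context by repeatedly sampling a next token
      out = list(seed)
      for _ in range(length):
          choices = model.get(tuple(out[-len(seed):]))
          if not choices:
              break
          out.append(random.choice(choices))
      return " ".join(out)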

Cobe currently behaves similarly to MegaHAL 9.1.1. It uses 5th-order
Markov chains by default, generates as many responses as it can in a
constant amount of time, and scores those responses based on the
surprise they create given the Markov probabilities. This is described
in greater detail in the document linked above.
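
The scoring step can be sketched in the same spirit: a reply's surprise
is the summed information content of its transitions, so replies built
from rarer transitions score higher. The transition_probability helper
below is hypothetical, standing in for the learned model:

  import math

  def surprise(tokens, transition_probability, order=5):
      # transition_probability(context, token) is a hypothetical helper
      # returning the model's probability of token following context
      score = 0.0
      for i in range(order, len(tokens)):
          context = tuple(tokens[i - order:i])
          score += -math.log2(transition_probability(context, tokens[i]))
      return score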

Cobe installs a command-line tool (called cobe) for interacting with a
brain database, though it is also intended to be used as a Python
API. See the documentation in the cobe.brain module for details.
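
A minimal sketch of that API, assuming a Brain class in cobe.brain with
learn() and reply() methods and an already-created brain database (for
example, one made with "cobe init"):

  from cobe.brain import Brain

  brain = Brain("cobe.brain")   # open an existing brain database file
  brain.learn("Cobe learns from every line of text it is given.")
  print(brain.reply("What have you learned?"))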

To install from a tarball:

  $ python setup.py install

Or from the Python Package Index:

  $ easy_install cobe

Usage:

  $ cobe init                  # create a new brain database
  $ cobe learn <text file>     # train the brain on a plain text file
  $ cobe console               # start an interactive chat session