libmind

A general-purpose library to process and predict sequences of elements (for example, sequences of letters forming words, or sequences of words forming phrases) using echo state networks. libmind aims to be as simple to use as possible.

Dedication

This project is dedicated to someone special who taught me the difference between hope and despair.

libmind requires Oger (which in turn depends on NumPy and MDP).

Installation

  • In Debian/Ubuntu
  1. Install the dependencies of Oger:

    apt-get install python-numpy python-mdp python2.7-dev

  2. Install Oger (a Debian package generated using checkinstall is available):

    wget "https://github.com/downloads/neuromancer/libmind/oger_1.1.3-1_all.deb" ; dpkg -i oger_1.1.3-1_all.deb

  • In Arch Linux
  1. Install Oger from the AUR (http://aur.archlinux.org/packages.php?ID=51256), for example using packer:

    packer -S python2-oger

After that, just clone this repository and execute:

make

to compile the C code. Then you are ready to start playing with libmind.

Using libmind

To create a simulated mind, you must define how to vectorize the elements of its inputs and outputs. Once the mind is created, initialization bootstraps it so that it can learn to generate its output elements correctly. An assimilation function feeds the inputs of a sequence one by one; each time an input is assimilated, the prediction for the next output is returned. A stop function finishes the current sequence and resets the internal state of the simulated mind.
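The assimilate/predict/stop workflow described above can be sketched with a minimal echo state network in plain NumPy. Note this is not libmind's actual API: the `ToyMind` class and its method names are hypothetical, chosen only to mirror the description.

```python
# A toy echo state network illustrating the workflow described above.
# NOT libmind's real API; class and method names are hypothetical.
import numpy as np

class ToyMind:
    def __init__(self, n_in, n_out, n_res=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale the reservoir to spectral radius < 1 (echo state property).
        W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
        self.W = W
        self.W_out = np.zeros((n_out, n_res))
        self.state = np.zeros(n_res)

    def assimilate(self, x):
        """Feed one input element; return the predicted next output."""
        self.state = np.tanh(self.W_in @ x + self.W @ self.state)
        return self.W_out @ self.state

    def stop(self):
        """Finish the current sequence: reset the internal state."""
        self.state = np.zeros_like(self.state)

    def train(self, sequences, targets, ridge=1e-6):
        """Collect reservoir states and fit a linear readout (ridge regression)."""
        states, outs = [], []
        for seq, tgt in zip(sequences, targets):
            for x, y in zip(seq, tgt):
                self.state = np.tanh(self.W_in @ x + self.W @ self.state)
                states.append(self.state.copy())
                outs.append(y)
            self.stop()
        S, Y = np.array(states), np.array(outs)
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                                     S.T @ Y).T
```

Only the readout weights are trained; the random reservoir is left fixed, which is what makes echo state networks cheap to train.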

Examples

The use of libmind is shown through several examples or tests. The available examples are:

Identification of the part of speech (POS) of English words using only their letters
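For a letter-based task like this, a common way to vectorize the input elements is one-hot encoding of each letter. The sketch below is an assumption about the encoding, not necessarily what the POS test actually does:

```python
# One-hot letter vectorization sketch (an assumed encoding, for illustration).
import string

def one_hot_letter(ch):
    """Encode a lowercase ASCII letter as a 26-dimensional one-hot vector."""
    vec = [0.0] * 26
    vec[string.ascii_lowercase.index(ch)] = 1.0
    return vec

def vectorize_word(word):
    """A word becomes a sequence of letter vectors, assimilated one by one."""
    return [one_hot_letter(ch) for ch in word.lower()]
```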

Reduction of variableless propositional logic formulas

  • File: test_logic.py

  • Objective: To classify variableless propositional formulas as true or false.

  • Dataset: The dataset is generated using the following grammar:

      Formula := Formula and Formula | Formula or Formula | True | False | ~True | ~False
    

The resulting formula is evaluated using Python's boolean evaluation (where "and" takes precedence over "or").
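The dataset generation and evaluation described above can be sketched as follows. The generator function is hypothetical (the actual test may generate formulas differently); "~" is mapped to Python's "not", and eval is applied only to strings produced by this grammar:

```python
# Sketch of generating and labeling formulas from the grammar above.
# The generator is hypothetical; Python's precedence (not > and > or)
# matches the evaluation rule stated in the text.
import random

TERMINALS = ["True", "False", "~True", "~False"]

def random_formula(depth=3, rng=None):
    """Expand Formula := Formula and Formula | Formula or Formula | terminal."""
    rng = rng or random.Random()
    if depth == 0 or rng.random() < 0.4:
        return rng.choice(TERMINALS)
    op = rng.choice(["and", "or"])
    return "%s %s %s" % (random_formula(depth - 1, rng), op,
                         random_formula(depth - 1, rng))

def label(formula):
    """Evaluate a grammar-generated formula with Python boolean semantics."""
    return eval(formula.replace("~", "not "))
```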

I'm thinking of more examples to extend and improve libmind, but of course this experimental project is open to new ideas!
