# Ruby Restricted Boltzmann Machine

This is a Ruby port of Ed Chen's Python RBM implementation.

I built this library to learn how Restricted Boltzmann Machines (RBMs) and Deep Learning work. There is no better way to understand something than to implement it.

It's best to read Ed Chen's blog post and follow along with this Ruby code instead of the Python code in his example.

## Take it for a test drive

Run it like so in an irb console from the root directory of the repo:

```ruby
require "#{ Dir.pwd }/restricted_boltzmann_machine.rb"
require 'pp'

# Initialize and train the RBM
rbm = RestrictedBoltzmannMachine.new(6, 2)
training_data = [
  [1,1,1,0,0,0],
  [1,0,1,0,0,0],
  [1,1,1,0,0,0],
  [0,0,1,1,1,0],
  [0,0,1,1,1,0],
  [0,0,1,1,1,0]
]
rbm.train(training_data, 10000)

# Inspect the trained weights
# NOTE: The weights here will be different from the ones in Ed's example.
# They could be transposed, however they should converge on the same clusters
# as Ed's.
pp rbm.weights

# Now provide a new input and see what we get on the output.
# You can try a few different inputs that somewhat resemble the movie category
# clusters from Ed's example and you should get consistent output.
user_input = [[0,0,0,1,1,0]]
pp rbm.run_visible(user_input)

# And now let it dream for a bit. It starts with a random input and then
# converges on patterns it's familiar with through training.
pp rbm.daydream(10)
```

## About RBMs

What's fascinating about RBMs is that they use two phases of information processing for learning: a wake phase, where they take visible inputs and translate them into hidden states, and a dream phase, where they generate visible inputs based on what they have learned. The generated inputs look a lot like dream images: we can recognize familiar patterns, but they are recomposed in new and unexpected ways.
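The interplay of the two phases can be sketched as a single contrastive-divergence (CD-1) step. This is a rough, self-contained illustration in plain Ruby, not the library's actual code: method names are made up, and bias units are omitted for brevity.

```ruby
# Sketch of one CD-1 training step for a single training vector.
# Illustrative only: no bias units, plain Ruby arrays.

def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

# Wake phase: visible vector -> hidden unit probabilities.
def hidden_probs(visible, weights)
  weights.first.length.times.map do |j|
    sigmoid(visible.each_with_index.sum { |v, i| v * weights[i][j] })
  end
end

# Dream phase: hidden probabilities -> reconstructed visible probabilities.
def visible_probs(hidden, weights)
  weights.length.times.map do |i|
    sigmoid(hidden.each_with_index.sum { |h, j| h * weights[i][j] })
  end
end

def cd1_step(visible, weights, learning_rate = 0.1)
  pos_hidden = hidden_probs(visible, weights)       # wake
  recon      = visible_probs(pos_hidden, weights)   # dream
  neg_hidden = hidden_probs(recon, weights)

  # Update: positive (data-driven) minus negative (model-driven) associations.
  weights.each_index do |i|
    weights[i].each_index do |j|
      positive = visible[i] * pos_hidden[j]
      negative = recon[i] * neg_hidden[j]
      weights[i][j] += learning_rate * (positive - negative)
    end
  end
  weights
end
```

Run over many epochs, this nudges the weights so the dream-phase reconstructions look more and more like the training data.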

The other interesting aspect is that an RBM becomes more robust by adding a probabilistic element to its learning: each unit's state is sampled from a probability rather than set deterministically. Something similar happens in the human brain, where the signals from one neuron to the next can have a random delay.
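One way to picture that probabilistic element, as a minimal sketch in plain Ruby (names are illustrative): instead of switching a unit on whenever its total input crosses a threshold, the unit fires with a probability given by the logistic function.

```ruby
# Stochastic unit activation: a unit fires with probability sigmoid(input),
# rather than deterministically whenever input > 0. Illustrative sketch.

def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

def sample_state(total_input, rng = Random.new)
  rng.rand < sigmoid(total_input) ? 1 : 0
end
```

The same input can thus produce different states from run to run, which is what lets the daydream phase wander between familiar patterns instead of locking onto one.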

## Further exploration

You can try your own input data samples. I ran an experiment with variations of these two patterns:

```ruby
training_data = [
  [1,0,1,0,1,0],
  [0,1,0,1,0,1]
]
```

This one is cool for daydreaming: run a few daydream cycles and look at the samples, and you'll see variations of the two patterns above. The nature of those variations depends on how much variation there is in your training data.
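The daydream cycle itself can be sketched as plain Gibbs sampling: start from a random visible vector, then repeatedly sample hidden states from it and visible states back from those. This is an illustrative stand-in, not the library's `daydream` implementation, and it again omits bias units.

```ruby
# Sketch of a daydream (Gibbs sampling) loop over a visible-by-hidden
# weights matrix. Illustrative only; bias units omitted.

def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

def sample(prob, rng)
  rng.rand < prob ? 1 : 0
end

def daydream(weights, steps, rng = Random.new)
  num_visible = weights.length
  num_hidden  = weights.first.length
  visible = Array.new(num_visible) { rng.rand(2) }  # random starting point

  steps.times.map do
    # visible -> hidden
    hidden = num_hidden.times.map do |j|
      sample(sigmoid(num_visible.times.sum { |i| visible[i] * weights[i][j] }), rng)
    end
    # hidden -> visible
    visible = num_visible.times.map do |i|
      sample(sigmoid(num_hidden.times.sum { |j| hidden[j] * weights[i][j] }), rng)
    end
    visible.dup
  end
end
```

With trained weights, the sampled visible vectors drift toward the patterns the network saw during training, with the amount of wandering governed by the stochastic activations.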

I think it would be cool to work with a more graphic representation of the test data. I started thinking about this and added some graphic test data to the training_data.rb file; however, I haven't gotten around to using it yet.