
Hanuman - Scala / Akka / BlueEyes / sbt demo

This is an old project, built with ancient versions of Scala and Akka and a web framework that is no longer used. I provide a modern version in the Introduction to Actors lecture of the Introduction to Scala course.

This project demonstrates the BlueEyes Scala framework, Akka Actors and Refs; it can be run on any computer with Java installed, including Heroku.

The BlueEyes framework encapsulates Netty, which acts as a web container. The Hanuman application services are defined by the BlueEyes DSL, including services for HTML GET, JSON GET and POST, and MongoDB. HanumanService acts as a front-end for a hierarchy of Akka Actors, including Hanuman (the mythological monkey god), WorkVisor (an Akka supervisor) and WorkCells (which each contain a Monkey and a Critic).

This application simulates the adage that a large number of monkeys typing for long enough should eventually reproduce any given document. Monkey instances generate pages of semi-random text, and their Critics compare the generated text to a target document. WorkVisor actors supervise the WorkCells for a simulation. Because HanumanService can support many simultaneous simulations, a Hanuman actor supervises the WorkVisors.

The simulation is sequenced by ticks. Each tick, Monkey actors generate a page (by default, 1000 characters) of random text, in the hope of matching some portion of the target document. To start a simulation, a client first requests a new simulation ID from HanumanService. Before generating random text, Monkeys are trained at construction time with a LetterProbabilities map of Char -> probability. A simulation terminates when a maximum number of ticks has occurred, or the target document has been replicated. TODO: provide the ability to upload the document that the Monkeys are to attempt to replicate for a simulation.
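The training and per-tick typing described above can be sketched in plain Scala. Everything here is illustrative: the repo's actual LetterProbabilities and Monkey implementations may be shaped quite differently.

```scala
import scala.util.Random

object LetterProbabilities {
  /** Build a Char -> probability map from the relative letter
    * frequencies of a training document (illustrative only). */
  def fromDocument(document: String): Map[Char, Double] = {
    val total = document.length.toDouble
    document.groupBy(identity).map { case (c, run) => c -> run.length / total }
  }
}

class Monkey(probabilities: Map[Char, Double], random: Random = new Random) {
  // Precompute a cumulative distribution for weighted sampling.
  private val cumulative: Vector[(Char, Double)] =
    probabilities.toVector.scanLeft(('\u0000', 0.0)) {
      case ((_, acc), (c, p)) => (c, acc + p)
    }.tail

  private def nextChar(): Char = {
    val r = random.nextDouble() * cumulative.last._2
    cumulative.find { case (_, acc) => r <= acc }
      .map(_._1).getOrElse(cumulative.last._1)
  }

  /** Generate one page of semi-random text for a tick. */
  def typePage(pageLength: Int = 1000): String =
    Vector.fill(pageLength)(nextChar()).mkString
}
```

Training the Monkey on letter frequencies, rather than sampling uniformly, makes its output statistically resemble the target language and so improves its odds per tick.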

Monkeys are unaware of the document they are attempting to replicate, and they are unaware of how the Critic works. Likewise, Critics are unaware of how Monkeys work. One might imagine ever-more-sophisticated Monkey and Critic implementations. For example, Monkeys working from a dictionary should outperform Monkeys that just type random characters.
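As an illustration of this division of labour, a Critic needs only the target document and a scoring function; a minimal brute-force scorer (names and logic assumed for the sketch, not taken from the repo) might look like this:

```scala
// Hypothetical Critic sketch: the Critic knows the target document,
// while the Monkey that produced `page` does not.
class Critic(targetDocument: String) {
  /** Length of the longest substring of `page` that also occurs
    * somewhere in the target document. Brute force, fine for a sketch. */
  def score(page: String): Int = {
    var best = 0
    for (start <- page.indices) {
      var len = best + 1 // only look for improvements over `best`
      while (start + len <= page.length &&
             targetDocument.contains(page.substring(start, start + len))) {
        best = len
        len += 1
      }
    }
    best
  }
}
```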

Critics send a TextMatch message to the WorkCell supervisor whenever a Monkey's generated text matches a passage in the target document better than before. Each simulation is managed by a WorkVisor actor/supervisor. Hanuman stores the most recent TextMatch for each WorkCell in a TextMatchMap, defined as a Map of Actor.Uuid -> TextMatch. Akka Refs are passed into each Hanuman and WorkCell actor/supervisor, which sets and gets result values atomically using shared-nothing state.
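The message and map described above might be modelled roughly as follows; the field names, and the use of String in place of Actor.Uuid, are assumptions made for the sketch:

```scala
// Hypothetical shapes for the TextMatch message and TextMatchMap.
case class TextMatch(workCellUuid: String, matchLength: Int, matchedText: String)

type TextMatchMap = Map[String, TextMatch] // Actor.Uuid -> TextMatch

/** Keep only the best match seen so far for each WorkCell. */
def recordMatch(map: TextMatchMap, tm: TextMatch): TextMatchMap =
  map.get(tm.workCellUuid) match {
    case Some(prev) if prev.matchLength >= tm.matchLength => map // no improvement
    case _ => map.updated(tm.workCellUuid, tm)
  }
```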

Critics are computationally more expensive than Monkeys. Because matches are statistically unlikely, and become exponentially less likely for longer sequences, results per WorkCell are few. This makes it inexpensive to transmit results from a Critic in a WorkCell to its supervising WorkVisor via an Akka message. Clients that poll for results require a different strategy: a result cache is returned to them, and the cache is updated on each tick by the Hanuman actor supervisor.

HanumanService creates the Hanuman actor/supervisor, and the Hanuman constructor accepts an Akka Ref to a Simulations instance, which is a Map of simulationID -> TextMatchMap. Getting values from the Ref and setting new values are performed within an implicit software transaction, which is computationally expensive and presents a possible choke point. Marshalling changes via a sequence of ticks reduces potential conflicts.
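The Ref-based update pattern can be approximated with a plain AtomicReference from the JDK. This is a stand-in sketch of the atomic get/set behaviour with an illustrative, simplified value type, not the actual Akka STM code:

```scala
import java.util.concurrent.atomic.AtomicReference

object SimulationsRef {
  // simulationID -> (WorkCell uuid -> best match length), simplified.
  private val simulations =
    new AtomicReference[Map[String, Map[String, Int]]](Map.empty)

  /** Atomically replace one simulation's result map, analogous to
    * setting a value on the Ref inside a transaction. */
  def publish(simulationId: String, results: Map[String, Int]): Unit = {
    simulations.updateAndGet(_.updated(simulationId, results))
    ()
  }

  /** Consistent snapshot for polling clients. */
  def snapshot(simulationId: String): Option[Map[String, Int]] =
    simulations.get.get(simulationId)
}
```

Like the transactional Ref, the AtomicReference gives readers a consistent immutable snapshot while writers swap in whole new maps, so no locking is needed on the read path.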

Run Locally

This project requires static assets to be served from one domain (preferably a content delivery network) and the application to be served from another domain. You can debug both servers locally if you wish.

  1. Clone this git repo.

  2. Compile the app and create a start script:

     sbt stage
  3. Set the PORT environment variable for the static content server:

     export PORT=8080
  4. Start the local static content server (only needed when debugging it):

     target/start net.interdoodle.example.HttpStaticFileServer
  5. Set the environment variables for the app server:

     export PORT=8585
     export MONGOLAB_URI=mongodb://
     export CONTENT_URL=http://localhost:8080/
  6. Start the app server:

     target/start net.interdoodle.example.AppServer
  7. Run the app:

     sbt run

Run Clients Against Local or Remote Service Instances

  1. Query the JSON service (without the correct Content-Type header there will be no response):

     curl --header "Content-Type:application/json" http://localhost:8585/json
  2. The test script fully exercises the Hanuman web API. It can work against a local Hanuman service instance or a remote service instance at a specified URL. Sample usages:


Run on Heroku

Mike Slinn has deployed the app to Heroku.

You can deploy it to your own Heroku app instance this way:

  1. You can set up AWS CloudFront to act as the content repository, or use another CDN.

  2. Clone the git repo.

  3. Install the Heroku client and set up ssh keys.

  4. Authenticate with Heroku:

     heroku login
  5. Create your new app instance on Heroku:

     heroku create --stack cedar
  6. Add your Heroku app instance as a remote git repository. Substitute your Heroku app instance for hollow-winter-3011:

     git remote add heroku
  7. Set up a Heroku environment variable called CONTENT_URL and point it to your content delivery network. See the Heroku docs.

     heroku config:add CONTENT_URL=
  8. Push the Hanuman app to Heroku; it will automatically be (re)built and run.

     git push heroku master

You can also manually run the sbt console on Heroku:

heroku run sbt console