
Fixed more formatting issues.

1 parent 7231cd2 commit 8397735da45a1335cddacfff694f737f9cfac32b @amcnamara committed Feb 20, 2012
Showing with 2 additions and 2 deletions.
  1. +2 −2 README.md
@@ -1,7 +1,7 @@
Hangman
=======
-This is a basic predictive engine for [Hangman]("http://en.wikipedia.org/wiki/Hangman_(game)") guessing strategies.
+This is a basic predictive engine for [Hangman](http://en.wikipedia.org/wiki/Hangman\_\(game\)) guessing strategies.
Analysis
--------
@@ -16,7 +16,7 @@ The strategy is based on the following formula, where:
<img src="http://cloud.github.com/downloads/amcnamara/Hangman/CodeCogsEqn.gif">
-The idea is to pick a guess character that will return the most possible information (based on returned character maskings, and the possible subset of words that each of those groups can eliminate). There are two ways to attack this problem: the first is to get as even a distribution as possible across the possible sub-maskings (for t=100 and d=4, n_(1..4)=25,25,25,25 is far more valuable than n_(1..4)=97,1,1,1), and the second is to give preferential weighting to groups which are more granular (higher d). The summation in the formula above will tend towards zero as the sub-groups tend towards an even and constant distribution, and the weighting on the right will tend towards c as the granularity increases; the guess for which this function is minimal will be the optimal pick.
+The idea is to pick a guess character that will return the most possible information (based on returned character maskings, and the possible subset of words that each of those groups can eliminate). There are two ways to attack this problem: the first is to get as even a distribution as possible across the possible sub-maskings (for t=100 and d=4, n\_(1...4)=25,25,25,25 is far more valuable than n\_(1...4)=97,1,1,1), and the second is to give preferential weighting to groups which are more granular (higher d). The summation in the formula above will tend towards zero as the sub-groups tend towards an even and constant distribution, and the weighting on the right will tend towards c as the granularity increases; the guess for which this function is minimal will be the optimal pick.
Improvements
------------
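
As a hedged illustration of the strategy paragraph in the diff above: the actual scoring function is only given as an image (CodeCogsEqn.gif), so the sketch below substitutes a plain Shannon-entropy score over the masking groups. It is not the repository's implementation, and the function names, word list, and alphabet are assumptions made for the example.

```python
from collections import defaultdict
import math

def mask_for(word, guess):
    """Return the masking this guess would reveal for a word, e.g. 'b' against 'bubble' -> 'b_bb__'."""
    return "".join(c if c == guess else "_" for c in word)

def best_guess(candidates, tried):
    """Pick the untried letter whose maskings split the candidate words most informatively.

    A plain Shannon-entropy score stands in here for the formula in the image:
    an even split across many distinct maskings scores highest.
    """
    total = len(candidates)
    best_letter, best_score = None, -1.0
    for letter in "abcdefghijklmnopqrstuvwxyz":
        if letter in tried:
            continue
        groups = defaultdict(int)
        for word in candidates:
            groups[mask_for(word, letter)] += 1
        # Entropy of the masking distribution; a single group (no information) scores 0.
        score = -sum((n / total) * math.log2(n / total) for n in groups.values())
        if score > best_score:
            best_letter, best_score = letter, score
    return best_letter

# Toy usage: the letter that carves these candidates into the most even groups wins.
print(best_guess(["cat", "cot", "cut", "cap", "car"], tried={"c"}))
```

An even split across many masking groups maximizes the entropy term, which mirrors the paragraph's two goals (even distribution and higher granularity d); the formula in the image instead drives its summation towards zero and takes the minimum, but both reward the same kind of split.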
