removing trailing spaces

commit 31ac0e580c9df236e66dd804d98d04b85d4924c0 (1 parent: 248104d)
authored by @mmaker
Showing with 4,469 additions and 4,510 deletions.
  1. +13 −13 LICENSE
  2. +3 −3 README.TXT
  3. +12 −12 docs/code2tut.py
  4. BIN  docs/documentation.pdf
  5. BIN  docs/html/_images/dataprocessing_flowchart.jpg
  6. BIN  docs/html/_images/rl.png
  7. +11 −11 docs/html/_sources/advanced/fast-pybrain.txt
  8. +63 −63 docs/html/_sources/advanced/ode.txt
  9. +1 −1  docs/html/_sources/api/datasets/importancedatasets.txt
  10. +3 −3 docs/html/_sources/api/datasets/sequentialdataset.txt
  11. +23 −23 docs/html/_sources/api/optimization/optimization.txt
  12. +1 −2  docs/html/_sources/api/rl/actionvalues.txt
  13. +5 −5 docs/html/_sources/api/rl/agents.txt
  14. +3 −4 docs/html/_sources/api/rl/explorers.txt
  15. +14 −14 docs/html/_sources/api/rl/learners.txt
  16. +1 −1  docs/html/_sources/api/rl/tasks.txt
  17. +8 −8 docs/html/_sources/api/structure/connections.txt
  18. +1 −1  docs/html/_sources/api/structure/evolvables.txt
  19. +2 −2 docs/html/_sources/api/supervised/knn/lsh/nearoptimal.txt
  20. +3 −4 docs/html/_sources/api/supervised/trainers.txt
  21. +5 −5 docs/html/_sources/index.txt
  22. +11 −11 docs/html/_sources/quickstart/dataset.txt
  23. +16 −16 docs/html/_sources/quickstart/network.txt
  24. +8 −8 docs/html/_sources/quickstart/training.txt
  25. +28 −28 docs/html/_sources/tutorial/datasets.txt
  26. +28 −28 docs/html/_sources/tutorial/extending-structure.txt
  27. +22 −22 docs/html/_sources/tutorial/fnn.txt
  28. +13 −13 docs/html/_sources/tutorial/intro.txt
  29. +23 −23 docs/html/_sources/tutorial/netmodcon.txt
  30. +30 −30 docs/html/_sources/tutorial/optimization.txt
  31. +20 −20 docs/html/_sources/tutorial/reinforcement-learning.txt
  32. +6 −6 docs/html/advanced/fast-pybrain.html
  33. +6 −6 docs/html/advanced/ode.html
  34. +11 −11 docs/html/api/datasets/classificationdataset.html
  35. +6 −6 docs/html/api/datasets/importancedatasets.html
  36. +11 −11 docs/html/api/datasets/sequentialdataset.html
  37. +11 −11 docs/html/api/datasets/superviseddataset.html
  38. +13 −13 docs/html/api/optimization/optimization.html
  39. +6 −6 docs/html/api/rl/actionvalues.html
  40. +8 −8 docs/html/api/rl/agents.html
  41. +7 −7 docs/html/api/rl/experiments.html
  42. +15 −15 docs/html/api/rl/explorers.html
  43. +16 −16 docs/html/api/rl/learners.html
  44. +12 −12 docs/html/api/rl/tasks.html
  45. +11 −11 docs/html/api/structure/connections.html
  46. +7 −7 docs/html/api/structure/evolvables.html
  47. +7 −7 docs/html/api/structure/modules.html
  48. +9 −9 docs/html/api/structure/networks.html
  49. +9 −9 docs/html/api/supervised/knn/lsh/nearoptimal.html
  50. +19 −19 docs/html/api/supervised/svm.html
  51. +18 −18 docs/html/api/supervised/trainers.html
  52. +17 −17 docs/html/api/tools.html
  53. +8 −8 docs/html/api/utilities.html
  54. +9 −9 docs/html/genindex.html
  55. +6 −6 docs/html/index.html
  56. +7 −7 docs/html/modindex.html
  57. +6 −6 docs/html/quickstart/dataset.html
  58. +6 −6 docs/html/quickstart/network.html
  59. +6 −6 docs/html/quickstart/training.html
  60. +9 −9 docs/html/search.html
  61. +6 −6 docs/html/tutorial/datasets.html
  62. +6 −6 docs/html/tutorial/extending-structure.html
  63. +6 −6 docs/html/tutorial/fnn.html
  64. +6 −6 docs/html/tutorial/intro.html
  65. +6 −6 docs/html/tutorial/netmodcon.html
  66. +6 −6 docs/html/tutorial/optimization.html
  67. +6 −6 docs/html/tutorial/reinforcement-learning.html
  68. +11 −11 docs/sphinx/advanced/fast-pybrain.txt
  69. +63 −63 docs/sphinx/advanced/ode.txt
  70. +1 −1  docs/sphinx/api/datasets/importancedatasets.txt
  71. +3 −3 docs/sphinx/api/datasets/sequentialdataset.txt
  72. +23 −23 docs/sphinx/api/optimization/optimization.txt
  73. +1 −2  docs/sphinx/api/rl/actionvalues.txt
  74. +5 −5 docs/sphinx/api/rl/agents.txt
  75. +3 −4 docs/sphinx/api/rl/explorers.txt
  76. +14 −14 docs/sphinx/api/rl/learners.txt
  77. +1 −1  docs/sphinx/api/rl/tasks.txt
  78. +8 −8 docs/sphinx/api/structure/connections.txt
  79. +1 −1  docs/sphinx/api/structure/evolvables.txt
  80. +2 −2 docs/sphinx/api/supervised/knn/lsh/nearoptimal.txt
  81. +3 −4 docs/sphinx/api/supervised/trainers.txt
  82. +1 −1  docs/sphinx/autodoc_hack.py
  83. +5 −5 docs/sphinx/index.txt
  84. BIN  docs/sphinx/pics/dataprocessing_flowchart.jpg
  85. BIN  docs/sphinx/pics/rl.png
  86. +11 −11 docs/sphinx/quickstart/dataset.txt
  87. +16 −16 docs/sphinx/quickstart/network.txt
  88. +8 −8 docs/sphinx/quickstart/training.txt
  89. +28 −28 docs/sphinx/tutorial/datasets.txt
  90. +28 −28 docs/sphinx/tutorial/extending-structure.txt
  91. +22 −22 docs/sphinx/tutorial/fnn.txt
  92. +13 −13 docs/sphinx/tutorial/intro.txt
  93. +23 −23 docs/sphinx/tutorial/netmodcon.txt
  94. +30 −30 docs/sphinx/tutorial/optimization.txt
  95. +20 −20 docs/sphinx/tutorial/reinforcement-learning.txt
  96. +18 −18 docs/tutorials/blackboxoptimization.py
  97. +21 −21 docs/tutorials/fnn.py
  98. +10 −10 docs/tutorials/networks.py
  99. +5 −5 docs/tutorials/rl.py
  100. +1 −1  examples/optimization/benchmarkplots.py
  101. +2 −2 examples/optimization/multiobjective/nsga2.py
  102. +15 −15 examples/optimization/optimizerinterface.py
  103. +2 −2 examples/optimization/optimizers_for_rl.py
  104. +2 −2 examples/rl/environments/capturegame/evolvingplayer.py
  105. +1 −1  examples/rl/environments/capturegame/minitournament.py
  106. +1 −2  examples/rl/environments/capturegame/pente.py
  107. +1 −1  examples/rl/environments/cartpole/cart_all.py
  108. +4 −4 examples/rl/environments/cartpole/play_cartpole.py
  109. +1 −1  examples/rl/environments/maze/td.py
  110. +2 −2 examples/rl/environments/ode/acrobot_pgpe.py
  111. +2 −2 examples/rl/environments/ode/ccrl_plate_pgpe.py
  112. +3 −3 examples/rl/environments/ode/johnnie_pgpe.py
  113. +2 −2 examples/rl/environments/ode/johnnie_reinforce.py
  114. +1 −1  examples/rl/environments/shipsteer/shipbench_pgpe.py
  115. +4 −4 examples/rl/valuebased/nfq.py
  116. +1 −1  examples/rl/valuebased/td.py
  117. +3 −3 examples/supervised/backprop/backpropanbncn.py
  118. +3 −3 examples/supervised/backprop/backpropxor.py
  119. +3 −4 examples/supervised/backprop/datasets/anbncn.py
  120. +1 −1  examples/supervised/backprop/datasets/parity.py
  121. +1 −2  examples/supervised/backprop/datasets/xor.py
  122. +5 −5 examples/supervised/backprop/parityrnn.py
  123. +5 −5 examples/supervised/neuralnets+svm/datasets/datagenerator.py
  124. +4 −4 examples/supervised/neuralnets+svm/example_svm.py
  125. +6 −6 examples/unsupervised/gp.py
  126. +5 −5 examples/unsupervised/kohonen.py
  127. +3 −3 examples/unsupervised/rbm.py
  128. +50 −50 pybrain/auxiliary/gaussprocess.py
  129. +37 −38 pybrain/auxiliary/gradientdescent.py
  130. +10 −10 pybrain/auxiliary/importancemixing.py
  131. +3 −3 pybrain/auxiliary/kmeans.py
  132. +12 −12 pybrain/auxiliary/pca.py
  133. +50 −50 pybrain/datasets/classification.py
  134. +68 −68 pybrain/datasets/dataset.py
  135. +5 −5 pybrain/datasets/importance.py
  136. +8 −8 pybrain/datasets/reinforcement.py
  137. +39 −39 pybrain/datasets/sequential.py
  138. +25 −26 pybrain/datasets/supervised.py
  139. +8 −9 pybrain/datasets/unsupervised.py
  140. +1 −1  pybrain/optimization/__init__.py
  141. +10 −10 pybrain/optimization/distributionbased/cmaes.py
  142. +14 −15 pybrain/optimization/distributionbased/distributionbased.py
  143. +48 −48 pybrain/optimization/distributionbased/fem.py
  144. +19 −19 pybrain/optimization/distributionbased/nes.py
  145. +67 −67 pybrain/optimization/distributionbased/ves.py
  146. +50 −51 pybrain/optimization/distributionbased/xnes.py
  147. +12 −12 pybrain/optimization/finitedifference/fd.py
  148. +1 −1  pybrain/optimization/finitedifference/pgpe.py
  149. +18 −18 pybrain/optimization/finitedifference/spsa.py
  150. +19 −20 pybrain/optimization/hillclimber.py
  151. +1 −1  pybrain/optimization/memetic/innerinversememetic.py
  152. +3 −3 pybrain/optimization/memetic/innermemetic.py
  153. +1 −2  pybrain/optimization/memetic/inversememetic.py
  154. +12 −13 pybrain/optimization/memetic/memetic.py
  155. +8 −9 pybrain/optimization/neldermead.py
  156. +1 −2  pybrain/optimization/optimizer.py
  157. +1 −1  pybrain/optimization/populationbased/__init__.py
  158. +49 −49 pybrain/optimization/populationbased/coevolution/coevolution.py
  159. +17 −17 pybrain/optimization/populationbased/coevolution/competitivecoevolution.py
  160. +10 −11 pybrain/optimization/populationbased/coevolution/multipopulationcoevolution.py
  161. +26 −26 pybrain/optimization/populationbased/es.py
  162. +9 −9 pybrain/optimization/populationbased/evolution.py
  163. +20 −21 pybrain/optimization/populationbased/ga.py
  164. +14 −14 pybrain/optimization/populationbased/multiobjective/nsga2.py
  165. +31 −31 pybrain/optimization/populationbased/pso.py
  166. +7 −8 pybrain/optimization/randomsearch.py
  167. +7 −7 pybrain/rl/agents/agent.py
  168. +24 −24 pybrain/rl/agents/learning.py
  169. +21 −21 pybrain/rl/agents/logging.py
  170. +1 −1  pybrain/rl/environments/cartpole/__init__.py
  171. +40 −40 pybrain/rl/environments/cartpole/balancetask.py
  172. +29 −29 pybrain/rl/environments/cartpole/cartpole.py
  173. +9 −9 pybrain/rl/environments/cartpole/doublepole.py
  174. +5 −5 pybrain/rl/environments/cartpole/fast_version/cartpole.cpp
  175. +4 −4 pybrain/rl/environments/cartpole/fast_version/cartpolecompile.py
  176. +19 −19 pybrain/rl/environments/cartpole/fast_version/cartpoleenv.py
  177. +11 −11 pybrain/rl/environments/cartpole/fast_version/cartpolewrap.pyx
  178. +9 −9 pybrain/rl/environments/cartpole/nonmarkovdoublepole.py
  179. +7 −7 pybrain/rl/environments/cartpole/nonmarkovpole.py
  180. +15 −15 pybrain/rl/environments/cartpole/renderer.py
  181. +18 −18 pybrain/rl/environments/environment.py
  182. +17 −17 pybrain/rl/environments/episodic.py
  183. +5 −6 pybrain/rl/environments/fitnessevaluator.py
  184. +31 −31 pybrain/rl/environments/flexcube/environment.py
  185. +28 −28 pybrain/rl/environments/flexcube/objects3d.py
  186. +8 −8 pybrain/rl/environments/flexcube/sensors.py
  187. +29 −29 pybrain/rl/environments/flexcube/tasks.py
  188. +2 −2 pybrain/rl/environments/flexcube/viewer.py
  189. +10 −11 pybrain/rl/environments/functions/function.py
  190. +23 −23 pybrain/rl/environments/functions/multimodal.py
  191. +20 −21 pybrain/rl/environments/functions/multiobjective.py
  192. +15 −15 pybrain/rl/environments/functions/transformations.py
  193. +9 −10 pybrain/rl/environments/functions/unbounded.py
  194. +15 −16 pybrain/rl/environments/functions/unimodal.py
  195. +8 −8 pybrain/rl/environments/graphical.py
  196. +22 −22 pybrain/rl/environments/mazes/maze.py
  197. +7 −7 pybrain/rl/environments/mazes/polarmaze.py
  198. +5 −5 pybrain/rl/environments/mazes/tasks/cheesemaze.py
  199. +11 −11 pybrain/rl/environments/mazes/tasks/maze.py
  200. +12 −12 pybrain/rl/environments/mazes/tasks/maze4x3.py
  201. +3 −4 pybrain/rl/environments/mazes/tasks/maze89state.py
  202. +8 −8 pybrain/rl/environments/mazes/tasks/mdp.py
  203. +9 −9 pybrain/rl/environments/mazes/tasks/pomdp.py
  204. +18 −18 pybrain/rl/environments/mazes/tasks/shuttle.py
  205. +9 −10 pybrain/rl/environments/mazes/tasks/tiger.py
  206. +10 −10 pybrain/rl/environments/mazes/tasks/tmaze.py
  207. +1 −1  pybrain/rl/environments/ode/__init__.py
  208. +11 −11 pybrain/rl/environments/ode/actuators.py
  209. +57 −57 pybrain/rl/environments/ode/environment.py
  210. +7 −7 pybrain/rl/environments/ode/instances/acrobot.py
  211. +17 −17 pybrain/rl/environments/ode/instances/ccrl.py
  212. +8 −8 pybrain/rl/environments/ode/instances/johnnie.py
  213. +12 −12 pybrain/rl/environments/ode/models/acrobot.xode
  214. +12 −12 pybrain/rl/environments/ode/models/acroside.xode
  215. +12 −12 pybrain/rl/environments/ode/models/acrotop.xode
  216. +415 −415 pybrain/rl/environments/ode/models/arm.xode
  217. +3 −3 pybrain/rl/environments/ode/models/box-sphere.xode
  218. +9 −9 pybrain/rl/environments/ode/models/crawler.xode
  219. +361 −361 pybrain/rl/environments/ode/models/hand.xode
  220. +7 −7 pybrain/rl/environments/ode/models/johnnie.xode
  221. +6 −6 pybrain/rl/environments/ode/models/octacrawl.xode
  222. +3 −3 pybrain/rl/environments/ode/models/sphere-walker.xode
  223. +5 −5 pybrain/rl/environments/ode/sensors.py
  224. +5 −5 pybrain/rl/environments/ode/tasks/acrobot.py
  225. +12 −12 pybrain/rl/environments/ode/tasks/ccrl.py
  226. +30 −30 pybrain/rl/environments/ode/tasks/johnnie.py
  227. +2 −2 pybrain/rl/environments/ode/tools/configgrab.py
  228. +6 −6 pybrain/rl/environments/ode/tools/xmltools.py
  229. +1 −1  pybrain/rl/environments/ode/tools/xodetools.py
  230. +46 −46 pybrain/rl/environments/ode/viewer.py
  231. +12 −12 pybrain/rl/environments/ode/xode_changes/body.py
  232. +13 −14 pybrain/rl/environments/renderer.py
  233. +8 −8 pybrain/rl/environments/serverinterface.py
  234. +13 −13 pybrain/rl/environments/shipsteer/northwardtask.py
  235. +30 −30 pybrain/rl/environments/shipsteer/shipsteer.py
  236. +49 −49 pybrain/rl/environments/shipsteer/viewer.py
  237. +9 −9 pybrain/rl/environments/simple/environment.py
  238. +19 −19 pybrain/rl/environments/simple/renderer.py
  239. +7 −7 pybrain/rl/environments/simple/tasks.py
  240. +4 −4 pybrain/rl/environments/simplerace/simplecontroller.py
  241. +9 −9 pybrain/rl/environments/simplerace/simpleracetask.py
  242. +13 −13 pybrain/rl/environments/simplerace/simpleracetcp.py
  243. +19 −19 pybrain/rl/environments/task.py
  244. +45 −45 pybrain/rl/environments/twoplayergames/capturegame.py
  245. +3 −3 pybrain/rl/environments/twoplayergames/capturegameplayers/captureplayer.py
  246. +8 −8 pybrain/rl/environments/twoplayergames/capturegameplayers/clientwrapper.py
  247. +4 −4 pybrain/rl/environments/twoplayergames/capturegameplayers/killing.py
  248. +9 −9 pybrain/rl/environments/twoplayergames/capturegameplayers/moduledecision.py
  249. +2 −2 pybrain/rl/environments/twoplayergames/capturegameplayers/nonsuicide.py
  250. +1 −1  pybrain/rl/environments/twoplayergames/capturegameplayers/randomplayer.py
  251. +30 −30 pybrain/rl/environments/twoplayergames/gomoku.py
  252. +2 −3 pybrain/rl/environments/twoplayergames/gomokuplayers/gomokuplayer.py
  253. +3 −3 pybrain/rl/environments/twoplayergames/gomokuplayers/killing.py
  254. +9 −9 pybrain/rl/environments/twoplayergames/gomokuplayers/moduledecision.py
  255. +1 −1  pybrain/rl/environments/twoplayergames/gomokuplayers/randomplayer.py
  256. +20 −20 pybrain/rl/environments/twoplayergames/pente.py
  257. +26 −26 pybrain/rl/environments/twoplayergames/tasks/capturetask.py
  258. +26 −26 pybrain/rl/environments/twoplayergames/tasks/gomokutask.py
  259. +29 −29 pybrain/rl/environments/twoplayergames/tasks/handicaptask.py
  260. +2 −3 pybrain/rl/environments/twoplayergames/tasks/pentetask.py
  261. +22 −23 pybrain/rl/environments/twoplayergames/tasks/relativegomokutask.py
  262. +26 −26 pybrain/rl/environments/twoplayergames/tasks/relativetask.py
  263. +11 −12 pybrain/rl/environments/twoplayergames/twoplayergame.py
  264. +3 −3 pybrain/rl/experiments/continuous.py
  265. +7 −8 pybrain/rl/experiments/episodic.py
  266. +6 −7 pybrain/rl/experiments/queued.py
  267. +20 −20 pybrain/rl/experiments/tournament.py
  268. +10 −10 pybrain/rl/explorers/continuous/normal.py
  269. +14 −14 pybrain/rl/explorers/continuous/sde.py
  270. +11 −11 pybrain/rl/explorers/discrete/boltzmann.py
  271. +6 −6 pybrain/rl/explorers/discrete/discrete.py
  272. +9 −10 pybrain/rl/explorers/discrete/discretesde.py
  273. +6 −6 pybrain/rl/explorers/discrete/egreedy.py
  274. +2 −2 pybrain/rl/explorers/explorer.py
  275. +3 −3 pybrain/rl/learners/directsearch/directsearch.py
  276. +5 −5 pybrain/rl/learners/directsearch/enac.py
  277. +5 −5 pybrain/rl/learners/directsearch/gpomdp.py
  278. +36 −36 pybrain/rl/learners/directsearch/policygradient.py
  279. +6 −6 pybrain/rl/learners/directsearch/reinforce.py
  280. +45 −45 pybrain/rl/learners/directsearch/rwr.py
  281. +19 −19 pybrain/rl/learners/learner.py
  282. +9 −9 pybrain/rl/learners/meta/levinsearch.py
  283. +13 −13 pybrain/rl/learners/valuebased/interface.py
  284. +10 −10 pybrain/rl/learners/valuebased/nfq.py
  285. +12 −12 pybrain/rl/learners/valuebased/q.py
  286. +7 −7 pybrain/rl/learners/valuebased/qlambda.py
  287. +10 −10 pybrain/rl/learners/valuebased/sarsa.py
  288. +7 −7 pybrain/rl/learners/valuebased/valuebased.py
  289. +23 −23 pybrain/structure/connections/connection.py
  290. +7 −7 pybrain/structure/connections/full.py
  291. +6 −6 pybrain/structure/connections/fullnotself.py
  292. +3 −3 pybrain/structure/connections/identity.py
  293. +5 −5 pybrain/structure/connections/linear.py
  294. +4 −4 pybrain/structure/connections/permutation.py
  295. +21 −21 pybrain/structure/connections/shared.py
  296. +4 −4 pybrain/structure/connections/subsampling.py
  297. +26 −27 pybrain/structure/evolvables/cheaplycopiable.py
  298. +8 −9 pybrain/structure/evolvables/evolvable.py
  299. +13 −14 pybrain/structure/evolvables/maskedmodule.py
  300. +22 −23 pybrain/structure/evolvables/maskedparameters.py
Note: the entire diff could not be displayed because too many files (422) changed; only the first 300 changed files are listed above.
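Every hunk reproduced below is a whitespace-only change: each removed/added line pair differs only in trailing spaces, or in blank lines that previously contained spaces. For reference, a cleanup of this kind is usually scripted rather than done by hand; the following sketch is a hypothetical helper written for illustration only (it is not part of this commit), assuming a simple whitelist of text-file extensions so that the binary files listed above are left untouched.

    #!/usr/bin/env python
    """Illustrative helper (not from this commit): strip trailing whitespace
    from text files under a directory tree."""
    import os
    import sys

    # Assumed whitelist; binaries such as .pdf, .jpg and .png are skipped.
    TEXT_EXTENSIONS = set(['.py', '.txt', '.rst', '.html', '.xode', '.cpp', '.pyx'])

    def strip_trailing_whitespace(root='.'):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1] not in TEXT_EXTENSIONS:
                    continue
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    lines = f.readlines()
                # rstrip() drops trailing spaces/tabs along with the newline,
                # which is then re-added; files lacking a final newline gain one.
                cleaned = [line.rstrip() + '\n' for line in lines]
                if cleaned != lines:
                    with open(path, 'w') as f:
                        f.writelines(cleaned)

    if __name__ == '__main__':
        strip_trailing_whitespace(sys.argv[1] if len(sys.argv) > 1 else '.')

Invoked as ``python strip_whitespace.py pybrain/`` (the script name here is made up), it rewrites only files whose content actually changes.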
26 LICENSE
@@ -4,22 +4,22 @@ All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
- * Redistributions of source code must retain the above copyright notice,
+ * Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
+ * Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
- * Neither the name of PyBrain nor the names of its contributors
- may be used to endorse or promote products derived from this software
+ * Neither the name of PyBrain nor the names of its contributors
+ may be used to endorse or promote products derived from this software
without specific prior written permission.
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6 README.TXT
@@ -7,15 +7,15 @@ INSTALLATION
Quick answer: make sure you have SciPy installed, then
python setup.py install
-Longer answer: (if the above was any trouble) we keep more
-detailed installation instructions (including those
+Longer answer: (if the above was any trouble) we keep more
+detailed installation instructions (including those
for the dependencies) up-to-date in a wiki at:
http://wiki.github.com/pybrain/pybrain/installation
DOCUMENTATION
-------------
-Please read
+Please read
docs/documentation.pdf
or browse
docs/html/*
24 docs/code2tut.py
@@ -3,20 +3,20 @@
Synopsis:
code2tut.py basename
-
+
Output:
Will convert tutorials/basename.py into sphinx/basename.txt
-
+
Conventions:
1. All textual comments must be enclosed in triple quotation marks.
2. First line of file is ignored, second line of file shall contain title in "",
the following lines starting with # are ignored.
-3. Lines following paragraph-level markup (e.g. .. seealso::) must be indented.
+3. Lines following paragraph-level markup (e.g. .. seealso::) must be indented.
Paragraph ends with a blank line.
4. If the code after a comment starts with a higher indentation level, you have
- to manually edit the resulting file, e.g. by inserting " ..." at the
+ to manually edit the resulting file, e.g. by inserting " ..." at the
beginning of these sections.
-
+
See tutorials/fnn.py for example.
"""
@@ -31,7 +31,7 @@
f_out = file(os.path.join("sphinx",sys.argv[1])+".txt", "w+")
# write the header
-f_out.write(".. _"+sys.argv[1]+":\n\n")
+f_out.write(".. _"+sys.argv[1]+":\n\n")
f_in.readline() # ######################
line = f_in.readline()
line= line.split('"')[1] # # PyBrain Tutorial "Classification ..."
@@ -52,13 +52,13 @@
continue
elif begin:
begin = False
-
+
if '"""' in line:
for i in range(line.count('"""')):
comment = 1 - comment
if line.count('"""')==2:
linecomment = True
-
+
line = line.replace('"""','')
if comment==0:
line += '::'
@@ -67,7 +67,7 @@
elif comment==0 and line!='\n':
line = " "+line
-
+
if line.startswith('..'):
inblock = True
elif line=="\n":
@@ -75,14 +75,14 @@
if (comment or linecomment) and not inblock:
line = line.strip()+"\n"
-
+
if line.endswith("::"):
line +='\n\n'
elif line.endswith("::\n"):
line +='\n'
-
+
f_out.write(line)
-
+
f_in.close()
f_out.close()
BIN  docs/documentation.pdf
Binary file not shown
BIN  docs/html/_images/dataprocessing_flowchart.jpg
Binary file not shown
BIN  docs/html/_images/rl.png
Binary file not shown
22 docs/html/_sources/advanced/fast-pybrain.txt
@@ -8,14 +8,14 @@ Writing a neural networking framework in Python imposes certain speed
restrictions upon it. Python is just not as fast as languages such as C++
or Fortran.
-Due to this, PyBrain has its own spin-off called arac, which is a
-re-implementation of its neural networking facilities that integrates
+Due to this, PyBrain has its own spin-off called arac, which is a
+re-implementation of its neural networking facilities that integrates
transparently with it.
Depending on the configuration of your system, speedups of **5-10x faster** can
-be expected. This speedup might be even higher (ranging into the hundreds) if
+be expected. This speedup might be even higher (ranging into the hundreds) if
you use sophisticated topologies such as MDRNNs. If you want some numbers on
-your system, there is a comparison script shipped with PyBrain at
+your system, there is a comparison script shipped with PyBrain at
``examples/arac/benchmark.py``.
@@ -50,13 +50,13 @@ As you can see by examining the network, it is a special class:
>>> n
<_FeedForwardNetwork '_FeedForwardNetwork-8'>
-which is prefixed with an underscore, the Python convention for naming C
-implementations of already existing classes. We can import these classes
+which is prefixed with an underscore, the Python convention for naming C
+implementations of already existing classes. We can import these classes
directly from arac and use them in the same way as PyBrain classes:
>>> from arac.pybrainbridge import _FeedForwardNetwork, _RecurrentNetwork
-The third method is to construct a network as a PyBrain network and call the
+The third method is to construct a network as a PyBrain network and call the
method ``convertToFastNetwork`` afterwards:
>>> n = buildNetwork(2, 3, 1, fast=False)
@@ -70,10 +70,10 @@ are not reflected in the arac network.
Limitations
-----------
-Since arac is implemented in C++ and currently maintained by only a single
+Since arac is implemented in C++ and currently maintained by only a single
person, arac development is likely to be slower than PyBrain's. This might lead
-to certain features (e.g. layer types) to be implemented later than the pure
-python versions. This also applies to custom layer types. As soon as you have
-your layer type, you will not be able to use fast networks anymore -- except if
+to certain features (e.g. layer types) to be implemented later than the pure
+python versions. This also applies to custom layer types. As soon as you have
+your layer type, you will not be able to use fast networks anymore -- except if
you chose to also implement them for arac yourself.
126 docs/html/_sources/advanced/ode.txt
@@ -36,16 +36,16 @@ You can test if all your settings are ok by starting following example:
python viewer.py
-.. note::
+.. note::
On Linux, if that gives rise to a segmentation fault, try installing ``xorg-driver-fglrx``
Existing ODE Environments that are tested are:
- * Johnnie (a biped humanoid robot modeled after the real
+ * Johnnie (a biped humanoid robot modeled after the real
robot Johnnie (http://www.amm.mw.tum.de)
- * CCRL (a robot with two 7 DoF Arms and simple grippers, modeled
+ * CCRL (a robot with two 7 DoF Arms and simple grippers, modeled
after the real robot at the CCRL of TU Munich. (http://www.lsr.ei.tum.de/)
- * PencilBalancer (a robot that balances pencils in a 2D way, modeled
+ * PencilBalancer (a robot that balances pencils in a 2D way, modeled
after the real robot from Jörg Conradt. (http://www.ini.uzh.ch/~conradt/Projects/PencilBalancer/)
.. ToDo: check the rest of the environments.
@@ -56,42 +56,42 @@ Existing ODE Environments that are tested are:
Creating your own learning task in an existing ODE environment
-----------------------------------------------------------------------
-This tutorial walks you through the process of setting up a
+This tutorial walks you through the process of setting up a
new task within an existing ODE Environment.
It assumes that you have taken the steps described in the section :ref:`existingode`.
For all ODE environments there can be found a standard task in
``pybrain/rl/environments/ode/tasks``
-We take as an example again the Johnnie environment. You will find
+We take as an example again the Johnnie environment. You will find
that the first class in the johnnie.py file in the above described location is named
JohnnieTask and inherits from EpisodicTask.
The necessary methods that you need to define your own task are described already in that basic class:
* ``__init__(self, env)`` - the constructor
- * ``performAction(self, action)`` - processes and filters the output from the controller
+ * ``performAction(self, action)`` - processes and filters the output from the controller
and communicates it to the environment.
- * ``isFinished(self)`` - checks if the maximum number of timesteps has been reached
+ * ``isFinished(self)`` - checks if the maximum number of timesteps has been reached
or if other break condition has been met.
* ``res(self)`` - resets counters rewards and similar.
-If we take a look at the StandingTask (the next class in the file) we see
+If we take a look at the StandingTask (the next class in the file) we see
that only little has to be done to create an own task.
First of all the class must inherit from JohnnieTask.
-Then, the constructor has to be overwritten to declare some variables and
-constants for the specific task. In this case there were some additional
+Then, the constructor has to be overwritten to declare some variables and
+constants for the specific task. In this case there were some additional
position sensors added and normalized for reward calculation.
-As normally last step the getReward Method has to be overwritten, because
-the reward definition is normally what defines the task. In this case just
-the vertical head position is returned (with some clipping to prevent the
-robot from jumping to get more reward). That is already enough to create a
-task that is sufficiently defined to make a proper learning method (like
-PGPE in the above mentioned and testable example johnnie_pgpe.py) learn a
+As normally last step the getReward Method has to be overwritten, because
+the reward definition is normally what defines the task. In this case just
+the vertical head position is returned (with some clipping to prevent the
+robot from jumping to get more reward). That is already enough to create a
+task that is sufficiently defined to make a proper learning method (like
+PGPE in the above mentioned and testable example johnnie_pgpe.py) learn a
controller that let the robot stand complete upright without falling.
-For some special cases you maybe are forced to rewrite the performAction
+For some special cases you maybe are forced to rewrite the performAction
method and the isFinished method, but that special cases are out of scope of this HowTo.
-If you need to make such changes and encounter problems please feel
+If you need to make such changes and encounter problems please feel
free to contact the PyBrain mailing list.
@@ -107,46 +107,46 @@ and have taken the necessary steps explained there.
If you want to your own environment you need the following:
* Environment that inherits from ODEEnvironment
- * Agent that inherits from OptimizationAgent
+ * Agent that inherits from OptimizationAgent
* Tasks that inherit from EpisodicTask
For all ODE environments, an instance can be found in ``pybrain/rl/environments/ode/instances/``
-We take as an example again the Johnnie environment. You will find
+We take as an example again the Johnnie environment. You will find
that the first class in the ``johnnie.py`` file in the location described above is named
:class:`JohnnieEnvironment` and inherits from :class:`ODEEnvironment`.
You will see that were is not much to do on the PyBrain side to generate the environment class.
-First loading the corresponding XODE file is necessary to
+First loading the corresponding XODE file is necessary to
provide PyBrain with the specification of the simulation.
How to generate the corresponding XODE file will be shown later in this HowTo.
-Then the standard sensors are added like the JointSensors, the corresponding
+Then the standard sensors are added like the JointSensors, the corresponding
JointVelocitySensors and also the actuators for every joint.
-Because this kind of sensors and actuators are needed in every simulation
+Because this kind of sensors and actuators are needed in every simulation
they are already added in the environment and assumed to exist by later stages of PyBrain.
The next part is a bit more involved.
-First, member variables that state the number
+First, member variables that state the number
of action dimensions and number of sensors have to be set.
::
self.actLen = self.getActionLength()
self.obsLen = len(self.getSensors())
-
-
-Next, 3 lists are generated for every action dimension. The first list
-is called :attr:`torqueList` and states the fraction of
+
+
+Next, 3 lists are generated for every action dimension. The first list
+is called :attr:`torqueList` and states the fraction of
the global maximal force that can bee applied to the joints.
-The second list states the maximum angle, the third list states the
+The second list states the maximum angle, the third list states the
minimum angle for every joint. (:attr:`cHighList` and :attr:`cLowList`) For example:
::
self.tourqueList = array([0.2, 0.2, 0.2, 0.5, 0.5, 2.0, 2.0,2.0,2.0,0.5,0.5],)
self.cHighList = array([1.0, 1.0, 0.5, 0.5, 0.5, 1.5, 1.5,1.5,1.5,0.25,0.25],)
- self.cLowList = array([-0.5, -0.5, -0.5, 0.0, 0.0, 0.0, 0.0,0.0,0.0,-0.25,-0.25],)
+ self.cLowList = array([-0.5, -0.5, -0.5, 0.0, 0.0, 0.0, 0.0,0.0,0.0,-0.25,-0.25],)
-The last thing to do is how much simulation steps ODE should make
+The last thing to do is how much simulation steps ODE should make
before getting an update from the controller and sending new sensor values back, called stepsPerAction.
.. _createinstance:
@@ -155,34 +155,34 @@ Creating your own XODE instance
-----------------------------------------
Now we want to specify a instantiation in a XODE file.
-If you do not know ODE very well,
+If you do not know ODE very well,
you can use a script that is shipped with PyBrain and can be found in
``pybrain/rl/environments/ode/tools/xodetools.py``
-The first part of the file is responsible for parsing the simplified XODE
+The first part of the file is responsible for parsing the simplified XODE
code to a regular XODE file, that can be ignored.
For an example, look at the Johnnie definition by searching for ``class XODEJohnnie(XODEfile)``
-The instantiation of what you want to simulate in ODE is defined in this
+The instantiation of what you want to simulate in ODE is defined in this
tool as a class that inherits from :class:`XODEfile`.
The class consists only of a constructor. Here all parts of the simulated object are defined.
The parts are defined in an global coordinate system. For examples the row
::
- self.insertBody('arm_left','cappedCylinder',[0.25,7.5],5,pos=[2.06,-2.89,0],
+ self.insertBody('arm_left','cappedCylinder',[0.25,7.5],5,pos=[2.06,-2.89,0],
euler=[90,0,0], passSet=['total'], mass=2.473)
-creates the left arm (identifier 'arm_left') of Johnnie as an cylinder with round
-endings ('cappedCylinder') with a diameter of 0.25 and a length of 7.5 ([0.25,7.5])
-with a density of 5 (that will be overwritten if the optional value mass is given
-at the end of the command), an initial position of ``pos = [2.06,-2.89,0]``, turned
-by 90 degrees around the x-Axis (``euler = [90,0,0]``, all capped cylinders are by
-default aligned with the y-Axis) the passSet named 'total' (will be explained
+creates the left arm (identifier 'arm_left') of Johnnie as an cylinder with round
+endings ('cappedCylinder') with a diameter of 0.25 and a length of 7.5 ([0.25,7.5])
+with a density of 5 (that will be overwritten if the optional value mass is given
+at the end of the command), an initial position of ``pos = [2.06,-2.89,0]``, turned
+by 90 degrees around the x-Axis (``euler = [90,0,0]``, all capped cylinders are by
+default aligned with the y-Axis) the passSet named 'total' (will be explained
soon) and the optional mass of the part.
-"passSet" is used to define parts that can penetrate each other.
-That is especially necessary for parts that have a joint together,
-but can also be usable in other cases. All parts that are part of
+"passSet" is used to define parts that can penetrate each other.
+That is especially necessary for parts that have a joint together,
+but can also be usable in other cases. All parts that are part of
the same passSet can penetrate each other. Multiple passSet names can be given delimited by a ",".
Types that are understood by this tool are:
@@ -201,14 +201,14 @@ Types of joints that are understood by this tool are:
.. - ToDo, are there more?
-A joint between two parts is inserted in the model by insertJoint,
+A joint between two parts is inserted in the model by insertJoint,
giving the identifier of the first part, then the identifier of the second part.
-Next the type of joint is stated (e.g. 'hinge'). The axis around the joint will
-rotate is stated like ``axis={'x':1,'y':0,'z':0}`` and the anchor point in global
+Next the type of joint is stated (e.g. 'hinge'). The axis around the joint will
+rotate is stated like ``axis={'x':1,'y':0,'z':0}`` and the anchor point in global
coordinates is defined by something like ``anchor=(2.06,0.86,0)``.
Add all parts and joints for your model.
-Finally with ``centerOn(identifier)`` the camera position is fixed to that part and
+Finally with ``centerOn(identifier)`` the camera position is fixed to that part and
with ``insertFloor(y=??)`` a floor can be added.
Now go to the end of the file and state:
@@ -224,16 +224,16 @@ and execute the file with
And you have created an instantiation of your model that can be read in in the above environment.
-What is missing is a default task for the new environment. In the previous
-"HowTo create your own learning task in an existing ODE environment"
+What is missing is a default task for the new environment. In the previous
+"HowTo create your own learning task in an existing ODE environment"
we saw how such a standard task looks for the Johnnie environment.
-To create our own task we have to create a file with the name of our environment in
+To create our own task we have to create a file with the name of our environment in
``pybrain/rl/environments/ode/tasks/``
The new task has to import the following packages:
from pybrain.rl.environments import EpisodicTask
- from pybrain.rl.environments.ode.sensors import *
+ from pybrain.rl.environments.ode.sensors import *
And whatever is needed from scipy and similar.
@@ -249,7 +249,7 @@ It is important that the constructor of EpisodicTask is called.
The following member variables are mandatory:
::
- self.maxPower = 100.0 #Overall maximal torque - is multiplied with relative max
+ self.maxPower = 100.0 #Overall maximal torque - is multiplied with relative max
#torque for individual joint to get individual max torque
self.reward_history = []
self.count = 0 #timestep counter
@@ -268,7 +268,7 @@ Next the sensor and actuator limits must be set, usually between -1 and 1:
#Angle sensors
for i in range(self.env.actLen):
# Joint velocity sensors
- self.sensor_limits.append((self.env.cLowList[i], self.env.cHighList[i]))
+ self.sensor_limits.append((self.env.cLowList[i], self.env.cHighList[i]))
for i in range(self.env.actLen):
self.sensor_limits.append((-20, 20))
#Normalize all actor dimensions to (-1, 1)
@@ -278,20 +278,20 @@ The next method that is needed is the performAction method, the standard setting
::
def performAction(self, action):
- """ Filtered mapping towards performAction of the underlying environment """
+ """ Filtered mapping towards performAction of the underlying environment """
EpisodicTask.performAction(self, action)
If you want to control the wanted angels instead of the forces you may include this simple PD mechanism:
::
#The joint angles
- isJoints = self.env.getSensorByName('JointSensor')
+ isJoints = self.env.getSensorByName('JointSensor')
#The joint angular velocities
- isSpeeds = self.env.getSensorByName('JointVelocitySensor')
+ isSpeeds = self.env.getSensorByName('JointVelocitySensor')
#norm output to action interval
act = (action+1.0)/2.0*(self.env.cHighList-self.env.cLowList)+self.env.cLowList
- #simple PID
- action = tanh((act - isJoints - isSpeeds) * 16.0) * self.maxPower * self.env.tourqueList
+ #simple PID
+ action = tanh((act - isJoints - isSpeeds) * 16.0) * self.maxPower * self.env.tourqueList
Now we have to define the :meth:`isFinished` method:
::
@@ -316,12 +316,12 @@ Finally we define a :meth:`reset` method:
self.count = 0
self.reward_history.append(self.getTotalReward())
-We don't need a :meth:`getReward` function here, because the method from :class:`EpisodicTask`
+We don't need a :meth:`getReward` function here, because the method from :class:`EpisodicTask`
that returns always 0.0 is taken over. This is the default task that is used to create specific tasks.
Please take a look at :ref:`existinglearning` for how to create a task that gives actual reward.
-If you have done all steps right you now have a new ODE environment with a
+If you have done all steps right you now have a new ODE environment with a
corresponding task that you can test by creating an experiment.
-Or you can try to copy an existing example like the ``johnnie_pgpe.py`` and
+Or you can try to copy an existing example like the ``johnnie_pgpe.py`` and
replace the environment and the task definition with your new environment and task.
2  docs/html/_sources/api/datasets/importancedatasets.txt
@@ -6,5 +6,5 @@
.. automodule:: pybrain.datasets.importance
.. autoclass:: ImportanceDataSet
- :members:
+ :members:
:show-inheritance:
6 docs/html/_sources/api/datasets/sequentialdataset.txt
@@ -6,11 +6,11 @@
.. automodule:: pybrain.datasets.sequential
.. autoclass:: SequentialDataSet
- :members:
+ :members:
:show-inheritance:
-
+
.. note::
- This documentation comprises just a subjective excerpt of available methods.
+ This documentation comprises just a subjective excerpt of available methods.
See the source code for additional functionality.
46 docs/html/_sources/api/optimization/optimization.txt
@@ -7,16 +7,16 @@ The two base classes
.. automodule:: pybrain.optimization.optimizer
.. autoclass:: BlackBoxOptimizer
- :members: __init__, setEvaluator, learn,
- minimize,
+ :members: __init__, setEvaluator, learn,
+ minimize,
maxEvaluations, maxLearningSteps, desiredEvaluation,
verbose, storeAllEvaluations, storeAllEvaluated,
- numParameters
-
+ numParameters
+
.. autoclass:: ContinuousOptimizer
:members: __init__
:show-inheritance:
-
+
General Black-box optimizers
----------------------------
@@ -24,62 +24,62 @@ General Black-box optimizers
.. automodule:: pybrain.optimization
.. autoclass:: RandomSearch
- :members:
+ :members:
.. autoclass:: HillClimber
- :members:
+ :members:
.. autoclass:: StochasticHillClimber
:members: temperature
-
+
Continuous optimizers
---------------------
.. autoclass:: NelderMead
- :members:
+ :members:
.. autoclass:: CMAES
:members:
-
+
.. autoclass:: OriginalNES
:members:
.. autoclass:: ExactNES
:members:
-
+
.. autoclass:: FEM
:members:
-
+
Finite difference methods
^^^^^^^^^^^^^^^^^^^^^^^^^
-
+
.. autoclass:: FiniteDifferences
- :members:
+ :members:
.. autoclass:: PGPE
- :members:
+ :members:
:show-inheritance:
-
+
.. autoclass:: SimpleSPSA
- :members:
+ :members:
:show-inheritance:
Population-based
^^^^^^^^^^^^^^^^
.. autoclass:: ParticleSwarmOptimizer
- :members:
+ :members:
.. autoclass:: GA
- :members:
+ :members:
+
-
Multi-objective Optimization
----------------------------
.. autoclass:: MultiObjectiveGA
- :members:
-
-
+ :members:
+
+
3  docs/html/_sources/api/rl/actionvalues.txt
@@ -9,8 +9,7 @@
.. autoclass:: ActionValueTable
:members:
:show-inheritance:
-
+
.. autoclass:: ActionValueNetwork
:members:
:show-inheritance:
-
10 docs/html/_sources/api/rl/agents.txt
@@ -9,16 +9,16 @@
.. automodule:: pybrain.rl.agents.logging
.. autoclass:: LoggingAgent
- :members:
+ :members:
:show-inheritance:
.. automodule:: pybrain.rl.agents
.. autoclass:: LearningAgent
- :members:
+ :members:
:show-inheritance:
-
+
.. autoclass:: OptimizationAgent
- :members:
+ :members:
:show-inheritance:
-
+
7 docs/html/_sources/api/rl/explorers.txt
@@ -12,13 +12,13 @@ Continuous Explorers
.. automodule:: pybrain.rl.explorers.continuous
.. autoclass:: NormalExplorer
- :members:
+ :members:
.. automodule:: pybrain.rl.explorers.continuous.sde
.. autoclass:: StateDependentExplorer
:members:
-
+
Discrete Explorers
--------------------
@@ -27,7 +27,7 @@ Discrete Explorers
.. autoclass:: DiscreteExplorer
:members: _setModule
:show-inheritance:
-
+
.. automodule:: pybrain.rl.explorers.discrete
.. autoclass:: EpsilonGreedyExplorer
@@ -41,4 +41,3 @@ Discrete Explorers
.. autoclass:: DiscreteStateDependentExplorer
:members: activate, _forwardImplementation
:show-inheritance:
-
28 docs/html/_sources/api/rl/learners.txt
@@ -32,25 +32,25 @@ Value-based Learners
.. autoclass:: ValueBasedLearner
:members:
:show-inheritance:
-
+
.. automodule:: pybrain.rl.learners.valuebased
.. autoclass:: Q
:members:
:show-inheritance:
-
+
.. autoclass:: QLambda
:members:
- :show-inheritance:
+ :show-inheritance:
.. autoclass:: SARSA
:members:
- :show-inheritance:
+ :show-inheritance:
.. autoclass:: NFQ
:members:
- :show-inheritance:
-
+ :show-inheritance:
+
Direct-search Learners
------------------------
@@ -59,22 +59,22 @@ Direct-search Learners
.. autoclass:: PolicyGradientLearner
:members:
- :show-inheritance:
+ :show-inheritance:
.. automodule:: pybrain.rl.learners.directsearch.reinforce
.. autoclass:: Reinforce
:members:
- :show-inheritance:
+ :show-inheritance:
.. automodule:: pybrain.rl.learners.directsearch.enac
.. autoclass:: ENAC
:members:
- :show-inheritance:
-
-
-.. note::
+ :show-inheritance:
+
+
+.. note::
..
- Black-box optimization algorithms can also be seen as direct-search RL algorithms, but are not included here.
-
+ Black-box optimization algorithms can also be seen as direct-search RL algorithms, but are not included here.
+
2  docs/html/_sources/api/rl/tasks.txt
@@ -9,5 +9,5 @@
.. automodule:: pybrain.rl.environments.episodic
.. autoclass:: EpisodicTask
- :members:
+ :members:
:show-inheritance:
16 docs/html/_sources/api/structure/connections.txt
@@ -9,21 +9,21 @@
.. automodule:: pybrain.structure.connections
.. autoclass:: FullConnection
- :members:
+ :members:
:show-inheritance:
-
+
.. autoclass:: IdentityConnection
- :members:
+ :members:
:show-inheritance:
-
+
.. autoclass:: MotherConnection
:members:
:show-inheritance:
-
+
.. autoclass:: SharedConnection
:members: mother
- :show-inheritance:
-
+ :show-inheritance:
+
.. autoclass:: SharedFullConnection
:show-inheritance:
-
+
2  docs/html/_sources/api/structure/evolvables.txt
@@ -4,5 +4,5 @@
.. automodule:: pybrain.structure.evolvables.evolvable
.. autoclass:: Evolvable
- :members:
+ :members:
4 docs/html/_sources/api/supervised/knn/lsh/nearoptimal.txt
@@ -5,8 +5,8 @@
.. autoclass:: MultiDimHash
:members: __init__, findBall, insert, knn
-
-
+
+
.. seealso::
`Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions <http://web.mit.edu/andoni/www/papers/cSquared.pdf>`_
7 docs/html/_sources/api/supervised/trainers.txt
@@ -5,15 +5,14 @@
.. autoclass:: BackpropTrainer
:members: __init__, ds, module, setArgs, setData, testOnClassData, train, trainEpochs, trainOnDataset, trainUntilConvergence
-
+
.. note::
- This documentation comprises just a subjective excerpt of available methods. See the source code for additional functionality.
+ This documentation comprises just a subjective excerpt of available methods. See the source code for additional functionality.
.. autoclass:: RPropMinusTrainer
:members: __init__
-
+
.. note::
See the documentation of :class:`BackpropTrainer` for inherited methods.
-
10 docs/html/_sources/index.txt
@@ -14,7 +14,7 @@ Although the quickstart uses supervised learning with neural networks as an
example, this does not mean that that's it. PyBrain is not only about supervised
learning and neural networks.
-While the quickstart should be read sequentially, the tutorial chapters can
+While the quickstart should be read sequentially, the tutorial chapters can
mostly be read independently of each other.
In case this does not suffice, we also have an API reference, the
@@ -36,10 +36,10 @@ Quick answer:
$ git clone git://github.com/pybrain/pybrain.git
$ python setup.py install
-Long answer:
-We keep more detailed installation instructions (including dependencies)
+Long answer:
+We keep more detailed installation instructions (including dependencies)
up-to-date in a wiki at http://wiki.github.com/pybrain/pybrain/installation.
-
+
Quickstart
@@ -66,7 +66,7 @@ Tutorials
tutorial/optimization
tutorial/reinforcement-learning
tutorial/extending-structure
-
+
Advanced
--------
22 docs/html/_sources/quickstart/dataset.txt
@@ -3,7 +3,7 @@
Building a DataSet
==================
-In order for our networks to learn anything, we need a dataset that contains
+In order for our networks to learn anything, we need a dataset that contains
inputs and targets. PyBrain has the ``pybrain.dataset`` package for this, and we
will use the ``SupervisedDataSet`` class for our needs.
@@ -11,15 +11,15 @@ will use the ``SupervisedDataSet`` class for our needs.
A customized DataSet
--------------------
-The ``SupervisedDataSet`` class is used for standard supervised learning. It
-supports input and target values, whose size we have to specify on object
+The ``SupervisedDataSet`` class is used for standard supervised learning. It
+supports input and target values, whose size we have to specify on object
creation::
>>> from pybrain.datasets import SupervisedDataSet
>>> ds = SupervisedDataSet(2, 1)
-
-Here we have generated a dataset that supports two dimensional inputs and one
-dimensional targets.
+
+Here we have generated a dataset that supports two dimensional inputs and one
+dimensional targets.
Adding samples
@@ -32,22 +32,22 @@ build a dataset for this. We can do this by just adding samples to the dataset:
>>> ds.addSample((0, 1), (1,))
>>> ds.addSample((1, 0), (1,))
>>> ds.addSample((1, 1), (0,))
-
+
Examining the dataset
---------------------
-
+
We now have a dataset that has 4 samples in it. We can check that with python's
idiomatic way of checking the size of something::
>>> len(ds)
4
-
+
We can also iterate over it in the standard way::
>>> for inpt, target in ds:
... print inpt, target
- ...
+ ...
[ 0. 0.] [ 0.]
[ 0. 1.] [ 1.]
[ 1. 0.] [ 1.]
@@ -65,7 +65,7 @@ We can access the input and target field directly as arrays::
[ 1.],
[ 1.],
[ 0.]])
-
+
It is also possible to clear a dataset again, and delete all the values from it:
>>> ds.clear()
32 docs/html/_sources/quickstart/network.txt
@@ -7,26 +7,26 @@ To go through the quickstart interactively, just fire up Python and we will make
the interpreter::
$ python
- Python 2.5.2 (r252:60911, Sep 17 2008, 11:21:23)
+ Python 2.5.2 (r252:60911, Sep 17 2008, 11:21:23)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
-In PyBrain, networks are composed of Modules which are connected with
+In PyBrain, networks are composed of Modules which are connected with
Connections. You can think of a network as a directed acyclic graph, where the
-nodes are Modules and the edges are Connections. This makes PyBrain very
-flexible but it is also not necessary in all cases.
+nodes are Modules and the edges are Connections. This makes PyBrain very
+flexible but it is also not necessary in all cases.
The buildNetwork Shortcut
-------------------------
-Thus, there is a simple way to create networks, which is the ``buildNetwork``
+Thus, there is a simple way to create networks, which is the ``buildNetwork``
shortcut::
>>> from pybrain.tools.shortcuts import buildNetwork
>>> net = buildNetwork(2, 3, 1)
-
+
This call returns a network that has two inputs, three hidden and a single
output neuron. In PyBrain, these layers are :class:`Module` objects and they are
already connected with :class:`FullConnection` objects.
@@ -40,15 +40,15 @@ output::
>>> net.activate([2, 1])
array([-0.98646726])
-
+
For this we use the ``.activate()`` method, which expects a list, tuple or an
-array as input.
+array as input.
Examining the structure
-----------------------
-How can we examine the structure of our network somewhat closer? In PyBrain,
+How can we examine the structure of our network somewhat closer? In PyBrain,
every part of a network has a name by which you can access it. When building
networks with the ``buildNetwork`` shortcut, the parts are named automatically::
@@ -64,7 +64,7 @@ The hidden layers have numbers at the end in order to distinguish between those.
More sophisticated Networks
---------------------------
-
+
Of course, we want more flexibility when building up networks. For instance, the
hidden layer is constructed with the sigmoid squashing function per default:
but in a lot of cases, this is not what we want. We can also supply different
@@ -74,22 +74,22 @@ types of layers::
>>> net = buildNetwork(2, 3, 1, hiddenclass=TanhLayer)
>>> net['hidden0']
<TanhLayer 'hidden0'>
-
-There is more we can do. For example, we can also set a different class for
+
+There is more we can do. For example, we can also set a different class for
the output layer::
>>> from pybrain.structure import SoftmaxLayer
>>> net = buildNetwork(2, 3, 2, hiddenclass=TanhLayer, outclass=SoftmaxLayer)
>>> net.activate((2, 3))
array([ 0.6656323, 0.3343677])
-
+
We can also tell the network to use a bias::
>>> net = buildNetwork(2, 3, 1, bias=True)
>>> net['bias']
<BiasUnit 'bias'>
-
-This approach has of course some restrictions: for example, we can only
-construct a feedforward topology. But it is possible to create very
+
+This approach has of course some restrictions: for example, we can only
+construct a feedforward topology. But it is possible to create very
sophisticated architectures with PyBrain, and it is also one of the library's
strength to do so.
16 docs/html/_sources/quickstart/training.txt
@@ -3,22 +3,22 @@
Training your Network on your Dataset
=====================================
-For adjusting parameters of modules in supervised learning, PyBrain has the
+For adjusting parameters of modules in supervised learning, PyBrain has the
concept of trainers. Trainers take a module and a dataset in order to train the
module to fit the data in the dataset.
-A classic example for training is backpropagation. PyBrain comes with
+A classic example for training is backpropagation. PyBrain comes with
backpropagation, of course, and we will use the ``BackpropTrainer`` here::
>>> from pybrain.supervised.trainers import BackpropTrainer
-
-We have already build a dataset for XOR and we have also learned to build
-networks that can handle such problems. Let's just connect the two with a
+
+We have already build a dataset for XOR and we have also learned to build
+networks that can handle such problems. Let's just connect the two with a
trainer::
>>> net = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)
>>> trainer = BackpropTrainer(net, ds)
-
+
The trainer now knows about the network and the dataset and we can train the net
on the data::
@@ -31,6 +31,6 @@ method::
>>> trainer.trainUntilConvergence()
...
-
+
This returns a whole bunch of data, which is nothing but a tuple containing the
-errors for every training epoch.
+errors for every training epoch.
56 docs/html/_sources/tutorial/datasets.txt
@@ -3,39 +3,39 @@
Using Datasets
==============
-Datasets are useful for allowing comfortable access to training, test and
+Datasets are useful for allowing comfortable access to training, test and
validation data. Instead of having to mangle with arrays, PyBrain gives you a
more sophisticated datastructure that allows easier work with your data.
-For the different tasks that arise in machine learning, there is a special
-dataset type, possibly with a few sub-types. The different types share some
+For the different tasks that arise in machine learning, there is a special
+dataset type, possibly with a few sub-types. The different types share some
common functionality, which we'll discuss first.
-A dataset can be seen as a collection of named 2d-arrays, called `fields`
+A dataset can be seen as a collection of named 2d-arrays, called `fields`
in this context. For instance, if DS implements :class:`DataSet`::
inp = DS['input']
-
-returns the input field. The last dimension of this field corresponds to
+
+returns the input field. The last dimension of this field corresponds to
the input dimension, such that
::
inp[0,:]
-
+
would yield the first input vector. In most cases there is also a field named
-'target', which follows the same rules.
+'target', which follows the same rules.
However, didn't we say we will spare you the array mangling? Well, in most cases you
will want iterate over a dataset like so::
for inp, targ in DS:
...
-
+
Note that whether you get one, two, or more sample rows as a return depends on the number
-of `linked fields` in the DataSet: These are fields containing the same number of
+of `linked fields` in the DataSet: These are fields containing the same number of
samples and assumed to be used together, like the above 'input' and 'target' fields. You
can always check the DS.link property to see which fields are linked.
-Similarly, DataSets can be created by adding samples one-by-one -- the cleaner but slower
+Similarly, DataSets can be created by adding samples one-by-one -- the cleaner but slower
method -- or by assembling them from arrays.
::
@@ -49,7 +49,7 @@ method -- or by assembling them from arrays.
In the latter case DS cannot check the linked array dimensions for you, otherwise it would not be
possible to build a dataset from scratch.
-You may add your own linked or unlinked data to the dataset. However, note that many training algorithms
+You may add your own linked or unlinked data to the dataset. However, note that many training algorithms
iterate over the linked fields and may fail if their number has changed::
DS.addField('myfield')
@@ -68,7 +68,7 @@ A useful utility method for quick generation of randomly picked training and tes
:ref:`superviseddataset`
------------------------
-As the name says, this simplest form of a dataset is meant to be used with
+As the name says, this simplest form of a dataset is meant to be used with
supervised learning tasks. It consists of the fields 'input' and 'target', the pattern
size of which must be set upon creation::
@@ -79,13 +79,13 @@ size of which must be set upon creation::
1
>>> DS['input']
array([[ 1., 2., 3.]])
-
+
:ref:`sequentialdataset`
------------------------
-This dataset introduces the concept of ``sequences``. With this we are moving further away from the array
-mangling towards something more practical for sequence learning tasks. Essentially, its patterns are subdivided into
+This dataset introduces the concept of ``sequences``. With this we are moving further away from the array
+mangling towards something more practical for sequence learning tasks. Essentially, its patterns are subdivided into
sequences of variable length, that can be accessed via the methods
::
@@ -94,16 +94,16 @@ sequences of variable length, that can be accessed via the methods
getSequenceLength(index)
Creating a :class:`Sequentialdataset` is no different from creating its parent, since it still contains only 'input' and 'target' fields.
-:class:`Sequentialdataset` inherits from :class:`SupervisedDataSet`, which can be seen as a special
+:class:`Sequentialdataset` inherits from :class:`SupervisedDataSet`, which can be seen as a special
case with a sequence length of 1 for all sequences.
-To fill the dataset with content, it is advisable to call :meth:`newSequence` at the start of each sequence to be
+To fill the dataset with content, it is advisable to call :meth:`newSequence` at the start of each sequence to be
stored, and then add patterns by using :meth:`appendLinked` as above. This way, the class handles indexing and such
-transparently. One can theoretically construct a :class:`Sequentialdataset` directly from arrays, but messing with
+transparently. One can theoretically construct a :class:`Sequentialdataset` directly from arrays, but messing with
the index field is not recommended.
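A minimal sketch of that recommended route (the two short sequences below are made up)::

    from pybrain.datasets import SequentialDataSet

    DS = SequentialDataSet(2, 1)           # input dim 2, target dim 1

    sequences = [[([0., 0.], [0.]), ([0., 1.], [1.])],
                 [([1., 0.], [1.]), ([1., 1.], [0.])]]
    for seq in sequences:
        DS.newSequence()                   # mark the start of a new sequence
        for inp, targ in seq:
            DS.appendLinked(inp, targ)     # indexing is handled for us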
A typical way of iterating over a sequence dataset ``DS`` would be something like::
-
+
for i in range(DS.getNumSequences()):
for input, target in DS.getSequenceIterator(i):
# do stuff
@@ -113,15 +113,15 @@ A typical way of iterating over a sequence dataset ``DS`` would be something lik
----------------------------
The purpose of this dataset is to facilitate dealing with classification problems, whereas the above are more
-geared towards regression. Its 'target' field is defined as integer, and it contains an extra field called 'class'
+geared towards regression. Its 'target' field is defined as integer, and it contains an extra field called 'class'
which is basically an automated backup of the targets, for reasons that will be apparent shortly. For the most part,
you don't have to bother with it. Initialization requires something like::
DS = ClassificationDataSet(inputdim, nb_classes=2, class_labels=['Fish','Chips'])
-The labels are optional, and mainly used for documentation. Target dimension is supposed to be 1. The targets
-are class labels starting from zero. If for some reason you don't know beforehand how many you have, or you
-fiddled around with the :meth:`setField` method, it is possible to regenerate the class information using
+The labels are optional, and mainly used for documentation. Target dimension is supposed to be 1. The targets
+are class labels starting from zero. If for some reason you don't know beforehand how many you have, or you
+fiddled around with the :meth:`setField` method, it is possible to regenerate the class information using
:meth:`assignClasses`, or :meth:`calculateStatistics`::
>>> DS = ClassificationDataSet(2, class_labels=['Urd', 'Verdandi', 'Skuld'])
@@ -143,7 +143,7 @@ fiddled around with the :meth:`setField` method, it is possible to regenerate th
>>> print DS.getField('target').transpose()
[[0 1 1 1 2 2]]
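As a hedged sketch of that regeneration step (the raw target values below are made up and set directly with :meth:`setField`)::

    from numpy import array
    from pybrain.datasets import ClassificationDataSet

    DS = ClassificationDataSet(2, class_labels=['Urd', 'Verdandi', 'Skuld'])
    DS.setField('input',  array([[0., 0.], [1., 0.], [0., 1.]]))
    DS.setField('target', array([[0], [1], [2]]))   # raw integer class labels
    DS.assignClasses()                               # rebuild the 'class' field from 'target'
    print DS.calculateStatistics()                   # per-class sample counts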
-When doing classification, many algorithms work better if classes are encoded into one output unit per class,
+When doing classification, many algorithms work better if classes are encoded into one output unit per class,
that takes on a certain value if the class is present. As an advanced feature, :class:`ClassificationDataSet`
does this conversion automatically::
@@ -167,9 +167,9 @@ the features of this class and the :class:`Sequentialdataset`.
:ref:`importancedataset`
------------------------
-This is another extension of :class:`Sequentialdataset` that allows assigning different weights to patterns. Essentially,
-it works like its parent, except comprising another linked field named 'importance', which should contain a value between
-0.0 and 1.0 for each pattern. A :class:`Sequentialdataset` is a special case with all weights equal to 1.0.
+This is another extension of :class:`Sequentialdataset` that allows assigning different weights to patterns. Essentially,
+it works like its parent, except that it comprises another linked field named 'importance', which should contain a value between
+0.0 and 1.0 for each pattern. A :class:`Sequentialdataset` is a special case with all weights equal to 1.0.
We have packed this functionality into a different class because it is rarely used and drains some computational resources.
So far, there is no corresponding non-sequential dataset class.
56 docs/html/_sources/tutorial/extending-structure.txt
@@ -3,7 +3,7 @@
Extending PyBrain's structure
=============================
-In :ref:`netmodcon` we have learned that networks can be constructed as a
+In :ref:`netmodcon` we have learned that networks can be constructed as a
directed acyclic graph, with :class:`Module` instances taking the role of
nodes and :class:`Connection` instances being the edges between those nodes.
@@ -29,18 +29,18 @@ and implement a specific new type afterwards::
class LinearLayer(NeuronLayer):
""" The simplest kind of module, not doing any transformation. """
-
+
def _forwardImplementation(self, inbuf, outbuf):
outbuf[:] = inbuf
-
+
def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
inerr[:] = outerr
As we can see, the Layer class relies on two methods:
-:meth:`_forwardImplementation` and :meth:`_backwardImplementation`.
+:meth:`_forwardImplementation` and :meth:`_backwardImplementation`.
(Note the leading underscores which are a Python convention to indicate
-pseudo-private methods.)
+pseudo-private methods.)
The first method takes two arguments, ``inbuf`` and ``outbuf``. Both are Scipy
arrays of the size of the layer's input and output dimension respectively. The
@@ -48,15 +48,15 @@ arrays have already been created for us. The pybrain structure framework now
expects us to produce an output from the input and put it into ``outbuf`` in
place.
-.. note:: Manipulating an array in place with SciPy works via the ``[:]``
- operator. Given an array ``a``, the line ``a[:] = b`` will overwrite
+.. note:: Manipulating an array in place with SciPy works via the ``[:]``
+ operator. Given an array ``a``, the line ``a[:] = b`` will overwrite
``a``'s memory with the contents of ``b``.
The second method is used to calculate the derivative of the output error with
respect to the input. From standard texts on neural networks, we know that the
error of a unit (which is a field in a layer in our case) can be calculated
as the derivative of the unit's transfer function applied to the unit's input
-multiplied with the error. In the case of the identity, the derivative is a
+multiplied by the error. In the case of the identity, the derivative is a
constant 1, and thus backpropagating the error is nothing but copying it.
Thus, any :meth:`_backwardImplementation` implementation must fill ``inerror``
@@ -71,25 +71,25 @@ as a transfer function. The derivative is then given by :math:`f'(x) = 2x`.
Let's start out with the pure skeleton of the class::
-
+
from pybrain.structure.modules.neuronlayer import NeuronLayer
class QuadraticPolynomialLayer(NeuronLayer):
def _forwardImplementation(self, inbuf, outbuf):
pass
-
+
def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
pass
The current methods don't do anything, so let's implement one after the other.
Using SciPy, we can use most of Python's arithmetic syntax directly on the array
-and it is applied component wise to it. Thus, to get the square of an array
+and it is applied component-wise. To get the square of an array
``a`` we can just call ``a**2``. Thus::
def _forwardImplementation(self, inbuf, outbuf):
- outbuf[:] = inbuf**2
+ outbuf[:] = inbuf**2
.. ** <--- making up for broken VIM syntax highlighting.
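Putting both halves together, the complete layer might read as follows (a sketch that simply applies the derivative :math:`f'(x) = 2x` stated above)::

    from pybrain.structure.modules.neuronlayer import NeuronLayer

    class QuadraticPolynomialLayer(NeuronLayer):

        def _forwardImplementation(self, inbuf, outbuf):
            outbuf[:] = inbuf**2

        def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
            # chain rule: the derivative 2*x, applied component-wise to the output error
            inerr[:] = 2 * inbuf * outerr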
@@ -107,21 +107,21 @@ list for further help.
Connections
-----------
-A :class:`~pybrain.structure.connections.connection.Connection` works similarly to a Layer in many ways.
+A :class:`~pybrain.structure.connections.connection.Connection` works similarly to a Layer in many ways.
The key difference is that a Connection
processes data it does not "own", meaning that its primary task is to shift data
-from one node of the network graph to another.
+from one node of the network graph to another.
The API is similar to that of Layers, having :meth:`_forwardImplementation`
and :meth:`_backwardImplementation` methods. However, sanity checks are
-performed in the constructors mostly to assure that the modules connected
+performed in the constructors, mostly to ensure that the modules connected
can actually be connected.
As an example, we will now have a look at the :class:`IdentityConnection` and
afterwards implement a trivial example.
-IdentityConnections are just pushing the output of one module into the input of
+IdentityConnections just push the output of one module into the input of
another. They do not transform it in any way.
.. code-block:: python
@@ -132,7 +132,7 @@ another. They do not transform it in any way.
class IdentityConnection(Connection):
"""Connection which connects the i'th element from the first module's output
buffer to the i'th element of the second module's input buffer."""
-
+
def __init__(self, *args, **kwargs):
Connection.__init__(self, *args, **kwargs)
assert self.indim == self.outdim, \
@@ -141,7 +141,7 @@ another. They do not transform it in any way.
def _forwardImplementation(self, inbuf, outbuf):
outbuf += inbuf
-
+
def _backwardImplementation(self, outerr, inerr, inbuf):
inerr += outerr
@@ -152,8 +152,8 @@ connection) are actually compatible, which means that the input dimension of the
outgoing module equals the output dimension of the incoming module.
The :meth:`_forwardImplementation` is called with the output buffer of the
-incoming module, depicted as ``inbuf``, and the input buffer of the outgoing
-module, called ``outbuf``. You can think of the two as a source and a sink.
+incoming module, depicted as ``inbuf``, and the input buffer of the outgoing
+module, called ``outbuf``. You can think of the two as a source and a sink.
Mind that in line #14, we actually do not overwrite the buffer but instead
perform an addition. This is because there might be other modules connected to
the outgoing module. The buffers will be overwritten by the
@@ -186,19 +186,19 @@ most common in neural networks.
from connection import Connection
from pybrain.structure.parametercontainer import ParameterContainer
-
+
class FullConnection(Connection, ParameterContainer):
-
+
def __init__(self, *args, **kwargs):
Connection.__init__(self, *args, **kwargs)
ParameterContainer.__init__(self, self.indim*self.outdim)
-
+
def _forwardImplementation(self, inbuf, outbuf):
outbuf += dot(reshape(self.params, (self.outdim, self.indim)), inbuf)
-
+
def _backwardImplementation(self, outerr, inerr, inbuf):
inerr += dot(reshape(self.params, (self.outdim, self.indim)).T, outerr)
- self.derivs += outer(inbuf, outerr).T.flatten()
+ self.derivs += outer(inbuf, outerr).T.flatten()
In lines 10 and 11 we can see how the superclasses' constructors are called.
:class:`ParameterContainer` expects an integer argument *N*, which depicts the
@@ -209,9 +209,9 @@ Due this, the constructor of ParameterContainer gives the object two fields:
:attr:`params` and :attr:`derivs` which are two arrays of size *N*. These are used
to hold parameters and possibly derivatives.
-In the case of backpropagation, learning happens during calls to
+In the case of backpropagation, learning happens during calls to
:meth:`_backwardImplementation`. In line 18, we can see how the field
-:attr:`derivs` is modified.
+:attr:`derivs` is modified.
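As a small hedged illustration (the layer sizes are made up), both fields can be inspected directly once such a connection has been created::

    from pybrain.structure import LinearLayer, FullConnection

    inLayer, outLayer = LinearLayer(2), LinearLayer(3)
    conn = FullConnection(inLayer, outLayer)

    print conn.params.shape    # (6,) -- indim * outdim weights, randomly initialized
    print conn.derivs.shape    # (6,) -- accumulated during _backwardImplementation calls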
Checking for correctness
@@ -239,5 +239,5 @@ So let's check our new QuadraticPolynomialLayer.
First we do the necessary imports in lines 1 and 2. Then we build a network with
our special class in line 4. To initialize the weights of the network, we
randomize its parameters in line 5 and call our gradient check in line 6. If we
-have done everything right, we will be rewarded with the output
+have done everything right, we will be rewarded with the output
``Perfect gradient``.
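A hedged sketch of such a check (the import path for the gradient-check helper is an assumption, and ``QuadraticPolynomialLayer`` is the class defined earlier)::

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.tests.helpers import gradientCheck   # assumed location of the helper

    # use our new layer as the hidden class of a small network
    net = buildNetwork(2, 3, 1, hiddenclass=QuadraticPolynomialLayer)
    net.randomize()      # initialize the weights
    gradientCheck(net)   # should print 'Perfect gradient' if all derivatives match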
44 docs/html/_sources/tutorial/fnn.txt
@@ -18,7 +18,7 @@ First we need to import the necessary components from PyBrain.
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
-Furthermore, pylab is needed for the graphical output.
+Furthermore, pylab is needed for the graphical output.
::
from pylab import ion, ioff, figure, draw, contourf, clf, show, hold, plot
@@ -27,7 +27,7 @@ Furthermore, pylab is needed for the graphical output.
To have a nice dataset for visualization, we produce a set of
points in 2D belonging to three different classes. You could also
-read in your data from a file, e.g. using pylab.load().
+read in your data from a file, e.g. using pylab.load().
::
@@ -40,7 +40,7 @@ read in your data from a file, e.g. using pylab.load().
alldata.addSample(input, [klass])
Randomly split the dataset into 75% training and 25% test data sets. Of course, we
-could also have created two different datasets to begin with.
+could also have created two different datasets to begin with.
::
tstdata, trndata = alldata.splitWithProportion( 0.25 )
@@ -53,16 +53,16 @@ targets and stores them in an (integer) field named 'class'.
trndata._convertToOneOfMany( )
tstdata._convertToOneOfMany( )
-Test our dataset by printing a little information about it.
+Test our dataset by printing a little information about it.
::
print "Number of training patterns: ", len(trndata)
print "Input and output dimensions: ", trndata.indim, trndata.outdim
print "First sample (input, target, class):"
- print trndata['input'][0], trndata['target'][0], trndata['class'][0]
+ print trndata['input'][0], trndata['target'][0], trndata['class'][0]
Now build a feed-forward network with 5 hidden units. We use the shortcut
-:func:`buildNetwork` for this. The input and output layer size must match the
+:func:`buildNetwork` for this. The input and output layer size must match the
dataset's input and target dimension. You could add additional hidden layers by
inserting more numbers giving the desired layer sizes.
@@ -70,16 +70,16 @@ The output layer uses a softmax function because we are doing classification.
There are more options to explore here, e.g. try changing the hidden layer transfer
function to linear instead of (the default) sigmoid.
-.. seealso:: Description :func:`buildNetwork` for more info on options, and the
- Network tutorial :ref:`netmodcon` for info on how to build your own
- non-standard networks.
+.. seealso:: Description :func:`buildNetwork` for more info on options, and the
+ Network tutorial :ref:`netmodcon` for info on how to build your own
+ non-standard networks.
::
fnn = buildNetwork( trndata.indim, 5, trndata.outdim, outclass=SoftmaxLayer )
Set up a trainer that basically takes the network and training dataset as input.
-For a list of trainers, see :mod:`trainers`. We are using a
+For a list of trainers, see :mod:`trainers`. We are using a
:class:`BackpropTrainer` for this.
::
@@ -98,7 +98,7 @@ Therefore the target values for this data set can be ignored.
griddata.addSample([X.ravel()[i],Y.ravel()[i]], [0])
griddata._convertToOneOfMany() # this is still needed to make the fnn feel comfy
-Start the training iterations.
+Start the training iterations.
::
for i in range(20):
@@ -109,15 +109,15 @@ but for visualization purposes we do this one epoch at a time.
...
trainer.trainEpochs( 1 )
-
+
Evaluate the network on the training and test data. There are several ways to do this - check
-out the :mod:`pybrain.tools.validation` module, for instance. Here we let the trainer do the test.
+out the :mod:`pybrain.tools.validation` module, for instance. Here we let the trainer do the test.
::
...
- trnresult = percentError( trainer.testOnClassData(),
+ trnresult = percentError( trainer.testOnClassData(),
trndata['class'] )
- tstresult = percentError( trainer.testOnClassData(
+ tstresult = percentError( trainer.testOnClassData(
dataset=tstdata ), tstdata['class'] )
print "epoch: %4d" % trainer.totalepochs, \
@@ -125,15 +125,15 @@ out the :mod:`pybrain.tools.validation` module, for instance. Here we let the tr
" test error: %5.2f%%" % tstresult
Run our grid data through the FNN, get the most likely class
-and shape it into a square array again.
+and shape it into a square array again.
::
...
out = fnn.activateOnDataset(griddata)
out = out.argmax(axis=1) # the highest output activation gives the class
- out = out.reshape(X.shape)
-
-Now plot the test data and the underlying grid as a filled contour.
+ out = out.reshape(X.shape)
+
+Now plot the test data and the underlying grid as a filled contour.
::
...
@@ -148,9 +148,9 @@ Now plot the test data and the underlying grid as a filled contour.
contourf(X, Y, out) # plot the contour
ion() # interactive graphics on
draw() # update the plot
-
-Finally, keep showing the plot until user kills it.
+
+Finally, keep showing the plot until the user kills it.
::
ioff()
- show()
+ show()
26 docs/html/_sources/tutorial/intro.txt
@@ -4,26 +4,26 @@ Introduction
============
PyBrain's concept is to encapsulate different data processing algorithms in what
-we call a :class:`Module`. A minimal Module contains a forward implementation
-depending on a collection of free parameters that can be adjusted, usually
-through some machine learning algorithm.
+we call a :class:`Module`. A minimal Module contains a forward implementation
+depending on a collection of free parameters that can be adjusted, usually
+through some machine learning algorithm.
-Modules have an input and an output buffer, plus corresponding error buffers
+Modules have an input and an output buffer, plus corresponding error buffers
which are used in error backpropagation algorithms.
-They are assembled into objects of the class :class:`Network` and are
-connected via :class:`Connection` objects. These may contain a number of
-adjustable parameters themselves, such as weights.
+They are assembled into objects of the class :class:`Network` and are
+connected via :class:`Connection` objects. These may contain a number of
+adjustable parameters themselves, such as weights.
-Note that a Network itself is again a Module, such that it is easy to build
-hierarchical networks as well. Shortcuts exist for building the most common
-network architectures, but in principle this system allows almost arbitrary
+Note that a Network itself is again a Module, such that it is easy to build
+hierarchical networks as well. Shortcuts exist for building the most common
+network architectures, but in principle this system allows almost arbitrary
connectionist systems to be assembled, as long as they form a directed acyclic
graph.
The free parameters of the Network are adjusted by means of a :class:`Trainer`,
-which uses a :class:`Dataset` to learn the optimum parameters from examples.
-For reinforcement learning experiments, a simulation environment with an
-associated optimization task is used instead of a Dataset.
+which uses a :class:`Dataset` to learn the optimum parameters from examples.
+For reinforcement learning experiments, a simulation environment with an
+associated optimization task is used instead of a Dataset.
.. image:: ../pics/dataprocessing_flowchart.*
46 docs/html/_sources/tutorial/netmodcon.txt
@@ -3,7 +3,7 @@
Building Networks with Modules and Connections
==============================================
-This chapter will guide you to use PyBrain's most basic structural elements:
+This chapter will guide you through PyBrain's most basic structural elements:
the :class:`FeedForwardNetwork` and :class:`RecurrentNetwork` classes and with
them the :class:`Module` class and the :class:`Connection` class. We
have already seen how to create networks with the ``buildNetwork`` shortcut - but
@@ -13,13 +13,13 @@ networks from the ground up.
Feed Forward Networks
---------------------
-We will start with a simple example, building a multi layer perceptron.
+We will start with a simple example, building a multi-layer perceptron.
First we make a new :class:`FeedForwardNetwork` object::
>>> from pybrain.structure import FeedForwardNetwork
>>> n = FeedForwardNetwork()
-
+
Next, we're constructing the input, hidden and output layers::
>>> from pybrain.structure import LinearLayer, SigmoidLayer
@@ -40,7 +40,7 @@ We can actually add multiple input and output modules. The net has to know
which of its modules are input and output modules, in order to forward propagate
input and to back propagate errors.
-It still needs to be explicitly determined how they should be connected. For
+It still needs to be explicitly determined how they should be connected. For
this we use the most common connection type, which produces a full connectivity
between layers, by connecting each neuron of one layer with each neuron of the
other. This is implemented by the :class:`FullConnection` class::
@@ -49,16 +49,16 @@ other. This is implemented by the :class:`FullConnection` class::
>>> in_to_hidden = FullConnection(inLayer, hiddenLayer)
>>> hidden_to_out = FullConnection(hiddenLayer, outLayer)
-As with modules, we have to explicitly add them to the network::
+As with modules, we have to explicitly add them to the network::
>>> n.addConnection(in_to_hidden)
>>> n.addConnection(hidden_to_out)
-All the elements are in place now, so we can do the final step that makes our
+All the elements are in place now, so we can do the final step that makes our
MLP usable, which is to call the ``.sortModules()`` method::
>>> n.sortModules()
-
+
This call does some internal initialization which is necessary before the net
can finally be used: for example, the modules are sorted topologically.
@@ -75,21 +75,21 @@ We can actually print networks and examine their structure::
Connections:
[<FullConnection 'FullConnection-4': 'LinearLayer-3' -> 'SigmoidLayer-7'>, <FullConnection 'FullConnection-5': 'SigmoidLayer-7' -> 'LinearLayer-8'>]
-Note that the output on your machine will not necessarily be the same.
+Note that the output on your machine will not necessarily be the same.
One way of using the network is to call its 'activate()' method with an input to
be transformed::
>>> n.activate([1, 2])
array([-0.11302355])
-
-Again, this might look different on your machine - the weights of the
-connections have already been initialized randomly. To have a look at those
+
+Again, this might look different on your machine - the weights of the
+connections have already been initialized randomly. To have a look at those
parameters, just check the ``.params`` field of the connections:
-We can access the trainable parameters (weights) of a connection directly, or
+We can access the trainable parameters (weights) of a connection directly, or
read all weights of the network at once::
-
+
>>> in_to_hidden.params
array([ 1.37751406, 1.39320901, -0.24052686, -0.67970042, -0.5999425 , -1.27774679])
>>> hidden_to_out.params
@@ -103,15 +103,15 @@ check them out here::
-1.27774679, -0.32156782, 1.09338421, 0.48784924])
As you can see, the last three parameters of the network equal the parameters of
-the second connection.
+the second connection.
Naming your Networks structure
------------------------------
-In some settings it makes sense to give the parts of a network explicit
-identifiers. The structural components are derive from the :class:`Named`
-class, which means that they have an attribute `.name` by which you can
+In some settings it makes sense to give the parts of a network explicit
+identifiers. The structural components are derived from the :class:`Named`
+class, which means that they have an attribute `.name` by which you can
identify them. If no name is given, a new name will be generated automatically.
Subclasses can also be named by passing the `name` argument on initialization::
@@ -127,7 +127,7 @@ Subclasses can also be named by passing the `name` argument on initialization::
enforced by the library.
By using names for your networks, printouts look more concise and readable. They
-also ensure that your network components are named in the same way every time
+also ensure that your network components are named in the same way every time
you run your program.
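A hedged sketch of what a fully named network might look like (the sizes and names are made up)::

    from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection

    net = FeedForwardNetwork(name='xor-net')
    net.addInputModule(LinearLayer(2, name='in'))
    net.addModule(SigmoidLayer(3, name='hidden'))
    net.addOutputModule(LinearLayer(1, name='out'))
    net.addConnection(FullConnection(net['in'], net['hidden'], name='c1'))
    net.addConnection(FullConnection(net['hidden'], net['out'], name='c2'))
    net.sortModules()

    print net    # the printout now shows our names instead of generated ones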
@@ -135,9 +135,9 @@ Using Recurrent Networks
------------------------
In order to allow recurrency, networks have to be able to "look back in time".
-Due to this, the :class:`RecurrentNetwork` class is different from the
-:class:`FeedForwardNetwork` class in the substantial way, that the complete
-history is saved. This is actually memory consuming, but necessary for some
+Due to this, the :class:`RecurrentNetwork` class differs from the
+:class:`FeedForwardNetwork` class in one substantial way: the complete
+history is saved. This is memory consuming, but necessary for some
learning algorithms.
To create a recurrent network, just do as with feedforward networks but use the
@@ -154,7 +154,7 @@ We will quickly build up a network that is the same as in the example above:
>>> n.addConnection(FullConnection(n['in'], n['hidden'], name='c1'))
>>> n.addConnection(FullConnection(n['hidden'], n['out'], name='c2'))
-The :class:`RecurrentNetwork` class has one additional method,
+The :class:`RecurrentNetwork` class has one additional method,
``.addRecurrentConnection()``, which looks back in time one timestep. We can
add one from the hidden to the hidden layer::
@@ -180,7 +180,7 @@ the `reset` method::
array([-0.19623716])
>>> n.activate((2, 2))
array([-0.19675801])
-
+
After the call to ``.reset()``, we get the same outputs as just after
the object's creation.
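Assembled end to end, the recurrent example reads roughly like this (a sketch; the layer sizes mirror the feedforward example above)::

    from pybrain.structure import RecurrentNetwork, LinearLayer, SigmoidLayer, FullConnection

    n = RecurrentNetwork()
    n.addInputModule(LinearLayer(2, name='in'))
    n.addModule(SigmoidLayer(3, name='hidden'))
    n.addOutputModule(LinearLayer(1, name='out'))
    n.addConnection(FullConnection(n['in'], n['hidden'], name='c1'))
    n.addConnection(FullConnection(n['hidden'], n['out'], name='c2'))
    n.addRecurrentConnection(FullConnection(n['hidden'], n['hidden'], name='c3'))
    n.sortModules()

    print n.activate((2, 2))   # successive calls differ, since history accumulates
    print n.activate((2, 2))
    n.reset()                  # wipe the history ...
    print n.activate((2, 2))   # ... and the first output is reproduced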
60 docs/html/_sources/tutorial/optimization.txt
@@ -6,74 +6,74 @@ Black-box Optimization
This tutorial will illustrate how to use the optimization algorithms in PyBrain.
-Very many practical problems can be framed as optimization problems: finding the best settings for a controller,
+Many practical problems can be framed as optimization problems: finding the best settings for a controller,
minimizing the risk of an investment portfolio, finding a good strategy in a game, etc.
It always involves determining a certain number of *variables* (the *problem dimension*),
-each of them chosen from a set,
+each of them chosen from a set,
such that they maximize (or minimize) a given *objective function*.
-The main categories of optimization problems are based
+The main categories of optimization problems are based
on the kinds of sets the variables are chosen from:
* all real numbers: continuous optimization
- * real numbers with bounds: constrained optimization
+ * real numbers with bounds: constrained optimization
* integers: integer programming
* combinations of the above
* others, e.g. graphs
-These can be further classified according to properties of the objective function
-(e.g. continuity, explicit access to partial derivatives, quadratic form, etc.).
-In black-box optimization the objective function is a black box,
-i.e. there are no conditions about it.
+These can be further classified according to properties of the objective function
+(e.g. continuity, explicit access to partial derivatives, quadratic form, etc.).
+In black-box optimization the objective function is a black box,
+i.e. no assumptions are made about it.
The optimization tools that PyBrain provides are all for the most general, black-box case.
They fall into two groups:
- * :class:`~pybrain.optimization.optimizer.BlackBoxOptimizer` are applicable to all kinds of variable sets
+ * :class:`~pybrain.optimization.optimizer.BlackBoxOptimizer` are applicable to all kinds of variable sets
* :class:`~pybrain.optimization.optimizer.ContinuousOptimizer` can only be used for continuous optimization
-We will introduce the optimization framework for the more restrictive kind first,
+We will introduce the optimization framework for the more restrictive kind first,
because that case is simpler.
Continuous optimization
-------------------------
+------------------------
-Let's start by defining a simple objective function for (:mod:`numpy` arrays of) continuous variables,
+Let's start by defining a simple objective function for (:mod:`numpy` arrays of) continuous variables,
e.g. the sum of squares:
- >>> def objF(x): return sum(x**2)
+ >>> def objF(x): return sum(x**2)
and an initial guess for where to start looking:
>>> x0 = array([2.1, -1])
-Now we can initialize one of the optimization algorithms,
+Now we can initialize one of the optimization algorithms,
e.g. :class:`~pybrain.optimization.distributionbased.cmaes.CMAES`:
>>> from pybrain.optimization import CMAES
>>> l = CMAES(objF, x0)
-By default, all optimization algorithms *maximize* the objective function,
+By default, all optimization algorithms *maximize* the objective function,
but you can change this by setting the :attr:`minimize` attribute:
>>> l.minimize = True
-.. note::
+.. note::
We could also have done that upon construction:
``CMAES(objF, x0, minimize = True)``
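To actually run the search, optimizers expose a :meth:`learn` method; a minimal hedged sketch (the return values are shown as we understand them)::

    from numpy import array
    from pybrain.optimization import CMAES

    def objF(x):
        return sum(x**2)

    l = CMAES(objF, array([2.1, -1]), minimize=True)
    best, fitness = l.learn()   # best parameters found and their objective value
    print best, fitness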
Stopping criteria can be algorithm-specific, but in addition,
it is always possible to define the following ones:
- * maximal number of evaluations