A sample workflow with the REPL

Let's use the REPL to run the directional-steps-1d demo (blog post, viz).

As described in README.md, start a REPL:

lein repl

Now, in the REPL, let's load the demo code and a couple of other namespaces that we'll use.

(require '[org.nfrac.comportex.demos.directional-steps-1d :as demo]
         '[org.nfrac.comportex.core :as core]
         '[org.nfrac.comportex.protocols :as p]
         '[org.nfrac.comportex.repl])
;= nil
(org.nfrac.comportex.repl/truncate-large-data-structures)
;= nil
(use 'clojure.pprint)
;= nil

We're now equipped to create a model:

(def model (demo/n-region-model 2))
;= #'user/model

You can explore the model.

(pprint model)
;{:ff-deps {:rgn-0 (:input), :rgn-1 [:rgn-0]},
; :fb-deps {:input (:rgn-0), :rgn-0 (:rgn-1)},
; :strata [#{:input} #{:rgn-0} #{:rgn-1}],
; :sensors
; {:input
;  [{:selectors ([0] [1])}
;   {:encoders
;    ({:topo {:size 60}, :value->index {:down 0, :up 1}}
;     {:topo {:size 240}, :n-active 30, :lower 0, :upper 7})}]},
; :senses
; {:input {:topo {:size 300}, :bits (), :sensory? true, :motor? nil}},
; :regions
; {:rgn-0
;  {:layer-3
;   {:spec
;    {:global-inhibition? false,
;     :ff-perm-init-hi 0.25,
;     :activation-level-max 0.1,
;     :ff-perm-init-lo 0.1,
;     :boost-active-duty-ratio 0.001,
;     :lateral-synapses? true,
;     :temporal-pooling-max-exc 50.0,
;     :random-seed 42,
;     :column-dimensions [800],
;     :distal-vs-proximal-weight 0.0,
;     :ff-init-frac 0.3,
;     :distal
;     {:perm-connected 0.2,
;      :perm-punish 0.002,
;      :max-synapse-count 22,
;      :max-segments 5,
;      :perm-init 0.16,
;      :new-synapse-count 12,
;      :stimulus-threshold 9,
;      :punish? true,
;      :learn? true,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.05,
;      :perm-stable-inc 0.05},
;     :distal-topdown-dimensions [500 4],
;     :use-feedback? false,
;     :distal-motor-dimensions [0],
;     :boost-active-every 10000,
;     :max-boost 1.5,
;     :ff-potential-radius 0.2,
;     :temporal-pooling-amp 3.0,
;     :activation-level 0.04,
;     :proximal
;     {:perm-connected 0.2,
;      :max-synapse-count 300,
;      :max-segments 1,
;      :perm-init 0.25,
;      :new-synapse-count 12,
;      :stimulus-threshold 2,
;      :learn? true,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.04,
;      :perm-stable-inc 0.15},
;     :input-dimensions [300],
;     :depth 4,
;     :inhibition-base-distance 1,
;     :apical
;     {:perm-connected 0.2,
;      :perm-punish 0.002,
;      :max-synapse-count 22,
;      :max-segments 5,
;      :perm-init 0.16,
;      :new-synapse-count 12,
;      :stimulus-threshold 9,
;      :punish? true,
;      :learn? false,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.05,
;      :perm-stable-inc 0.05},
;     :duty-cycle-period 1000,
;     :dominance-margin 4,
;     :spontaneous-activation? false,
;     :stable-inbit-frac-threshold 0.5,
;     :inh-radius-every 1000,
;     :temporal-pooling-fall 5.0},
;    :rng
;    #object[clojure.test.check.random.JavaUtilSplittableRandom 0x144ef52c "clojure.test.check.random.JavaUtilSplittableRandom@144ef52c"],
;    :topology {:size 800},
;    :input-topology {:size 300},
;    :inh-radius 58,
;    :boosts [1.0 1.0 1.0 ...],
;    :active-duty-cycles [0.0 0.0 0.0 ...],
;    :proximal-sg
;    {:int-sg
;     {:syns-by-target
;      [{59 0.10041165533726164,
;        20 0.16542851002042397,
;        27 0.19569892410718986,
;        ...}
;       {7 0.11686642709596072,
;        27 0.19165585533096308,
;        46 0.16890485385725607,
;        ...}
;       {7 0.20230515952348632,
;        20 0.17223111730521,
;        55 0.1857668384536414,
;        ...}
;       ...],
;      :targets-by-source
;      [#{62 128 85 ...} #{110 86 154 ...} #{27 33 146 ...} ...],
;      :pcon 0.2,
;      :cull-zeros? false},
;     :n-cols nil,
;     :depth 1,
;     :max-segs 1},
;    :distal-sg
;    {:int-sg
;     {:syns-by-target [{} {} {} ...],
;      :targets-by-source [#{} #{} #{} ...],
;      :pcon 0.2,
;      :cull-zeros? true},
;     :n-cols nil,
;     :depth 4,
;     :max-segs 5},
;    :apical-sg
;    {:int-sg
;     {:syns-by-target [{} {} {} ...],
;      :targets-by-source [#{} #{} #{} ...],
;      :pcon 0.2,
;      :cull-zeros? true},
;     :n-cols nil,
;     :depth 4,
;     :max-segs 5},
;    :state
;    {:in-ff-bits nil,
;     :in-stable-ff-bits nil,
;     :out-ff-bits nil,
;     :out-stable-ff-bits nil,
;     :col-overlaps nil,
;     :matching-ff-seg-paths {},
;     :temporal-pooling-exc {},
;     :active-cols #{},
;     :burst-cols nil,
;     :col-active-cells nil,
;     :active-cells #{},
;     :timestep 0},
;    :distal-state
;    {:on-bits #{},
;     :on-lc-bits nil,
;     :cell-exc {},
;     :pred-cells #{},
;     :matching-seg-paths {},
;     :prior-active-cells nil,
;     :timestep 0},
;    :prior-distal-state
;    {:on-bits #{},
;     :on-lc-bits nil,
;     :cell-exc {},
;     :pred-cells #{},
;     :matching-seg-paths {},
;     :prior-active-cells nil,
;     :timestep 0},
;    :apical-state nil,
;    :prior-apical-state nil,
;    :learn-state
;    {:col-winners {},
;     :winner-seg nil,
;     :learning-cells #{},
;     :learning {},
;     :punishments {},
;     :timestep 0}}},
;  :rgn-1
;  {:layer-3
;   {:spec
;    {:global-inhibition? false,
;     :ff-perm-init-hi 0.25,
;     :activation-level-max 0.1,
;     :ff-perm-init-lo 0.1,
;     :boost-active-duty-ratio 0.001,
;     :lateral-synapses? true,
;     :temporal-pooling-max-exc 50.0,
;     :random-seed 42,
;     :column-dimensions [500],
;     :distal-vs-proximal-weight 0.0,
;     :ff-init-frac 0.3,
;     :distal
;     {:perm-connected 0.2,
;      :perm-punish 0.002,
;      :max-synapse-count 22,
;      :max-segments 5,
;      :perm-init 0.16,
;      :new-synapse-count 12,
;      :stimulus-threshold 9,
;      :punish? true,
;      :learn? true,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.05,
;      :perm-stable-inc 0.05},
;     :distal-topdown-dimensions [0],
;     :use-feedback? false,
;     :distal-motor-dimensions [0],
;     :boost-active-every 10000,
;     :max-boost 1.5,
;     :ff-potential-radius 0.2,
;     :temporal-pooling-amp 3.0,
;     :activation-level 0.04,
;     :proximal
;     {:perm-connected 0.2,
;      :max-synapse-count 300,
;      :max-segments 5,
;      :perm-init 0.25,
;      :new-synapse-count 12,
;      :stimulus-threshold 2,
;      :learn? true,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.04,
;      :perm-stable-inc 0.15},
;     :input-dimensions [800 4],
;     :depth 4,
;     :inhibition-base-distance 1,
;     :apical
;     {:perm-connected 0.2,
;      :perm-punish 0.002,
;      :max-synapse-count 22,
;      :max-segments 5,
;      :perm-init 0.16,
;      :new-synapse-count 12,
;      :stimulus-threshold 9,
;      :punish? true,
;      :learn? false,
;      :perm-dec 0.01,
;      :learn-threshold 7,
;      :perm-inc 0.05,
;      :perm-stable-inc 0.05},
;     :duty-cycle-period 1000,
;     :dominance-margin 4,
;     :spontaneous-activation? false,
;     :stable-inbit-frac-threshold 0.5,
;     :inh-radius-every 1000,
;     :temporal-pooling-fall 5.0},
;    :rng
;    #object[clojure.test.check.random.JavaUtilSplittableRandom 0x1fa6ec21 "clojure.test.check.random.JavaUtilSplittableRandom@1fa6ec21"],
;    :topology {:size 500},
;    :input-topology {:width 800, :height 4},
;    :inh-radius 44,
;    :boosts [1.0 1.0 1.0 ...],
;    :active-duty-cycles [0.0 0.0 0.0 ...],
;    :proximal-sg
;    {:int-sg
;     {:syns-by-target
;      [{558 0.12306034486375303,
;        453 0.12056667955931462,
;        530 0.18422135953952962,
;        ...}
;       {}
;       {}
;       ...],
;      :targets-by-source
;      [#{205 360 415 ...} #{275 370 15 ...} #{85 445 195 ...} ...],
;      :pcon 0.2,
;      :cull-zeros? false},
;     :n-cols nil,
;     :depth 1,
;     :max-segs 5},
;    :distal-sg
;    {:int-sg
;     {:syns-by-target [{} {} {} ...],
;      :targets-by-source [#{} #{} #{} ...],
;      :pcon 0.2,
;      :cull-zeros? true},
;     :n-cols nil,
;     :depth 4,
;     :max-segs 5},
;    :apical-sg
;    {:int-sg
;     {:syns-by-target [{} {} {} ...],
;      :targets-by-source [],
;      :pcon 0.2,
;      :cull-zeros? true},
;     :n-cols nil,
;     :depth 4,
;     :max-segs 5},
;    :state
;    {:in-ff-bits nil,
;     :in-stable-ff-bits nil,
;     :out-ff-bits nil,
;     :out-stable-ff-bits nil,
;     :col-overlaps nil,
;     :matching-ff-seg-paths {},
;     :temporal-pooling-exc {},
;     :active-cols #{},
;     :burst-cols nil,
;     :col-active-cells nil,
;     :active-cells #{},
;     :timestep 0},
;    :distal-state
;    {:on-bits #{},
;     :on-lc-bits nil,
;     :cell-exc {},
;     :pred-cells #{},
;     :matching-seg-paths {},
;     :prior-active-cells nil,
;     :timestep 0},
;    :prior-distal-state
;    {:on-bits #{},
;     :on-lc-bits nil,
;     :cell-exc {},
;     :pred-cells #{},
;     :matching-seg-paths {},
;     :prior-active-cells nil,
;     :timestep 0},
;    :apical-state nil,
;    :prior-apical-state nil,
;    :learn-state
;    {:col-winners {},
;     :winner-seg nil,
;     :learning-cells #{},
;     :learning {},
;     :punishments {},
;     :timestep 0}}}}}
;= nil
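
Printing the whole model is a lot to take in. Since it is just nested Clojure maps, you can also drill into specific parts with ordinary map access. A couple of examples using the keys shown above (the exact keys may differ between Comportex versions):

;; just the parameter spec of the first region's layer
(pprint (get-in model [:regions :rgn-0 :layer-3 :spec]))
;; or just the wiring between regions
(pprint (select-keys model [:ff-deps :fb-deps :strata]))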

There are also generic functions in the core and protocols namespaces for working with these values.

For example, core/column-state-freqs gives a summary of the column states in a region.

(core/column-state-freqs (first (core/region-seq model)))
;= {:size 800, :timestep 0, :active-predicted 0, :active 0, :predicted 0}
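
To get that summary for every region at once, you can map over region-seq (no new functions assumed here); it should give one map per region, something like:

(map core/column-state-freqs (core/region-seq model))
;= ({:size 800, :timestep 0, :active-predicted 0, :active 0, :predicted 0}
;   {:size 500, :timestep 0, :active-predicted 0, :active 0, :predicted 0})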

Before we move on and run the model, let's examine the input.

demo/initial-input-val
;= [:up 0]
(demo/input-transform demo/initial-input-val)
;= [:up 1]
(demo/input-transform (demo/input-transform demo/initial-input-val))
;= [:up 0]

Note that input-transform is nondeterministic: the same value can be followed by different successors on different runs. Let's look at a sequence of transforms.

(def the-inputs (iterate demo/input-transform demo/initial-input-val))
;= #'user/the-inputs
(pprint (take 10 the-inputs))
; ([:up 0]
;  [:down 1]
;  [:down 0]
;  [:up 0]
;  [:down 1]
;  [:up 0]
;  [:up 1]
;  [:up 2]
;  [:down 3]
;  [:up 2])
;= nil
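
Since the-inputs is an ordinary lazy sequence, you can also summarise it with core Clojure functions. For example, to see how often each input value occurs over a longer stretch (the exact counts will vary from run to run):

(pprint (frequencies (take 1000 the-inputs)))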

Now let's see how one of these inputs is encoded.

(def encoder (second demo/sensor))
;= #'user/encoder
(sort (p/encode encoder demo/initial-input-val))
;= (30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89)
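
Input values that share structure should map to overlapping sets of bits. As a rough check (only clojure.set is new here; the exact overlap count depends on the encoder parameters), you can compare two encodings directly:

(require '[clojure.set :as set])
;; [:up 0] and [:up 1] share the direction part of the encoding,
;; so their bit sets should overlap substantially
(count (set/intersection (set (p/encode encoder [:up 0]))
                         (set (p/encode encoder [:up 1]))))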

Let's run the model.

(def model-t1 (p/htm-step model demo/initial-input-val))
;= #'user/model-t1

You can probe into model-t1 in the same way as above.
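
For example, the column-state summary one step in (the exact numbers will vary between runs):

(core/column-state-freqs (first (core/region-seq model-t1)))
;; expect :timestep 1 with some active columns but nothing predicted yet,
;; since there was no prior activity to predict from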

Let's create a lazy sequence to represent the simulation.

(def simulation
  (reductions p/htm-step model the-inputs))
;= #'user/simulation
(def model-t2000 (nth simulation 2000))
;= #'user/model-t2000

Wait a minute for it to compute, and then explore.

(core/column-state-freqs (first (core/region-seq model-t2000)))
;= {:size 800, :timestep 2000, :active-predicted 25, :active 0, :predicted 16}
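
Since simulation is just a lazy sequence (and already-realised elements are cached), you can also pull out intermediate snapshots to watch learning progress. A small sketch:

;; summaries of the first region at a few timesteps
(pprint
 (for [t [0 500 1000 1500 2000]]
   (core/column-state-freqs (first (core/region-seq (nth simulation t))))))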

;; These keywords and formats may change! In fact, some already have.
;; The point is, feel free to explore.
(pprint (-> model-t2000 :regions :rgn-0 :layer-3 :state :active-cells))
; #{[347 1] [188 3] [40 3] [284 3] [271 1] [332 3] [227 2] [203 0] [94 1]
;   [131 1] [299 0] [136 1] [50 0] [103 0] [255 1] [311 1] [87 3] [0 2]
;   [26 3] [34 2] [67 1] [221 2] [10 3] [373 2] [247 3]}
;= nil

Every active cell was predicted in the previous timestep (along with some other cells that did not end up becoming active), so no columns were bursting.
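
Since each active cell is a [column cell-index] pair, you can, for example, recover just the set of active columns:

(sort (distinct (map first (get-in model-t2000
                                   [:regions :rgn-0 :layer-3 :state :active-cells]))))
;; here this gives one column per active cell, matching the
;; :active-predicted count of 25 from column-state-freqs above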

Hopefully this page has helped you:

  1. Get an idea of where to start with Comportex
  2. See how useful it is to have visualizations like Sanity, and not just a REPL :)