
NUP-2397: rename TP* to TM* #3555

Merged
merged 14 commits on Apr 26, 2017
14 changes: 7 additions & 7 deletions CHANGELOG.md
@@ -125,7 +125,7 @@
* Updated SDR classifier internals
* calculate raw anomaly score in KNNAnomalyClassifier
* removes anomaly.py dependency in network_api_demo.py
- * changes how TPRegion computes prevPredictdColumns and updates clamodel
+ * changes how TMRegion computes prevPredictdColumns and updates clamodel
* Install pip from local copy, other simplifications
* Fixup PYTHONPATH to properly include previously-defined PYTHONPATH
* adds pseudocode to core functions
@@ -250,8 +250,8 @@
* Change temporalImp to tm_py for both networks and add comment about it being a temporary value until C++ TM is implemented
* Refactored to remove common code between network_checkpoint_test.py and temporal_memory_compatibility_test.py
* Use named constants from nupic.data.fieldmeta in aggregator module instead of naked constants.
- * Fix AttributeError: 'TPShim' object has no attribute 'topDownCompute'
- * Support more parameters in TPShim
+ * Fix AttributeError: 'TMShim' object has no attribute 'topDownCompute'
+ * Support more parameters in TMShim
* Serialize remaining fields in CLAModel using capnproto
* Enforce pyproj==1.9.3 in requirements.txt
* Use FastCLAClassifier read class method instead of instance method
@@ -394,12 +394,12 @@
* Merge remote-tracking branch 'upstream/master'
* Rename testconsoleprinter_output.txt so as to not be picked up by py.test as a test during discovery
* likelihood test: fix raw-value must be int
- * Fix broken TPShim
- * Revert "Fix TP Shim"
+ * Fix broken TMShim
+ * Revert "Fix TM Shim"
* Anomaly serialization verify complex anomaly instance
* Likelihood pickle serialization test
* MovingAverage pickle serialization test
- * Fix TP Shim
+ * Fix TM Shim
* Removed stripUnlearnedColumns-from-SPRegion
* Updated comment describing activeArray parameter of stripUnlearnedColumns method in SP
* Revert "MovingAvera: remove unused pickle serialization method"
@@ -482,7 +482,7 @@
* Remove FDRCSpatial2.py
* Replace the use of FDRCSpatial2 to SpatialPooler
* SP profile implemented from tp_large
- * TP profile: can use args from command-line, random data used
+ * TM profile: can use args from command-line, random data used
* Adds AnomalyRegion for computing the raw anomaly score. Updates the network api example to use the new anomaly region. Updates PyRegion to have better error messages.
* Remove FlatSpatialPooler
* Add delete segment/synapse functionality to Connections data structure
8 changes: 4 additions & 4 deletions ci/travis/script-run-examples.sh
@@ -31,14 +31,14 @@ python ${NUPIC}/examples/bindings/sparse_matrix_how_to.py || exit
# examples/opf (run at least 1 from each category)
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/anomaly/spatial/2field_few_skewed/ || exit
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/anomaly/temporal/saw_200/ || exit
- python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/classification/category_TP_1/ || exit
+ python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/classification/category_TM_1/ || exit
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/missing_record/simple_0/ || exit
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/multistep/hotgym/ || exit
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/opfrunexperiment_test/simpleOPF/hotgym_1hr_agg/ || exit

# opf/experiments/params - skip now
python ${NUPIC}/scripts/run_opf_experiment.py ${NUPIC}/examples/opf/experiments/spatial_classification/category_1/ || exit

- # examples/tp
- python ${NUPIC}/examples/tp/hello_tm.py || exit
- python ${NUPIC}/examples/tp/tp_test.py || exit
+ # examples/tm
+ python ${NUPIC}/examples/tm/hello_tm.py || exit
+ python ${NUPIC}/examples/tm/tm_test.py || exit
6 changes: 3 additions & 3 deletions docs/README.md
@@ -135,12 +135,12 @@ nupic
│   ├── SPRegion.py [TODO]
│   ├── SVMClassifierNode.py [TODO]
│   ├── Spec.py [TODO]
- │   ├── TPRegion.py [TODO]
+ │   ├── TMRegion.py [TODO]
│   ├── TestRegion.py [TODO]
│   └─── UnimportableNode.py [TODO]
├── research
- │   ├── TP.py [TODO]
- │   ├── TP10X2.py [TODO]
+ │   ├── BacktrackingTM.py [TODO]
+ │   ├── BacktrackingTMCPP.py [TODO]
│   ├── TP_shim.py [TODO]
│   ├── connections.py [TODO]
│   ├── fdrutilities.py [TODO]
2 changes: 1 addition & 1 deletion docs/examples/network/complete-example.py
Expand Up @@ -70,7 +70,7 @@ def createNetwork(dataSource):

# Add SP and TM regions.
network.addRegion("SP", "py.SPRegion", json.dumps(modelParams["spParams"]))
network.addRegion("TM", "py.TPRegion", json.dumps(modelParams["tmParams"]))
network.addRegion("TM", "py.TMRegion", json.dumps(modelParams["tmParams"]))

# Add a classifier region.
clName = "py.%s" % modelParams["clParams"].pop("regionName")
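For context, the renamed region is wired into the rest of the network exactly as before. A minimal sketch of how the example might continue, assuming the standard Network API calls (the "sensor" region name and link parameters are illustrative, not taken from the file above):

```python
# Hypothetical continuation of createNetwork(): link the regions and run.
# Assumes a sensor region named "sensor" was added earlier, as in the
# standard NuPIC network examples.
network.link("sensor", "SP", "UniformLink", "")  # encoded input -> SP
network.link("SP", "TM", "UniformLink", "")      # SP output -> renamed TMRegion

network.initialize()
network.run(100)  # process 100 records through sensor -> SP -> TM
```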
4 changes: 2 additions & 2 deletions docs/source/api/network/regions.rst
Expand Up @@ -23,10 +23,10 @@ SPRegion
:members:
:show-inheritance:

- TPRegion
+ TMRegion
^^^^^^^^^^^^^

- .. autoclass:: nupic.regions.TPRegion.TPRegion
+ .. autoclass:: nupic.regions.TMRegion.TMRegion
:members:
:show-inheritance:

10 changes: 5 additions & 5 deletions docs/source/guides/anomaly-detection.md
Expand Up @@ -4,17 +4,17 @@ This technical note describes how the anomaly score is implemented and incorpora

The anomaly score enables the CLA to provide a metric representing the degree to which each record is predictable. For example, if you have a temporal anomaly model that is predicting the energy consumption of a building, each record will have an anomaly score between zero and one. A zero represents a completely predicted value whereas a one represents a completely anomalous value.

- The anomaly score feature of CLA is implemented on top of the core spatial and temporal pooler, and don't require any spatial pooler and temporal pooler algorithm changes.
+ The anomaly score feature of CLA is implemented on top of the core spatial pooler and temporal memory, and doesn't require any changes to the spatial pooler or temporal memory algorithms.

## TemporalAnomaly model

### Description

- The user must specify the model as a TemporalAnomaly type to have the model report the anomaly score. The anomaly score uses the temporal pooler to detect novel points in sequences. This will detect both novel input patterns (because they have not been seen in any sequence) as well as old spatial patterns that occur in a novel context.
+ The user must specify the model as a TemporalAnomaly type to have the model report the anomaly score. The anomaly score uses the temporal memory to detect novel points in sequences. This will detect both novel input patterns (because they have not been seen in any sequence) as well as old spatial patterns that occur in a novel context.

### Computation

- A TemporalAnomaly model calculates the anomaly score based on the correctness of the previous prediction. This is calculated as the percentage of active spatial pooler columns that were incorrectly predicted by the temporal pooler.
+ A TemporalAnomaly model calculates the anomaly score based on the correctness of the previous prediction. This is calculated as the percentage of active spatial pooler columns that were incorrectly predicted by the temporal memory.
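
In other words, the raw anomaly score is the fraction of currently active columns that the temporal memory failed to predict at the previous time step. A minimal Python sketch of that computation (illustrative names, not the exact NuPIC implementation):

```python
def compute_raw_anomaly_score(active_columns, prev_predicted_columns):
    """Fraction of active SP columns that the TM did not predict
    at the previous time step."""
    active = set(active_columns)
    if not active:
        return 0.0  # nothing active, so nothing to call anomalous
    unpredicted = active - set(prev_predicted_columns)
    return float(len(unpredicted)) / len(active)

# e.g. 10 active columns, 7 of them predicted last step -> score of 0.3
# 0.0 = every active column was predicted; 1.0 = none were.
```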

The algorithm for the anomaly score is as follows:

@@ -59,7 +59,7 @@ There were also some attempts at adding anomaly detection that are "non-temporal

### Computation

- Since NontemporalAnomaly models have no temporal pooler, the anomaly score is based on the state within the spatial pooler.
+ Since NontemporalAnomaly models have no temporal memory, the anomaly score is based on the state within the spatial pooler.

To compute the nontemporal anomaly score, we first compute the "match" score for each winning column after inhibition

@@ -77,4 +77,4 @@ The purpose of this anomaly score was to detect input records that represented n

### Results

- This algorithm was run on some artificial datasets. However, the results were not very promising, and this approach was abandoned. From a theoretical perspective the temporal anomaly detection technique is a superset of this technique. If a static pattern by itself is novel, by definition the temporal pooler won't make good predictions and hence the temporal anomaly score should be high. As such there was not too much interest in pursuing this route.
+ This algorithm was run on some artificial datasets. However, the results were not very promising, and this approach was abandoned. From a theoretical perspective the temporal anomaly detection technique is a superset of this technique. If a static pattern by itself is novel, by definition the temporal memory won't make good predictions and hence the temporal anomaly score should be high. As such there was not too much interest in pursuing this route.
2 changes: 1 addition & 1 deletion docs/source/guides/swarming/index.rst
Expand Up @@ -4,7 +4,7 @@ Swarming
Swarming is a process that automatically determines the best model for a
given dataset. By "best", we mean the model that most accurately produces
the desired output. Swarming figures out which optional components should go
- into a model (encoders, spatial pooler, temporal pooler, classifier, etc.),
+ into a model (encoders, spatial pooler, temporal memory, classifier, etc.),
as well as the best parameter values to use for each component.

We have plans to replace the current swarming library with a more universal
2 changes: 1 addition & 1 deletion docs/source/guides/swarming/running.md
@@ -2,7 +2,7 @@

This document contains detailed instructions for configuring and running swarms. Please see the document [Swarming Algorithm](Swarming-Algorithm) for a description of the underlying swarming algorithm.

- Swarming is a process that automatically determines the best model for a given dataset. By "best", we mean the model that most accurately produces the desired output. Swarming figures out which optional components should go into a model (encoders, spatial pooler, temporal pooler, classifier, etc.), as well as the best parameter values to use for each component.
+ Swarming is a process that automatically determines the best model for a given dataset. By "best", we mean the model that most accurately produces the desired output. Swarming figures out which optional components should go into a model (encoders, spatial pooler, temporal memory, classifier, etc.), as well as the best parameter values to use for each component.
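
A swarm can also be launched programmatically. The sketch below assumes the nupic.swarming.permutations_runner API; the dataset path, field names, and config values are hypothetical, for illustration only:

```python
from nupic.swarming import permutations_runner

# Hypothetical swarm description: the dataset, the field to predict,
# and the search size (values here are illustrative).
SWARM_CONFIG = {
  "includedFields": [
    {"fieldName": "consumption", "fieldType": "float"},
  ],
  "streamDef": {
    "info": "energy consumption",
    "version": 1,
    "streams": [
      {"source": "file://data/hotgym.csv", "info": "hotgym", "columns": ["*"]},
    ],
  },
  "inferenceType": "TemporalAnomaly",
  "inferenceArgs": {"predictionSteps": [1], "predictedField": "consumption"},
  "swarmSize": "medium",
}

# Runs the swarm and returns the parameters of the best model found.
best_model_params = permutations_runner.runWithConfig(
    SWARM_CONFIG, {"maxWorkers": 4, "overwrite": True})
```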

When you run a swarm, you provide the following information:
* A dataset to optimize over (a .csv file containing the inputs and desired output).