TemporalMemory: replace phases with "columnSegmentWalk". Much faster. #917

Merged: 9 commits into numenta:master on May 3, 2016

Conversation

6 participants
mrcslws (Contributor) commented Apr 12, 2016

To describe this change, I'll start by describing the current code.

Currently the TemporalMemory algorithm is implemented as a series of phases:

  1. Activate the correctly predictive cells.
  2. Burst unpredicted columns.
  3. Perform learning by adapting segments.
  4. Compute predictive cells due to lateral input on distal dendrites.

Through all of this, it's maintaining a central set of data: activeColumns, activeCells, winnerCells, predictedActiveColumns, predictedInactiveCells, predictiveCells, matchingCells, learningSegments, activeSegments, matchingSegments.

Phases 1 and 2 each append to the data. Phases 1, 2, and 3 do lookups in the data.

Phase 1 walks the predicted cells, checking which are active, doing one set lookup per predicted cell. Similarly, it walks the matching cells, doing one set lookup per matching cell. For each predicted/matching cell, it does a set insertion.

Phase 2 does a set difference to figure out the unpredicted active columns. For each, it does a set insertion. It scans all segments in the column, looks at all their synapses, figures out the best, and it saves a learning segment.

Phase 3 combines all of the active and learning segments. For each segment, it does two set lookups to figure out whether the segment is from a correctly predicted cell or learning bursting cell. Then, for each matching segment it does a set lookup to see if it's in an inactive column.

Phase 4 uses the active cells and connections to figure out the new cell excitations.

You could optimize this somewhat. Some of those set lookups aren't necessary. You could use the matchingSegments during Phase 3 to avoid repeating work from Phase 4 of the previous step. But this "phases" approach will always do two bad things:

  • It throws away context, then has to infer it again later. (e.g. set lookups). When Phase 1 finds a segment, the code has a lot of context, e.g. the fact that the segment correctly predicted a column. Later when Phase 3 finds this segment, it has to reinfer the same context.
  • It builds central collections of transient data. That's a lot of memory allocations to do per timestep. It's accessing memory all over the place. There will be cache misses.

This change merges Phases 1-3, eliminates every set lookup/insert/difference that I mentioned above, and abolishes the central bundle of transient data. With a single O(n) pass, it finds every bursting column, non-bursting column, and predicted inactive column. To accomplish this, once per timestep it does an O(n log n) sort of the active columns, the active segments, and the matching segments.

The segment learning all happens during the O(n) pass. Now the learning segments are selected in O(nMatchingSegments). For each learning segment, learning is still O(nSynapsesOnSegment log nActiveCells). So, selection is much faster, but the actual learning is the same.

The only data that's maintained during TemporalMemory::compute is the non-transient data, i.e. the data that we serialize. The CPU spends its time accessing the stack and sequentially accessing memory on the heap (a few vectors). We do all needed processing for a column, then we move on, so there's no need to preserve information for future phases. This is all designed around abolishing cache misses.

I call this O(n) pass the columnSegmentWalk. By organizing it this way, TemporalMemory::compute is very readable. You can basically read the entire TemporalMemory algorithm by reading TemporalMemory::compute. The code is less complicated because it no longer has to infer the same context multiple times.

The existing TemporalMemory unit tests don't call TemporalMemory::compute; they call the phase methods directly. So I had to create a fresh set of unit tests. They call compute, so they'll survive future changes.

Fixes #916

Here's the output for:

~~~
python $NUPIC/scripts/temporal_memory_performance_benchmark.py
~~~

Reference:
TP10X2: 1.684118s

Before:
TM (C++): 1.555487s

After:
TM (C++): 0.269989s

TemporalMemory: replace phases with "columnSegmentWalk". Much faster.
numenta-ci commented Apr 12, 2016

By analyzing the blame information on this pull request, we identified @rcrowder, @chetan51 and @SaganBolliger to be potential reviewers

cogmission commented Apr 12, 2016

If accepted, will this be ported to Python?

scottpurdy (Contributor) commented Apr 12, 2016

@cogmission most likely, we will discuss that after we do a design review and agree this is a good change

mrcslws (Contributor) commented Apr 13, 2016
We've talked a little bit about whether the lambdas are a good idea, e.g. here: https://github.com/mrcslws/nupic.core/blob/columnSegmentWalk/src/nupic/algorithms/TemporalMemory.cpp#L311

I think this approach has two main virtues:

  • It makes the columnSegmentWalk method reusable.
  • It makes the Temporal Memory algorithm super readable.

Reusable

When we call code "reusable", we're really saying that we've structured it in a way that it would make sense to reuse. When code is structured like that, it's easier to think about, even if you don't actually reuse it.

But I actually do intend to reuse it. My initial version of this code made two columnSegmentWalk passes. The first pass just figured out the number of active cells and winner cells, and then called vector::reserve with these numbers, then ran the second pass which populated the vectors. This is still a potential easy optimization. I removed it because I don't see vector::push_back operations standing out in perf traces, so I decided it didn't belong in this initial change.

You could make it reusable without lambdas. You'd create a base class:

// Abstract
class ColumnSegmentHandler
{
public:
  virtual ~ColumnSegmentHandler() {}

  virtual void onUnpredictedColumnActive(
    UInt column,
    vector<SegmentOverlap>::const_iterator matchingSegmentsBegin,
    vector<SegmentOverlap>::const_iterator matchingSegmentsEnd) = 0;
  ...
};

class CellActivationSegmentLearningHandler : public ColumnSegmentHandler
{
  CellActivationSegmentLearningHandler(
    TemporalMemory& tm,
    vector<Cell>& activeCells,
    vector<Cell>& winnerCells)
    : tm_(tm), activeCells_(activeCells), winnerCells_(winnerCells)
    {}

  void onUnpredictedColumnActive(
    UInt column,
    vector<SegmentOverlap>::const_iterator matchingSegmentsBegin,
    vector<SegmentOverlap>::const_iterator matchingSegmentsEnd) override
    {
      // Change activeCells_
      // Call tm_.adaptSegment_(), etc.
    }
private:
  TemporalMemory& tm_;
  vector<Cell>& activeCells_;
  vector<Cell>& winnerCells_;
};

The columnSegmentWalk function would take in a ColumnSegmentHandler and call its event handlers.

This would work, but I think it'd make our code really confusing. This helps illustrate why closures are wonderful.

Readable

Another approach would be to forget about reusability, and just add 4 methods:

void TemporalMemory::columnSegmentWalk_(...);
void TemporalMemory::onPredictedColumnActive_(...);
void TemporalMemory::onUnpredictedColumnActive_(...);
void TemporalMemory::onPredictedColumnsInactive_(...);

The columnSegmentWalk_ would simply call these methods. We'd plumb the prevActiveCells and prevWinnerCells down to the methods.

Or there are other similar strategies that try to have purer functions.

Either way, we'd have helper functions calling helper functions. I don't like how that would spread the logic all over the place. In general I don't think good classes have grandchild methods. This is the second wonderful thing about lambdas. They solve the "my grandchildren are too far away" problem. The callbacks live in the caller, so you make the caller very expressive. I'm a fan of how readable the TemporalMemory::compute method is.


scottpurdy (Contributor) commented Apr 14, 2016
Discussed offline and my recommendations are:

  1. Move the logic in segmentUpdateWalk into separate functions (like getSegmentsBeforeColumn that is called for both active and matching segments and returns the set of segments before the specified column, and then a similar getSegmentsForColumn). This makes segmentUpdateWalk an "imperative shell" [1] with the new functions comprising the "functional core" [1].
  2. Move the logic for each lambda into a named function. These functions will take additional parameters for the variables needed from the compute scope. In order to avoid passing these additional parameters to segmentUpdateWalk, you can use std::bind as you had suggested.

There are two additional possible changes. I haven't decided if I think they are better or worse than what we have.

A. Move the outer loop in segmentUpdateWalk into compute and get rid of segmentUpdateWalk completely. This eliminates the need for passing functions around, creating a flatter stack trace, but it requires that compute understand some of the logic around activeColumns, activeSegments, and matchingSegments.

B. Move the logic functions extracted from segmentUpdateWalk (in recommendation 1 at the very beginning of this comment), along with the corresponding data structures, into a separate class that is responsible for understanding the structure of the data structures and the relationship between them. We probably don't want to do A if we do this change.

[1] Boundaries


mrcslws (Contributor) commented Apr 20, 2016
There's not really a reason to use std::bind. It builds up a custom function at runtime, whereas lambdas dodge that runtime complexity. Might be helpful: http://stackoverflow.com/a/17545183

In other words, we wouldn't do this:

auto f = std::bind(onPredictedColumnActive,
                   std::cref(prevActiveCells),
                   std::cref(prevWinnerCells),
                   std::ref(activeCells),
                   std::ref(winnerCells),
                   std::ref(connections),
                   std::placeholders::_1,
                   std::placeholders::_2,
                   std::placeholders::_3,
                   std::placeholders::_4,
                   std::placeholders::_5);

We'd do this:

auto f = [&](UInt column,
             vector<SegmentOverlap>::const_iterator activeSegmentsBegin,
             vector<SegmentOverlap>::const_iterator activeSegmentsEnd,
             vector<SegmentOverlap>::const_iterator matchingSegmentsBegin,
             vector<SegmentOverlap>::const_iterator matchingSegmentsEnd)
{
  onPredictedColumnActive(prevActiveCells,
                          prevWinnerCells,
                          activeCells,
                          winnerCells,
                          connections,
                          column,
                          activeSegmentsBegin,
                          activeSegmentsEnd,
                          matchingSegmentsBegin,
                          matchingSegmentsEnd);
};

This avoids making the code talk about "placeholders" and such. It's using language features, not STL features.

(Though I think the code is further improved by adding more logic to the lambdas)


mrcslws (Contributor) commented Apr 20, 2016
It's useful to think of columnSegmentWalk as a custom foreach.


Walk the columns and segments via ExcitedColumns iterator
The "columnSegmentWalk" passes different parameter sets into each of its
callbacks. This change uses a common denominator for the callbacks -- it's
simply fetching segments for each column, and denoting whether the column is
active.

Then, this change moves away from callbacks, and instead exposes the walk via an
iterator.
src/nupic/algorithms/TemporalMemory.cpp
bestCell = leastUsedCell(cells, _connections);
foundCell = true;
foundSegment = false;
return Iterator(activeColumns_.end(),

mrcslws (Contributor) commented Apr 22, 2016

Typo: two spaces after return

breznak (Member) commented Apr 23, 2016
Reference:
TP10X2: 1.684118s
Before:
TM (C++): 1.555487s
After:
TM (C++): 0.269989s

That's an amazing performance gain, @mrcslws ! 👏 💯


cogmission commented Apr 23, 2016
Awesome! 👍 Nice job! (Don't forget to share this with "the little guys" by putting this into the python version so it can be ported)?

@mrcslws @breznak Just curious, how are you guys verifying the algorithm consistency? Are you getting the same test results? (I think Marcus mentioned that you can't really because of the different amount or order of RNG calls). ?? But... How do you verify this? For HTM.java, I make sure I get the same output... For RNG differences, I substitute the return vals for my RNG into Python to make sure I get my answers... (tedious but it makes definitely sure)


src/nupic/algorithms/TemporalMemory.cpp
{
bestSegment = _connections.createSegment(bestCell);
foundSegment = true;
NTA_ASSERT(!finished_);

scottpurdy (Contributor) commented Apr 26, 2016

What happens with a release build? Do we just get some undefined behavior?

mrcslws (Contributor) commented Apr 26, 2016
Correct. The only way that could happen is if someone in this file screws up, so I think an assert makes sense. I think of NTA_CHECK as being more for checking parameters of the public interface.


src/nupic/algorithms/TemporalMemory.cpp
{
if (excitedColumn.activeSegmentsBegin != excitedColumn.activeSegmentsEnd)
{
// Predicted active column.

scottpurdy (Contributor) commented Apr 26, 2016

I'd break each of the three cases out into functions. That will make this function easier to read.

mrcslws added some commits Apr 29, 2016

Move algorithm details out of the compute method
This uses C-like functions rather than private methods. This explicitly shows
what parameters and state will be used / modified.
Protect all public member variables (except connections)
It's confusing to get active cells via ".activeCells" and predictive cells via
".getPredictiveCells()". It's confusing that ".activeSegments" returns this
weird "SegmentOverlap" struct. And it's weird that the TemporalMemory's methods
are accessing some unseen "activeCells" variable, rather than "activeCells_".

This change makes things a lot more consistent.
vector<SegmentOverlap>::const_iterator activeSegmentsEnd_;
vector<SegmentOverlap>::const_iterator matchingSegment_;
vector<SegmentOverlap>::const_iterator matchingSegmentsEnd_;
const UInt cellsPerColumn_;

scottpurdy (Contributor) commented May 3, 2016

UInt32 to avoid old serialization issues between platforms.

scottpurdy (Contributor) commented May 3, 2016

Actually it looks like this may not ever be serialized so may be fine. May not hurt to get in the habit of using UInt32 explicitly though.

scottpurdy (Contributor) commented May 3, 2016
nit: make sure copyright dates in headers end in -2016


scottpurdy (Contributor) commented May 3, 2016
@mrcslws let me know when you're happy with the state of this PR. I think it's overdue for merge at this point.


scottpurdy (Contributor) commented May 3, 2016
@cogmission fyi, this is going to be merged soon. We will at some point update TemporalMemory.py in nupic to match this more closely. And we will update the segment deletion/reuse logic in both implementations to match what we do in TP.py/TP10X2.py.


mrcslws (Contributor) commented May 3, 2016
@scottpurdy ready for merge. We should also merge numenta/nupic#3104 . It doesn't require this change, but we'll need it after we merge this.


@scottpurdy scottpurdy merged commit b9b1f22 into numenta:master May 3, 2016

2 of 4 checks passed:

  • continuous-integration/appveyor/pr: Waiting for AppVeyor build to complete
  • continuous-integration/travis-ci/pr: The Travis CI build is in progress
  • Contributor Validator: mrcslws signed the Contributor License
  • Fixes Issue Validator: PR is properly linked to an issue
scottpurdy (Contributor) commented May 3, 2016

😮

cogmission commented May 3, 2016

@scottpurdy Thanks for the heads-up! I hope it is prioritized highly to get this into Python! ;-)

rhyolight (Member) commented May 3, 2016

> We will at some point update TemporalMemory.py in nupic to match this more closely. And we will update the segment deletion/reuse logic in both implementations to match what we do in TP.py/TP10X2.py.

@scottpurdy @mrcslws Is there an existing issue for this?

cogmission commented May 3, 2016

@rhyolight @scottpurdy @mrcslws There is now...
