I would like to use the old SparseFeatures code (from v1.0) and update it so that it can be used alongside the current Chunk, Delay, and STDP code (v1.1). There is, however, an interesting difference between v1.0 and v1.1: in v1.1, the `FeatureHierarchy::simStep` method activates the layers with the hidden *context*, while in the initial version the layers were activated with the hidden *state*.
`FeatureHierarchy.cpp`, line 62 (v1.1):

```cpp
inputsUse.push_back(_layers.front()._sf->getHiddenContext());
```
Could you explain why you chose not to use bilateral inhibition? Or did I miss something?
Regards, HJ
`getHiddenContext` was added as a way to customize the recurrent connections for an encoder. It currently defaults to just being `getHiddenStates`, but in future versions it may be some other form of recurrent information.
Sorry, my fault: I incorrectly copied `getHiddenContext` from SparseFeaturesChunk to SparseFeatures. I still need to think about why removing this source of non-linearity (by not taking the k largest activities) works well for this chunk encoder.