-
Thanks Mayank!
Right, this is an indirect definition of match, used on the 1st stage of vision because the correlation between the intensity of reflected light and the stability of the reflecting object is very low.
Think of it as a 1D version of a 2D 3x3 kernel: match is defined for the central pixel relative to its surrounding pixels.
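To make that concrete, here is a minimal toy sketch of a 1D "kernel" match (the function name and the use of min as shared intensity are illustrative assumptions, not the repo's actual code):

```python
def match_1d(pixels, i):
    """Match of central pixel pixels[i] relative to its two 1D neighbors:
    the shared (minimal) intensity with each adjacent pixel, summed.
    Hypothetical helper for illustration only."""
    center = pixels[i]
    return sum(min(center, pixels[j]) for j in (i - 1, i + 1))

line = [10, 12, 11, 15, 9]
print(match_1d(line, 2))  # min(11, 12) + min(11, 15) = 11 + 11 = 22
```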
Right.
Feedback updates filters by the higher-level average of input deviation from the current value of these filters:

    ave_deviation = (summed P.M / summed P.L) / decay  # decay with distance from lower-level input
    if abs(ave_deviation) > aave:  # higher-order filter, or filter-filter

I elaborate on feedback in part 3 of my intro, though it's a hard read.
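A runnable sketch of that update rule, assuming patterns are represented as (M, L) pairs of summed match and length (the representation and function name are assumptions for illustration):

```python
def feedback_update(ave, patterns, decay, aave):
    """Sketch: adjust filter `ave` by the higher-level average deviation.
    patterns: list of (M, L) tuples = summed match and length per pattern.
    decay: attenuation with distance from lower-level input.
    aave: threshold (filter on the filter) for applying the update."""
    summed_M = sum(M for M, L in patterns)
    summed_L = sum(L for M, L in patterns)
    ave_deviation = (summed_M / summed_L) / decay
    if abs(ave_deviation) > aave:  # deviation is significant: update the filter
        ave += ave_deviation
    return ave

print(feedback_update(10.0, [(6, 2), (4, 2)], 1.25, 1.0))  # (10/4)/1.25 = 2.0 -> 12.0
```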
For the same inputs, there will be optional reprocessing of their buffers in higher-level Ps: each P contains dert_, a buffer of its elements. This relates to the imagination discussion in part 3; there is nothing for it in the code yet.
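The structure described above might look something like this (only the dert_ name comes from the comment; the other fields and the accumulate method are assumed for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class P:
    """Sketch of a pattern that keeps a buffer of its elements,
    so higher levels can optionally reprocess them later."""
    M: int = 0  # summed match over elements
    L: int = 0  # length: number of elements
    dert_: list = field(default_factory=list)  # buffer of element tuples

    def accumulate(self, dert):
        m, d = dert          # (match, difference) of one comparison
        self.M += m
        self.L += 1
        self.dert_.append(dert)  # retained for possible reprocessing

p = P()
p.accumulate((3, 1))
p.accumulate((2, -1))
print(p.M, p.L, p.dert_)  # 5 2 [(3, 1), (2, -1)]
```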
It's a pipeline: each level is supposed to continuously receive new inputs from the lower level, starting from raw images.
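The pipeline shape can be sketched with lazy streams, where each level consumes the previous level's output (the toy level functions are placeholders, nothing like real CogAlg levels):

```python
def pipeline(frames, levels):
    """Sketch: chain per-level functions over a stream of inputs.
    `frames` is the raw input stream; each element of `levels` is a
    function applied to every output of the level below it.
    map() binds each level function immediately, so the chain is lazy
    but correct."""
    stream = iter(frames)
    for level_fn in levels:
        stream = map(level_fn, stream)
    return stream

# toy levels: each merely transforms its input
out = list(pipeline([1, 2, 3], [lambda x: x + 1, lambda x: x * 10]))
print(out)  # [20, 30, 40]
```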
Text is labels of generalized representations: patterns. It should not be used as initial input because the current hierarchy is empty: there are no patterns to associate these labels with. Once we do have sufficiently general patterns, they can be associated with their labels, establishing corresponding shortcuts to the level that recognizes the labels themselves. That's pretty distant from current work.
I don't think so; it should be the same algorithm, just tuned to pay more attention (lower filters).
-
Here are my thoughts.
From what I understand, at each level you cluster the input into patterns, and these patterns serve as input for the next level.
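That summary could be illustrated with a toy one-level version: cross-compare adjacent inputs, then cluster contiguous spans of same-sign match deviation into patterns, which would feed the next level. Every name and the specific comparison here are my illustrative assumptions, not the repo's actual code:

```python
def form_patterns(line, ave=0):
    """Toy 1D level: compare adjacent pixels, then cluster consecutive
    comparisons whose match deviation (min(p, _p) - ave) shares a sign.
    Returns a list of patterns; each pattern is a list of
    (match deviation, difference) tuples."""
    patterns, span, prev_sign = [], [], None
    for _p, p in zip(line, line[1:]):
        m = min(p, _p) - ave      # match deviation of this comparison
        sign = m > 0
        if prev_sign is not None and sign != prev_sign:
            patterns.append(span)  # sign change terminates the pattern
            span = []
        span.append((m, p - _p))
        prev_sign = sign
    if span:
        patterns.append(span)
    return patterns

print(form_patterns([5, 7, 6, 1, 2], ave=4))
# [[(1, 2), (2, -1)], [(-3, -5), (-3, 1)]]
```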
I agree that neural networks take a lot of iterations to generate accurate representations, and that with each added layer the output that ultimately drives learning contains an exponentially smaller fraction of the original information. Your cross-comparison is not random and is easy to refine as well.
However, I would still like to fully understand the process of cross-comparison and feedback. It has been a bit difficult for me, as I have mostly worked with deep learning and less with conventional techniques. As per my understanding, you add a unique set of operations at each level, compare the inputs, and cluster them into patterns such that their derivatives match. You have two terms, loss and match, whose names are clear (I mean you would like to minimise the loss), but I am not clear about the exact process.