Explicitly suggesting that P in scan_P_ is a partial pattern, while P_ and _P_ are complete #4

Closed

Conversation


@Twenkid Twenkid commented Feb 3, 2018

Regarding scan_P_:
Todor: it's reasonable to mention that P, the input parameter, is a partial pattern, sent by form_P:

P = pri_s, I, D, Dy, M, My, G, alt_rdn, e_
While a P taken from P_ or _P_ is a complete pattern, with a different sequence:
P = s, ix, x, I, D, Dy, M, My, G, alt_rdn, e_, alt_ 

That's confusing on first read, because by default the same name suggests a list of the same type,
but then below, scan_P_ reads P[0][1] with a comment naming it ix, i.e. a different layout.
Yes, it becomes clear on closer study, seeing that P_ is filled later etc., and by keeping in mind
that the above P in form_P is commented as "partial". Still, reusing the same name is confusing, and I think stating
the difference more explicitly, at least with a comment, would speed up code understanding.
Alternatively, the partial patterns could be marked mnemonically, for example pP or Pp, in the input parameter of the function or in form_P as well.

Regarding scan_P_, the same note as a proposed in-code comment:
# Todor: it's reasonable to mention that P, the input parameter, is a partial pattern, sent by form_P:
# P = pri_s, I, D, Dy, M, My, G, alt_rdn, e_
# While a P taken from P_ or _P_ is a complete pattern, with a different sequence:
# P = s, ix, x, I, D, Dy, M, My, G, alt_rdn, e_, alt_
# That's confusing on first read, because by default the same name suggests a list of the same type,
# but then below, scan_P_ reads P[0][1] with a comment naming it ix, i.e. a different layout.
# Yes, it becomes clear on closer study, seeing that P_ is filled later etc., and by keeping in mind
# that the above P in form_P is commented as "partial". Still, reusing the same name is confusing, and I think stating
# the difference more explicitly, at least with a comment, would speed up code understanding.
# Alternatively, the partial patterns could be marked mnemonically, for example pP or Pp, in the input parameter of the function or in form_P_ as well.
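A minimal sketch of how the suggested disambiguation could look; only the two tuple layouts are taken from the comment above, while the function, the name pP and the sample values are hypothetical:

def describe(pP, _P_):
    # pP: partial pattern from form_P: pri_s, I, D, Dy, M, My, G, alt_rdn, e_
    pri_s = pP[0]
    # _P_ holds complete higher-line patterns: s, ix, x, I, D, Dy, M, My, G, alt_rdn, e_, alt_
    ix = _P_[0][1]  # ix of a complete pattern: a different layout than pP
    return pri_s, ix

pP = (1, 10, 2, 3, 4, 5, 6, 0, [])                # partial pattern: 9 vars
_P_ = [(1, 7, 12, 10, 2, 3, 4, 5, 6, 0, [], [])]  # complete pattern: 12 vars
print(describe(pP, _P_))                          # -> (1, 7)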

boris-kz commented Feb 3, 2018 via email


Twenkid commented Feb 3, 2018

I've read it, but yes, it's brief and when reading the code afterwards the detail that P is different is covered by the comments/expectations.

OK. Is this edit fine:

(...)
y-3: term_P2(P2_): P2s are evaluated for termination, re-orientation, and consolidation 

  Any given 2D function always accesses two lines: relatively higher and lower.
    
    postfix '_' denotes array name (vs. same-name element),
prefix '_' as in _P denotes prior input: a higher-line pattern or variable. Notice that the
               contents depend on the line a pattern belongs to: y, y-1, y-2, y-3, thus for example
               the variables of the _P patterns are different from the ones in P in scan_P_.
                  

Which comparison functions are 2D? All but the first one, comp? Or does ycomp also count as 2D?

Your additional explanations above would also be helpful in the intro - should they be included as well? Or there could be an additional file with notes about the code and its logic, as wordy as is suitable?

(I realise that it's possibly explained somewhere in the CogAlg blog, but recently I've been keeping myself focused only on the code itself.)


boris-kz commented Feb 3, 2018

Yes, starting from ycomp. These explanations are specific to level_1_2D, so I think they should stay in the top comment. Which could be as long as we want. Also, I guess more initial comments in every function will help.


Twenkid commented Feb 3, 2018

OK. (I removed "Notice that", it's redundant.)


boris-kz commented Feb 3, 2018 via email


Twenkid commented Feb 4, 2018

In ycomp():

1.) I noticed that the difference branch of the gradient variables doesn't have a "filter" (initially, the average).

dg = _d + fdy # d gradient
vg = _m + fmy - ave # v gradient

What's the reasoning - isn't it also supposed to be compared?
Is zero assumed as a fixed filter?

2.) The alt value for alt_len is always taken from the dP branch of form_P.
(The vP branch is called first, then the dP branch; they use the same identifier alt, so the value from the last call is used.)

if alt[0]:
    dalt_.append(alt); valt_.append(alt)
    alt_len, alt_vG, alt_dG = alt

Then both dalt_ and valt_ are updated with the same alt tuple, which is the one computed in form_P() for the dP.

Further:

if alt_vG > alt_dG:  # comp of alt_vG to alt_dG, == goes to alt_P or to vP: primary?
    vP[7] += alt_len  # alt_len is added to redundant overlap of lesser-oG- vP or dP
else:
    dP[7] += alt_len

Both patterns are updated with the difference-pattern's alt_len.

Shouldn't these values be different? Alternative comparisons are expected to give different results by default?
Is it correct, and if so, could you give more of the reasoning behind it?

(I know they are supposed to "overlap" and thus carry some "redundancy", but I guess I'll reach a clearer understanding later.)

3.) Then comes:
s = 1 if g > 0 else 0
if s != pri_s and x > rng + 2: #

The sign s marks the first "above/below filter" comparison. The gradient g is the "positive match", and it combines (sums) the match to the previous pixel on the left and to the one above: vg = _m + fmy - ave # v gradient.

Again referring to 1.: the lack of an "ave" filter for the difference - is the filter 0?


boris-kz commented Feb 4, 2018

1.) I noticed that the difference branch of the gradient variables doesn't have a "filter" (initially, the average).

dP is defined by comparison to shorter-range feedback: prior input,
vP is defined by comparison to higher (prior) - level feedback: filter.
These are different orders of patterns.

2.) The alt value for alt_len is always taken from the dP branch of form_P.
(The vP branch is called first, then the dP branch; they use the same identifier alt, so the value from the last call is used.)

They both use the same local variable alt. It is the same because it's an overlap between the two.
It's a measure of redundancy to eventually increment filter A for the weaker of the two.
But smaller-scale differences in value are minor and probably don't justify the costs of adjusting the filter.
So, alt_len is summed and buffered until 2D patterns terminate, and then the filter for the weaker one is adjusted. This is still tentative, part of that redundancy adjustment problem.

if alt_vG > alt_dG:  # comp of alt_vG to alt_dG, == goes to alt_P or to vP: primary?
    vP[7] += alt_len  # alt_len is added to redundant overlap of lesser-oG- vP or dP
else:
    dP[7] += alt_len
Both patterns are updated with the difference-pattern's alt_len.

No, only the weaker (redundant) of the two is updated by shared alt_len, see above.

3.) Then comes:
s = 1 if g > 0 else 0
if s != pri_s and x > rng + 2: #
The sign s marks the first "above/below filter" comparison. The gradient g is the "positive match", and it combines (sums) the match to the previous pixel on the left and to the one above: vg = _m + fmy - ave # v gradient.
Again referring to 1.: the lack of an "ave" filter for the difference - is the filter 0?

The equivalent of filter for dP is prior input. It is defined by the sign of difference, not by the sign of value.
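A minimal sketch of that distinction, with made-up input values (ave = 127 is just a stand-in for the current filter constant); the two gradient formulas follow the snippets quoted above:

ave = 127             # higher-level filter (illustrative constant)
_d, fdy = 30, 10      # hypothetical lateral and vertical difference terms
_m, fmy = 90, 50      # hypothetical lateral and vertical match terms

dg = _d + fdy         # d gradient: no ave, the implicit threshold is 0 (prior input)
vg = _m + fmy - ave   # v gradient: value above the higher-level filter

s_d = 1 if dg > 0 else 0   # dP sign: sign of the difference itself
s_v = 1 if vg > 0 else 0   # vP sign: sign of value relative to ave
print(s_d, s_v)            # -> 1 1 for these values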


boris-kz commented Feb 5, 2018

Todor, I got rid of alt_. It was buffering individual alt_P overlaps, to delete them in case a stronger alt_P becomes relatively weaker on some higher level. I no longer think this is feasible: the patterns will diverge, and it should be easier to reconstruct them from their e_ buffers.
Also, I replaced the alt tuple with olp vars: olp_len, olp_vG, olp_dG, and alt_rdn.


Twenkid commented Feb 5, 2018

OK.

For the sorting of fork_ elements by crit (on the first call, is it form_oG?), for max-to-min shouldn't it be:
fork_.sort(key = lambda fork: fork[0], reverse=True) # max-to-min crit, or sort and select at once:
https://www.programiz.com/python-programming/methods/list/sort

fork_ = [(10, 5), (6,4), (1,3), (15,2), (3,1), (245,99), (50,12)]
fork_.sort(key = lambda fork: fork[0], reverse=True)
print(fork_)

[(245, 99), (50, 12), (15, 2), (10, 5), (6, 4), (3, 1), (1, 3)]


boris-kz commented Feb 5, 2018

Yes, thanks! crit is determined by typ, which is received from scan_P_. Yes, initial crit is fork_oG.


Twenkid commented Feb 7, 2018

Hi, I noticed there's a big update in the code, with some parts still unspecified, and I'm reading it, but for now it's a stagnation period as far as having something meaningful to say.

Just this, about the if len(a_list): after rechecking, for lists if a_list: evaluates to False when the list is empty, as you expected, so it can be just:

if len(fork_):  # P is evaluated for inclusion into its fork _Ps on a higher line (y-1)
if fork_:

My earlier correction stands for tuples: in their case a_tuple = 0,0,0 returns True for if a_tuple:, whether it's all zeros or has a 1 anywhere.

>>> f = []
>>> f.append(5)
>>> f
[5]
>>> if f: print("a")
...
a
>>> f = []
>>> if f: print("1")
...
>>> a = 0,0,0
>>> if a: print("1")
...
1


boris-kz commented Feb 8, 2018 via email


Twenkid commented Feb 8, 2018

Not recently, but I did when testing PyPy. Console Python and any editor have been fine for me so far.
(I opened it through PyCharm now, cloned from Git (VCS -> ...); it would ease this process, but I've been studying the code directly on GitHub anyway.)


boris-kz commented Feb 8, 2018 via email


Twenkid commented Feb 9, 2018

Yes. BTW, it seems that in this session they are less of a burden for my working memory.

However, these days I've been thinking practically about a custom analysis/debug tool and its first desirable features. Implementation could begin in the coming days, depending on my focus.


boris-kz commented Feb 9, 2018 via email


Twenkid commented Feb 9, 2018

Like I said, it's a matter of practice.

Yes, but practice not only with your code; it was less of a burden from the first glance.

The rest: edited (censored).

Give an unambiguous, exhaustive, and practical definition of "working on the algorithm", and prove and explain how and why it excludes certain approaches, including ones you are not familiar with or have no idea about.


Twenkid commented Feb 10, 2018

BTW, I've been failing to locate the so-called "negative patterns" in your code - are they implemented in the Le2 code, and how do they "look"?

That also raises the question of where the other mythical "complemented" patterns (Neg + Pos) are.
And they exist for both vP and dP (separately: (vP+, vP-) = complemented vP?; (dP+, dP-) = complemented dP?).

Is it something about this? (unlikely):
def form_pP(par, pP_): # forming parameter patterns within PP

2Le: DIV of multiples (L) to form ratio patterns, over additional distances = negative pattern length LL

?

Negative patterns are defined as having m < average match.

In the Le2 code, AFAIK, there are comparisons > A like:
while fork_ and (crit > A or ini == 1):

If it's not > A, the loop is just not entered. (And I don't yet have the tooling to trace it easily enough etc., see previous comments :) )


boris-kz commented Feb 10, 2018 via email


Twenkid commented Feb 10, 2018

Yes, it's legal:


$ python
>>> e_ = [1,2,3]
>>> p = 25; g = 16; alt_g = 3
>>> e_.append((p, g, alt_g))
>>> e_
[1, 2, 3, (25, 16, 3)]

Complemented patterns would be formed in discontinuous comparison between
2D patterns, across opposite-sign P2s.

Then what is a continuous one, and what's the difference between continuous and discontinuous comp?
Discontinuous: comparison between adjacent + and - patterns, i.e. of "different types" - is that why it's discontinuous?
Continuous: comp of same-type patterns? (+ +); vP, vP; dP, dP?


Twenkid commented Feb 10, 2018

I see now - the negative patterns are just the ones with s = 0?
s = 1 if g > 0 else 0 # g = 0 is negative?
Anyway, I think it should be marked more explicitly in the code, to tie it to the text. The comments say "same-sign gradient", but not "negative/positive patterns".
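For illustration, a hypothetical comment-only change that would tie the code to that terminology (the g value here is made up):

g = -5                   # made-up gradient value
s = 1 if g > 0 else 0    # s = 1: positive pattern, s = 0: negative pattern (g <= 0)
print("negative pattern" if s == 0 else "positive pattern")   # -> negative pattern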


Twenkid commented Feb 10, 2018

Also, the ave coefficients are still not completely defined - their computation/update?

global ave; ave = 127        # filters, ultimately set by separate feedback, then ave *= rng
global div_a; div_a = 127    # not justified
global ave_k; ave_k = 0.25   # average V / I

Back to the olp logic discussed earlier in this thread, and the olp-code in ycomp

if olp:  # if vP x dP overlap len > 0, incomplete vg - ave / (rng / X-x)?
    odG *= ave_k; odG = odG.astype(int)  # ave_k = V / I, to project V of odG
    if ovG > odG:  # comp of olp vG and olp dG, == goes to vP: secondary pattern?
        dP[7] += olp  # overlap of lesser-oG vP or dP, or P = P, Olp?
    else:
        vP[7] += olp  # to form rel_rdn = alt_rdn / len(e_)

• an earlier comment:

No, only the weaker (redundant) of the two is updated by shared alt_len, see above.

  • also

dP is defined by comparison to shorter-range feedback: prior input,
vP is defined by comparison to higher (prior) - level feedback: filter.
These are different orders of patterns.

I read the code as saying that the weight of the difference gradient is much larger than that of the vG (unless ave_k can somehow be made < 1), i.e. the initial ovG has to be ave_k times bigger than odG for ovG > odG to hold, thus the dP is counted as "redundant" and its olp updated. Therefore dP is assumed to be the more important pattern.

What's the reasoning? (Or what was it, if I've forgotten.)

Shorter feedback is more powerful? ("Predictability decreases with distance")
Higher level feedback's cost has to be justified?

There might be many (an increasing number of) sources of higher-level feedback, so their weight has to be spread (divided) across many - at higher levels / higher derivatives / ...?

However, in the next stage, form_P, both gradients are multiplied by the same ave_k:


if typ: alt_oG *= ave_k; alt_oG = alt_oG.astype(int)  # ave V / I, to project V of odG
else:   oG *= ave_k; oG = oG.astype(int)              # same for h_der and h_comp eval?

if oG > alt_oG:  # comp between overlapping vG and dG
    Olp += olp   # olp is assigned to the weaker of P | alt_P, == -> P: local access
else:
    alt_P[7] += olp


boris-kz commented Feb 10, 2018 via email


boris-kz commented Feb 10, 2018 via email


boris-kz commented Feb 10, 2018 via email


Twenkid commented Feb 10, 2018

i.e. the initial ovG has to be ave_k times bigger than odG for ovG > odG to hold, thus the dP is counted as "redundant" and its olp updated.
Therefore dP is assumed to be the more important pattern.
No, D is less predictive (selective) than V, see above.

OK - I mixed up ave = 127 and ave_k = 0.25, assuming ave_k = 127.

if typ: alt_oG *= ave_k; alt_oG = alt_oG.astype(int)  # ave V / I, to project V of odG
else:   oG *= ave_k; oG = oG.astype(int)              # same for h_der and h_comp eval?

if oG > alt_oG:  # comp between overlapping vG and dG
    Olp += olp   # olp is assigned to the weaker of P | alt_P, == -> P: local access
else:
    alt_P[7] += olp

No, it's if typ: P = vP and alt_P = dP, else: reverse.
So, only dG * ave_k for both

I meant that in either branch of the if, the gradient is scaled by the same coefficient (similarly to the earlier question about alt += ...), while in ycomp there are two kinds of overlap gradients, etc.

However, now I realize why: in the form_P stage the two kinds of gradients, v and d, are merged into a common G, oG.
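For reference, a worked example of the ycomp overlap assignment quoted earlier in the thread, with made-up gradient values and ave_k = 0.25 as in the current code (vP_rdn and dP_rdn are stand-ins for vP[7] and dP[7]):

ave_k = 0.25
olp, ovG, odG = 5, 40, 100   # hypothetical overlap length and overlap gradients
vP_rdn, dP_rdn = 0, 0        # stand-ins for vP[7] and dP[7]

odG = int(odG * ave_k)       # project V of odG: 100 -> 25
if ovG > odG:                # 40 > 25: vP is the stronger of the two here
    dP_rdn += olp            # so the shared overlap is charged to the weaker dP
else:
    vP_rdn += olp
print(vP_rdn, dP_rdn)        # -> 0 5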

@boris-kz

So, I was thinking about +|- dPPs and vPPs, and realised that scan_P_ and fork_eval should only apply to blobs. That's because the additional complexity of comp_P -> dPPs and vPPs is per blob. So, blobs should be evaluated for comp_P after their termination. The last update shows that blob-only scan_P_, which also includes blob-only fork_eval. Next, I will do term_blob, which will call comp_P.
Thanks!


Twenkid commented Feb 11, 2018

Cool! :)

Complemented patterns would be formed in discontinuous comparison between
2D patterns, across opposite-sign P2s.
+
Then what is a continuous one and what's the difference between cont. and disc. comp?
Discont.: comparison between adjacent + and - patt, i.e. of "different types", that's why it's discont?
Continuous: comp of the same type patt? (+ +); vP, vP; dP,dP ?
+
No, +Ps and -Ps always alternate, so same-sign comp will be positionally
discontinuous.
And +P comparands will have a record of gap: intervening -Ps, that's why
they are complemented.

Right, because a +P pattern is terminated when its match is < filter and a -P is terminated when the match is > filter?

+

So, I was thinking about +|- dPPs and vPPs,
+
Complemented patterns would be formed in discontinuous comparison between
2D patterns, across opposite-sign P2s.
That would probably be 3Le, or 4Le for video.

Then do you mean that continuous and discontinuous comparison are not valid concepts at le_1_2D?

Or, more likely, that the discussed comparisons are all discontinuous, because of the inherent +P, -P sequences? Which seems to be one of the basic pattern-creation key points/schemes in your algorithm: these are the edges of the creation-termination cycles?

Thus "discontinuous" here is having a gap (>1) between the end-coordinate of the first and the start-coordinate of the following?

However, regarding +|- PPs: they are supposed to be of the same sign, thus the comparisons which created them are continuous, meaning no coordinate gaps - because the constituent patterns are on different lines?


boris-kz commented Feb 11, 2018 via email


Twenkid commented Feb 12, 2018

BTW, regarding the open questions you told me about a while ago, the third one:

- how to project feedback (D_input, D_average, etc.) and combine it across multiple levels.

I don't know if it's connected, but I had a thought regarding those ave, ave_k etc. higher-level (hiLe) feedback vars, whose dynamics are not yet specified.

The use of the word "average" suggests a single value, and yet they are technically constants.
However, I assume these values are supposed to be very dynamic and adjustable per item per pattern, or at least per constituent pattern (if not per the initial p, d, m tuples), i.e. the hiLe should be able to feed a different ave_k etc. for each coordinate/constituent sub-pattern?

You're working on how to balance the feeds, how to calculate/adjust the effect of deeper levels (not only the immediately next one?), and how the feedback values of different levels would interact to produce some "final feedback" value to be used at the lower-level stage?


boris-kz commented Feb 12, 2018 via email


Twenkid commented Feb 12, 2018

The most basic combination is of co-derived D and M. Feedback of D would be
adjusting bit-filters, to minimize overflow and underflow (this is not
implemented here but conceptually important).

OK, that may be appropriate as a performance improvement for some very low-level implementation, such as an FPGA/ASIC chip, but also at a higher level (like a smart decoder to assembly, switching code paths for different cases) to utilise vector operations over 8-bit, 16-bit, 32-bit data, instead of a general type, possibly a 32-bit or wider float, which avoids overflows at the cost of a lot of computational redundancy.

But back-projected D and M compete, so adjusting factor is < D. And this
reduction should increase over projected distance, because match has
greater base range: it is common for 2 inputs, vs. 1 input for d.

For summation/average that 2:1 ratio makes sense given the definition; correspondingly, m has a lower resolution and the prediction is for two final coordinates.

Well, so this is the overlap and redundancy that's computed? And ave_k = 0.25 for a start, because m has two overlaps for the 2 dimensions = 4 common matches?

If that is correct, again I think explicit comments in the code would be helpful.

A) So, feedback should be = (*D / 2) / (proj L / DL). Or, considering slower
B) decay (longer base) of M: (D / 2) ^ (1 / (proj L / DL))? I am not sure. *

"Longer base" = two coordinates with conceptually defined common match, compared to 1 for difference?

proj L - projected L (length of patterns/coordinates to which the feedback has impact)?
DL - Length (number of elements) of a span of summed D?

What's ^ - power of? Thus a fractional exponent (if proj L/DL is >= 1), or a plain power if proj L/DL is less than 1.

D/2 > 1? (Is that certain?)
However:
division by a factor < 1 = multiplication, the result grows;
a power < 1 = a root, the result shrinks;
a power > 1 amplifies (if proj L/DL < 1, the exponent 1/(proj L/DL) is > 1 and the result is > 1)
(i.e. if proj L/DL < 1 and D/2 > 1, B > A)

Should it be so?
Do you know what the values of these variables would be? Could their ordering (>, <) vary? The magnitude relation between the two equations may vary too much, in both directions.

L as number of low(er) level input coordinates, for the immediate lower Le?

(However, all higher Le project - if so, shouldn't the distance in the hierarchy, the depth of the levels, be reflected in the formula, or is it just projected level by level from top to bottom and accumulated?)

(*D / 2) /
I am not sure. *

"*" means you're not sure about D/2 part? (It's not clear)


Twenkid commented Feb 12, 2018

Or is proj L the projected distance?
Well, clarifying the variables would resolve the confusion about B > A.


Twenkid commented Feb 12, 2018

BTW, are you using graphical calculators?

https://www.desmos.com/calculator

For the above expressions:

y_1 = \frac{D/2}{p/L}

y_2 = \left(\frac{D}{2}\right)^{1/(p/L)}
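And a throwaway numeric check of the two candidate formulas from above, with made-up values for D, p (proj L) and L (DL); which of the two comes out larger depends on the values:

D, p, L = 40, 8, 2            # hypothetical summed difference, projected length, D span
ratio = p / L                 # projection ratio

y1 = (D / 2) / ratio          # A) decay by division
y2 = (D / 2) ** (1 / ratio)   # B) slower decay: fractional power
print(y1, y2)                 # -> 5.0 and ~2.11 for these values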


boris-kz commented Feb 12, 2018 via email


boris-kz commented Feb 13, 2018 via email


Twenkid commented Feb 13, 2018

So, it's probably decay by division: feedback = (D / 2) / (proj_distance / D_span)
vs.
(D / 2) / (proj L / DL).

BTW, semantically the identifiers proj_distance and D_span sound better - using D both for summed difference and for distance is confusing.

To me the code usage implicitly suggests that "L" is a natural number - the length of a list (array) of collected data items - while "distance" is more of an abstract distance, in the void, with a possible unit of measure and scales. "Span" also better evokes the material covered by it, a span of inputs/patterns.

Yes, L is in bottom-Le coord, because all lower levels skip.

Thus proj_distance and D_span have maximum values at the bottom-level input resolution? (the camera, the lowest)

Note that only D is fed back, M doesn't update the filter.

This filter (which is for difference patterns?) or in general?


boris-kz commented Feb 13, 2018 via email


Twenkid commented Feb 27, 2018

form_P(...

I += p # pixels summed within P
D += d # lateral D, for P comp and P2 orientation
Dy += dy # vertical D, for P2 normalization
M += m # lateral D, for P comp and P2 orientation
My += my # vertical M, for P2 orientation
G += g # d or v gradient summed to define P value, or V = M - 2a * W?

It doesn't matter much, but that's an old "mistake" in the comments - for M it's obviously lateral M, not lateral D.

@boris-kz

Yes, thanks.


Twenkid commented Feb 28, 2018

def scan_blob(typ, blob): # vertical scan of Ps in Py_ for comp_P, incr_PP, form_pP_?
...
What's the mnemonic for the S vars?

vS_ders etc. - Scan?

Also, in the still commented-out code:

if dw sign == ddx sign and min(dw, ddx) > a: _S /= cos (ddx) # to angle-normalize S vars for comp

(If I'm not mistaken you once mentioned "cosine patterns"? cos (related to the dot product) measures the angle between vectors.)

...
But then:

S = 1 if abs(D) + V + a * len(e_) > rrdn * aS else 0 # rep M = a*w, bi v!V, rdn I? '''

That sounds like "sign" or "type".

What are the purpose and goal of the orientation?

dimensionally reduced axis: vP PP or contour: dP PP; dxP is direction pattern

It sounds like traversing the adjacent border items of the patterns/blobs (that's a contour). The border items in the pattern records are supposed to be the ones where the sign changes - the first and the last with > average m, for scanning in both dimensions/directions.

I see there's width and height (w, H).

"dimensionally reduced" - in order to simplify further processing, to have it in one list and care only about the match, not the x,y dimensions?


boris-kz commented Feb 28, 2018 via email


boris-kz commented Feb 28, 2018 via email


Twenkid commented Feb 28, 2018

If S is always Sum, then what's the meaning of S = 1 vs 0 in the quoted line?
Is it a normalized sum, thus a maximum/minimum in [0,1] range?

Rescanning - so it would be with the same content, but in a different order?
I'm not sure what the purpose is, though - to have an alternative representation, or is that representation special (I guess so, but I don't see how exactly)?

Or do you mean it's a straight line, linear scanning, following that computed angle? But it's not clear to me what "axis under the angle orthogonal to the axis" means - what is "under an angle"?

Guesses:

Strong axis? - the axis is one of x and y, implying patterns produced by lateral or vertical comparison;
the strong one is the one in which the G/gradient is bigger - the gradient of the laterally compared pattern is bigger than that of the vertically compared one?

For contour as well, in decompressed language: stronger means having > G, adjusted with the compensation coefficients for the cost? (In all flavors of the gradient, depending on the stage of processing)

Therefore contours are selections of the sub-patterns where the difference patterns are more predictive, while the fill-in area is where the value patterns are more predictive (all according to the current "same span" filter for the vG and 0? for the dG, since the latter lacked a filter in dg = _d + fdy, and also adjusted by the cost coefficients)?


boris-kz commented Feb 28, 2018 via email


Twenkid commented Feb 28, 2018

I think I was afraid that I'd be unprepared to ask meaningful questions. If you don't mind - OK. I'll notify you on Skype, maybe on Friday or in the following days. If there are problems - Google+. ~19h-21h my time?


boris-kz commented Feb 28, 2018 via email


Twenkid commented Mar 1, 2018

OK, what about today at ~16h?


boris-kz commented Mar 1, 2018 via email


Twenkid commented Mar 1, 2018

OK.

@boris-kz boris-kz closed this Apr 9, 2018