Commit
minor edits
boris-kz committed Mar 19, 2020
1 parent be187b4 commit f80c12f
Showing 4 changed files with 116 additions and 49 deletions.
1 change: 1 addition & 0 deletions .idea/dictionaries/boris.xml


2 changes: 1 addition & 1 deletion README.md
@@ -3,7 +3,7 @@ CogAlg

Full introduction: <www.cognitivealgorithm.info>

Intelligence is a general cognitive ability, ultimately the ability to predict. That includes cognitive component of action: planning is technically a self-prediction. Any prediction is interactive projection of known patterns, hence primary cognitive process is pattern discovery. This perspective is well established, pattern recognition is a core of any IQ test. But there is no general and constructive definition of either pattern or recognition (quantified similarity). Below, I define similarity for the simplest inputs, then describe hierarchically recursive algorithm to search for similarity (patterns) among incrementally complex inputs (lower-level patterns).
Intelligence is a general cognitive ability, ultimately the ability to predict. That includes planning, which is technically self-prediction (planning is the only cognitive component of action; the rest is plan decoding). Prediction is an interactive projection of known patterns, so it must be secondary to discovering those patterns. This perspective is well established: pattern recognition is at the core of any IQ test. But I couldn't find a general AND constructive definition of either pattern or recognition (quantified similarity). Below, I define similarity for the simplest inputs, then describe a hierarchically recursive algorithm that searches for such similarity (clustered into patterns) among incrementally complex inputs (lower-level patterns).

For excellent popular introductions to the cognition-as-prediction thesis, see “On Intelligence” by Jeff Hawkins and “How to Create a Mind” by Ray Kurzweil. But on a technical level, they and most current researchers implement artificial neural networks, which operate in a very coarse statistical fashion. Capsule Networks, recently introduced by Geoffrey Hinton et al., are more selective but still rely on Hebbian learning, which is coarse due to immediate input summation. My approach is outlined below, then compared to ANN, the brain, and CapsNet.

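The similarity definition promised in the README paragraph above can be sketched minimally. This is an illustrative reading of the text, not code from the repo; assume match between two pixels is the smaller of the two (their shared quantity), difference is signed subtraction, and patterns are spans of same-sign match deviation:

```python
def comp_pixels(p1, p2):
    """Cross-compare two pixels: difference is signed subtraction,
    match is the magnitude shared by both comparands (their min)."""
    d = p2 - p1      # difference: projects p2 from p1
    m = min(p1, p2)  # match: quantity present in both inputs
    return m, d

def form_patterns(pixels, ave=10):
    """Cluster consecutive comparison results by sign of (m - ave):
    each pattern is a span of inputs with same-sign match deviation."""
    patterns, span, prior_sign = [], [], None
    for p1, p2 in zip(pixels, pixels[1:]):
        m, d = comp_pixels(p1, p2)
        sign = (m - ave) > 0
        if prior_sign is not None and sign != prior_sign:
            patterns.append((prior_sign, span))  # sign change terminates pattern
            span = []
        span.append((p2, m, d))
        prior_sign = sign
    if span:
        patterns.append((prior_sign, span))
    return patterns
```

Names `comp_pixels` and `form_patterns` are hypothetical; the repo's 1D version elaborates this scheme considerably.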
23 changes: 12 additions & 11 deletions frame_2D_alg/intra_blob_draft.py
@@ -60,38 +60,39 @@
# functions, ALL WORK-IN-PROGRESS:


def intra_blob(blob, rdn, rng, fig, fca, fcr, input): # recursive input rng+ | der+ | angle cross-comp within a blob
def intra_blob(blob, rdn, rng, fig, fca, fcr, faga): # recursive input rng+ | der+ | angle cross-comp within a blob

if fca: # flag comp angle, input = g, dy, dx or ga, day, dax
# flags: fca: comp angle, faga: comp angle of ga | g, fig: input is gradient | pixel, fcr: comp over rng+ | der+
if fca:

dert__ = comp_a(blob['dert__'], input) # form ga blobs, evaluate for comp_aga | comp_g:
dert__ = comp_a(blob['dert__'], faga) # form ga blobs, evaluate for comp_aga | comp_g:
cluster_derts(blob, dert__, 1, rdn, 0, crit=5) # cluster by sign of crit=ga -> ga_sub_blobs

for sub_blob in blob['blob_']: # eval intra_blob: if disoriented g: comp_aga, else comp_g
if sub_blob['sign']:
if sub_blob['Dert']['Ga'] > aveB * rdn:
# +Ga -> comp_a -> adert = gaga=0, ga_day=0, ga_dax=0:
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=1, fcr=0, input=(5,6,7))
# +Ga -> comp_aga -> adert = gaga, ga_day, ga_dax:
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=1, fcr=0, faga=1)

elif -sub_blob['Dert']['Ga'] > aveB * rdn:
# -Ga -> comp_g -> gdert = g, 0, 0, 0, 0, ga, day, dax (g from dert, ga, day, dax from adert):
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=0, fcr=0, input=1)
# -Ga -> comp_g -> gdert = g, gg, gdy, gdx, gm, ga, day, dax:
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=0, fcr=0, faga=1) # faga passed to comp_agg
else:
if fcr: dert__ = comp_r(blob['dert__'], fig) # 1-sparse sampling to maintain 1-to-1 comp overlap
else: dert__ = comp_g(blob['dert__'], odd=0) # sparse 3x3 if comp_gr

cluster_derts(blob, dert__, 1, rdn, fig, crit=1) # cluster by sign of crit=g -> g_sub_blobs
# feedback: blob['layer_'] += [[(lL, fig, fcr, rdn, rng, blob['blob_'])]] # 1st sub_layer
# feedback: root['layer_'] += [[(lL, fig, fcr, rdn, rng, blob['sub_blob_'])]] # 1st sub_layer

for sub_blob in blob['blob_']: # eval intra_blob comp_a | comp_rng if low gradient
if sub_blob['sign']:
if sub_blob['Dert']['G'] > aveB * rdn:
# +G -> comp_a -> adert = a, ga=0, day=0, dax=0:
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=1, fcr=0, input=(1,2,3))
intra_blob(sub_blob, rdn+1, rng=1, fig=1, fca=1, fcr=0, faga=0)

elif -sub_blob['Dert']['G'] > aveB * rdn:
# -G -> comp_r -> rdert = idert (2x2 ga, day, dax were not computed for -g derts):
intra_blob(sub_blob, rdn+1, rng+1, fig=fig, fca=0, fcr=1, input=0)
# -G -> comp_r -> rdert = idert (with accumulated derivatives):
intra_blob(sub_blob, rdn+1, rng+1, fig=fig, fca=0, fcr=1, faga=0) # faga passed to comp_agr
'''
also cluster_derts(crit=gi): abs_gg (no * cos(da)) -> abs_gblobs, no eval by Gi?
feedback per fork:
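The fork-evaluation pattern in intra_blob above reduces to a small sketch (names `AVE_B` and `eval_forks` are hypothetical, for illustration only): each positive-sign sub_blob spawns a finer (der+) or wider (rng+) fork only if its gradient deviation exceeds a cost filter that scales with redundancy rdn, so each recursion level faces a higher bar and the recursion self-terminates:

```python
AVE_B = 20  # hypothetical fixed filter: average cost of one recursion per blob

def eval_forks(sub_blobs, rdn):
    """Toy dispatch mirroring intra_blob: rdn (redundancy) increments per
    level, raising the effective filter AVE_B * rdn until no fork fires."""
    forks = []
    for blob in sub_blobs:
        if blob['sign']:
            if blob['G'] > AVE_B * rdn:     # strong positive deviation: der+ fork
                forks.append(('der+', blob, rdn + 1))
            elif -blob['G'] > AVE_B * rdn:  # strong negative deviation: rng+ fork
                forks.append(('rng+', blob, rdn + 1))
        # weak or negative-sign blobs spawn no fork: recursion ends here
    return forks
```

In the draft above the same structure appears twice, once per comparison type (angle vs. gradient), with fca/faga selecting which comparison the fork runs.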
139 changes: 102 additions & 37 deletions frame_2D_alg/intra_comp.py
@@ -1,23 +1,42 @@
"""
Cross-comparison of pixels, angles, or gradients, in 2x2 or 3x3 kernels
"""

import numpy as np
import numpy.ma as ma


# -----------------------------------------------------------------------------
# Constants
# -----------------------------------------------------------------------------
# Functions

def comp_g(dert__, odd):
"""
cross-comp of g or ga, in 2x2 kernels unless root fork is comp_r: odd=TRUE
or odd: sparse 3x3, is also effectively 2x2 input, recombined from one-line-distant lines?
>>> dert = i, g, dy, dx
>>> adert = ga, day, dax
>>> odd = bool # initially FALSE, set to TRUE for comp_a and comp_g called from comp_r fork
# comparand = dert[1]
<<< gdert = g, gg, gdy, gdx, gm, ga, day, dax
Cross-comp of g or ga, in 2x2 kernels unless root fork is comp_r
(odd=True): then sparse 3x3, which is effectively still a 2x2 input,
recombined from one-line-distant lines?
Parameters
----------
dert__ : array-like
The structure is (i, g, dy, dx) for dert or (ga, day, dax) for adert.
odd : bool
Initially False, set to True for comp_a and comp_g called from
comp_r fork.
Returns
-------
gdert__ : masked_array
Output's structure is (g, gg, gdy, gdx, gm, ga, day, dax).
Examples
--------
>>> # actual python console code
>>> dert__ = 'specific value'
>>> odd = 'specific value'
>>> comp_g(dert__, odd)
'specific output'
Notes
-----
Comparand is dert[1]
"""
pass
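Since comp_g above is still a stub, here is a rough numpy sketch of what a 2x2 cross-comparison of g might compute, matching the docstring's output structure. `comp_g_sketch` is a guess at the intended decomposition; the actual draft may project differences by cos(da) and define match differently:

```python
import numpy as np

def comp_g_sketch(g__):
    """Rough 2x2 cross-comp of gradient magnitude g:
    each output dert summarizes one 2x2 kernel of inputs."""
    topleft  = g__[:-1, :-1]
    topright = g__[:-1, 1:]
    botleft  = g__[1:, :-1]
    botright = g__[1:, 1:]
    gdy = (botleft + botright) - (topleft + topright)  # vertical difference
    gdx = (topright + botright) - (topleft + botleft)  # lateral difference
    gg = np.hypot(gdy, gdx)                            # gradient of gradient
    gm = np.minimum(np.minimum(topleft, topright),     # match: min across kernel
                    np.minimum(botleft, botright))
    return gg, gdy, gdx, gm
```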

@@ -31,54 +31,100 @@ def comp_r(dert__, fig):
alternating derts as a kernel-central dert at current comparison range,
which forms increasingly sparse input dert__ for greater range cross-comp,
while maintaining one-to-one overlap between kernels of compared derts.
With increasingly sparse input, unilateral rng (distance between central derts)
can only increase as 2^(n + 1), where n starts at 0:
With increasingly sparse input, unilateral rng (distance between
central derts) can only increase as 2^(n + 1), where n starts at 0:
rng = 1 : 3x3 kernel, skip orthogonally alternating derts as centrals,
rng = 2 : 5x5 kernel, skip diagonally alternating derts as centrals,
rng = 3 : 9x9 kernel, skip orthogonally alternating derts as centrals,
...
That means configuration of preserved (not skipped) derts will always be 3x3.
That means configuration of preserved (not skipped) derts will always
be 3x3.
Parameters
----------
dert__ : array-like
Array containing inputs.
dert's structure is (i, g, dy, dx, m).
fig : bool
Set to True if input is g or derived from g
True if input is g.
Returns
-------
output: masked_array
-------
>>> dert = i, g, dy, dx, m
<<< dert = i, g, dy, dx, m
# results are accumulated in the input dert
# comparand = dert[0]
rdert__ : masked_array
Output's structure is (i, g, dy, dx, m).
Examples
--------
>>> # actual python console code
>>> dert__ = 'specific value'
>>> fig = 'specific value'
>>> comp_r(dert__, fig)
'specific output'
Notes
-----
- Results are accumulated in the input dert.
- Comparand is dert[0].
"""
pass
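A tiny numeric check of the sparsity progression described in the docstring (my reading of it; `kernel_side` and `central_spacing` are illustrative helpers, not repo functions): kernel side nearly doubles per range increment, while the spacing of preserved central derts doubles, so the preserved derts always form a 3x3 configuration:

```python
def kernel_side(rng):
    """Side length of the comparison kernel at unilateral range rng:
    rng=1 -> 3x3, rng=2 -> 5x5, rng=3 -> 9x9."""
    return 2 ** rng + 1

def central_spacing(rng):
    """Distance between preserved (not skipped) central derts:
    doubles with each range increment."""
    return 2 ** (rng - 1)
```

For every rng, `kernel_side(rng) == 2 * central_spacing(rng) + 1`: the 3x3 grid of preserved derts exactly spans the kernel.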

def comp_a(dert__, odd, aga):
"""
cross-comp of a or aga, in 2x2 kernels unless root fork is comp_r: odd=TRUE
if aga:
>>> dert = g, gg, gdy, gdx, gm, iga, iday, idax
else:
>>> dert = i, g, dy, dx, m
<<< adert = ga, day, dax
cross-comp of a or aga, in 2x2 kernels unless root fork is
comp_r: odd=True.
Parameters
----------
dert__ : array-like
dert's structure depends on aga.
odd : bool
Is True if root fork is comp_r.
aga : bool
If aga is True, dert's structure is interpreted as:
(g, gg, gdy, gdx, gm, iga, iday, idax)
Otherwise it is interpreted as:
(i, g, dy, dx, m)
Returns
-------
adert : masked_array
adert's structure is (ga, day, dax).
Examples
--------
>>> # actual python console code
>>> dert__ = 'specific value'
>>> odd = 'specific value'
>>> aga = 'specific value'
>>> comp_a(dert__, odd, aga)
'specific output'
"""
pass
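One way the angle cross-comparison above might avoid the ±pi wrap-around of raw angle subtraction is to compare angle vectors directly. A hedged sketch (`angle_diff` is illustrative, not the repo's implementation), with vectors stored as (ay, ax) to match the arctan2(ay, ax) convention used elsewhere in this file:

```python
import numpy as np

def angle_diff(a1, a2):
    """Signed angle from a1 to a2, each a unit vector (ay, ax):
    equivalent to (theta2 - theta1) wrapped into (-pi, pi]."""
    dot = a1[0] * a2[0] + a1[1] * a2[1]    # cos(theta2 - theta1)
    cross = a1[1] * a2[0] - a1[0] * a2[1]  # sin(theta2 - theta1)
    return np.arctan2(cross, dot)
```

Because the result comes from arctan2 of sine and cosine terms, a step from 170 degrees to -170 degrees yields 20 degrees rather than -340.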


def calc_a(dert__, inp):
"""Compute angles of gradient."""
return dert__[inp[1:]] / dert__[inp[0]]
# please add comments
def calc_a(dert__):
"""
Compute vector representation of gradient angle.
It is done by normalizing the vector (dy, dx).
Numpy broad-casting is a viable option when the
first dimension of input array (dert__) separate
different CogAlg parameter (like g, dy, dx).
Example
-------
>>> dert1 = np.array([0, 5, 3, 4])
>>> calc_a(dert1)
array([0.6, 0.8])
>>> # 45 degree angle
>>> dert2 = np.array([0, 450**0.5, 15, 15])
>>> calc_a(dert2)
array([0.70710678, 0.70710678])
>>> np.degrees(np.arctan2(*calc_a(dert2)))
45.0
>>> # -30 (or 330) degree angle
>>> dert3 = np.array([0, 10, -5, 75**0.5])
>>> calc_a(dert3)
array([-0.5      ,  0.8660254])
>>> np.rad2deg(np.arctan2(*calc_a(dert3)))
-29.999999999999996
"""
return dert__[[2, 3]] / dert__[1] # np.array([dy, dx]) / g


def calc_aga(dert__, inp):
"""Compute angles of angles of gradient."""
g__ = dert__[inp[1]] # ga: magnitude of the angle gradient
day__ = np.arctan2(*dert__[inp[1:3]]) # vertical angle difference, collapsed to a scalar angle
dax__ = np.arctan2(*dert__[inp[3:]]) # lateral angle difference, collapsed to a scalar angle
return np.stack((day__, dax__)) / g__ # normalize by ga, analogous to calc_a

# -----------------------------------------------------------------------------
# Utility functions
