3    3680
1    7360
The last row of the tableau always ends in 1 and the right column is of the form 115 * (2^0 + 2^2 + 2^5 + 2^6) [a decimal factor multiplied by another binary factor written as a sum of powers of 2 - 1100101 is the binary for 101]. Hardness of inverting the tableau bottom-up (to recover the factors) involves non-trivial efficient guessing of the bottom row-right column entry (7360 above) (from number theory estimates - quite similar to the number theoretic ray shooting query optimization in NC-PRAM-BSP-Multicore NeuronRain Computational Geometric Factorization) and reversing the computation from bottom to top - Irreversibility of the tableau might imply a One-Way function and Hardness amplification.
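The two bottom rows above are consistent with the classic halving-doubling (Russian peasant) multiplication tableau for 115 * 101 - an assumption, since only the bottom of the tableau is shown. A minimal Python sketch generating such a tableau top-down and reading off the binary factor:

.. code-block:: python

    # Minimal sketch of a halving-doubling multiplication tableau, assumed
    # from the bottom rows shown above: left column halves, right doubles.
    def tableau(multiplicand, multiplier):
        rows = []
        left, right = multiplier, multiplicand
        while left >= 1:
            rows.append((left, right))
            left //= 2
            right *= 2
        return rows

    rows = tableau(115, 101)
    for left, right in rows:
        print(left, right)                 # last two rows: 3 3680 and 1 7360
    # Product = sum of right entries on rows with an odd left entry:
    # 115 * (2^0 + 2^2 + 2^5 + 2^6) = 115 * 101 = 11615 (101 = 1100101 binary)
    print(sum(r for l, r in rows if l % 2 == 1))   # 11615

Inverting the tableau bottom-up would mean recovering both factors from the final entry 7360 alone, which is the guessing step described above.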
63. Reduction could be arrived at between Question-Answering algorithms to compute the Merit() function in a non-boolean setting and Query complexity in classical and quantum boolean settings by defining Question-Answering as a Query complexity problem of computing a function Merit(q1,q2,q3,...,qn) of an entity (People-Audio-Visuals-Texts) by a series of queries - a set of ordered pairs {(qi,ai)} of Question variables q1,q2,q3,...,qn and respective answer values a1,a2,a3,...,an. The Query complexity model unifies all theoretical models of Question-Answering mentioned earlier - LTF, PTF, TQBF and Switching circuits. Adaptive Question-Answering, which dynamically changes future questions depending on answers to past questions, is a Discrete Time Markov Chain wherein Question(n) depends on (Question(n-1),Answer(n-1)), and every adaptive Interview is a root-to-leaf traversal of a decision tree whose vertices are questions and branching to a subtree of question(n) is chosen depending on answer(n) to question(n). The Generalized N*N version of Chess is PSPACE-complete though the limited 50-move version of Chess is not - Alternating Questions and Answers simulate the role of the alternating moves by two players in Chess - interrogator and respondent. The Decision tree or Game Tree (https://en.wikipedia.org/wiki/Game_tree) in 50-move Chess is of depth 50 and, for a maximum of m possible configurations per move (known as the branching factor), the size of the Game tree is (m^50-1)/(m-1) (a geometric series summed over the 50 levels). AI Chess Engines solve the Game tree by a minimax algorithm which looks ahead a certain number of moves (also known as "plies" - Deep Blue's look-ahead was 12) and alpha-beta pruning to maximize gain. Most games including Chess, Go, Hex and Generalized Geography among others have been shown to be PSPACE-complete in their N*N versions by reduction from TQBF where the formula could be an AND-of-OR - https://people.csail.mit.edu/rrw/6.045-2020/lec21-color.pdf . Of these, Generalized Geography (in which player2 tries to name a geographic location which starts with the last letter of the previous name uttered by player1) fits well into adaptive Question-Answering as the present geographic location name (or question(n) from player2) depends on the previous name (or answer(n-1) from player1), effectively reducing Question-Answering to an adversarial game alternated between two players. Multiple choice Question-Answering is easier to formulate as TQBF than open-ended Question-Answering - Threshold Circuits, LTFs and PTFs are theoretical models of Two-Choice (Boolean Yes-No) Question-Answering. The TQBF Chess reduction earlier is an OR of the limited multiple choices a player could make per move (Questions are move choices made by player1 and Answers are countermove choices made by player2 in response to player1 at round N), ANDed over all moves. Primitive automated Question-Answering could be devised by the Recursive Lambda Function Growth and Recursive Gloss Overlap Meaning Representation Algorithms which extract a TextGraph from natural language texts - Dense subgraphs and high degree vertices of the textgraph for a "Question" (which are the crux of the question) obtained from these algorithms could be searched in a background intelligence database (search engine) and the text results could be unified into an "Answer" textgraph by the Recursive Lambda Function Growth and Recursive Gloss Overlap algorithms. Multiple Choice Question-Answering could be simulated by Two-Choice Boolean Question-Answering gadgets - LTFs, PTFs and TQBFs - by allocating one LTF, PTF or TQBF per binary digit of the decimal answer choice.
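A minimal Python sketch of this digit-per-LTF encoding, with hypothetical weights and indicator variables (the next paragraph walks through the same four-choice convention in prose):

.. code-block:: python

    # Minimal sketch: encode a four-choice answer as the sign bits of two LTFs,
    # one LTF per binary digit of the choice 0..3 (weights are hypothetical
    # placeholders, not NeuronRain parameters).
    def ltf_bit(weights, x):
        # LTF: sign(a1*x1 + a2*x2 + ... + an*xn), reported as a binary digit 0/1
        return 1 if sum(a * xi for a, xi in zip(weights, x)) > 0 else 0

    def encode_choice(choice, n_questions, question_index):
        # One variable per question in each LTF; set the chosen question's
        # variable to +1/-1 according to the high/low bit of the choice.
        high, low = (choice >> 1) & 1, choice & 1
        x1, x2 = [0] * n_questions, [0] * n_questions
        x1[question_index] = 1 if high else -1
        x2[question_index] = 1 if low else -1
        return x1, x2

    weights = [1.0, 1.0]                  # hypothetical positive weights, 2 questions
    x1, x2 = encode_choice(3, 2, 0)       # fourth choice (binary 11) for question 1
    print(ltf_bit(weights, x1), ltf_bit(weights, x2))   # 1 1 -> binary 11 = choice 3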
For example, the usual Four-Choice Question-Answering convention followed by Admission Tests to educational institutions could be simulated by 2 LTFs, PTFs or TQBFs encoding the 4 answer choices - 0(00), 1(01), 2(10), 3(11) - LTF1 example: a11*x11 + a12*x12 + ... + a1n*x1n, LTF2 example: a21*x21 + a22*x22 + ... + a2n*x2n. For the fourth answer choice (encoded as 3) for question 1, x11 and x21 are set to 1 to get the binary string 11 corresponding to decimal 3. Such Admission Tests, when non-STEM (e.g. Medicine, Language), could be solved by a Question-Answering Bot which searches a corpus and matches the open-ended answer with one of the choices (similar to the one implemented in NeuronRain for open-ended natural language answers). STEM Admission Tests require solving a given mathematical problem for which querying a corpus may not be sufficient. There are sensitivity upperbounds available for Polynomial Threshold Functions (PTFs), which are defined as f(x1,x2,...,xn)=sign(degree d polynomial on x1,x2,...,xn), thanks to the result by [Harsha-Klivans-Meka] - Bounding the Sensitivity of Polynomial Threshold Functions - http://theoryofcomputing.org/articles/v010a001/v010a001.pdf, https://booleanzoo.weizmann.ac.il/index.php/Polynomial_threshold#:~:text=4%20References-,Definition,is%20the%20linear%20threshold%20function - "... • The average sensitivity of f is at most O(n^(1−1/(4d+6))). (We also give a combinatorial proof of the bound O(n^(1−1/2^d)).) • The noise sensitivity of f with noise rate δ is at most O(δ^(1/(4d+6))) ...". PTFs, being polynomials of degree d over the reals in n variables, are best suited models of examinations/contests/interviews which map a non-boolean Question-Answer transcript of length n to the boolean sign of the PTF. The upperbounds of the two sensitivity measures for PTFs, defined earlier as "... the average sensitivity of a Boolean function f measures the expected number of bit positions that change the sign of f for a randomly chosen input, and the noise sensitivity of f measures the probability over a randomly chosen input x that f changes sign if each bit of x is flipped independently with probability δ ...", indirectly also bound the failure probabilities of realworld examinations/contests/interviews which could be written as polynomials over the reals and are vulnerable to corruption of the answer values input to the question variables. Possible ways of sabotaging realworld examinations/contests/interviews include: (*) corrupted question (*) wrong question (*) corrupted answer (*) wrong answer, any of which could change the expected outcome of an admission test. Decision lists, Decision trees and DNFs can be computed by a PTF formed from Chebyshev polynomials, which are bounded in [-1,1] on the interval [-1,1] and grow exponentially outside it - https://www.cs.utexas.edu/~klivans/f07lec5.pdf - Theorem 1: "If all c ∈ C have PTF degree d then C is learnable in the Mistake Bound model in time and mistake bound n^O(d)" implies admission tests as Question-Answering PTFs are Mistake Bound learnable in time and mistake bound n^O(d) - polynomial in the number of questions n but exponential in the PTF degree d (high-degree Question-Answering is therefore hard to learn). The average sensitivity bound in [Harsha-Klivans-Meka] grows with the degree of the PTF. If a non-STEM Multiple-Choice Question-Answering transcript is expressible as a CNF (a constant-depth AND of ORs), the PTF degree of such a CNF is lowerbounded in https://www.cs.cmu.edu/~odonnell/papers/ptf-degree.pdf - [O'Donnell-Servedio] - improving the [Minsky-Papert] Perceptron result: "...
We prove an "XOR lemma" for polynomial threshold function degree and use this lemma to obtain an Ω(n^(1/3) * log^(2d/3)(n)) lower bound on the degree of an explicit Boolean circuit of polynomial size and depth d + 2. This is the first improvement on Minsky and Papert's Ω(n^(1/3)) lower bound for any constant-depth circuit. ...". Example: For the following 2 questions with 4 answer choices each, the CNF is formulated as:
Question1: Which is the largest city by area?
Options: a1) Chongqing b1) Tokyo c1) Sao Paulo d1) New York (Answer: a1)
Question2: Which is the largest country by area?
Options: a2) Brazil b2) Russia c2) USA d2) China (Answer: b2)
CNF for the 2 QAs above: (a1 V !b1 V !c1 V !d1) /\ (!a2 V b2 V !c2 V !d2)
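A minimal Python sketch evaluating this CNF on a selection transcript, where each option variable is True iff that option was marked:

.. code-block:: python

    # Evaluate the CNF above for the two multiple-choice QAs.
    def cnf(a1, b1, c1, d1, a2, b2, c2, d2):
        return (a1 or not b1 or not c1 or not d1) and \
               (not a2 or b2 or not c2 or not d2)

    # Correct transcript: Chongqing (a1) for Question1, Russia (b2) for Question2
    print(cnf(True, False, False, False, False, True, False, False))  # True
    # Transcript violating clause 1: a1 unselected, all other options selected
    print(cnf(False, True, True, True, False, True, False, False))    # False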
STEM multiple choice Question-Answering admission tests, on the other hand, would not depend on corpus queries but instead on theorem provers and equation solvers, and are non-trivial AI problems. Combining the upper and lower bounds from [Harsha-Klivans-Meka] and [O'Donnell-Servedio] (substituting the degree lower bound for d in the average sensitivity bound), the average sensitivity of CNF Question-Answering is lowerbounded by Ω(n^(1 − 1/(4 * n^(1/3) * log^(2d/3)(n) + 6))). The degree of a monomial in the PTF roughly corresponds to the difficulty of the question for that monomial.
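A hedged Monte Carlo sketch of the noise sensitivity measure quoted above from [Harsha-Klivans-Meka], modeling corruption of answer values in an examination transcript; the degree-2 PTF, noise rate and sample size below are hypothetical choices, not values from the paper:

.. code-block:: python

    import random

    # Hypothetical degree-2 PTF over n boolean (+1/-1) answer variables:
    # f(x) = sign(sum_i x_i + 0.1 * sum_{i<j} x_i * x_j)
    def ptf_sign(x):
        n = len(x)
        val = sum(x) + 0.1 * sum(x[i] * x[j] for i in range(n)
                                 for j in range(i + 1, n))
        return 1 if val >= 0 else -1

    def noise_sensitivity(n=10, delta=0.05, samples=10000):
        # Probability over a random x that f changes sign when each bit of x
        # is flipped independently with probability delta (quoted definition).
        flips = 0
        for _ in range(samples):
            x = [random.choice((-1, 1)) for _ in range(n)]
            y = [-xi if random.random() < delta else xi for xi in x]
            if ptf_sign(x) != ptf_sign(y):
                flips += 1
        return flips / samples

    print(noise_sensitivity())   # upperbounded by O(delta^(1/(4d+6))) for d=2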
64. Mining patterns in Astronomy Datasets has been less studied in BigData - NeuronRain (originally intended to be an astronomy software) brings astronomy and cosmology datasets (Ephemeris data of celestial bodies, Gravitational pull and Correlation of Celestial N-Body choreographies to terrestrial extreme weather events, Climate analytics, Satellite weather GIS imagery, Space Telescope Deep Field Imagery of Cosmos) into the machine learning and artificial intelligence mainstream. For example, Red-Green-Blue channel histogram analysis of the Hubble Ultra Deep Field in NeuronRain seems to show an anomaly in the percentages of Red (Farthest - highest redshift), Green (Farther) and Blue (Far) galaxies - the ratio of Red:Green:Blue galaxies is 3:1:2 while intuition would suggest the contrary 1:2:3 (a Deep Field is a light cone search and the Red-Green-Blue channels of the Deep Field are circular intersections of the light cone at different time points of the past. As galaxies would appear more spread out proportionate to distance in expanding spacetime, the Red-Green-Blue circular disks should theoretically contain increasing numbers of galaxies in the order Red < Green < Blue). Possibly this contradiction could be explained by the Einstein Field Equations - https://en.wikipedia.org/wiki/Einstein_field_equations - which account for per-body spacetime curvature warping the light cone of the deep field. Example Python RGB Analysis and Histogram plots of the Hubble eXtreme Deep Field (2012) imagery are documented in https://scientific-python.readthedocs.io/en/latest/notebooks_rst/5_Image_Processing/02_Examples/Image_Processing_Tutorial_3.html
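A minimal sketch of the Red-Green-Blue channel histogram analysis along the lines of the tutorial linked above; the filename xdf.png stands for a hypothetical local copy of the Hubble eXtreme Deep Field image:

.. code-block:: python

    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    # Hypothetical local copy of the Hubble eXtreme Deep Field (2012) image
    img = np.asarray(Image.open("xdf.png").convert("RGB"))

    # Per-channel intensity histograms: Red ~ farthest (highest redshift),
    # Green ~ farther, Blue ~ far, per the light cone interpretation above
    for channel, colour in enumerate(("red", "green", "blue")):
        counts, bins = np.histogram(img[:, :, channel], bins=256, range=(0, 255))
        plt.plot(bins[:-1], counts, color=colour, label=colour)
    plt.xlabel("pixel intensity")
    plt.ylabel("pixel count")
    plt.legend()
    plt.show()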

