OpenThought - System 2 Research Links

Here you'll find a collection of material (books, papers, blog posts, etc.) related to reasoning and cognition in AI systems. Specifically, we want to cover agents, cognitive architectures, general problem-solving strategies, and self-improvement.

The term "System 2" in the page title refers to the slower, more deliberative, and more logical mode of thought as described by Daniel Kahneman in his book Thinking, Fast and Slow.

Do you know a great resource we should add? Please see How to contribute.

Cognitive Architectures

(looking for additional links, articles, and summaries)

  • SOAR (State, Operator, And Result) by John Laird, Allen Newell, and Paul Rosenbloom
  • ACT-R (Adaptive Control of Thought-Rational) by John Anderson at CMU
  • SPAUN (Semantic Pointer Architecture Unified Network) by Chris Eliasmith at Waterloo, SPAUN 2.0 by Feng-Xuan Choo
  • ART (Adaptive resonance theory) by Stephen Grossberg and Gail Carpenter
  • CLARION (Connectionist Learning with Adaptive Rule Induction ON-line) by Ron Sun
  • EPIC (Executive Process/Interactive Control) by David Kieras and David Meyer
  • LIDA (Learning Intelligent Distribution Agent) by Stan Franklin, 2015 Paper
  • Sigma by Paul Rosenbloom
  • OpenCog by Ben Goertzel
  • NARS (Non-Axiomatic Reasoning System) by Pei Wang
  • Icarus by Pat Langley
  • MicroPsi by Joscha Bach
  • Thousand Brains Theory & HTM (Hierarchical Temporal Memory) by Jeff Hawkins
  • SPH (Sparse Predictive Hierarchies) by Eric Laukien
  • Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm), 2016 Paper by Randall O'Reilly
  • CogNGen (COGnitive Neural GENerative system) by Alexander Ororbia and Mary Alexandria Kelly, see also here and here
  • KIX (KIX: A Metacognitive Generalization Framework) by A. Kumar and Paul Schrater
  • ACE (Autonomous Cognitive Entity) by David Shapiro et al., gh: daveshap/ACE_Framework
  • Iterative Updating of Working Memory by Jared Reser, website, Video

Agent Papers

LLM Based

LLM Reasoning Improvements / Training on Synthetic Data

Direct o1 Replication Efforts

Reward Models (ORM/PRM)

RL

MCTS

Minecraft Agents

Massive Sampling / Generate-and-Test

World Models

Neuro-Symbolic Approaches

Math

Active Inference

Prompting Techniques

Negative results

Mechanistic Interpretability

Blog Posts / Presentations

Graph Neural Networks

Complex Logical Query Answering (CQLA)

Answering logical queries over incomplete Knowledge Graphs. Aspirationally, this requires combining sparse symbolic index collation (SQL, SPARQL, etc.) with dense vector search, preferably in a differentiable manner.
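
A minimal sketch of the idea, with a toy triple store and made-up entity embeddings (all names, vectors, and the blending scheme are illustrative, not any particular system's method): the symbolic step does an exact index lookup, the dense step scores every entity by similarity, and a weighted blend ranks answers. A differentiable variant would replace the hard set membership with a soft score.

```python
import numpy as np

# Toy knowledge graph: (head, relation, tail) triples plus dense
# entity embeddings. All names and values are hypothetical.
triples = {("curie", "field", "physics"), ("einstein", "field", "physics")}
embeddings = {
    "curie": np.array([0.9, 0.1]),
    "einstein": np.array([0.8, 0.2]),
    "darwin": np.array([0.1, 0.9]),
}

def symbolic_candidates(relation, tail):
    """Sparse step: exact index lookup, like a SPARQL pattern (?x relation tail)."""
    return {h for (h, r, t) in triples if r == relation and t == tail}

def dense_scores(query_vec):
    """Dense step: score *all* entities by embedding similarity, which can
    recover answers the incomplete KG is missing."""
    return {e: float(v @ query_vec) for e, v in embeddings.items()}

def hybrid_query(relation, tail, query_vec, alpha=0.5):
    """Blend both signals; a differentiable version would swap the hard
    set-membership test for a soft (e.g. sigmoid) score."""
    hard = symbolic_candidates(relation, tail)
    dense = dense_scores(query_vec)
    return sorted(embeddings,
                  key=lambda e: alpha * (e in hard) + (1 - alpha) * dense[e],
                  reverse=True)

print(hybrid_query("field", "physics", np.array([1.0, 0.0])))
```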

Inductive Reasoning over Heterogeneous Graphs

Similar to regular CQLA, but with the emphasis on the "Inductive Setting", i.e. querying over nodes, edge types, or even entire graphs that were unseen during training. The latter part is interesting as it relies on the higher-order "relations between relations" structure, connecting KG inference to Category Theory.
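
A small sketch of what makes a scorer inductive (the schema and edges below are made up): node representations are computed on the fly from relational structure rather than looked up in a per-node embedding table, so nodes and graphs unseen at training time get features without retraining.

```python
import numpy as np

RELATIONS = ["cites", "authored"]  # illustrative relation schema

def node_features(node, edges):
    """Structural features: per-relation out/in degree, computed on the fly.
    Nothing is tied to a node ID, so unseen nodes are handled for free."""
    f = np.zeros(2 * len(RELATIONS))
    for i, rel in enumerate(RELATIONS):
        f[2 * i] = sum(1 for (h, r, t) in edges if h == node and r == rel)
        f[2 * i + 1] = sum(1 for (h, r, t) in edges if t == node and r == rel)
    return f

# A graph never seen at "training" time still yields usable features:
edges = [("p1", "cites", "p2"), ("a1", "authored", "p1")]
print(node_features("p1", edges))  # [1. 0. 0. 1.]
```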

Neural Algorithmic Reasoning (NAR)

Initially attempted back in 2014 with general-purpose but unstable Neural Turing Machines, modern NAR approaches limit their scope to GNN-based "Algorithmic Processor Networks", which learn to mimic classical algorithms on synthetic data and can then be deployed on noisy real-world problems by sandwiching their frozen instances inside an Encoder-Processor-Decoder architecture.
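
A minimal PyTorch sketch of the Encoder-Processor-Decoder pattern (the dimensions and the stand-in MLP processor are placeholders; in practice the processor is a GNN pretrained on synthetic algorithm traces): only the task-specific encoder and decoder are trained, while the frozen processor carries the algorithmic behavior.

```python
import torch
import torch.nn as nn

HIDDEN = 64  # illustrative latent size

class EncodeProcessDecode(nn.Module):
    def __init__(self, in_dim, out_dim, processor):
        super().__init__()
        self.encoder = nn.Linear(in_dim, HIDDEN)   # task-specific, trainable
        self.processor = processor                 # pretrained algorithmic core
        self.decoder = nn.Linear(HIDDEN, out_dim)  # task-specific, trainable
        for p in self.processor.parameters():      # freeze the processor
            p.requires_grad = False

    def forward(self, x):
        return self.decoder(self.processor(self.encoder(x)))

# Stand-in for a GNN processor trained on synthetic algorithm traces.
processor = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
                          nn.Linear(HIDDEN, HIDDEN))
model = EncodeProcessDecode(in_dim=10, out_dim=1, processor=processor)
y = model(torch.randn(8, 10))  # noisy real-world features in, task output out
```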

Grokking

Open-Source Agents & Agent Frameworks

Algorithms

Weak Search Methods

Weak methods are general-purpose but do not use domain knowledge (heuristics) to guide the search process.
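
Breadth-first search is a typical weak method; a sketch with a toy state graph (the graph itself is made up): it is complete and fully general, but expands states purely in order of depth, with no heuristic to prefer promising branches.

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Classic weak method: explores states level by level,
    using no domain knowledge about which branch looks better."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state graph: find a route from 'a' to 'd'.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(breadth_first_search("a", "d", lambda s: graph.get(s, [])))  # ['a', 'b', 'd']
```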

Strong Search Methods

Books

Biologically Inspired Approaches

Diverse approaches: some tap into classical PDE systems of biological neural networks, some concentrate on Distributed Sparse Representations (by default non-differentiable), and others draw inspiration from hippocampal Grid Cells, Place Cells, etc. Biological systems surpass most ML methods at Continual and Online Learning, but are hard to implement efficiently on GPUs.
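
A tiny sketch of the sparse-representation idea (vector sizes are illustrative): long binary vectors with a few active bits, compared by counting shared active bits, which is robust to noise but, as noted above, not differentiable.

```python
import numpy as np

N, ACTIVE = 2048, 40  # illustrative: long vector, ~2% active bits

def random_sdr(rng):
    """A random sparse binary vector with exactly ACTIVE bits set."""
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=ACTIVE, replace=False)] = 1
    return v

def overlap(a, b):
    """Number of shared active bits; high overlap => similar representations."""
    return int(np.sum(a & b))

rng = np.random.default_rng(0)
a, b = random_sdr(rng), random_sdr(rng)
print(overlap(a, a), overlap(a, b))  # self-overlap is ACTIVE; random pair near 0
```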

Dense Associative Memory

Dense Associative Memory is mainly represented by Modern Hopfield Networks (MHN), which can be viewed as generalized Transformers capable of storing queries, keys, and values explicitly (as in Vector Databases) and running recurrent retrieval by energy minimization (relating them to Diffusion models). Application to Continual Learning is possible when combined with uncertainty quantification and differentiable top-k selection.
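
A minimal NumPy sketch of Modern Hopfield retrieval (the inverse temperature beta, step count, and stored patterns are illustrative): patterns are stored explicitly as rows of a memory matrix, and the recurrent update q ← Mᵀ softmax(β·Mq) descends the network's energy toward the stored pattern closest to the query; a single update has the same form as one attention step.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mhn_retrieve(query, memories, beta=8.0, steps=3):
    """Recurrent Modern Hopfield retrieval: each update descends the
    energy and sharpens toward the most similar stored pattern."""
    q = query.copy()
    for _ in range(steps):
        q = memories.T @ softmax(beta * (memories @ q))
    return q

# Store two patterns explicitly as rows of M (as in a vector database).
M = np.array([[1.0, 0.0], [0.0, 1.0]])
noisy = np.array([0.9, 0.2])   # corrupted version of pattern 0
print(mhn_retrieve(noisy, M))  # ~[1, 0]: the clean stored pattern
```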

Continual Learning

Software Tools & Libraries

Commercial Offerings

Competitions & Benchmarks

Code

Related Projects

Youtube Content

Joscha Bach

Best LLM APIs

Novel model architectures

Philosophy: Nature of Intelligence & Consciousness

Biology / Neuroscience

Workshops

https://s2r-at-scale-workshop.github.io (NeurIPS 2024)

How to contribute

To share a link related to reasoning in AI systems that is missing here, please create a pull request for this file. See editing files in the GitHub documentation.
