TS-Core is a local-first Python library and CLI (optional Rust acceleration via PyO3) that implements a small graph dynamics engine with persistence hooks. It is application software you run on your machine — not a replacement for your operating system kernel.
Concretely, the code maintains a JSON-serializable graph of nodes (each with numeric activation and stability, plus arbitrary extra fields) and weighted edges. On each tick, TSCore.propagate_wave() updates activations by blending each node toward a weighted average of its neighbors (damped). An alternate mode, Kernel Wave 12, runs a nine-phase variant of that dynamics (strongest-node bias, extra propagation passes, tension checks) and logs metadata — still entirely inside this process, over your TS graph, not over real OS processes.
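The damped neighbor-average update can be sketched in a few lines. This is a minimal illustration of the idea, not the actual `TSCore.propagate_wave()` implementation; the graph shape (edge triples) and the damping factor are assumptions:

```python
def propagate_wave(graph, damping=0.2):
    """One tick: blend each node's activation toward the weighted
    average of its neighbors, damped so the graph settles gradually."""
    nodes = graph["nodes"]
    new_activations = {}
    for node_id, node in nodes.items():
        weighted_sum = 0.0
        total_weight = 0.0
        for src, dst, weight in graph["edges"]:
            # Treat edges as undirected for this sketch.
            neighbor = dst if src == node_id else src if dst == node_id else None
            if neighbor is not None:
                weighted_sum += weight * nodes[neighbor]["activation"]
                total_weight += weight
        if total_weight > 0:
            target = weighted_sum / total_weight
            # Damped blend: move only part of the way toward the neighbor average.
            new_activations[node_id] = (1 - damping) * node["activation"] + damping * target
        else:
            new_activations[node_id] = node["activation"]
    for node_id, value in new_activations.items():
        nodes[node_id]["activation"] = value
    return graph
```

Repeated ticks pull connected nodes toward each other's activations without overshooting, which is the "settling" behavior the daily spin relies on.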
Alongside the numbers, several Python filter modules inspect the graph and tension and write human-readable labels into graph meta (e.g. perceived-risk framing, narrative routing, “logic forcing” copy). IcarusWingsCover adds a consistent console voice and manifesto-style lines. Those layers are heuristic glue and metaphor, not a proof that natural-language claims are “true” in the world.
Persistence: under TSCORE_HOME (default ~/.tscore), the core reads/writes a factory JSON (self_improving_factory.json) and appends JSONL history (wave_history.jsonl, optional daily_spin.jsonl, pages_island.jsonl for Wave 12).
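The append-only JSONL pattern is simple to reproduce. A minimal sketch, assuming only the paths described above (the `ts` timestamp field is an illustrative addition, not a documented TS-Core field):

```python
import json
import os
import time
from pathlib import Path

def append_history(record, home=None):
    """Append one JSON object as a single line to wave_history.jsonl
    under TSCORE_HOME (default ~/.tscore)."""
    home = Path(home or os.environ.get("TSCORE_HOME", Path.home() / ".tscore"))
    home.mkdir(parents=True, exist_ok=True)
    record = dict(record, ts=time.time())  # stamp each line for later replay
    with open(home / "wave_history.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One object per line keeps history greppable and lets consumers tail the file without parsing the whole history.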
Optional integrations:
- Grok / xAI (`GrokPlugin`): HTTP chat completions when `XAI_API_KEY` is set; otherwise a local stub so demos do not fail offline.
- Z3 (`Z3AlignmentSolver`): builds a small, abstract satisfiability sketch over the graph (booleans + bounded reals for stability). Useful as a smoke test that the toy constraints can be satisfied — not a formal proof of real-world alignment or of the semantics of arbitrary text.
Interfaces: Textual TUI (`tscore` / `python -m src.python.mind_runtime`), daily grounding (`tscore-daily` / `python -m src.python.daily_spin`), optional Streamlit UI, Docker Compose, and an optional Unix `scripts/agi` launcher (see below).
- Loads or bootstraps a factory graph, self-validates on startup (relaxation on a deep copy; notes in `meta["startup_validation"]`).
- Propagates waves (pure Python by default; Rust `rust_propagate_wave`/`rust_wave12_propagate` if the `ts_core_kernel` extension is built).
- Runs the filter stack and Icarus cover each tick; advances an internal 8-step pipeline cursor (phase names for logging, not a separate solver).
- Daily spin settles the graph quietly, prints a sensible outcome line, picks the lowest-stability node as “push today,” optionally runs fireproof evolution hooks, and appends one JSONL line.
- Kernel Wave 12 mode: 9-phase trace, Pages Island JSONL line per tick when enabled.
- Tests (`tests/alignment_test.py`) exercise propagation, filters, the Z3 toy model, history writes, and Wave 12 paths.
TS (Thinking Structure), as described at boggersthefish.com, is a meta-framework: systems as constraints, waves, and emergent stability. This repo encodes a runnable slice of that idea (graphs, propagation, persistence, narrative filters). It does not by itself instantiate “TS as the logic of all reality”; it is a tool and demo kernel you can grow.
Canonical remote: github.com/BoggersTheFish/TS-Core.
| Area | Limitation |
|---|---|
| Scope | In-process simulation over your declared graph. Not a host OS scheduler, hypervisor, or distributed system unless you wire it to one. |
| Semantics | No deep NLP or world model. Filters use simple rules and graph/tension context; output is staged copy + meta, not verified epistemology. |
| Z3 | Toy mapping from graph shape to constraints. SAT means the sketch is satisfiable, not that external claims or missions are “proven safe.” |
| Grok | Optional third-party API; stub responses are placeholders. |
| Default graph | The shipped seed factory is illustrative. Real use requires modeling nodes, edges, and any domain semantics yourself. |
| Rust | Optional; Python path is the portable default. |
- Domain graphs: Replace or grow `self_improving_factory.json` (or load from your own builder) with nodes/edges that mean something in your system (org, product, research DAG, etc.).
- New behavior: Add filters or hooks (`on_propagate`), or call `propagate_wave` from your app and read `graph["meta"]` / JSONL for telemetry, CI gates, or UI.
- Stronger assurance: Tighten or replace the Z3 layer with models that match your actual constraints; use TS-Core as a thin simulation shell around a real verifier.
- Performance: Build `ts_core_kernel` (maturin / `cargo build --features python`) for faster propagation on large graphs.
- Deployment: Embed the library in a web backend, run Docker services, or connect Grok only where API use is acceptable.
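As one concrete instance of the telemetry/CI-gate idea, a check over per-node stability might look like this. The field names and threshold are assumptions about your own graph, not guarantees of the TS-Core API:

```python
def ci_gate(graph, min_stability=0.5):
    """Fail a CI step when any node's stability drops below a floor.
    Returns (ok, offenders) so callers can log, alert, or raise."""
    offenders = [
        node_id
        for node_id, node in graph["nodes"].items()
        if node.get("stability", 0.0) < min_stability
    ]
    return (len(offenders) == 0, offenders)
```

A CI job could run a few propagation ticks, then call this and exit nonzero when `ok` is false.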
TS-Native models are trained only on synthetic traces emitted by the live TSCore engine (2k–3k examples, QLoRA). Success is measured exclusively by `src/python/evaluate_ts_native.py`, which replays model JSON into TSCore and scores stability, fireproof rate, Kernel Wave 12 completion, Z3 toy satisfaction, graph coherence, and Icarus / narrative catches. No MMLU, Arena, or other generic LLM leaderboards are used.
```bash
# 1. Generate data (runs live engine — ~30–60 min)
python -m src.python.generate_ts_training_data --examples 2500

# 2. Fine-tune with Unsloth (QLoRA — extremely light)
python -m unsloth train \
  --model qwen/Qwen2.5-14B-Instruct \
  --data ts_native_training_data.jsonl \
  --lora_rank 64 --lora_alpha 16 \
  --epochs 3 \
  --output_dir ./ts-native-14b \
  --max_seq_length 8192 \
  --validation_split 0.1

# 3. Create Ollama model
ollama create ts-native-14b -f Modelfile

# 4. Validate with native metrics only
python -m src.python.evaluate_ts_native --model ts-native-14b
```

- Interpretation: The report's TS-Native Score (0–100) aggregates per-prompt scores from live replay (weighted blend of stability gain, fireproof coverage, 9-phase Wave 12 success, Z3 satisfiability, hub coherence, and Icarus / narrative enforcement counts). Invalid JSON from the model is penalized (replay falls back to the bootstrapped factory graph only).
- Environment: `USE_TS_NATIVE=true` loads `TsNativeLLMPlugin` (default `TS_NATIVE_MODEL=ts-native-14b`, optional `ts-native-32b`) in `TsMindCycle.run_full_cycle` for an optional self-steer before propagation. `OLLAMA_HOST` overrides the Ollama base URL (default `http://127.0.0.1:11434`).
- Roadmap: Online self-improvement — model proposes JSON traces → wave engine validates → append high-stability rows to JSONL → periodic QLoRA refresh (still no RLHF or external judges).
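The "weighted blend" aggregation can be illustrated as follows. The weights and metric keys here are hypothetical placeholders, not the actual weighting inside `evaluate_ts_native`:

```python
def ts_native_score(metrics, weights=None):
    """Blend per-metric scores (each in 0..1) into a 0-100 aggregate.
    Weights are illustrative; the real evaluator defines its own."""
    weights = weights or {
        "stability_gain": 0.25,
        "fireproof_rate": 0.20,
        "wave12_completion": 0.20,
        "z3_sat": 0.15,
        "graph_coherence": 0.10,
        "narrative_catches": 0.10,
    }
    total = sum(weights.values())
    # Missing metrics count as 0, mirroring the invalid-JSON penalty idea.
    blended = sum(weights[k] * metrics.get(k, 0.0) for k in weights) / total
    return round(100 * blended, 1)
```

A model that fails replay entirely (all metrics absent) scores 0; a perfect replay scores 100.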
The project root is the folder that contains pyproject.toml and Cargo.toml.
```
.
├── README.md
├── CHANGELOG.md
├── Cargo.toml
├── pyproject.toml
├── Dockerfile
├── docker-compose.yml
├── Modelfile                        # Ollama template for ts-native-* images
├── scripts/
│   ├── agi                          # optional one-word Unix launcher → tscore TUI
│   └── install-agi-launcher.sh      # installs ~/.local/bin/agi + PATH hooks
├── src/
│   ├── python/
│   │   ├── core.py
│   │   ├── grok_plugin.py
│   │   ├── z3_solver.py
│   │   ├── coherence_filter.py
│   │   ├── narrative_dream_filter.py
│   │   ├── logic_forcing_layer.py
│   │   ├── icarus_wings_cover.py
│   │   ├── daily_spin.py
│   │   ├── mind_runtime.py          # TsMindCycle.run_full_cycle, TUI
│   │   ├── generate_ts_training_data.py
│   │   ├── evaluate_ts_native.py
│   │   ├── ts_native_plugin.py
│   │   ├── ts_trace_format.py
│   │   └── streamlit_app.py
│   ├── rust/
│   │   ├── lib.rs
│   │   ├── kernel.rs
│   │   └── bindings.rs
│   └── shared/
│       └── wave_propagate.rs
├── tests/
│   └── alignment_test.py
└── docs/
    └── Kernel-Wave-12.md
```
```bash
cd /path/to/this/repo
python -m venv .venv
# Windows: .venv\Scripts\activate
# Unix: source .venv/bin/activate
pip install -e ".[dev,gui]"
```

From a Unix shell, you can install a short `agi` command that activates the repo venv and runs `tscore`:

```bash
bash scripts/install-agi-launcher.sh
# then: agi
```

Set `TS_CORE` if the repo is not at `$HOME/TS-Core`.
Appends one JSON line to `~/.tscore/daily_spin.jsonl`. Uses `TSCore.propagate_wave(quiet=True)` so Icarus lines stay quiet until the summary.

```bash
python -m src.python.daily_spin
```

After `pip install -e .`, `tscore-daily` runs the same module.
Push target: the lowest-stability node (tie-break: lower activation). If that node matches a fireproof evolution rule, the matching `evolve_*` method runs, the factory JSON is rewritten, and the printed push line uses post-evolution values.
| Node ID | Method | Effect (summary) |
|---|---|---|
| `language_ritual` | `evolve_language_ritual()` | `language_as_tool`, stability floor, `meta.language_fireproof` |
| `kernel_wave_12` | `evolve_kernel_wave12()` | `kernel_fireproof`, stability/activation floors |
| `persistent_wave` | `evolve_persistent_wave()` | `persistent_fireproof`, stability/activation floors |
| `evolve_*` | `evolve_dynamic_node(node_id)` | `evolved` flag, stability/activation floors |
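The push-target rule reduces to a minimum over `(stability, activation)` pairs. A minimal sketch, assuming each node carries numeric `stability` and `activation` fields as described earlier:

```python
def pick_push_target(nodes):
    """Pick the lowest-stability node; ties broken by lower activation."""
    return min(
        nodes,
        key=lambda node_id: (nodes[node_id]["stability"], nodes[node_id]["activation"]),
    )
```

Tuple comparison gives the tie-break for free: Python compares stabilities first and only consults activation when they are equal.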
```bash
python -m src.python.mind_runtime
# or: tscore
```

Demo (filters + Icarus + Z3 + Grok stub):

```bash
python -m src.python.mind_runtime --demo
```

```bash
pip install -e ".[gui]"
python -m streamlit run src/python/streamlit_app.py
```

Docker Compose profile `tscore-gui` exposes port 8501.
```bash
set XAI_API_KEY=your_key     # Windows
export XAI_API_KEY=your_key  # Unix
```

```bash
python -m pytest tests/alignment_test.py -v
```

Kernel Wave 12: `test_kernel_wave12_fireproof_os_stability` covers the 9-phase path, `pages_island.jsonl`, and related metadata.
```bash
python -m pytest tests/alignment_test.py -v
python -m src.python.daily_spin
```

See CHANGELOG.md for a concise capability history.
```bash
cargo build --release
cargo build --release --features python
```

For the extension module, maturin is recommended. On bleeding-edge Python, `PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1` may be needed (see `.cargo/config.toml`).
```bash
docker compose run --rm tscore
docker compose run --rm tscore-tui
docker compose --profile gui up tscore-gui
```

Data persists in volume `tscore_data` at `/data/tscore`.
Kernel Wave 12 is the nine-phase, strongest-node-biased propagation path (`Wave12Scheduler` in `src/rust/kernel.rs`, mirrored in Python in `TSCore._python_wave12_propagate_blob`). It simulates an "OS quantum" over the TS graph (phase labels, tension before/after, optional Icarus kernel line). It does not schedule real CPU processes unless you integrate graph state with an external system.
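The nine-phase, strongest-node-biased idea can be sketched abstractly. This is an illustration of the shape of the algorithm only; the real `Wave12Scheduler` phases, bias rule, and tension metric are not reproduced here:

```python
def wave12_tick(graph, phases=9):
    """Run several passes biased toward the strongest node, recording
    tension (here: activation spread) before and after each phase."""
    nodes = graph["nodes"]

    def tension():
        acts = [n["activation"] for n in nodes.values()]
        return max(acts) - min(acts)

    trace = {"tension_before": tension(), "phases": []}
    for phase in range(phases):
        # Bias: pull every node slightly toward the strongest node's activation.
        strongest = max(nodes.values(), key=lambda n: n["activation"])
        for node in nodes.values():
            node["activation"] += 0.1 * (strongest["activation"] - node["activation"])
        trace["phases"].append({"phase": phase, "tension": tension()})
    trace["tension_after"] = tension()
    return trace
```

Each tick returns a per-phase trace, analogous to the metadata Wave 12 logs per quantum.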
pip install -e ".[dev]"
python -m src.python.mind_runtime --kernel-wave12Or: set TSCORE_KERNEL_WAVE12=1 (Windows) / export TSCORE_KERNEL_WAVE12=1 (Unix).
Full narrative and snippets: docs/Kernel-Wave-12.md.
The codebase intentionally carries Architect / BoggersTheFish framing: perceived risk as coherence-limited, consciousness claims routed through narrative dream, LogicForcingLayer pinned axioms, and IcarusWingsCover as a metaphor for language/myth vs constraint-grounded outcomes. That is product philosophy and logging style, layered on top of the mechanical graph engine described at the top of this file.
MIT — see intent to open publication with BoggersTheFish.