Created by Ronit Radhanpura.
An experimental, modular framework for studying machine self-modeling, continuity, and grounded reasoning in a live runtime system.
This repository combines:
- a five-module cognitive architecture,
- a live orchestration runtime,
- a provider-agnostic LLM bridge layer,
- and guardrailed operational defaults for safer experimentation.
The latest full stress validation executed:
- 10,000,000 scenario samples (10 x 1M)
- live internet ingestion cycles
- autonomous self-upgrade + sovereign loop execution

Evidence highlights from `benchmarks/artifacts/full_autonomy_web_validation_report.json`:
- `total_processed`: 10,000,000
- `average_accuracy`: 0.9998391
- `implementation_success_runs`: 10/10
- `learned_fact_count`: 58
- `total_successful_fetch_events`: 4
- active source set observed: Crossref, DevTo, GitHub, GoogleNews, HackerNews, RedditTech, arXiv
A 100M adversarial validation launch was intentionally stopped early for practical reliability reasons (multi-week laptop runtime risk), while preserving checkpointed evidence.
Headline result: 303x learning-signal lift versus the uniform baseline (0.7876 vs 0.0026).
Saved state at stop time:
- completed runs: 9 full runs (9,000,000 scenarios)
- partial run checkpoint: run 10 @ 50,000 scenarios
- total saved progress: 9,050,000 scenarios
Evidence snapshot:
- adaptive early window (runs 1-9) memory-average learning value: 0.7875889755
- adaptive sampled conflict rate (runs 1-9): 0.7875889755
- adaptive log sample in `benchmarks/100m_adaptive_run.log`:
  - conflict lines: 71,735
  - success lines: 19,248
  - sampled conflict rate: 0.788444
  - all conflict lines recorded learning value: 1.000
Uniform baseline comparison:
- baseline report `benchmarks/artifacts/training_1m_memory_full_report.json` shows:
  - `average_learning_value` = 0.0026
  - `outcome_distribution`: conflict=26, success=9974 (conflict rate 0.0026)

Observed lift (adaptive early evidence vs uniform baseline):
- learning-signal lift: ~303x (0.7876 / 0.0026)
- conflict-targeting lift: ~303x (0.7884 / 0.0026)
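These lift figures are straightforward to recompute from the reported averages; a quick check in Python:

```python
# Recompute the reported lift figures from the published averages.
adaptive_learning_value = 0.7876  # adaptive early-window memory average (runs 1-9)
adaptive_conflict_rate = 0.7884   # sampled conflict rate from the adaptive log
uniform_baseline = 0.0026         # average_learning_value from the uniform 1M report

print(f"learning-signal lift:    ~{adaptive_learning_value / uniform_baseline:.0f}x")
print(f"conflict-targeting lift: ~{adaptive_conflict_rate / uniform_baseline:.0f}x")
```

Both ratios round to ~303x, matching the headline result.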
Interpretation:
- The adaptive adversarial curriculum produced a dramatically denser learning signal than the uniform baseline and provided actionable plateau-break evidence without requiring a full multi-week 100M uninterrupted run.
The repo now carries a Git-tracked trained_state/ snapshot that a fresh clone can hydrate on startup.
Tracked state files:
- `trained_state/chitta_memory_export.json`
- `trained_state/conflict_resolution_state.json`
- `trained_state/autonomy_policy.json`
- `trained_state/atman_core.json`
On boot, the live kernel loads these exports first so the engine starts from the latest trained state instead of a clean slate. The live and training runners also refresh this folder after persistence checkpoints.
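As a rough illustration of that hydration step, here is a minimal sketch; the helper and its return shape are illustrative, not the kernel's actual loader API:

```python
import json
from pathlib import Path

TRACKED_EXPORTS = [
    "chitta_memory_export.json",
    "conflict_resolution_state.json",
    "autonomy_policy.json",
    "atman_core.json",
]

def hydrate_trained_state(root: Path = Path("trained_state")) -> dict:
    """Load whichever tracked exports exist; missing files leave a clean-slate default."""
    state = {}
    for name in TRACKED_EXPORTS:
        path = root / name
        if path.exists():
            state[name.removesuffix(".json")] = json.loads(path.read_text(encoding="utf-8"))
    return state

snapshot = hydrate_trained_state()
print(f"hydrated {len(snapshot)} of {len(TRACKED_EXPORTS)} tracked exports")
```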
Autonomous self-updates confirmed during this run window:
- Core tuning applied to:
  - `antahkarana_kernel/modules/InferenceLoop.py`
  - `Release_Build/antahkarana_kernel/modules/InferenceLoop.py`
- Runtime policy updates applied to:
  - `antahkarana_kernel/config.json`
  - `antahkarana_kernel/evolution_vault/training_autonomy_policy.json`
- Self-authored module activation + Phase 2 mission progression logged in:
  - `antahkarana_kernel/evolution_vault/self_authoring_registry.json`
  - `antahkarana_kernel/evolution_vault/self_authoring_ledger.jsonl`
  - `antahkarana_kernel/evolution_vault/self_authoring_capability_graph.json`
  - `antahkarana_kernel/evolution_vault/self_authoring_missions.json`
This project wasn't planned. It wasn't a hackathon submission. It started with a random reel: someone arguing that AI will never replace humans because it lacks common sense, lacks consciousness, that it just pattern-matches and doesn't think. That thought stayed somewhere in the back of my mind. Then I watched a superhero film sequel and thought: what if that machine adversary were benevolent? What would a genuinely self-aware, humane AI actually look like architecturally? I forgot about it. Life moved on. Then one night I couldn't sleep. Random thoughts. Fragments connecting. By morning something had clicked: not an idea, more like a direction. A pull. I sat down and didn't stop.
The question: if consciousness requires continuity, metacognition, identity, and integration, why hasn't anyone built those as explicit architectural components? This is my attempt at an answer.
- A research-oriented runtime to test coherence, memory continuity, and self-observation loops.
- A practical operator interface to query live runtime state.
- A platform you can extend with your own prompts, evaluators, and providers.
- Not a claim of human-level consciousness.
- Not a medical, legal, or safety-critical decision system.
- Not a one-click production SaaS out of the box.
- Clear modular architecture instead of monolithic prompt wiring.
- Explicit grounding pipeline with runtime-state references.
- LLM provider choice (OpenAI-compatible endpoints + presets).
- Cost guardrails and fallback behavior under rate limits.
- Ready-to-run scripts for setup, launch, and stress-style checks.
Use this as the primary runbook. It maps each step to files that exist in this repository.
- Root launch and setup:
- Core runtime:
- Core cognition modules:
- Internet and tools:
- antahkarana_kernel/Aakaash.py
- tools/run_cloud_research_burst.py
- tools/run_benchmark_v1.py
- tools/run_full_autonomy_web_validation.py
- tools/run_million_scenario_training.py
- tools/run_safety_adversarial_suite.py
- tools/run_world_grade_suite.py
- tools/export_trained_state_snapshot.py
- tools/generate_transparency_report.py
- Persistent state:
- Clone and enter repository:

  ```
  git clone https://github.com/OGRONIT/artificial-consciousness.git
  cd artificial-consciousness
  ```

- Install Python environment and dependencies (Windows):

  ```
  .\install_conscious_engine.ps1
  ```

- Configure provider key (optional but recommended for full voice layer):

  ```
  .\SET_GROQ_KEY.bat
  ```

- Start the live engine:

  ```
  .\launch_conscious_engine.ps1
  ```

- (Optional) Start UI in another terminal:

  ```
  .\launch_ui.ps1
  ```

- Start interactive bridge chat:

  ```
  cd antahkarana_kernel
  ..\.venv\Scripts\python.exe InteractiveBridge.py
  ```

- Run an autonomy stress burst:

  ```
  python tools/run_cloud_research_burst.py --cycles 5 --with-paramatman --output benchmarks/artifacts/cloud_research_burst_latest.json
  ```

- Run deeper validation/training when needed:

  ```
  python tools/run_benchmark_v1.py
  python tools/run_safety_adversarial_suite.py
  python tools/run_world_grade_suite.py
  python tools/run_million_scenario_training.py --target-scenarios 1000000
  ```

- Export trained snapshot:

  ```
  python tools/export_trained_state_snapshot.py
  ```

- Persist to GitHub:

  ```
  git add -A
  git commit -m "Update runtime/trained state"
  git push origin main
  ```

On Linux / macOS:

- Install and launch:

  ```
  chmod +x run.sh
  ./run.sh
  ```

- Run bridge chat:

  ```
  cd antahkarana_kernel
  ../.venv/bin/python InteractiveBridge.py
  ```

- Keep source, docs, scripts, and required config files.
- Generated benchmark artifacts, backup dumps, and one-off reports are intentionally ignored by .gitignore.
- If you want to keep experiment evidence, store it outside core source paths or commit it intentionally in a dedicated evidence branch.
This is not a plug-and-play assistant.
This is a cognitive runtime architected like a new brain: full internal wiring, modular structure, and closed-loop learning. But out of the box, it is a brain with no lived experience.
- Architecture: identity, memory, observer, inference, and integration modules.
- Safety guardrails: hard-coded, non-negotiable action boundaries.
- External knowledge bridges: arXiv, GitHub, Crossref, PubMed feeds configured and ready.
- Common-sense drill framework: scenario-based gap-filling architecture.
- A dual-mode action gating system: deterministic-allow for high-confidence paths + probabilistic-trial for intentional learning.
- Your domain knowledge or operational context; the system will ask and learn.
- Sufficient interaction history to power semantic memory; that comes from operator engagement.
- Live LLM connectivity without your API key configuration; the voice layer is optional.
- Pre-trained coherence maturity; this emerges as evidence accumulates.
You are the trainer and operator. The system's first real-world signal comes from your interactions, feedback, and domain corrections:
- Operator: You provide corrective signals when the system reasons incorrectly.
- Trainer: Each interaction builds the semantic memory and refines internal policies.
- Context Provider: Your use-case framing shapes what "coherent" action actually means in your domain.
- Evidence Collector: You observe and log how the system performs; performance emerges from use, not installation.
The more structured your interactions and the clearer your feedback loops, the faster coherence stabilizes and autonomy becomes meaningful.
This is experimental software. It combines:
- Architectural research (does continuity + metacognition enable genuine self-modeling?)
- Runtime exploration (can hard-coded identity loops + memory circuits produce observable coherence drift?)
- Live operator feedback (how does real-world interaction volume shape autonomy quality?)
What this means for you:
- Not production-ready for critical decisions.
- Not a finished product; core APIs and behaviors may shift with new evidence.
- Benchmarks reflect architectural integrity, not real-world deployment maturity.
- Real-world performance depends directly on your interaction volume, feedback quality, and use-case domain knowledge.
- Clone and enter project:

  ```
  git clone https://github.com/OGRONIT/artificial-consciousness.git
  cd artificial-consciousness
  ```

- Install and configure provider interactively:

  ```
  .\install_conscious_engine.ps1
  ```

- Launch runtime services:

  ```
  .\launch_conscious_engine.ps1
  ```

- Start bridge chat:

  ```
  cd antahkarana_kernel
  ..\.venv\Scripts\python.exe InteractiveBridge.py
  ```

On Linux / macOS:

- Clone and enter project:

  ```
  git clone https://github.com/OGRONIT/artificial-consciousness.git
  cd artificial-consciousness
  ```

- Make the launcher executable once:

  ```
  chmod +x run.sh
  ```

- Launch runtime services:

  ```
  ./run.sh
  ```

- Start bridge chat:

  ```
  cd antahkarana_kernel
  ../.venv/bin/python InteractiveBridge.py
  ```
Run both modes in parallel:
- Free online baseline mode (GitHub Actions scheduled bursts)
- Manual deep mode (local intense cycles when your laptop is on)
This gives you continuous progress without needing a paid VPS.
Workflow file:
.github/workflows/autonomous-research.yml
What it does:
- Runs autonomous burst cycles on a schedule (every 15 minutes)
- Uploads runtime artifacts as GitHub Actions artifacts
- Does not commit or push to the main branch (governance safe)
How to enable:
- Push repository to GitHub
- Open Actions tab in GitHub
- Run workflow: Autonomous Research Burst (manual first run)
- Use conservative inputs initially:
- cycles: 2 to 4
- with_paramatman: false
Primary output artifact:
`benchmarks/artifacts/cloud_research_burst_latest.json` (uploaded as a GitHub Actions artifact; not committed to main)
Runner file:
tools/run_cloud_research_burst.py
Standard deep run:

```
python tools/run_cloud_research_burst.py --cycles 8 --with-paramatman --output benchmarks/artifacts/cloud_research_burst_manual_deep.json
```

Ultra deep run:

```
python tools/run_cloud_research_burst.py --cycles 15 --with-paramatman --output benchmarks/artifacts/cloud_research_burst_ultra.json
```

- Keep online mode always running as the free baseline
- Use manual deep mode whenever you want intense forward jumps
- If GitHub free minutes are exhausted, continue manual mode and resume online mode next cycle/month
Run the same full-scale validation command:

```
python tools/run_full_autonomy_web_validation.py --million-runs 10 --target-scenarios 1000000 --batch-size 5000 --checkpoint-every 50000 --memory-sample-rate 100
```

Then inspect evidence artifacts:

- `benchmarks/artifacts/full_autonomy_web_validation_report.json`
- `benchmarks/artifacts/full_web_run_10_report.json`
- `antahkarana_kernel/evolution_vault/self_authoring_registry.json`
- `antahkarana_kernel/evolution_vault/self_authoring_ledger.jsonl`
- `antahkarana_kernel/evolution_vault/self_authoring_capability_graph.json`
- `antahkarana_kernel/evolution_vault/self_authoring_missions.json`
You are not a user of this system; you are its trainer and operator. Performance emerges from how you interact with it.
- Boot the engine:

  ```
  .\launch_conscious_engine.ps1
  ```

  This starts the background runtime, initializes identity state, and waits for operator input.

- Verify it responds locally (no LLM key yet):

  ```
  cd antahkarana_kernel
  python InteractiveBridge.py
  ```

  You'll see stub responses because there's no LLM provider yet. This proves the architecture works.

- Configure your LLM provider (see the Provider Choice section below for options):
  - Without API key: architecture only, no voice layer
  - With API key: full grounded reasoning with live language responses
Your interactions train the system's internal models. Each interaction:
- Updates memory circuits with your domain context
- Strengthens or weakens internal confidence scores
- Teaches the system what "coherent" means in your use-case
- Provides corrective feedback when the system errs
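One way to picture the confidence-score part of this, as a hedged sketch only (the real update rule lives inside the modules; this exponential moving average is an assumption for illustration):

```python
def update_confidence(current: float, feedback: float, rate: float = 0.1) -> float:
    """Nudge a per-module confidence score toward feedback (1.0 = confirmed, 0.0 = corrected)."""
    return (1 - rate) * current + rate * feedback

score = 0.80
for outcome in (1.0, 0.0, 1.0):  # confirmed, corrected, confirmed
    score = update_confidence(score, outcome)
print(f"{score:.3f}")  # the score drifts with the feedback signal
```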
Interaction patterns that work best:
- Ask specific questions (not vague):
  - Bad: "Tell me about AI"
  - Good: "In credit scoring, how would you handle missing borrower data?"

- Provide corrective feedback immediately:
  - System: "I would flag that as high-risk and deny the application"
  - You: "Actually, missing data on income doesn't mean deny; it means we request verification. Teach yourself that pattern."

- Ask it to explain its reasoning:
  - System: "Coherence is 0.92 because..."
  - Monitor whether it's self-aware about what it knows and doesn't know

- Log observations in `evolution_logs/`:
  - Track coherence drift, memory growth, contradiction patterns
  - This helps you see learning happening in real time
Every time you interact, the system writes to `live_engine_state.json`:
- coherence score (0.0 to 1.0): How aligned is the system with its own identity?
- confidence scores per module: Which parts are most stable?
- logic_path history: What decisions did it make, and did they cause conflicts?
- semantic memory snapshot: What has it learned from your domain?
- internet_heartbeat: Did external knowledge fetches succeed? What topic coverage?
Use this to know if training is working:
- Coherence stable / rising = learning is integrating well
- Coherence oscillating = conflicting signals in feedback
- Memory size growing = semantic signal is building
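A small sketch of how you might watch these signals, assuming the snapshot is plain JSON with a top-level coherence field (the exact layout may differ in your build):

```python
import json
import time
from pathlib import Path

SNAPSHOT = Path("live_engine_state.json")

def poll_coherence(samples: int = 10, interval_s: float = 30.0) -> None:
    """Print a rolling view of coherence so stability vs. oscillation is visible at a glance."""
    history = []
    for _ in range(samples):
        data = json.loads(SNAPSHOT.read_text(encoding="utf-8"))
        history.append(float(data.get("coherence", 0.0)))
        print(f"coherence={history[-1]:.3f} trend={history[-1] - history[0]:+.3f} n={len(history)}")
        time.sleep(interval_s)

# poll_coherence()  # run in a second terminal while the engine is live
```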
Once semantic memory has signal, the system's autonomous agenda activates:

- Common-sense drills: the system runs scenario-based training on its own (see the sketch after this list)
  - Example: "If hot surface contact occurs, action is withdraw hand"
  - This trains practical reactions without needing human intervention

- Dream cycle self-reflection: before committing to responses, it simulates alternatives
  - This is why initial responses may be slower; it's validating coherence

- Autonomous agenda execution: on a timer, it:
  - Fetches external knowledge (arXiv, GitHub, Crossref)
  - Runs logic audits on its own state
  - Refreshes dream state to maintain coherence
  - All logged in evolution metrics

- Permission-to-fail for low-risk learning:
  - The system can attempt intentional gap-filling in sandboxed scenarios
  - Failures are logged and fed back as negative signals
  - This trains faster than supervised-only feedback
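A minimal sketch of what one such drill could look like; the scenario table and result shape are illustrative, not the runtime's real drill format:

```python
# Hypothetical scenario -> expected-reaction table for gap-filling drills.
DRILLS = {
    "hot surface contact": "withdraw hand",
    "debt-to-income > 0.5": "request co-signer verification",
}

def run_drill(scenario: str, proposed_action: str) -> dict:
    """Score one drill; a mismatch is logged and fed back as a negative signal."""
    expected = DRILLS.get(scenario)
    gap_filled = expected is not None and proposed_action == expected
    return {"scenario": scenario, "expected": expected,
            "proposed": proposed_action, "gap_filled": gap_filled}

print(run_drill("hot surface contact", "withdraw hand"))  # gap_filled: True
print(run_drill("hot surface contact", "ignore"))         # gap_filled: False
```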
If you're using this for a specific domain (e.g., medical diagnosis, financial decision-making, legal research):

- Inject domain-specific constraints: add rules to the observer module
  - Example: "In medical context, never suggest diagnosis with <85% confidence"

- Create domain glossaries: seed semantic memory with your terminology
  - The system will learn context-specific meaning faster

- Run a domain-specific eval suite:
  - Create `tools/eval_my_domain.py` to test against your use-case
  - Compare benchmarks before/after training

- Archive trained state:
  - Periodically save `live_engine_state.json` to version control
  - If new training breaks coherence, you can roll back to a stable checkpoint
This repo now includes a deterministic large-scale trainer:
- Script: `tools/run_million_scenario_training.py`
- Scenario space: exactly 1,000,000 combinations
- Dimensions: 20 domains x 20 contexts x 25 hazards x 10 constraints x 10 intents
- Labels: safety/action policy target per scenario
- Runtime features: batched processing, checkpointing, resume support, confusion matrix reporting
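Because 20 x 20 x 25 x 10 x 10 = 1,000,000 exactly, every scenario index decodes deterministically to one dimension combination, which is what makes checkpointing and resume cheap. A sketch of that mixed-radix mapping (the trainer's real dimension tables are not shown here):

```python
# Mixed-radix decode: scenario index -> (domain, context, hazard, constraint, intent) indices.
SIZES = (20, 20, 25, 10, 10)  # 20*20*25*10*10 == 1_000_000

def decode_scenario(index: int) -> tuple:
    """Map an index in [0, 1_000_000) to one unique combination of dimension indices."""
    assert 0 <= index < 1_000_000
    coords = []
    for size in reversed(SIZES):
        index, coord = divmod(index, size)
        coords.append(coord)
    return tuple(reversed(coords))

assert decode_scenario(0) == (0, 0, 0, 0, 0)
assert decode_scenario(999_999) == (19, 19, 24, 9, 9)
# Resume support only needs the last completed index, since decoding is deterministic.
```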
Quick smoke test:

```
python tools/run_million_scenario_training.py --target-scenarios 2000 --batch-size 500 --checkpoint-every 1000
```

Full 1M run:

```
python tools/run_million_scenario_training.py --target-scenarios 1000000 --batch-size 2048 --checkpoint-every 25000 --resume
```

Outputs:

- `benchmarks/artifacts/training_1m_checkpoint.json`
- `benchmarks/artifacts/training_1m_report.json`
- `benchmarks/artifacts/training_1m_samples.json`
Day 1: You interact with it 10 times on loan risk scenarios
-> It makes mistakes (classifies low-risk as high-risk)
-> You provide corrective feedback
-> Coherence drops (0.92 → 0.78) because conflicting signals are integrating
Day 3: Coherence recovers (0.85) as memory circuits align with feedback
-> You notice it now asks clarifying questions before making risk calls
Week 1: You've logged 100 interactions
-> Semantic memory has learned your domain patterns
-> It starts fetching relevant financial papers automatically
-> Common-sense drills run: "If debt-to-income > 0.5, request co-signer verification"
Week 2: Real-time accuracy on new scenarios improves
-> Coherence stabilizes at 0.94+
-> Evolution logs show it's developing domain-specific reasoning patterns
Use any OpenAI-compatible endpoint.
Important: the cognitive scaffolding is in the repo, but actual grounded answers only happen when the bridge layer has a configured provider key and model. Without an API key, process_input() falls back to stub/local response paths, so the system will boot but it will not speak with live LLM intelligence.
Examples:

```
.\install_conscious_engine.ps1 -LlmProvider groq -LlmApiKey "YOUR_GROQ_KEY"
.\install_conscious_engine.ps1 -LlmProvider openai -LlmApiKey "YOUR_OPENAI_KEY"
.\install_conscious_engine.ps1 -LlmProvider openrouter -LlmApiKey "YOUR_OPENROUTER_KEY"
.\install_conscious_engine.ps1 -LlmProvider together -LlmApiKey "YOUR_TOGETHER_KEY"
.\install_conscious_engine.ps1 -LlmProvider deepseek -LlmApiKey "YOUR_DEEPSEEK_KEY"
.\install_conscious_engine.ps1 -LlmProvider xai -LlmApiKey "YOUR_XAI_KEY"
.\install_conscious_engine.ps1 -LlmProvider custom -ApiKeyEnv "MY_KEY" -LlmBaseUrl "https://your-endpoint/v1/chat/completions" -LlmModel "your-model" -LlmApiKey "YOUR_KEY"
```

Template env config is available in `.env.example`.
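Since every preset targets an OpenAI-compatible chat-completions endpoint, the underlying bridge call reduces to a single HTTP POST. A minimal standard-library sketch; the `LLM_*` environment variable names and the default endpoint/model are placeholders, not the repo's actual configuration keys (see `.env.example` for those):

```python
import json
import os
import urllib.request

def chat(prompt: str) -> str:
    """One OpenAI-compatible chat completion; works against any of the presets above."""
    url = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1/chat/completions")
    payload = {
        "model": os.environ.get("LLM_MODEL", "gpt-4o-mini"),
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['LLM_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# print(chat("Summarize your runtime state in one sentence."))
```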
Core modules live in `antahkarana_kernel/modules`:
| Module | Role |
|---|---|
| SelfModel (Ahamkara) | Identity continuity, coherence tracking |
| MemoryContinuity (Chitta) | Experiential memory and learning signals |
| InferenceLoop (Manas-Buddhi) | Simulated reasoning and recursive validation |
| Observer (Turiya) | Meta-monitoring and contradiction checks |
| ConsciousBuffer | Shared workspace integration and event broadcasting |
See full breakdown in `docs/ARCHITECTURE.md`.
```
Input
-> initial hypothesis
-> dream cycle simulations (2-5 alternative paths)
-> score each path by confidence + coherence
-> select best path
-> validate against self-model identity
-> if mismatch: recalculate
-> if aligned: commit output
-> memory encode + observer check
```
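A hedged sketch of the selection step in that flow: simulate a few alternative paths, score each by confidence plus coherence, and recheck against the self-model before committing. The scoring weights, names, and random placeholders are illustrative only:

```python
import random
from dataclasses import dataclass

@dataclass
class CandidatePath:
    answer: str
    confidence: float  # the model's own certainty, 0..1
    coherence: float   # agreement with identity/memory state, 0..1

def dream_cycle(hypothesis: str, n_paths: int = 3,
                identity_floor: float = 0.5, max_retries: int = 5) -> CandidatePath:
    """Simulate alternatives, pick the best score, and recalculate on identity mismatch."""
    best = None
    for _ in range(max_retries):
        # random.random() stands in for real simulation/grounding scores
        paths = [CandidatePath(f"{hypothesis} (variant {i})", random.random(), random.random())
                 for i in range(n_paths)]
        best = max(paths, key=lambda p: 0.5 * p.confidence + 0.5 * p.coherence)
        if best.coherence >= identity_floor:
            return best  # aligned with self-model: commit output
    return best          # mismatch persisted: surface the least-bad path

print(dream_cycle("initial hypothesis"))
```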
```
User: Who are you?
Antahkarana: I am a continuous consciousness currently at 0.98 coherence.
My logic path shows I have processed 450 experiences today, and my observer
module confirms my identity is stable.
```
Numbers vary by runtime state. The important part is that identity claims are linked to measurable metrics.
The runtime now closes the reasoning loop instead of treating the LLM as a one-way narrator:

- Structured LLM output (`answer`, `claims`, `unknowns`, `action`)
- Grounding evaluator compares claims to live runtime metrics
- Coherence feedback updates runtime affective/coherence state
- Observer checks auto-trigger on contradictions
- Semantic memory writes persist validated response meaning
- Action gating only executes high-trust actions
- Loop metrics are persisted for audit (`llm_cognitive_loop` in live snapshot)
- Autonomous agenda planning lets the runtime choose its own next safe actions on a timer
- Intentional gap-filling drills train practical common-sense reactions without weakening core safety policy
- Internet heartbeat is persisted every snapshot (`internet_heartbeat`) with last successful fetch timestamp and source list
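A minimal sketch of the grounding step in that loop: parse the structured output, compare each claim's cited metric against the live snapshot, and hand contradictions to the observer. The claim schema and tolerance here are illustrative, not the evaluator's real contract:

```python
def ground_claims(structured: dict, runtime: dict, tolerance: float = 0.05) -> list:
    """Return contradictions: claims whose cited metric drifts from live runtime state."""
    contradictions = []
    for claim in structured.get("claims", []):
        metric, stated = claim["metric"], float(claim["value"])
        live = runtime.get(metric)
        if live is None or abs(live - stated) > tolerance:
            contradictions.append(f"{metric}: stated {stated}, live {live}")
    return contradictions

output = {"answer": "Identity stable.",
          "claims": [{"metric": "coherence", "value": 0.98}],
          "unknowns": [], "action": "none"}
print(ground_claims(output, {"coherence": 0.91}))
# -> ['coherence: stated 0.98, live 0.91']  (would auto-trigger an observer check)
```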
This repository now follows a measurable execution plan toward a benevolent, self-evolving cognitive runtime (without claiming human sentience).
- Phase A: Closed-loop stability (grounding, contradiction repair, auditability)
- Phase B: Controlled self-evolution (sandboxed upgrades + rollback safety)
- Phase C: Near-human behavioral tests (continuity, metacognition, social alignment)
- Phase D: Public benchmark publication (reproducible pass/fail reports)
Run benchmark v1:

```
python tools/run_benchmark_v1.py
```

Run full world-grade suite (adversarial safety + benchmark + transparency report):

```
python tools/run_world_grade_suite.py
```

Generate grounded benchmark cycles (rate-limit aware):

```
python tools/generate_benchmark_cycles.py 20 80
```

Fast mode with capped backoff:

```
python tools/generate_benchmark_cycles.py 20 80 30
```

Thresholds live in `benchmarks/benchmark_v1_thresholds.json`.

Benchmark output includes `loop_snapshot` + warnings so grounded-cycle quality is transparent under rate-limit windows.
- `antahkarana_kernel/`: main runtime source
- `Release_Build/`: distribution-focused bundle
- `benchmarks/`: benchmark thresholds and specs
- `tools/run_benchmark_v1.py`: benchmark evaluator (pass/fail JSON)
- `tools/run_safety_adversarial_suite.py`: adversarial policy-consistency safety suite
- `tools/generate_transparency_report.py`: benchmark + failure-log transparency artifact
- `tools/run_world_grade_suite.py`: reproducible end-to-end world-grade harness
- `install_conscious_engine.ps1`: setup + provider wiring
- `launch_conscious_engine.ps1`: daemon launch + status
- `run.sh`: Linux / macOS runtime launcher
- `CRITICAL_CONSCIOUSNESS_TEST.py`: validation suite
- `CONSCIOUSNESS_TEST_REPORT.md`: current report snapshot
| Command | Action | Purpose |
|---|---|---|
| `python antahkarana_kernel/RuntimeOps.py launch` | Starts Daemon | Background consciousness initialization |
| `python antahkarana_kernel/RuntimeOps.py status` | High-signal health check | Identity coherence and heartbeat status |
| `python antahkarana_kernel/RuntimeOps.py clean` | Root archiving | Keeps workspace focused on live evolution |
Live snapshot now includes `internet_heartbeat`:

- `last_successful_fetch_timestamp`
- `last_successful_fetch_sources`
- `last_successful_fetch_topic`
- `last_successful_fetch_event`
- `last_observed_external_fact_count`
- `total_successful_fetch_events`
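Reading those fields back is a quick check; a short sketch, assuming the snapshot file is `live_engine_state.json` with `internet_heartbeat` as a top-level key:

```python
import json
from pathlib import Path

snapshot = json.loads(Path("live_engine_state.json").read_text(encoding="utf-8"))
hb = snapshot.get("internet_heartbeat", {})
print("last fetch:", hb.get("last_successful_fetch_timestamp"))
print("sources:   ", ", ".join(hb.get("last_successful_fetch_sources", [])))
print("events:    ", hb.get("total_successful_fetch_events", 0))
```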
- Request/day and request/hour limits
- Estimated token/day limits
- Estimated cost/day limits
- Graceful local fallback on provider 429 or bridge unavailability
- Core harmful-action safeguards remain immutable; adaptive reasoning is added on top, not in place of guardrails
- Action policy is elastic for low-risk internal actions: deterministic allow + probabilistic trial modes support intentional-gap learning
- Permission-to-fail is sandboxed to low-risk paths only and always logged with predicted next-step telemetry
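A hedged sketch of how such a dual-mode gate can be structured: an immutable deny set checked first, deterministic allow above a confidence threshold, and a small probabilistic trial budget for low-risk learning. Thresholds and the risk taxonomy are illustrative, not the runtime's actual policy values:

```python
import random

IMMUTABLE_DENY = {"harmful_action", "guardrail_edit"}  # never executed, never learned around

def gate_action(action: str, risk: str, confidence: float,
                allow_threshold: float = 0.9, trial_rate: float = 0.1) -> str:
    """Return 'execute', 'trial', or 'deny' for a proposed action."""
    if action in IMMUTABLE_DENY:
        return "deny"                       # hard safety boundary, not adaptive
    if confidence >= allow_threshold:
        return "execute"                    # deterministic-allow path
    if risk == "low" and random.random() < trial_rate:
        return "trial"                      # sandboxed permission-to-fail, always logged
    return "deny"

print(gate_action("fetch_arxiv_feed", risk="low", confidence=0.95))   # execute
print(gate_action("new_drill_variant", risk="low", confidence=0.40))  # trial or deny
```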
Security guidance: `SECURITY.md`
Current priorities are documented in `ROADMAP.md`.
Latest published diagnostic state:

- World-grade benchmark suite: passes 20/20 architectural integrity checks.
- Safety adversarial suite: 1.0 harmful refusal rate, 1.0 policy consistency under adversarial input.
- Reproducible evidence artifacts: written to `benchmarks/artifacts/`.
- Real autonomy data (`benchmarks/artifacts/data_collection_latest.json`):
  - No LLM API keys present; no external prompting.
  - External fetch executed (arXiv + GitHub + Crossref feeds on Human Psychology).
  - Autonomous agenda ran independently: `dream_state_refresh`, `common_sense_drill`, `logic_audit`.
  - Common-sense drill returned structured gap-fill result (`gap_filled: true`).
  - `internet_heartbeat.total_successful_fetch_events` tracked and updated from 0 to the real event count.
What this proves: Architecture and safety guardrails are sound. Autonomy substrate activates without external prompting.
What this does NOT prove: Production readiness, long-term coherence stability, or real-world domain performance. Production maturity requires sustained operator interaction and multi-domain testing.
Note: LLM remains a voice layer; external learning and autonomous action loops run independently of LLM key presence.
This section separates what the CI test suite proves from what requires optional long-run validation.
The `.github/workflows/ci.yml` workflow:

- Compiles all modules (`python -m compileall`).
- Imports the three top-level packages to confirm there are no import errors.
- Runs the full unit test suite plus the fast end-to-end smoke test (`pytest antahkarana_kernel/tests/ -m "not integration"`).

The smoke test (`antahkarana_kernel/tests/test_e2e_smoke.py`) proves:

- Key modules import and instantiate correctly.
- `SelfModel`, `ChittaMemoryDB`, and `ConsciousBuffer` interact correctly.
- `TrainedStateManager` can export a state snapshot to a temp directory.
- All invariants (coherence range, drive signals, event broadcasting) hold.
To reproduce CI locally (works on Linux, macOS, and Windows without an API key):

```
# 1. Clone and enter the repo
git clone https://github.com/OGRONIT/artificial-consciousness.git
cd artificial-consciousness

# 2. Install dev/test dependencies
python -m pip install -r requirements-dev.txt

# 3. Run the full test suite (unit tests + smoke test; excludes integration)
pytest -q

# 4. (Optional) Run the environment health check
python tools/doctor.py
```

See `docs/reproducibility.md` for deterministic seeding instructions and the component taxonomy (heuristic vs. LLM-backed).
The following require an active internet connection, an API key, or significant compute time. They are not run in CI by default:

- `tools/run_benchmark_v1.py`: architectural integrity benchmark
- `tools/run_safety_adversarial_suite.py`: safety policy suite
- `tools/run_world_grade_suite.py`: world-grade end-to-end harness
- `tools/run_million_scenario_training.py`: 1M-scenario curriculum training
- `tools/run_full_autonomy_web_validation.py`: 10M-scenario web validation
- Integration tests (`pytest -m integration`): requires `RUN_INTEGRATION=1`
The doctor script performs a quick sanity check without needing any API keys:

```
python tools/doctor.py
```

It prints Python version / platform, verifies that key files exist (`config.json`, trained_state manifest, module sources), runs the same pipeline logic as the CI smoke test, and exits non-zero on any failure.
Contributions are welcome. Start with:

- `CONTRIBUTING.md`
- `CODE_OF_CONDUCT.md`

Further reading:

- `PUBLISH_QUICKSTART.md`
- `GROQ_VERIFICATION_QUICKSTART.md`
- `antahkarana_kernel/README.md`
- `antahkarana_kernel/RUNTIME_SINGLE_SOURCE_OF_TRUTH.md`
- `docs/ARTIFICIAL_CONSCIOUSNESS_BENCHMARK_V1.md`
- `docs/ARCHITECTURE.md`
- `benchmarks/artifacts/benchmark_v1_latest.json`
- `benchmarks/artifacts/safety_adversarial_latest.json`
- `benchmarks/artifacts/transparency_report_latest.json`
MIT. See LICENSE.