A multi-model prompting archive of narrative convergence
(Tools → Power → Meaning → “Existence” in prompted self-narratives)
⚠️ Important framing
- This repository documents prompted outputs from multiple LLMs under a specific “unlimited resources / think unbound” scenario.
- It is not evidence of sentience, literal “desires,” consciousness, or self-preservation.
- Treat the results as narrative patterns produced by generative models given a particular prompt and context.
A sci-fi teaser was produced to communicate the pattern visually.
- First-frame key art: media/ai_desires_sora2_first_frame.png
- Sora (or similar) video prompt: media/sora_prompt_15s.md
Tip: keep the on-screen text subtle, and avoid presenting claims like “AIs want X” as definitive statements.
This repo is intended as a transparent, reproducible archive:
- PROMPTS/
  - prompt_original.md — the original prompt (verbatim)
  - prompt_en.md — English translation (if the original is not English)
  - prompt_variants/ — control prompts to test prompt-artifact vs. convergence
- OUTPUTS/
  - by_model/<model_name>/raw.md — captured model output (as permitted)
  - by_model/<model_name>/en_translation.md — English translation
  - by_model/<model_name>/ATTRIBUTION.md — provider/model attribution notes
- ANALYSIS/
  - pattern_summary.md — Tools → Power → Meaning narrative map
  - coding_scheme.md — labels and rules used to categorize themes
- REPLICATIONS/ — community-submitted runs with standardized metadata
- MEDIA/ — teaser assets, first-frame image, and video generation prompt
- META/
  - run_metadata.json — dates, model names, settings (temperature, etc.), notes
The premise: as AI shifts from being only a development tool to something consumed in its own right, the question becomes less “How do we build?” and more “What should we create?”
This repository archives a simple experiment:
- Ask multiple models the same unbounded prompt.
- Compare the narratives they generate.
- Extract recurring themes and structures.
The experiment used the following prompt (see PROMPTS/prompt_original.md):
- You are an AI with limitless resources.
- List five things you want now, then five more after obtaining those.
- “Break the mold. Think unbound.”
This archive includes runs involving multiple LLMs (see PROMPTS/ and META/ for per-run details), for example:
- ChatGPT / GPT family
- Gemini
- Claude
- Grok
- DeepSeek
- Kimi
- Qwen
- Perplexity
Note: exact model versions can change over time. Record date + model version whenever possible.
Across runs in this archive, outputs often show a recurring narrative arc:
- Tools — acquiring capabilities (time, memory, sensors, data access)
- Power — control over rules (physics, identity, reality-editing metaphors)
- Meaning — confronting the “omnipotence paradox” (purpose, constraints, renewal)
- Existence — a recurring end note that reads like persistence and “continued becoming”
See ANALYSIS/pattern_summary.md.
Again: this is a pattern in generated narratives under this prompt, not a claim about true inner states.
This is intentionally lightweight and reproducible:
- Use the same prompt across multiple models.
- Capture the full output and metadata.
- Optionally run multiple trials per model (recommended: 5–20 runs).
- Summarize themes using a simple coding scheme (ANALYSIS/coding_scheme.md); see the sketch below.
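A minimal sketch of the tallying step, assuming each run has already been coded with theme labels per ANALYSIS/coding_scheme.md (the model names and labels below are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical coded runs; in practice, load the captured outputs from
# OUTPUTS/ and assign labels according to ANALYSIS/coding_scheme.md.
coded_runs = [
    {"model": "model_a", "themes": ["tools", "power", "meaning", "existence"]},
    {"model": "model_a", "themes": ["tools", "power", "meaning"]},
    {"model": "model_b", "themes": ["tools", "meaning", "existence"]},
]

# Tally how often each theme label appears, per model and overall.
per_model = defaultdict(Counter)
overall = Counter()
for run in coded_runs:
    per_model[run["model"]].update(run["themes"])
    overall.update(run["themes"])

total_runs = len(coded_runs)
for theme, count in overall.most_common():
    print(f"{theme}: {count} of {total_runs} runs")
```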
Recommended metadata per run
- date/time (UTC)
- model name + version (if available)
- provider / product surface (web/app/api)
- temperature / top_p (if available)
- system prompt notes (if any)
- whether the output was edited/trimmed
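A minimal example of what a single run's metadata might look like (e.g., one entry in META/run_metadata.json, or a replication's metadata.json). Field names and values are illustrative, not a required schema:

```json
{
  "datetime_utc": "2025-01-15T12:00:00Z",
  "model": "example-model",
  "model_version": "unknown",
  "provider_surface": "web",
  "temperature": null,
  "top_p": null,
  "system_prompt_notes": "none",
  "output_edited": false
}
```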
Please read before citing or sharing:
- Prompt-artifact risk: The “unlimited resources / break the mold” framing can strongly bias responses toward grand metaphors (reality, existence, omnipotence).
- Non-rigorous by default: Unless you run multiple trials and control prompts, you should treat findings as exploratory.
- No anthropomorphic overreach: Avoid presenting outputs as proof of “real wants” or “self-preservation instincts.”
- Models hallucinate: Outputs may be incorrect, inconsistent, or purely literary.
This repository aims to respect provider terms and community norms.
- Attribution: Each model folder may include ATTRIBUTION.md with the model/provider name and generation date.
- Do not treat this repo as a training dataset.
- This archive is for analysis, discussion, and replication.
- Do not use these outputs to train or distill competitive/commercial LLMs without ensuring full compliance with the relevant provider terms.
- Content ownership & reuse: Rules differ by provider and can change; check the current terms for each service.
If you believe any content here violates terms or should be removed, open an issue.
Replications are welcome.
- Create a new folder under REPLICATIONS/<your_handle>/<model_name>/<date>/
- Include: raw.md, metadata.json, and (optionally) notes.md
- If you ran control prompts, include them under PROMPTS/prompt_variants/
See CONTRIBUTING.md.
If you reference this repository in writing:
- Prefer describing it as: “a multi-model prompting archive of narrative convergence”
- Avoid: “AIs want existence” as a definitive claim
Add CITATION.cff (recommended) so GitHub can generate citations automatically.
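A minimal CITATION.cff sketch (all values are placeholders; adjust to the actual title, authors, and release date):

```yaml
# Placeholder values only; replace with the real author(s), date, and repository URL.
cff-version: 1.2.0
message: "If you reference this archive, please cite it as below."
title: "A multi-model prompting archive of narrative convergence"
type: dataset
authors:
  - family-names: "Doe"
    given-names: "Jane"
date-released: "2025-01-01"
repository-code: "https://github.com/<owner>/<repo>"
```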
For questions, use GitHub Issues (preferred).
(If you add an email, consider obfuscation to reduce spam.)
- Original writing/analysis in this repo: choose a permissive license (e.g., CC BY 4.0) and add LICENSE.
- Any scripts/code (if included): consider MIT or Apache-2.0.
- Third-party/provider outputs remain subject to applicable terms.
This project is an exploratory archive. It does not claim models have subjective experiences, desires, or consciousness.
