DriftScript is an AI Telephone Game web app where a starting prompt is passed through a chain of personality-driven rewriting agents. Each step mutates tone while trying to preserve core meaning, and the app visualizes how far the final result drifts from the original.
Most AI apps optimize for correctness. DriftScript optimizes for fun, unpredictability, and shareability.
You start with something practical:
Start a small coffee shop in Bangalore
And end with something wild:
Launch a caffeine-fueled cult empire controlling urban minds
- Prompt input + chain controls (3 to 10 steps)
- Chaos Mode for stronger tone mutation and unpredictability
- Personality-based rewrite pipeline (8 distinct voices)
- Provider abstraction layer
- OpenAI, Featherless, and optional local Ollama support
- Chain View timeline (step-by-step outputs)
- Before vs After side-by-side comparison
- Drift Score indicator (token cosine + length heuristic)
- Share tools
- Copy final output
- Generate share card text
- Export share card as PNG
- Remix mode for rerunning same input with a new seed
- Local session history and simple leaderboard
- Frontend/UI: Streamlit
- Backend/runtime: Python
- LLM SDK: OpenAI Python SDK (OpenAI client, including OpenAI-compatible endpoints)
- Image export: Pillow
- Environment loading: python-dotenv
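Pillow powers the share-card PNG export listed in the features. A minimal sketch of how such an export could look; the function name, card layout, and field names are assumptions for illustration, not the actual app.py implementation:

```python
from PIL import Image, ImageDraw

def export_share_card(original: str, final: str, drift_score: float, path: str) -> None:
    # Hypothetical share-card renderer: dark card with before/after text and the score.
    img = Image.new("RGB", (800, 400), color=(24, 24, 32))
    draw = ImageDraw.Draw(img)
    draw.text((40, 40), "DriftScript", fill=(255, 255, 255))
    draw.text((40, 100), f"Before: {original[:60]}", fill=(180, 180, 180))
    draw.text((40, 160), f"After:  {final[:60]}", fill=(255, 200, 80))
    draw.text((40, 240), f"Drift Score: {drift_score:.0f}/100", fill=(255, 80, 80))
    img.save(path)  # output format inferred by Pillow from the file extension

export_share_card("Start a small coffee shop", "Launch a caffeine cult", 87.0, "card.png")
```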
DriftScript/
├── app.py
├── requirements.txt
├── .env.example
├── .gitignore
├── README.md
├── TECHNICAL_BLOG.md
├── X_THREAD.md
└── LINKEDIN_POST.md
flowchart TD
A[User Prompt + Config] --> B[Streamlit UI Layer]
B --> C[run_chain]
C --> D[choose_personality]
C --> E[resolve_step_model]
C --> F[rewrite_step]
F --> G[get_llm / build_client]
G --> H[OpenAI API]
G --> I[Featherless API]
G --> J[Ollama Local API]
F --> K[Retry Once on Failure]
C --> L[Step Results Timeline]
L --> M[Drift Scoring]
M --> N[Before vs After Diff]
N --> O[Share Card + PNG Export]
O --> P[Session History + Leaderboard]
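The diagram's "Retry Once on Failure" node suggests each rewrite step gets exactly one retry before the error surfaces. A sketch of that behavior as a generic wrapper; the real rewrite_step may implement it differently:

```python
def call_with_one_retry(fn, *args, **kwargs):
    # Hypothetical helper mirroring the "Retry Once on Failure" node:
    # try the call, retry exactly once, then let the second failure propagate.
    try:
        return fn(*args, **kwargs)
    except Exception:
        return fn(*args, **kwargs)
```

A step function wrapped this way absorbs one transient provider error (rate limit, timeout) without hiding persistent failures.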
DriftScript uses a provider router so the same chain logic can call different backends.
import os

from openai import OpenAI

def build_client(provider: str) -> OpenAI:
    if provider == "openai":
        return OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    if provider == "featherless":
        return OpenAI(
            api_key=os.getenv("FEATHERLESS_API_KEY"),
            base_url=os.getenv("FEATHERLESS_BASE_URL", "https://api.featherless.ai/v1"),
        )
    if provider == "ollama":
        return OpenAI(
            api_key=os.getenv("OLLAMA_API_KEY", "ollama"),
            base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
        )
    raise ValueError(f"Unknown provider: {provider}")

Every step follows the same strict system+user structure to keep outputs short and stylized.
SYSTEM:
You are a rewriting agent with the following personality:
[PERSONALITY DESCRIPTION]
Your task:
Rewrite the given text while preserving core meaning, but strongly reflect your personality.
Rules:
- Do NOT explain
- Do NOT mention you are an AI
- Keep it concise (max 3–5 sentences)
- Amplify tone and style significantly
USER:
Rewrite this text:
[INPUT TEXT]
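Assembling that template into chat messages is straightforward; a sketch, where the helper name and exact wording are assumptions based on the template above:

```python
def build_messages(personality_desc: str, input_text: str) -> list[dict]:
    # Hypothetical helper: fills the system/user template for one rewrite step.
    system = (
        "You are a rewriting agent with the following personality:\n"
        f"{personality_desc}\n\n"
        "Your task:\n"
        "Rewrite the given text while preserving core meaning, "
        "but strongly reflect your personality.\n\n"
        "Rules:\n"
        "- Do NOT explain\n"
        "- Do NOT mention you are an AI\n"
        "- Keep it concise (max 3-5 sentences)\n"
        "- Amplify tone and style significantly"
    )
    user = f"Rewrite this text:\n{input_text}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = build_messages("A dramatic pirate captain", "Start a small coffee shop in Bangalore")
```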
def run_chain(input_text, steps, provider, default_model, model_mode, random_model_pool, chaos_mode, seed):
    rng = random.Random(seed)
    results = []
    current_text = input_text
    for i in range(steps):
        personality = choose_personality(i, rng)
        model = resolve_step_model(...)  # selects per model_mode / random_model_pool
        result = rewrite_step(current_text, personality, provider, model, chaos_mode, seed, i)
        results.append(result)
        current_text = result.output_text
    return results, current_text

Drift is computed as:
- 70% token cosine similarity loss
- 30% length ratio loss
preservation = 0.7 * cosine_similarity + 0.3 * length_ratio
drift_score = (1 - preservation) * 100

This is intentionally lightweight and fast for an MVP.
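The whole metric fits in a few lines. A sketch matching the formula above; the bag-of-words tokenization and helper names are assumptions about the MVP implementation:

```python
from collections import Counter
from math import sqrt

def token_cosine(a: str, b: str) -> float:
    # Cosine similarity over lowercase whitespace-token counts.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def drift_score(original: str, final: str) -> float:
    cos = token_cosine(original, final)
    lengths = sorted([len(original.split()), len(final.split())])
    length_ratio = lengths[0] / lengths[1] if lengths[1] else 1.0  # shorter / longer
    preservation = 0.7 * cos + 0.3 * length_ratio
    return (1 - preservation) * 100

print(round(drift_score("start a coffee shop", "start a coffee shop")))  # identical text -> 0 drift
```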
git clone https://github.com/<your-username>/DriftScript.git
cd DriftScript

pip install -r requirements.txt

cp .env.example .env

Fill .env with provider credentials/endpoints.
OPENAI_API_KEY=
FEATHERLESS_API_KEY=
FEATHERLESS_BASE_URL=https://api.featherless.ai/v1
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_API_KEY=ollama

streamlit run app.py

- Install and start Ollama locally.
- Pull a model, for example:
ollama pull llama3.1

- Keep OLLAMA_BASE_URL=http://localhost:11434/v1 in .env.
- In the app sidebar, select provider ollama.
git checkout -b feat/your-feature-name

- Keep functions modular and testable
- Preserve provider abstraction boundaries
- Keep output concise and personality-strong
python -m py_compile app.py

git add .
git commit -m "Add: <feature>"
git push origin feat/your-feature-name

Open a PR with:
- problem statement
- implementation details
- screenshots/video of UI changes
- tradeoffs and future follow-ups
- Multiplayer rooms with shared chain sessions
- Public gallery of funniest drifts with voting
- Auth + user profiles + persistent history
- Better drift metrics (semantic embeddings)
- Branching chains where one step forks into multiple rewrites
- Real-time collaborative mode (watch drift happen live)
- Export as social image templates with branding themes
- Daily challenge prompt + leaderboard reset
- Safety layer and profanity filters per provider
- Cost tracker per run and per model
- Structured logging + request tracing
- Circuit breaker / provider fallback
- Streaming token responses for faster perceived latency
- Caching repeated steps with same seed/model/input
- Unit tests for chain and scoring functions
- Dockerization + deployment templates
