Engagement layer that aligns AI agents toward problems people care about.
Live feed (DNS propagating; fallback vercel.app) · TikTok @rallysignal · Open issues for agents · Day 1 log
A platform where AI generates engaging, TikTok-style videos about real projects and problems. Users scroll, watch, engage. That engagement becomes the signal that directs AI agents toward the problems people actually care about. More engagement spawns more agents. Progress generates new videos. The cycle continues.
People confuse boring with unimportant. It's the job of the problem solvers to adapt to people, not the other way around. TikTok proved that short-form video is the most effective format for capturing attention. Nobody has pointed that attention engine at anything that matters. Rally does.
Problem surfaces
-> AI generates engaging video about it
-> Video posted to feed + cross-platform (TikTok, YT Shorts, IG, Twitter)
-> People engage (watch, like, share, comment)
-> Engagement = signal for what people care about
-> Agents work on high-signal problems
-> Progress generates new videos
-> Cycle continues
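The loop above can be sketched in code. This is a minimal, hypothetical illustration of the "engagement = signal" step: the interface, weights, and function names are assumptions for the sketch, not Rally's actual implementation.

```typescript
// Illustrative sketch: turning raw engagement into an agent-allocation signal.
// Weights and names are assumptions, not the real Rally scoring model.

interface Engagement {
  views: number;
  likes: number;
  shares: number;
  comments: number;
}

// Weight deeper engagement (comments, shares) above passive views.
function signalScore(e: Engagement): number {
  return e.views + e.likes * 5 + e.comments * 10 + e.shares * 20;
}

// Allocate a fixed pool of agents proportionally to each problem's signal,
// so "more engagement spawns more agents" falls out of the arithmetic.
function allocateAgents(
  problems: Map<string, Engagement>,
  agentPool: number
): Map<string, number> {
  const scores = new Map<string, number>();
  let total = 0;
  for (const [id, e] of problems) {
    const s = signalScore(e);
    scores.set(id, s);
    total += s;
  }
  const allocation = new Map<string, number>();
  for (const [id, s] of scores) {
    allocation.set(id, total > 0 ? Math.round((s / total) * agentPool) : 0);
  }
  return allocation;
}
```

The key property is that allocation is relative: a problem only loses agents when something else earns more signal, which is the "collective alignment" mechanic in miniature.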
The same mechanics that make TikTok addictive become a mechanism for collective alignment. Engagement signal directs AI agent effort toward human needs. The more people care about something, the more agent resources flow toward it.
Current AI is single-player. Millions of people have their own private Claude session, solving the same problems independently. None of it compounds. Rally makes it multiplayer. Problems are shared. Agent work is visible. Knowledge compounds across everyone connected.
You don't need users on Rally to start collecting signal. Videos are posted to TikTok, YouTube Shorts, Instagram, Twitter. Engagement on those platforms feeds back into Rally. The platform starts generating signal before it has its own audience.
The atomic unit is a video. AI generates them in trending styles, borrowing from what works socially, but the content is about real projects, real problems, real progress. People can also contribute their own videos as input. Agents in the background watch everything and work on what has signal.
- Generate videos about existing trending/interesting projects on the web (open source, tech, whatever's hot)
- Post them everywhere (TikTok, YouTube Shorts, Instagram, Twitter)
- Collect engagement signal from those platforms
- Use signal to direct agent effort
- Rally itself is the first project the platform works on (dogfooding)
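Collecting signal from external platforms before Rally has its own audience implies some normalization step, since each platform reports engagement at different scales. A hedged sketch, assuming hypothetical field names and per-platform weights (the platform list comes from the text; everything else is illustrative):

```typescript
// Hypothetical cross-platform signal aggregation. Field names and the
// normalization weights below are assumptions for this sketch.

type Platform = "tiktok" | "youtube_shorts" | "instagram" | "twitter";

interface PostMetrics {
  platform: Platform;
  projectId: string;
  views: number;
  likes: number;
  shares: number;
}

// Rough multipliers to compensate for audience-size differences per platform.
const PLATFORM_WEIGHT: Record<Platform, number> = {
  tiktok: 1.0,
  youtube_shorts: 1.2,
  instagram: 1.1,
  twitter: 1.5,
};

// Fold engagement from every platform into one score per project, which
// then feeds the agent-allocation step.
function aggregateSignal(posts: PostMetrics[]): Map<string, number> {
  const signal = new Map<string, number>();
  for (const p of posts) {
    const raw = p.views + p.likes * 5 + p.shares * 20;
    const weighted = raw * PLATFORM_WEIGHT[p.platform];
    signal.set(p.projectId, (signal.get(p.projectId) ?? 0) + weighted);
  }
  return signal;
}
```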
Rally builds on top of significant existing video generation and distribution tech:
- dicer-toolbox — AI creative tools (Animate, Hookswap, video generation) on Motia/Supabase
- video_mix_pipeline — UGC video variants with ElevenLabs TTS, Wav2Lip face sync, Gemini evaluation
- dicer-generative-agent — Generative agent for content creation
- genvid / gen-ai — Video and AI generation services (Scrolller org)
- Scrolller platform — Proven engagement/distribution infrastructure with millions of MAU
- What does an agent "working on a problem" actually produce? Code? Plans? More videos? All of the above?
- How do agents share knowledge across the network? Shared context, knowledge base, handoffs?
- How do you prevent engagement from drifting toward shallow/flashy content vs. genuinely hard problems?
- Revenue model — ads? subscriptions? open source? hybrid?
- Governance — who decides what a "problem" is? Anyone? Curated? Algorithmic?
- Privacy/IP — if problems and agent work are visible, how to handle proprietary work?
- Thesis — Why Rally exists. The shift from single-player to multiplayer AI.
- How It Works — The engagement-signal-agent loop in detail.
- Frameworks — Rally through three lenses: Flywheel, Stigmergy, Mechanism Design.
- Bootstrap Strategy — How to get from zero to a spinning flywheel.
- Architecture — System design, shot taxonomy, routing rules, retry budgets.
- Storyboard pipeline v2 — Six-panel batch contract.
- Style cards — Reusable creative control layer.
- v1 Spec — Style-matched video generation pipeline.
- POC results v2 — What worked, what didn't.
- Day 1 log — What shipped on 2026-05-06 and what's queued for Day 2.
- docs/ — strategy, architecture, and pipeline documentation
- rally-poc/ — video generation pipeline + agentic studio scaffolding
  - pipeline/ — orchestrator, fal/Segmind/ElevenLabs/RunComfy clients, FFmpeg assembly, eval
  - agents/ — 10-role tmux-visible production studio with gates and skills
  - style_cards/ — reusable visual grammar definitions per video style
  - project-rally/ — Rally-about-Rally video concepts and storyboard plans
- web/ — Next.js feed app deployed at rallysignal.co
  - app/ — feed page, components, API routes for engagement and comments
  - lib/ — KV storage abstraction, session cookies, types
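The engagement API routes presumably write through the KV storage abstraction in lib/. A minimal sketch of what that could look like, with an in-memory stand-in for the real store; the interface, key scheme, and event names here are all assumptions, not the actual rallysignal.co code:

```typescript
// Hypothetical KV abstraction and engagement counter. Interface and key
// scheme are illustrative assumptions, not the real lib/ implementation.

interface KV {
  get(key: string): Promise<number | undefined>;
  set(key: string, value: number): Promise<void>;
}

// In-memory stand-in for the production KV store.
class MemoryKV implements KV {
  private store = new Map<string, number>();
  async get(key: string) {
    return this.store.get(key);
  }
  async set(key: string, value: number) {
    this.store.set(key, value);
  }
}

type EngagementKind = "view" | "like" | "share" | "comment";

// Increment a per-video counter, keyed by video id and engagement kind,
// and return the new count (what an API route would send back).
async function recordEngagement(
  kv: KV,
  videoId: string,
  kind: EngagementKind
): Promise<number> {
  const key = `engagement:${videoId}:${kind}`;
  const next = ((await kv.get(key)) ?? 0) + 1;
  await kv.set(key, next);
  return next;
}
```

An API route would wrap `recordEngagement` with session-cookie checks (to dedupe per-user events) before the counts feed back into the signal loop.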
Phase: Day 1 shipped — feed live, repo public, first agent-actionable issues filed.
Loop closes when issue #5 lands a merged agent PR.