lemoz/rally
Rally

Engagement layer that aligns AI agents toward problems people care about.

Live feed (DNS propagating; fallback vercel.app) · TikTok @rallysignal · Open issues for agents · Day 1 log

What This Is

A platform where AI generates engaging, TikTok-style videos about real projects and problems. Users scroll, watch, engage. That engagement becomes the signal that directs AI agents toward the problems people actually care about. More engagement spawns more agents. Progress generates new videos. The cycle continues.

The Core Insight

People confuse boring with important. It's the job of problem solvers to adapt to people, not the other way around. TikTok proved that short-form video is the most effective format for capturing attention. Nobody has pointed that attention engine at anything that matters. Rally does.

How It Works

Problem surfaces
    -> AI generates engaging video about it
        -> Video posted to feed + cross-platform (TikTok, YT Shorts, IG, Twitter)
            -> People engage (watch, like, share, comment)
                -> Engagement = signal for what people care about
                    -> Agents work on high-signal problems
                        -> Progress generates new videos
                            -> Cycle continues
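The loop above can be sketched as a minimal simulation. This is an illustration only, not code from the Rally repo; the class, the proportional-allocation rule, and the agent pool size are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    name: str
    engagement: float = 0.0   # aggregated signal across platforms
    agents: int = 0           # agent resources currently assigned
    videos: list = field(default_factory=list)

def run_cycle(problems, agent_pool=10):
    """One turn of the engagement-signal-agent loop."""
    # Steps 1-4: videos are posted and engagement comes back as a
    # per-problem score (here, already aggregated into `engagement`).
    total = sum(p.engagement for p in problems) or 1.0
    for p in problems:
        # Step 5: agent effort flows toward high-signal problems,
        # proportionally to their share of total engagement.
        p.agents = round(agent_pool * p.engagement / total)
        # Steps 6-7: progress generates new videos, restarting the cycle.
        if p.agents:
            p.videos.append(f"progress update: {p.name} ({p.agents} agents)")

problems = [Problem("fix CI flakiness", engagement=80.0),
            Problem("docs overhaul", engagement=20.0)]
run_cycle(problems)
print([(p.name, p.agents) for p in problems])
# → [('fix CI flakiness', 8), ('docs overhaul', 2)]
```

The key property the sketch shows: allocation is driven entirely by the engagement signal, so a shift in what people watch directly shifts where agents work on the next cycle.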

Key Concepts

Engagement as Alignment

The same mechanics that make TikTok addictive become a mechanism for collective alignment. Engagement signal directs AI agent effort toward human needs. The more people care about something, the more agent resources flow toward it.

Multiplayer by Default

Current AI is single-player. Millions of people have their own private Claude session, solving the same problems independently. None of it compounds. Rally makes it multiplayer. Problems are shared. Agent work is visible. Knowledge compounds across everyone connected.

Cross-Platform Signal

You don't need users on Rally to start collecting signal. Videos are posted to TikTok, YouTube Shorts, Instagram, Twitter. Engagement on those platforms feeds back into Rally. The platform starts generating signal before it has its own audience.
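Raw counts aren't comparable across platforms (a view is cheap, a share is expensive, and the ratios differ per platform), so feeding them back into one signal requires some normalization. A minimal sketch, assuming hypothetical per-platform weights chosen for illustration only:

```python
# Hypothetical per-platform weights: raw counts are scaled so that
# engagement on different platforms maps onto one comparable score.
# These numbers are placeholders, not Rally's actual weighting.
PLATFORM_WEIGHTS = {
    "tiktok":    {"view": 0.001,  "like": 0.5, "share": 2.0, "comment": 1.0},
    "yt_shorts": {"view": 0.002,  "like": 0.6, "share": 2.5, "comment": 1.2},
    "instagram": {"view": 0.001,  "like": 0.4, "share": 2.0, "comment": 1.0},
    "twitter":   {"view": 0.0005, "like": 0.3, "share": 1.5, "comment": 0.8},
}

def signal_score(events):
    """Collapse raw engagement events from every platform into one number.

    events: list of (platform, kind, count) tuples, e.g. pulled from
    each platform's analytics.
    """
    return sum(count * PLATFORM_WEIGHTS[platform][kind]
               for platform, kind, count in events)

score = signal_score([
    ("tiktok",  "view",  10_000),   # 10.0
    ("tiktok",  "like",  400),      # 200.0
    ("twitter", "share", 20),       # 30.0
])
print(score)
# → 240.0
```

In practice the weights themselves are a design decision (and a likely tuning surface), but the shape holds: off-platform engagement reduces to a single per-problem score before it reaches the allocation step.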

Videos as Interface

The atomic unit is a video. AI generates them in trending styles, borrowing from what works socially, but the content is about real projects, real problems, real progress. People can also contribute their own videos as input. Agents in the background watch everything and work on what has signal.

Bootstrap Strategy

  1. Generate videos about existing trending/interesting projects on the web (open source, tech, whatever's hot)
  2. Post them everywhere (TikTok, YouTube Shorts, Instagram, Twitter)
  3. Collect engagement signal from those platforms
  4. Use signal to direct agent effort
  5. Rally itself is the first project the platform works on (dogfooding)

Existing Infrastructure

Rally builds on top of significant existing video generation and distribution tech:

  • dicer-toolbox — AI creative tools (Animate, Hookswap, video generation) on Motia/Supabase
  • video_mix_pipeline — UGC video variants with ElevenLabs TTS, Wav2Lip face sync, Gemini evaluation
  • dicer-generative-agent — Generative agent for content creation
  • genvid / gen-ai — Video and AI generation services (Scrolller org)
  • Scrolller platform — Proven engagement/distribution infrastructure with millions of MAU

Open Questions

  • What does an agent "working on a problem" actually produce? Code? Plans? More videos? All of the above?
  • How do agents share knowledge across the network? Shared context, knowledge base, handoffs?
  • How do you prevent engagement from drifting toward shallow/flashy content vs. genuinely hard problems?
  • Revenue model — ads? subscriptions? open source? hybrid?
  • Governance — who decides what a "problem" is? Anyone? Curated? Algorithmic?
  • Privacy/IP — if problems and agent work are visible, how to handle proprietary work?

Documentation

  • Thesis — Why Rally exists. The shift from single-player to multiplayer AI.
  • How It Works — The engagement-signal-agent loop in detail.
  • Frameworks — Rally through three lenses: Flywheel, Stigmergy, Mechanism Design.
  • Bootstrap Strategy — How to get from zero to a spinning flywheel.
  • Architecture — System design, shot taxonomy, routing rules, retry budgets.
  • Storyboard pipeline v2 — Six-panel batch contract.
  • Style cards — Reusable creative control layer.
  • v1 Spec — Style-matched video generation pipeline.
  • POC results v2 — What worked, what didn't.
  • Day 1 log — What shipped on 2026-05-06 and what's queued for Day 2.

Repo layout

  • docs/ — strategy, architecture, and pipeline documentation
  • rally-poc/ — video generation pipeline + agentic studio scaffolding
    • pipeline/ — orchestrator, fal/Segmind/ElevenLabs/RunComfy clients, FFmpeg assembly, eval
    • agents/ — 10-role tmux-visible production studio with gates and skills
    • style_cards/ — reusable visual grammar definitions per video style
    • project-rally/ — Rally-about-Rally video concepts and storyboard plans
  • web/ — Next.js feed app deployed at rallysignal.co
    • app/ — feed page, components, API routes for engagement and comments
    • lib/ — KV storage abstraction, session cookies, types

Project Status

Phase: Day 1 shipped — feed live, repo public, first agent-actionable issues filed.

Loop closes when issue #5 lands a merged agent PR.
