DistributionAgent

Personal autonomous agent for growing a technical X audience by drafting in your own voice, from your own writing.

What this is

DistributionAgent ingests your past technical writing as a voice corpus, researches trending topics in AI / LLMs / agentic systems, drafts posts that sound like you, tracks how each post performs after you publish, and reflects weekly on what worked. The loop is research → draft → publish → track → reflect, with the reflection step feeding strategy back into the next round of drafts.

The "voice" isn't a fine-tune — it's retrieval-based. Each ingested post is embedded and stored in Qdrant alongside its engagement metrics. When the drafting agent works on a new topic, it pulls the top-performing, semantically similar past posts as few-shot examples, weighted by engagement. The model learns what worked for this author, not what reads like a generic technical tweet.
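The engagement-weighted retrieval step could be sketched roughly like this (illustrative only — the names and the weighting formula are assumptions, not the project's actual API). Given candidates already scored by semantic similarity from the vector store, re-rank them so the few-shot examples favour posts that both match the topic and performed well:

```python
# Sketch: re-rank vector-store hits by similarity * log-scaled engagement.
# `Candidate` and the log1p weighting are illustrative, not the repo's code.
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    similarity: float  # cosine similarity from the vector store, 0..1
    likes: int         # engagement metric stored alongside the embedding

def pick_few_shot(candidates: list[Candidate], k: int = 3) -> list[Candidate]:
    # log1p damps outliers so one viral post doesn't dominate every prompt
    def weight(c: Candidate) -> float:
        return c.similarity * math.log1p(c.likes)
    return sorted(candidates, key=weight, reverse=True)[:k]
```

A draft run would fetch, say, the top 20 nearest neighbours from Qdrant and keep the top 3 after re-weighting as the few-shot block in the prompt.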

The reflection step closes the loop: a weekly job scores recent posts by engagement, identifies which topics, structures, and times performed best, and writes a strategy note that subsequent draft runs read. Over time the agent gets sharper at writing posts the audience actually engages with — without the author writing more.
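In spirit, the weekly reflection job is a scoring pass over recent posts that emits a machine-readable strategy note. A minimal sketch, assuming posts carry `topic`, `likes`, and `replies` fields (all names here are hypothetical):

```python
# Sketch: group the week's posts by topic, average engagement, and emit a
# strategy note for the next draft run. Field names are assumptions.
from collections import defaultdict

def reflect(posts: list[dict]) -> dict:
    by_topic: dict[str, list[int]] = defaultdict(list)
    for p in posts:
        by_topic[p["topic"]].append(p["likes"] + p["replies"])
    # mean engagement per topic
    scores = {t: sum(v) / len(v) for t, v in by_topic.items()}
    best = max(scores, key=scores.get)
    return {"topic_scores": scores, "note": f"lean into '{best}' next week"}
```

The real agent also looks at structure and posting time, but the shape is the same: aggregate, rank, write a note the drafting agent reads on its next run.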

Why I'm building this

It's a learning project plus a personal tool — not a startup, not a product, no business model. I want hands-on reps with agentic systems (multi-step workflows, retrieval pipelines, vector stores, LLM orchestration, reflection loops), and rather than build another todo-app demo, I want to build something I'd actually use. Growing a technical audience on X is a real problem I have, so the dogfood loop is tight.

The deliverable is end-to-end fluency across the modern AI stack and a tool that helps me ship more distribution for the technical work I do. That's it.

Architecture

See ARCHITECTURE.md for the full breakdown — module responsibilities, end-to-end data flow, and rationale for each storage layer. Engineering principles (DI, no business logic in routes, pure services, type hints everywhere) live in CLAUDE.md.

flowchart LR
    User([User])
    API[FastAPI app]
    Postgres[(PostgreSQL<br/>posts, drafts, runs, metrics)]
    Qdrant[(Qdrant<br/>voice corpus embeddings)]
    Redis[(Redis<br/>cache + queues)]
    Research[Research agent]
    Drafter[Drafting agent]
    Reflector[Reflection agent]
    Gemini[Gemini API]
    Groq[Groq API]
    XAPI[X / Twitter]

    User -->|past writing| API
    API --> Postgres
    API --> Qdrant
    Research --> Postgres
    Research --> Redis
    Drafter --> Qdrant
    Drafter --> Postgres
    Drafter --> Gemini
    Drafter --> Groq
    User -->|publish| XAPI
    XAPI -->|engagement metrics| API
    Reflector --> Postgres
    Reflector --> Qdrant

Tech stack

  • Python 3.11, FastAPI — application + HTTP layer
  • PostgreSQL 16 + SQLAlchemy 2.x + Alembic — system of record
  • Qdrant — vector store for voice corpus embeddings
  • Redis — cache and lightweight queues
  • Gemini (text-embedding-004) for embeddings; Groq + Gemini for drafting/reflection inference
  • Docker Compose for local infra, pytest for tests

Status

Session 1 complete: scaffold + posts data model + corpus ingestion endpoint, verified working end-to-end (postgres / redis / qdrant up, migration applied, POST /api/v1/posts and GET /api/v1/posts returning real rows).

Roadmap

  • Session 1 — Scaffold + voice corpus data model + ingestion endpoint.
  • Session 2 — Embedding pipeline. Embed each post on ingestion; store vectors in Qdrant with engagement, topic, source, and length metadata.
  • Session 3 — Research agent. Pull from X (follows + lists), Hacker News, arXiv, RSS, dev.to; rank candidates by topic relevance; emit daily digest.
  • Session 4 — Drafting agent. Engagement-weighted retrieval of past posts as few-shot examples; generate 3 variants per topic; score on voice match + predicted engagement.
  • Session 5 — Manual review + publish flow. Approval-required UI/CLI; the human picks, edits, publishes. Nothing auto-posts in v1.
  • Session 6 — Performance tracker. Poll X API after each post; store engagement history (impressions, likes, replies, saves, shares) across multiple time windows.
  • Session 7 — Reflection agent. Weekly cron analysing what worked; updates the internal "what works for this user" model; emits next-week insights.
  • Session 8 — Digest delivery (Telegram or email), end-to-end polish, observability (logging, error tracking, LLM cost tracking).
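For a flavour of the Session 3 ranking step, here's a deliberately minimal keyword-overlap sketch — the real agent would more likely rank by embedding similarity, and every name here is illustrative:

```python
# Sketch: score each candidate headline by how many tracked topic keywords
# it mentions, and keep the top hits for the daily digest. Keyword overlap
# stands in for the embedding-based relevance the real agent would use.
def rank_candidates(headlines: list[str], topics: set[str], top_n: int = 5) -> list[str]:
    def score(h: str) -> int:
        return len(set(h.lower().split()) & topics)
    ranked = sorted(headlines, key=score, reverse=True)
    # drop anything with zero topical overlap entirely
    return [h for h in ranked if score(h) > 0][:top_n]
```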

Local setup

Prerequisites: Docker Desktop, Python 3.11+, a clone of this repo.

# 1. Boot infra (postgres on 5433, redis on 6379, qdrant on 6333)
docker compose up -d

# 2. Configure env
cp .env.example .env          # PowerShell: Copy-Item .env.example .env

# 3. Create venv + install deps
python -m venv .venv
. .venv/bin/activate          # PowerShell: . .venv\Scripts\Activate.ps1
pip install -e ".[dev]"

# 4. Apply migrations
alembic upgrade head

# 5. Run the API
uvicorn app.main:app --reload

Verify (in another shell):

curl http://localhost:8000/api/v1/health
# {"status":"ok"}

curl -X POST http://localhost:8000/api/v1/posts \
  -H "Content-Type: application/json" \
  -d '{"source":"tweet","content":"hello world","engagement_metrics":{"likes":1}}'

curl http://localhost:8000/api/v1/posts

Run the test suite:

pytest

Postgres port note: the container exposes postgres on host port 5433 (not the default 5432) to avoid clashes with a host-installed PostgreSQL. If you change it, update docker-compose.yml and DATABASE_URL in .env together.
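For reference, the two settings that must stay in sync look roughly like this (illustrative fragment — the actual compose file and connection string may differ):

```yaml
# docker-compose.yml (fragment, illustrative)
services:
  postgres:
    image: postgres:16
    ports:
      - "5433:5432"   # host 5433 -> container 5432; the host side is what
                      # the port in DATABASE_URL (.env) must match
```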

Built with Claude Code

This project is built collaboratively with Claude Code, Anthropic's CLI agent. Pairing with an LLM agent on an LLM-agent project is part of the point — I'm using the same class of tooling I'm trying to understand. The agent operates inside the constraints documented in CLAUDE.md, and the decisions that shaped the architecture are visible in commit history.
