CodeCanyon Comment Assistant is an automated AI pipeline built by Iqonic Design. It monitors public CodeCanyon product pages for un-replied customer comments, generates context-aware draft replies with a Retrieval-Augmented Generation (RAG) pipeline powered by LlamaIndex and Gemini 2.0 Flash, and delivers each draft to the developer as a formatted Telegram message for final review before posting.
```
Customer Comment on CodeCanyon
            ↓
Playwright Scraper (guest, no login)
            ↓
SQLite (duplicate check)
            ↓
LlamaIndex RAG (retrieves relevant context)
            ↓
Gemini 2.0 Flash (generates draft reply)
            ↓
Telegram Bot (notifies developer)
            ↓
Developer reviews → copies → pastes on CodeCanyon
```
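The flow above can be sketched as a simple orchestration loop. Everything below is illustrative: the function names (`find_unreplied_comments`, `is_new`, `retrieve_context`, `generate_draft`, `notify`) are stand-ins for the real `scraper/`, `rag/`, `ai/`, and `notifier/` modules, not the project's actual API.

```python
# Hypothetical orchestration of the pipeline stages shown above.
# Each stage is a stub; the real project wires these to its modules.

def find_unreplied_comments(product: dict) -> list[dict]:
    # Stand-in for the Playwright guest scraper.
    return [{"id": "c-1", "text": "How do I configure payments?"}]

def is_new(comment_id: str, seen: set[str]) -> bool:
    # Stand-in for the SQLite duplicate check.
    return comment_id not in seen

def retrieve_context(question: str) -> list[str]:
    # Stand-in for the LlamaIndex retriever (top-k chunks).
    return ["Payments are configured under Settings → Billing."]

def generate_draft(question: str, context: list[str]) -> str:
    # Stand-in for the Gemini 2.0 Flash call.
    return f"Draft reply based on {len(context)} context chunk(s)."

def notify(product_name: str, comment: dict, draft: str) -> None:
    # Stand-in for the Telegram notifier.
    print(f"[{product_name}] {comment['id']}: {draft}")

def run_pipeline(product: dict, seen: set[str]) -> int:
    """Process every un-replied comment once; return the number drafted."""
    drafted = 0
    for comment in find_unreplied_comments(product):
        if not is_new(comment["id"], seen):
            continue  # already handled — duplicate runs are safe
        context = retrieve_context(comment["text"])
        draft = generate_draft(comment["text"], context)
        notify(product["name"], comment, draft)
        seen.add(comment["id"])
        drafted += 1
    return drafted
```

Each run only drafts replies for comments it has not seen before, which is what makes the scheduled 6-hour runs idempotent.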
```
codecanyon-comment-assistant/
├── main.py                     # Entry point — CLI run or Flask serve mode
├── requirements.txt            # Python dependencies
├── ecosystem.config.js         # PM2 process config for production
├── .env                        # Secrets (git-ignored)
├── .env.example                # Template — copy to .env and fill in values
├── .gitignore
├── scraper/
│   ├── codecanyon_scraper.py   # Playwright guest scraper — finds un-replied comments
│   └── comment_store.py        # SQLite deduplication store — tracks comment status
├── rag/
│   ├── doc_crawler.py          # One-time setup — crawls docs site to .txt files
│   ├── qa_extractor.py         # One-time setup — scrapes past Q&A reply pairs
│   ├── indexer.py              # Builds and persists per-product LlamaIndex vector index
│   ├── retriever.py            # Queries the index and returns top-k context chunks
│   └── knowledge_base/         # Per-product knowledge base files
│       ├── <product-id>/       # One folder per product (matches products.json id)
│       └── past_qa/            # Shared past Q&A pairs (.txt)
├── ai/
│   └── draft_generator.py      # Gemini API call with system prompt — returns draft reply
├── notifier/
│   ├── telegram_notifier.py    # Sends HTML-formatted message via Telegram Bot API
│   └── mattermost_notifier.py  # Phase 2 alternative — Mattermost webhook
├── config/
│   └── products.json           # List of CodeCanyon products to monitor
└── n8n/
    └── workflow_export.json    # Import into n8n to schedule the pipeline every 6 hours
```
```bash
git clone <repo-url>
cd codecanyon-comment-assistant

python -m venv venv
# macOS / Linux
source venv/bin/activate
# Windows
venv\Scripts\activate

pip install -r requirements.txt
playwright install chromium

cp .env.example .env
# Edit .env and fill in GEMINI_API_KEY, TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID
```

```json
{
  "products": [
    {
      "id": "streamit-laravel",
      "name": "Streamit Laravel - OTT, Movies & Live Video Streaming Platform",
      "codecanyon_url": "https://codecanyon.net/item/streamit-laravel-movie-tv-show-video-streaming-platform-with-laravel/54895738/comments",
      "knowledge_base_path": "rag/knowledge_base/streamit-laravel/"
    }
  ]
}
```

Field reference:

| Field | Description |
|---|---|
| `id` | Unique slug used for the index subfolder and logging |
| `name` | Display name shown in Telegram notifications |
| `codecanyon_url` | Full URL to the CodeCanyon item page (where comments appear) |
| `knowledge_base_path` | Path to the folder containing docs/changelogs for this product |
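A minimal sketch of how `config/products.json` might be loaded and validated against the field reference above. The loader itself (`load_products`, `REQUIRED_FIELDS`) is illustrative, not the project's actual code:

```python
import json

# Fields every product entry must carry, per the field reference above.
REQUIRED_FIELDS = {"id", "name", "codecanyon_url", "knowledge_base_path"}

def load_products(path: str) -> list[dict]:
    """Read a products.json file and validate each product entry."""
    with open(path, encoding="utf-8") as f:
        config = json.load(f)
    products = config["products"]
    for product in products:
        missing = REQUIRED_FIELDS - product.keys()
        if missing:
            raise ValueError(
                f"product {product.get('id', '?')} is missing fields: {sorted(missing)}"
            )
    return products
```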
Crawls a product documentation site and saves the content as .txt files that the RAG indexer will ingest:

```bash
python rag/doc_crawler.py <base_url> <output_dir>
# Example:
python rag/doc_crawler.py https://docs.example.com rag/knowledge_base/streamit-flutter/
```

Scrapes previously replied CodeCanyon comment threads and saves them as Q&A pairs for the RAG knowledge base:

```bash
python rag/qa_extractor.py <codecanyon_comments_url>
# Example:
python rag/qa_extractor.py https://codecanyon.net/item/streamit/12345678
```

```bash
python main.py run
```

```bash
python main.py serve
# Starts on http://localhost:5001
# n8n calls GET http://localhost:5001/run
```

| Variable | Required | Description | Default |
|---|---|---|---|
| `GEMINI_API_KEY` | ✅ | Google Gemini API key | — |
| `GEMINI_MODEL` | | Gemini generation model name | `gemini-2.0-flash` |
| `GEMINI_EMBEDDING_MODEL` | | Gemini embedding model for RAG indexing | `models/gemini-embedding-001` |
| `TELEGRAM_BOT_TOKEN` | ✅ | Telegram bot token from @BotFather | — |
| `TELEGRAM_CHAT_ID` | ✅ | Telegram group/channel chat ID | — |
| `AUTHOR_USERNAME` | | CodeCanyon seller username — used to detect own replies | `iqonicdesign` |
| `DB_PATH` | | Path to SQLite database file | `comment_store.db` |
| `INDEX_PERSIST_DIR` | | Directory where LlamaIndex vector indices are persisted | `rag/index_store/` |
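The contract of serve mode is small: n8n issues `GET /run` and receives a JSON summary. The project's `main.py` uses Flask; the sketch below illustrates the same contract with only the standard library, and `run_pipeline` is a hypothetical stand-in for the real scrape → RAG → draft → notify run.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_pipeline() -> dict:
    # Hypothetical stand-in: the real run scrapes, drafts, and notifies,
    # then reports counts for the n8n summary.
    return {"new_comments": 0, "drafts_sent": 0}

class RunHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/run":
            self.send_error(404)
            return
        body = json.dumps(run_pipeline()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Same port the project's Flask server listens on.
    HTTPServer(("127.0.0.1", 5001), RunHandler).serve_forever()
```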
The pipeline is designed to make swapping the notification channel trivial. `notifier/mattermost_notifier.py` is already built and ready. To switch from Telegram to Mattermost, add `MATTERMOST_WEBHOOK_URL` to your `.env`, then in `main.py` replace the `send_to_telegram` import and call with `send_to_mattermost` from `notifier/mattermost_notifier.py`; the function signature is identical. No other changes are needed.
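The swap can be sketched as follows. The `(title, body)` signature and the `get_notifier` helper are hypothetical; the point is only that both notifiers expose the same signature, so `main.py` changes by a single import.

```python
import os

def send_to_telegram(title: str, body: str) -> dict:
    # The real version POSTs to the Telegram Bot API;
    # here we only build an illustrative payload.
    return {"channel": "telegram", "text": f"<b>{title}</b>\n{body}"}

def send_to_mattermost(title: str, body: str) -> dict:
    # The real version POSTs to MATTERMOST_WEBHOOK_URL; the payload
    # shape differs, but the function signature is identical.
    return {"channel": "mattermost", "text": f"**{title}**\n{body}"}

def get_notifier():
    """Pick the notifier by environment, mirroring the one-line swap in main.py."""
    if os.getenv("MATTERMOST_WEBHOOK_URL"):
        return send_to_mattermost
    return send_to_telegram
```

Because the two functions are call-compatible, everything downstream of the notifier stays untouched.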
```
pending → drafted → notified
```

Each `comment_id` is stored once; duplicate runs are safe.
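The lifecycle can be sketched with a small SQLite table. The schema and function names (`comments`, `record_if_new`, `mark_status`) are illustrative, not the actual internals of `comment_store.py`:

```python
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the dedup store with one row per comment."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS comments (
               comment_id TEXT PRIMARY KEY,
               status     TEXT NOT NULL DEFAULT 'pending'
           )"""
    )
    return conn

def record_if_new(conn: sqlite3.Connection, comment_id: str) -> bool:
    """Insert a comment exactly once; return False on a duplicate run."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO comments (comment_id) VALUES (?)", (comment_id,)
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows changed means we already saw it

def mark_status(conn: sqlite3.Connection, comment_id: str, status: str) -> None:
    """Advance a comment along pending → drafted → notified."""
    assert status in ("pending", "drafted", "notified")
    conn.execute(
        "UPDATE comments SET status = ? WHERE comment_id = ?", (status, comment_id)
    )
    conn.commit()
```

The `PRIMARY KEY` plus `INSERT OR IGNORE` is what makes re-running the pipeline a no-op for comments already processed.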
Open `ai/draft_generator.py` and edit the `SYSTEM_PROMPT` variable at the top of the file. This is intentionally separated from the generation logic so tone adjustments require no code changes.
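That separation can be sketched like this; the `SYSTEM_PROMPT` text and the `build_prompt` helper are illustrative, and the actual Gemini call is omitted:

```python
# Tone lives in SYSTEM_PROMPT; the assembly logic below never changes.
SYSTEM_PROMPT = (
    "You are a support agent for a CodeCanyon product. "
    "Reply helpfully and concisely to the customer's comment, "
    "using only the provided context."
)

def build_prompt(comment: str, context_chunks: list[str]) -> str:
    """Assemble the final prompt sent to the model from retrieved context."""
    context = "\n\n".join(context_chunks)
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Context:\n{context}\n\n"
        f"Comment:\n{comment}\n\n"
        f"Draft reply:"
    )
```

Editing only the `SYSTEM_PROMPT` string changes the voice of every draft without touching `build_prompt` or the API call.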
The workflow in `n8n/workflow_export.json` runs the pipeline every 6 hours automatically and sends a Telegram summary after each run.
```bash
npm install -g n8n
n8n start
```

n8n runs on http://localhost:5678 by default.
- Open n8n at http://localhost:5678
- Go to Settings → Import Workflow
- Select `n8n/workflow_export.json`
- Click Import
- Go to Settings → Environment Variables
- Add the following two variables:

| Variable | Value |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Your Telegram bot token |
| `TELEGRAM_CHAT_ID` | Your Telegram group/channel ID |

- Open the imported workflow
- Toggle the Active switch to ON
- The pipeline will now run automatically every 6 hours (IST timezone)
- Make sure the Flask server is running: `python main.py serve`
- In n8n, open the workflow
- Click Execute Workflow to trigger manually
- Check Telegram for the summary message
Both the Flask server and n8n must be running persistently for automated scheduling to work. Use PM2 to keep both alive across reboots.
```bash
npm install -g pm2

# Start both processes
pm2 start ecosystem.config.js

# Or start each process individually
pm2 start ecosystem.config.js --only codecanyon-assistant
pm2 start ecosystem.config.js --only n8n

pm2 save
```

On Windows:

```bash
npm install -g pm2-windows-startup
pm2-startup install
```

Note: `pm2 startup` is Linux/macOS only. On Windows use `pm2-windows-startup` as shown above.
```bash
pm2 list                          # Show all processes and status
pm2 logs codecanyon-assistant     # Tail Flask server logs
pm2 logs n8n                      # Tail n8n logs
pm2 restart codecanyon-assistant  # Restart the Flask server
pm2 stop n8n                      # Stop n8n
pm2 delete all                    # Remove all processes from PM2
```