CodeCanyon Comment Assistant

Overview

CodeCanyon Comment Assistant is an automated AI pipeline built by Iqonic Design. It monitors public CodeCanyon product pages for un-replied customer comments, generates context-aware draft replies with a Retrieval-Augmented Generation (RAG) pipeline powered by LlamaIndex and Gemini 2.0 Flash, and delivers each draft to the developer as a formatted Telegram message for final review before posting.


Architecture

Customer Comment on CodeCanyon
        ↓
Playwright Scraper (guest, no login)
        ↓
SQLite (duplicate check)
        ↓
LlamaIndex RAG (retrieves relevant context)
        ↓
Gemini 2.0 Flash (generates draft reply)
        ↓
Telegram Bot (notifies developer)
        ↓
Developer reviews → copies → pastes on CodeCanyon
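The stages above can be sketched as a single orchestration function. This is purely illustrative: the stages are passed in as callables, and the real module and function names in this repo may differ.

```python
# Illustrative sketch of the pipeline stages in the diagram above.
# Each stage is injected as a callable so the pieces stay swappable
# (e.g. Telegram vs. Mattermost notifiers).

def run_pipeline(scrape, is_new, retrieve_context, generate_draft, notify):
    """Wire the five stages together and return the drafts produced."""
    drafts = []
    for comment in scrape():                              # Playwright guest scraper
        if not is_new(comment["id"]):                     # SQLite duplicate check
            continue
        context = retrieve_context(comment["text"])       # LlamaIndex RAG retrieval
        draft = generate_draft(comment["text"], context)  # Gemini 2.0 Flash
        notify(comment, draft)                            # Telegram notification
        drafts.append(draft)
    return drafts
```

Because every stage is a parameter, unit tests can exercise the flow with plain lambdas and no network access.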

Project Structure

codecanyon-comment-assistant/
├── main.py                          # Entry point — CLI run or Flask serve mode
├── requirements.txt                 # Python dependencies
├── ecosystem.config.js              # PM2 process config for production
├── .env                             # Secrets (git-ignored)
├── .env.example                     # Template — copy to .env and fill in values
├── .gitignore
├── scraper/
│   ├── codecanyon_scraper.py         # Playwright guest scraper — finds un-replied comments
│   └── comment_store.py              # SQLite deduplication store — tracks comment status
├── rag/
│   ├── doc_crawler.py                # One-time setup — crawls docs site to .txt files
│   ├── qa_extractor.py               # One-time setup — scrapes past Q&A reply pairs
│   ├── indexer.py                    # Builds and persists per-product LlamaIndex vector index
│   ├── retriever.py                  # Queries the index and returns top-k context chunks
│   └── knowledge_base/               # Per-product knowledge base files
│       ├── <product-id>/             # One folder per product (matches products.json id)
│       └── past_qa/                  # Shared past Q&A pairs (.txt)
├── ai/
│   └── draft_generator.py            # Gemini API call with system prompt — returns draft reply
├── notifier/
│   ├── telegram_notifier.py          # Sends HTML-formatted message via Telegram Bot API
│   └── mattermost_notifier.py        # Phase 2 alternative — Mattermost webhook
├── config/
│   └── products.json                 # List of CodeCanyon products to monitor
└── n8n/
    └── workflow_export.json          # Import into n8n to schedule the pipeline every 6 hours

Setup Instructions

1. Clone the repo

git clone <repo-url>
cd codecanyon-comment-assistant

2. Create virtual environment

python -m venv venv

3. Activate the virtual environment

# macOS / Linux
source venv/bin/activate

# Windows
venv\Scripts\activate

4. Install dependencies

pip install -r requirements.txt

5. Install Playwright browsers

playwright install chromium

6. Copy .env.example to .env and fill in all values

cp .env.example .env
# Edit .env and fill in GEMINI_API_KEY, TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID

7. Add product config to config/products.json

{
  "products": [
    {
      "id": "streamit-laravel",
      "name": "Streamit Laravel - OTT, Movies & Live Video Streaming Platform",
      "codecanyon_url": "https://codecanyon.net/item/streamit-laravel-movie-tv-show-video-streaming-platform-with-laravel/54895738/comments",
      "knowledge_base_path": "rag/knowledge_base/streamit-laravel/"
    }
  ]
}
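A minimal loader for this file could look like the sketch below. The field names match the example above, but the loader itself and its validation rules are an assumption, not project code.

```python
# Hypothetical loader for config/products.json; not taken from this repo.
import json

REQUIRED_FIELDS = {"id", "name", "codecanyon_url", "knowledge_base_path"}

def load_products(path="config/products.json"):
    """Read the product list and fail fast on a missing field."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    products = data["products"]
    for product in products:
        missing = REQUIRED_FIELDS - product.keys()
        if missing:
            raise ValueError(f"product {product.get('id', '?')} is missing: {missing}")
    return products
```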

Field reference:

| Field | Description |
| --- | --- |
| `id` | Unique slug used for the index subfolder and logging |
| `name` | Display name shown in Telegram notifications |
| `codecanyon_url` | Full URL to the CodeCanyon item page (where comments appear) |
| `knowledge_base_path` | Path to the folder containing docs/changelogs for this product |

8. Run the doc crawler for each product (one-time setup)

Crawls a product documentation site and saves the content as .txt files that the RAG indexer will ingest:

python rag/doc_crawler.py <base_url> <output_dir>

# Example:
python rag/doc_crawler.py https://docs.example.com rag/knowledge_base/streamit-flutter/

9. Run the Q&A extractor for each product (one-time setup)

Scrapes previously replied CodeCanyon comment threads and saves them as Q&A pairs for the RAG knowledge base:

python rag/qa_extractor.py <codecanyon_comments_url>

# Example:
python rag/qa_extractor.py https://codecanyon.net/item/streamit/12345678

10. Run the pipeline

python main.py run

11. Or start the Flask server for n8n

python main.py serve
# Starts on http://localhost:5001
# n8n calls GET http://localhost:5001/run
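Serve mode boils down to a single GET endpoint that n8n polls. The sketch below shows the shape of it with a placeholder handler body; the real main.py wires this route to the full pipeline and its response fields may differ.

```python
# Minimal sketch of the serve mode: one GET /run endpoint for n8n.
# The handler body is a placeholder, not the project's implementation.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/run", methods=["GET"])
def run_endpoint():
    # In the real app this would execute the scrape -> RAG -> draft -> notify
    # flow and report per-run counts.
    summary = {"status": "ok", "new_comments": 0, "drafts_sent": 0}
    return jsonify(summary)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
```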

Environment Variables

| Variable | Required | Description | Default |
| --- | --- | --- | --- |
| `GEMINI_API_KEY` | Yes | Google Gemini API key | |
| `GEMINI_MODEL` | No | Gemini generation model name | `gemini-2.0-flash` |
| `GEMINI_EMBEDDING_MODEL` | No | Gemini embedding model for RAG indexing | `models/gemini-embedding-001` |
| `TELEGRAM_BOT_TOKEN` | Yes | Telegram bot token from @BotFather | |
| `TELEGRAM_CHAT_ID` | Yes | Telegram group/channel chat ID | |
| `AUTHOR_USERNAME` | No | CodeCanyon seller username, used to detect own replies | `iqonicdesign` |
| `DB_PATH` | No | Path to SQLite database file | `comment_store.db` |
| `INDEX_PERSIST_DIR` | No | Directory where LlamaIndex vector indices are persisted | `rag/index_store/` |
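The defaults in the table could be applied with a small settings helper like the one below. The variable names and defaults match the table; the helper itself is illustrative, not project code.

```python
# Illustrative settings loader; variable names and defaults follow the
# table above, but this function is not taken from the repo.
import os

def load_settings():
    required = ["GEMINI_API_KEY", "TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"missing required env vars: {missing}")
    return {
        "gemini_api_key": os.environ["GEMINI_API_KEY"],
        "gemini_model": os.environ.get("GEMINI_MODEL", "gemini-2.0-flash"),
        "embedding_model": os.environ.get(
            "GEMINI_EMBEDDING_MODEL", "models/gemini-embedding-001"
        ),
        "telegram_bot_token": os.environ["TELEGRAM_BOT_TOKEN"],
        "telegram_chat_id": os.environ["TELEGRAM_CHAT_ID"],
        "author_username": os.environ.get("AUTHOR_USERNAME", "iqonicdesign"),
        "db_path": os.environ.get("DB_PATH", "comment_store.db"),
        "index_persist_dir": os.environ.get("INDEX_PERSIST_DIR", "rag/index_store/"),
    }
```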

Phase 2 — Mattermost

The pipeline is designed to make swapping the notification channel trivial. notifier/mattermost_notifier.py is already built and ready. To switch from Telegram to Mattermost, add MATTERMOST_WEBHOOK_URL to your .env, then in main.py replace the send_to_telegram import and call with send_to_mattermost from notifier/mattermost_notifier.py — the function signature is identical. No other changes are needed.
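A drop-in Mattermost notifier could look like the sketch below. The payload shape follows Mattermost's incoming-webhook format (`{"text": ...}`), but the function signature `(product_name, comment, draft)` and the helper split are assumptions; the shipped `notifier/mattermost_notifier.py` may differ.

```python
# Hedged sketch of a Mattermost incoming-webhook notifier, assuming the
# shared notifier signature is (product_name, comment, draft).
import json
import os
import urllib.request

def build_payload(product_name, comment, draft):
    """Build the Mattermost incoming-webhook payload ({"text": ...})."""
    text = (
        f"#### New comment on {product_name}\n"
        f"**Customer:** {comment}\n\n"
        f"**Suggested reply:**\n{draft}"
    )
    return {"text": text}

def send_to_mattermost(product_name, comment, draft):
    """POST the draft to the webhook configured in MATTERMOST_WEBHOOK_URL."""
    webhook_url = os.environ["MATTERMOST_WEBHOOK_URL"]
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_payload(product_name, comment, draft)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```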


Status Flow (SQLite)

pending  →  drafted  →  notified

Each comment_id is stored once; duplicate runs are safe.
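The dedup guarantee can be sketched with a single table keyed by `comment_id`; the schema below is an assumption, and the real `comment_store.py` may differ.

```python
# Sketch of the SQLite status store described above (schema is assumed).
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS comments (
    comment_id TEXT PRIMARY KEY,
    status TEXT NOT NULL DEFAULT 'pending'  -- pending -> drafted -> notified
)
"""

def add_comment(conn, comment_id):
    # INSERT OR IGNORE makes repeat runs safe: an already-seen id is a no-op.
    cur = conn.execute(
        "INSERT OR IGNORE INTO comments (comment_id) VALUES (?)", (comment_id,)
    )
    return cur.rowcount == 1  # True only the first time this id is seen

def set_status(conn, comment_id, status):
    conn.execute(
        "UPDATE comments SET status = ? WHERE comment_id = ?", (status, comment_id)
    )
```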


Tuning the AI Tone

Open ai/draft_generator.py and edit the SYSTEM_PROMPT variable at the top of the file. This is intentionally separated from the generation logic so tone adjustments require no code changes.
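For orientation, a prompt of this kind might look like the example below. This is purely illustrative; the prompt actually shipped in `ai/draft_generator.py` will differ.

```python
# Illustrative example only; not the prompt shipped in ai/draft_generator.py.
SYSTEM_PROMPT = """You are a support agent for Iqonic Design replying to
CodeCanyon comments. Be friendly and concise, thank the customer, answer
using only the provided documentation context, and ask for any details you
are missing. Never promise refunds or release dates."""
```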


n8n Setup Instructions

The workflow in n8n/workflow_export.json runs the pipeline every 6 hours automatically and sends a Telegram summary after each run.

Install n8n (self-hosted)

npm install -g n8n
n8n start

n8n runs on http://localhost:5678 by default.

Import the Workflow

  1. Open n8n at http://localhost:5678
  2. Go to Settings → Import Workflow
  3. Select n8n/workflow_export.json
  4. Click Import

Set Environment Variables in n8n

  1. Go to Settings → Environment Variables
  2. Add the following two variables:
| Variable | Value |
| --- | --- |
| `TELEGRAM_BOT_TOKEN` | Your Telegram bot token |
| `TELEGRAM_CHAT_ID` | Your Telegram group/channel ID |

Activate the Workflow

  1. Open the imported workflow
  2. Toggle the Active switch to ON
  3. The pipeline will now run automatically every 6 hours (IST timezone)

Manual Test Run

  1. Make sure the Flask server is running: python main.py serve
  2. In n8n, open the workflow
  3. Click Execute Workflow to trigger manually
  4. Check Telegram for the summary message

Running in Production

Both the Flask server and n8n must be running persistently for automated scheduling to work. Use PM2 to keep both alive across reboots.

Install PM2

npm install -g pm2

Start both processes

pm2 start ecosystem.config.js

Start a single process

pm2 start ecosystem.config.js --only codecanyon-assistant
pm2 start ecosystem.config.js --only n8n

Save process list and enable auto-start on Windows boot

pm2 save
npm install -g pm2-windows-startup
pm2-startup install

Note: pm2 startup is Linux/macOS only. On Windows use pm2-windows-startup as shown above.

Useful PM2 commands

pm2 list                          # Show all processes and status
pm2 logs codecanyon-assistant     # Tail Flask server logs
pm2 logs n8n                      # Tail n8n logs
pm2 restart codecanyon-assistant  # Restart the Flask server
pm2 stop n8n                      # Stop n8n
pm2 delete all                    # Remove all processes from PM2
