Meta-GraphAPI-Python-LLama.cpp

🤖 Automating Facebook Comments and Posts with Local AI Models (On-Prem)

This project automates Facebook Page interactions — reading user comments, analyzing them with a locally hosted AI model (LLM) such as LLaMA.cpp, and posting intelligent replies — all on-premises, using the Meta Graph API.


🚀 Overview

Workflow:

  1. A user comments on your Facebook Page post.
  2. The Meta Graph API Webhook notifies your backend.
  3. Your server fetches the comment content (and media, if any).
  4. The local AI model (running via llama.cpp or llama-cpp-python) generates a contextual reply using your custom knowledge base.
  5. The reply is automatically posted back to the same Facebook thread.

This setup keeps all intelligence local — no external AI API calls — ensuring data privacy and control.
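Step 3 of the workflow, fetching the comment content, is a single Graph API read. A minimal stdlib sketch, assuming a `comment_id` and Page access token you supply (the `fields` list here is one reasonable choice, not the only one):

```python
import json
import urllib.parse
import urllib.request

GRAPH = "https://graph.facebook.com/v20.0"

def build_comment_url(comment_id: str, access_token: str) -> str:
    """Build the Graph API URL for reading a comment's text, author, and attachment."""
    query = urllib.parse.urlencode(
        {"fields": "message,from,attachment", "access_token": access_token}
    )
    return f"{GRAPH}/{comment_id}?{query}"

def fetch_comment(comment_id: str, access_token: str) -> dict:
    """Fetch the comment object as a dict (raises on HTTP errors)."""
    with urllib.request.urlopen(build_comment_url(comment_id, access_token), timeout=10) as resp:
        return json.load(resp)
```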

Screenshots:

  • Insights shown in the monitoring view
  • The reply generated by the on-prem local AI model using our knowledge base


🧠 Architecture

User → Facebook Page → Webhook (Flask/Python) → Graph API → Local AI Model (llama.cpp) → Reply via Graph API → Facebook Page

Components:

  • Webhook Listener: Receives and verifies Facebook events.
  • Graph API Client: Reads comments, posts, and publishes replies.
  • AI Layer: Local LLaMA model responding contextually using your KB.
  • Storage (Optional): SQLite / MongoDB for logs and comment history.
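The Webhook Listener's verification logic can be sketched with the standard library alone: Meta performs a GET handshake (`hub.mode`, `hub.verify_token`, `hub.challenge`) when you register the callback URL, and signs each POST body with an `X-Hub-Signature-256` header (HMAC-SHA256 over the raw payload, keyed by your App Secret). These two pure functions would be wired into your Flask routes:

```python
import hashlib
import hmac

def verify_subscription(args: dict, verify_token: str):
    """GET handshake: echo hub.challenge only when mode and token match, else None."""
    if args.get("hub.mode") == "subscribe" and args.get("hub.verify_token") == verify_token:
        return args.get("hub.challenge")
    return None

def verify_signature(payload: bytes, signature_header: str, app_secret: str) -> bool:
    """Check the X-Hub-Signature-256 header Meta sends with each POST event."""
    expected = "sha256=" + hmac.new(app_secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")
```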

⚙️ Setup Instructions

1. Create a Facebook App

  • Go to Meta for Developers
  • Create an app → Add Webhooks and Pages API.
  • In “App Dashboard”, subscribe to Page Feed events.
  • Add a Callback URL (e.g., https://your-domain.com/webhook).
  • Add a Verify Token (any secret string).

2. Get Page Access Token

  • Generate a Page Access Token with the following permissions:
    • pages_read_engagement
    • pages_manage_posts
    • pages_manage_engagement
  • For development, you can use the Graph API Explorer.
  • Subscribe the Page to your app:

    ```shell
    curl -X POST \
      "https://graph.facebook.com/v20.0/{PAGE_ID}/subscribed_apps" \
      -d "subscribed_fields=feed" \
      -d "access_token={PAGE_ACCESS_TOKEN}"
    ```
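With those permissions and the subscription in place, posting a reply (step 5 of the workflow) is a single Graph API call to the comment's `/comments` edge. A minimal stdlib sketch with placeholder IDs and token:

```python
import json
import urllib.parse
import urllib.request

GRAPH = "https://graph.facebook.com/v20.0"

def build_reply_request(comment_id: str, message: str, access_token: str):
    """Build the URL and form-encoded body for replying to a comment."""
    url = f"{GRAPH}/{comment_id}/comments"
    body = urllib.parse.urlencode({"message": message, "access_token": access_token}).encode()
    return url, body

def post_reply(comment_id: str, message: str, access_token: str) -> dict:
    """Publish a reply in the same thread; returns the new comment's id."""
    url, body = build_reply_request(comment_id, message, access_token)
    req = urllib.request.Request(url, data=body, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```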
3. Run a Local LLaMA Model

Install llama.cpp or llama-cpp-python.

Place your quantized model weights locally.

Start the inference server (adjust `--port` to whatever port you use):

```shell
python -m llama_cpp.server --model ./models/llama-2-7b.Q4_K_M.gguf --port 8000
```
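The `llama_cpp.server` process exposes an OpenAI-compatible API, so generating a reply is an HTTP POST to `/v1/chat/completions`. A minimal stdlib client sketch; the system-prompt wording and the `kb_context` parameter are illustrative assumptions, not part of this repo:

```python
import json
import urllib.request

def build_chat_payload(user_text: str, kb_context: str) -> dict:
    """Compose an OpenAI-style chat request grounding the model in local KB text."""
    return {
        "messages": [
            {"role": "system", "content": f"Answer using only this knowledge base:\n{kb_context}"},
            {"role": "user", "content": user_text},
        ],
        "max_tokens": 200,
        "temperature": 0.7,
    }

def generate_reply(user_text: str, kb_context: str,
                   base_url: str = "http://localhost:8000") -> str:
    """Call the local llama.cpp server and return the generated reply text."""
    payload = json.dumps(build_chat_payload(user_text, kb_context)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```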

4. Set Up the Python Environment

```shell
git clone https://github.com/yourusername/Meta-GraphAPI-Python-LLama.cpp.git
cd Meta-GraphAPI-Python-LLama.cpp
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
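The steps above involve several secrets (Page token, verify token, App Secret). One common pattern, assumed here rather than prescribed by this repo, is to keep them in a local `.env` file that the server loads at startup and that never gets committed:

```shell
# .env — placeholder values; supply your own and add this file to .gitignore
PAGE_ACCESS_TOKEN=your_page_access_token
VERIFY_TOKEN=your_webhook_verify_token
APP_SECRET=your_app_secret
LLAMA_SERVER_URL=http://localhost:8000
```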

Privacy & Security

  • No data leaves your server
  • All AI processing is on-prem
  • HTTPS + token verification enforced
  • No third-party analytics

Disclaimer

This tool interacts with Facebook’s platform. Ensure full compliance with:

  • Facebook Platform Terms
  • Community Standards

Use responsibly.

Authored by Adam Abinsha Vahab
