This project automates Facebook Page interactions: it reads user comments, analyzes them with a locally hosted large language model (LLM) served via llama.cpp, and posts intelligent replies, all on-premises through the Meta Graph API.
Workflow:
- A user comments on your Facebook Page post.
- The Meta Graph API Webhook notifies your backend.
- Your server fetches the comment content (and media, if any).
- The local AI model (running via `llama.cpp` or `llama-cpp-python`) generates a contextual reply using your custom knowledge base.
- The reply is automatically posted back to the same Facebook thread.
This setup keeps all intelligence local — no external AI API calls — ensuring data privacy and control.
Screenshot: insights shown in the monitoring dashboard.

Screenshot: the generated comment from the on-prem local AI model using our knowledge base.

User → Facebook Page → Webhook (Flask/Python) → Graph API → Local AI Model (llama.cpp) → Reply via Graph API → Facebook Page
Components (wired together in the sketch after this list):
- Webhook Listener: Receives and verifies Facebook events.
- Graph API Client: Reads comments, posts, and publishes replies.
- AI Layer: Local LLaMA model responding contextually using your KB.
- Storage (Optional): SQLite / MongoDB for logs and comment history.
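A minimal sketch of how these components might fit together with Flask; the route path, helper names (`generate_reply`, `post_reply`), and environment variable names are illustrative assumptions, not this repository's actual code, and `generate_reply` is filled in by the AI-layer sketch further down:

```python
# Minimal wiring sketch (Flask). Helper and env var names are assumptions.
import os

import requests
from flask import Flask, request

app = Flask(__name__)

VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]            # secret set in the App Dashboard
PAGE_ACCESS_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]  # Page token (permissions listed below)
PAGE_ID = os.environ["PAGE_ID"]                      # used to skip the Page's own comments
GRAPH_URL = "https://graph.facebook.com/v20.0"


@app.route("/webhook", methods=["GET"])
def verify():
    # Meta calls this once with a challenge when you register the callback URL.
    if (request.args.get("hub.mode") == "subscribe"
            and request.args.get("hub.verify_token") == VERIFY_TOKEN):
        return request.args.get("hub.challenge"), 200
    return "Forbidden", 403


@app.route("/webhook", methods=["POST"])
def handle_event():
    payload = request.get_json(force=True)
    for entry in payload.get("entry", []):
        for change in entry.get("changes", []):
            value = change.get("value", {})
            # React only to newly added comments, and never to the Page's own
            # replies (otherwise the bot would answer itself forever).
            if (value.get("item") == "comment"
                    and value.get("verb") == "add"
                    and value.get("from", {}).get("id") != PAGE_ID):
                reply = generate_reply(value.get("message", ""))  # AI layer, sketched below
                post_reply(value["comment_id"], reply)
    return "OK", 200


def post_reply(comment_id: str, message: str) -> None:
    # Commenting on a comment publishes the reply into the same thread.
    resp = requests.post(
        f"{GRAPH_URL}/{comment_id}/comments",
        data={"message": message, "access_token": PAGE_ACCESS_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
```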
- Go to Meta for Developers
- Create an app → Add Webhooks and Pages API.
- In “App Dashboard”, subscribe to Page Feed events.
- Add a Callback URL (e.g., https://your-domain.com/webhook).
- Add a Verify Token (any secret string).
- Generate a Page Access Token with the following permissions:
  - `pages_read_engagement`
  - `pages_manage_posts`
  - `pages_manage_engagement`
- For development, you can use the Graph API Explorer.
- Subscribe the page to your app (a Python equivalent follows this list):

```bash
curl -X POST \
  "https://graph.facebook.com/v20.0/{PAGE_ID}/subscribed_apps" \
  -d "subscribed_fields=feed" \
  -d "access_token={PAGE_ACCESS_TOKEN}"
```
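For reference, the same subscription call from Python using `requests` (the Page ID and token values are placeholders):

```python
import requests

PAGE_ID = "123456789"          # your Page ID (placeholder)
PAGE_ACCESS_TOKEN = "EAAB..."  # your Page access token (placeholder)

# Subscribe the Page to the app, delivering "feed" events to the webhook.
resp = requests.post(
    f"https://graph.facebook.com/v20.0/{PAGE_ID}/subscribed_apps",
    data={"subscribed_fields": "feed", "access_token": PAGE_ACCESS_TOKEN},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # a successful call returns {"success": true}
```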
- Run Local LLaMA Model
Install llama.cpp or llama-cpp-python.
Place your quantized model weights locally.
Start the inference server (port 8000 here; use any port you like, as long as your client configuration matches):

```bash
python -m llama_cpp.server --model ./models/llama-2-7b.Q4_K_M.gguf --port 8000
```
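The server exposes an OpenAI-compatible HTTP API, so the AI layer can be a plain HTTP call. A minimal sketch, assuming the server above runs on port 8000 and that `generate_reply` is the helper name the webhook sketch calls (both are assumptions, as is the system prompt):

```python
import requests

LLAMA_SERVER = "http://localhost:8000"  # match the --port the server was started with


def generate_reply(comment_text: str) -> str:
    """Ask the local model for a short, contextual reply to a Page comment."""
    resp = requests.post(
        f"{LLAMA_SERVER}/v1/chat/completions",
        json={
            "messages": [
                {"role": "system",
                 "content": "You are a helpful assistant replying to Facebook Page "
                            "comments using the Page's knowledge base."},
                {"role": "user", "content": comment_text},
            ],
            "max_tokens": 256,
            "temperature": 0.7,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()
```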
- Set Up the Python Environment

```bash
git clone https://github.com/yourusername/Meta-GraphAPI-Python-LLama.cpp.git
cd Meta-GraphAPI-Python-LLama.cpp
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
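Before starting the app, the sketches above expect a few environment variables; a small startup check could look like this (the variable names are the illustrative ones used in the sketches):

```python
import os

# Environment variables assumed by the sketches above (names are illustrative).
REQUIRED = ["VERIFY_TOKEN", "PAGE_ACCESS_TOKEN", "PAGE_ID"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing required environment variables: {', '.join(missing)}")
```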
Privacy & Security
- No data leaves your server
- All AI processing is on-prem
- HTTPS + token verification enforced
- No third-party analytics
Disclaimer
This tool interacts with Facebook's platform. Ensure full compliance with:
- Facebook Platform Terms
- Community Standards

Use responsibly.
Authored by Adam Abinsha Vahab
