This project connects:
- a local LLM (llama.cpp)
- your Statix blog CLI
- a Python automation script
👉 Result: fully automated article generation + publishing.
$ stx set-credentials --url URL --password TOKEN
$ stx subject add "AutoBlog - Auto AI generated articles"
List subjects:
$ stx subjects
👉 Note the ID; you'll pass it to the script with the --subject-id flag.
To build llama.cpp from source, run:
$ sudo apt update
$ sudo apt install -y build-essential cmake
$ git clone https://github.com/ggml-org/llama.cpp
$ cd llama.cpp
$ cmake -B build -DGGML_NATIVE=ON
$ cmake --build build -j
Download a GGUF model (example: LLaMA 3.1 8B Q4):
👉 https://huggingface.co/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
Recommended file:
Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
Place it in a folder, e.g.:
~/models/
./build/bin/llama-server \
-m ~/models/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
-c 1024 \
-t 8
- -c = context size → critical for RAM usage
  Recommended: 1024 → safe (16 GB RAM), 2048 → higher quality but heavier
- -t = number of CPU threads
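Once the server is up, the script talks to it over plain HTTP. A minimal sketch of calling the llama.cpp /completion endpoint with the requests library (the field names follow llama.cpp's server API; the exact parameters autoBlog.py uses are an assumption):

```python
import requests

def build_payload(prompt: str, n_predict: int) -> dict:
    """Request body for llama.cpp's /completion endpoint."""
    return {"prompt": prompt, "n_predict": n_predict, "temperature": 0.7}

def generate_article(prompt: str,
                     llm_url: str = "http://127.0.0.1:8080/completion",
                     n_predict: int = 1024,
                     timeout: int = 6000) -> str:
    """POST a prompt to the local llama.cpp server, return the generated text."""
    resp = requests.post(llm_url, json=build_payload(prompt, n_predict), timeout=timeout)
    resp.raise_for_status()
    return resp.json()["content"]
```

Note the generous timeout: on CPU-only hardware a full article can take several minutes to generate.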
$ python3 -m venv menv
$ source menv/bin/activate
(menv) $ pip install requests
Project structure:
.
├── articles
├── autoBlog.py
├── prompt.txt
├── topics.txt
└── models
The prompt is stored in:
prompt.txt
The randomly chosen topics are in:
topics.txt
Example:
(menv) $ python3 autoBlog.py \
--is_public true \
--subject-id 24 \
--timeout 6000 \
--llm-url http://127.0.0.1:8080/completion \
--topics-file topics.txt \
--prompt-file prompt.txt
After running this command, a Statix metadata file will be created:
.statix_articles.json
That is the only side effect.
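The schema of .statix_articles.json isn't shown here; the following is an illustrative sketch of how such a metadata file can be maintained (the field names are assumptions, not the script's actual schema):

```python
import json
from pathlib import Path

def record_article(slug: str, title: str,
                   state_file: str = ".statix_articles.json") -> None:
    """Append one published-article record to the JSON metadata file,
    creating the file on first use."""
    path = Path(state_file)
    records = json.loads(path.read_text()) if path.exists() else []
    records.append({"slug": slug, "title": title})
    path.write_text(json.dumps(records, indent=2))
```

Keeping this list around is what lets a later run skip or deduplicate topics that were already published.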
Loop:
- Pick a topic
- Generate article via local LLM
- Extract title
- Save markdown file
- Create nickname
- Publish via Statix
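The first and last steps of that loop can be sketched as follows (the exact stx arguments are an assumption based on the pipeline shown below, and the runner parameter exists only to make the sketch testable):

```python
import random
import subprocess
from pathlib import Path

def pick_topic(topics_file: str = "topics.txt") -> str:
    """Pick one non-empty line at random from the topics file."""
    topics = [t.strip() for t in Path(topics_file).read_text().splitlines() if t.strip()]
    return random.choice(topics)

def publish(slug: str, runner=subprocess.run) -> None:
    """Publish a saved article via the Statix CLI."""
    runner(["stx", "nickname", "create", slug], check=True)
    runner(["stx", "publish", f"articles/{slug}.md"], check=True)
```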
- Articles are generated in Markdown
- Title (# Title) is extracted and removed from the content
- Body is saved in:
articles/<slug>.md
LLM → Markdown
↓
extract title
↓
save file
↓
stx nickname create
↓
stx publish
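The "extract title" and slug steps of this pipeline can be sketched in Python (the slug format is an assumption; Statix may derive nicknames differently):

```python
import re

def extract_title(markdown: str) -> tuple[str, str]:
    """Split a generated article into (title, body): the first '# Title'
    line becomes the title and is removed from the body."""
    title = "Untitled"
    body_lines = []
    for line in markdown.strip().splitlines():
        if title == "Untitled" and line.startswith("# "):
            title = line[2:].strip()
        else:
            body_lines.append(line)
    return title, "\n".join(body_lines).strip()

def slugify(title: str) -> str:
    """Turn a title into a filename-safe slug for articles/<slug>.md."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```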
- No external APIs required
- Fully local (LLM + CLI)
- Performance depends on CPU + RAM
- Quality depends heavily on prompt
- Better prompt engineering
- Topic generation via LLM
- Deduplication / quality filtering
- Scheduling (cron)
No SaaS. No API keys. No black boxes.
Just local compute, full control, and automation.
