Conduct AI voice debates using two LLMs running entirely on a Slackware64 Current system. No external services, no cloud, no APIs — everything is local, private, and open source.
- A modern CPU + 64 GB RAM, or an NVIDIA GPU with CUDA 12+
- Create the project folder
mkdir ~/AI-DEBATES
- Change into the project folder
cd ~/AI-DEBATES || exit 1
- Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp
- Change in to start building
cd llama.cpp || exit
Before building, read: `https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md`
cmake -B build -DGGML_VULKAN=1
cmake --build build --config Release
Download your models from
https://huggingface.co/ (I use lm-studio, so mine live in the lm-studio models path...). In any case, the Python scripts here are ready for gemma-3n-E4B-it-text-GGUF and Qwen3-Coder-30B-A3B-Instruct-GGUF.
- Change to the project folder and clone this repo
cd ~/AI-DEBATES && git clone https://github.com/rizitis/AI-Philosophers.git
- Change into it
cd AI-Philosophers || exit
- Install kokoro (quote the version spec so the shell doesn't treat >= as a redirect)
pip install -q "kokoro>=0.9.2" soundfile
- To install the voices for kokoro
python fones.py
- Install the remaining requirements
pip install numpy scipy pydub requests
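To sanity-check the Kokoro install before going further, here is a minimal sketch following the upstream Kokoro usage pattern. The voice af_heart and the output filenames are my assumptions, not something the project's scripts require:

```python
# kokoro_check.py - minimal smoke test; NOT part of AI-Philosophers
import soundfile as sf
from kokoro import KPipeline

pipeline = KPipeline(lang_code='a')  # 'a' = American English
text = "Is randomness a form of intelligence?"

# The pipeline yields (graphemes, phonemes, audio) chunks; Kokoro outputs 24 kHz audio.
for i, (gs, ps, audio) in enumerate(pipeline(text, voice='af_heart')):
    sf.write(f'check_{i}.wav', audio, 24000)

print("Kokoro OK - listen to check_0.wav")
```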
Every debate must have a topic, a theme, in other words a question. The scripts in AI-Philosophers are prepared for the question:
Is randomness a form of intelligence?
Use it exactly as is for the very first time; once you understand and get used to how the scripts work, make your own topics (your own debates).
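To get a feel for what the debate scripts do under the hood, here is a minimal sketch that sends this topic to one of the local llama-server instances you will start in STEPS below, over llama.cpp's OpenAI-compatible API. It is an illustration only, not the actual debate_ai2.py logic:

```python
# ask_one.py - send the topic to one local llama-server; an illustration, not debate_ai2.py
import requests

TOPIC = "Is randomness a form of intelligence?"

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # one of the two servers started in STEPS
    json={
        "messages": [
            {"role": "system", "content": "You are a debater. State your position in 3 sentences."},
            {"role": "user", "content": TOPIC},
        ],
        "temperature": 0.7,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```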
There are 3 keys to running this project:
- Hardware that can support the heavy job of running LLMs locally (llama.cpp must be built for your system specs)
- Understanding how the scripts work...
- Running them manually, in the correct order
STEPS
- Assuming the 3 keys are met on your system, you must now start the llama servers. To do this, change into the llama.cpp bin folder. If you used exactly the build command above, that folder is
~/AI-DEBATES/llama.cpp/build/bin
Open a terminal (Konsole) in this folder and split it in half (left/right or top/bottom).
In each terminal, start one server:
GGML_VULKAN_DISABLE=1 ./llama-server -m ~/.cache/lm-studio/models/lmstudio-community/gemma-3n-E4B-it-text-GGUF/gemma-3n-E4B-it-Q4_K_M.gguf --port 8081 -c 32768 -t 20 --temp 0.7
GGML_VULKAN_DISABLE=1 ./llama-server -m ~/.cache/lm-studio/models/lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf --port 8080 -c 32768 -t 20 --temp 0.7
- `-c 32768` is very high (the maximum); the suggested minimum is 8192.
- `-t 20` is the number of CPU threads; edit it for your needs.
- Don't touch the `--port`s, else the scripts need modification.
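Before launching the debate it is worth confirming that both servers answer. A tiny check against llama-server's /health endpoint (ports as above):

```python
# check_servers.py - confirm both llama-server instances answer before starting the debate
import requests

for port in (8080, 8081):
    try:
        r = requests.get(f"http://127.0.0.1:{port}/health", timeout=5)
        print(f"port {port}: HTTP {r.status_code} {r.text.strip()}")
    except requests.ConnectionError:
        print(f"port {port}: not reachable - is the server running?")
```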
- If you don't have lm-studio installed, just create a fake dir in your home (I suggest installing it; it exists in SBo)
mkdir -p ~/.lmstudio/conversations/
In this folder there will be a file auto_conversation.json, which stores the debate dialogs.
Every time you create a new debate, that JSON file must be deleted or moved out of the way; a helper for this is sketched below.
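A small helper can do that cleanup, and archiving beats deleting. This sketch only moves the file aside, so it makes no assumptions about the JSON's internal schema:

```python
# new_debate.py - archive the previous debate log before starting a new one
import time
from pathlib import Path

log = Path.home() / ".lmstudio" / "conversations" / "auto_conversation.json"

if log.exists():
    backup = log.with_name(f"debate_{int(time.time())}.json")
    log.rename(backup)  # keep the old dialog instead of deleting it outright
    print(f"previous debate archived as {backup.name}")
else:
    print("no previous debate log found - ready for a fresh debate")
```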
`python debate_ai2.py` will start the debate
Be careful: close all GUI apps and watch your system. This operation is very heavy and could stress or even damage your hardware.
I suggest running `watch -n 2 'sensors | grep "^Core"'` and `top` in separate terminals, and if needed, kill everything before it's too late...
- When debate_ai2.py finishes, the LLMs have created a 6-round debate.
To hear it, run `2tts_watcher.py`
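If you are curious how such a watcher can work, here is a minimal polling sketch. It only watches the log file's modification time; it is not the actual 2tts_watcher.py:

```python
# watch_sketch.py - poll the debate log for changes; an illustration, not 2tts_watcher.py
import time
from pathlib import Path

log = Path.home() / ".lmstudio" / "conversations" / "auto_conversation.json"
last_mtime = 0.0

while True:
    if log.exists():
        mtime = log.stat().st_mtime
        if mtime > last_mtime:
            last_mtime = mtime
            print("debate log updated - new text ready for TTS")
            # a real watcher would parse the new turns here and feed them to Kokoro
    time.sleep(2)  # poll every 2 seconds; cheap compared to the LLMs
```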
- You can change the voices in kokoro
ls -l ~/.cache/huggingface/hub/models--hexgrad--Kokoro-82M/snapshots/*/voices/
- Read 000.py and 999.py; these scripts create TTS for your needs: an intro.wav and an outro.wav.
- The theme scripts create the music intro and outro ;) (a home-made alternative is sketched below)
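The numpy/scipy install from earlier is enough to roll your own jingle if you want to replace the theme scripts' output. A hedged sketch, assuming a 3-second sweep is all you need (this is not how the theme scripts work):

```python
# make_jingle.py - a 3-second sine sweep as intro.wav; NOT the project's theme scripts
import numpy as np
from scipy.io import wavfile

RATE = 24000  # match Kokoro's 24 kHz output so later merging is painless
t = np.linspace(0, 3.0, 3 * RATE, endpoint=False)
freq = np.linspace(220, 440, t.size)             # sweep from A3 up to A4
phase = 2 * np.pi * np.cumsum(freq) / RATE       # integrate frequency for a clean sweep
tone = 0.3 * np.sin(phase)                       # keep the amplitude modest
fade = np.minimum(1.0, np.minimum(t / 0.2, (3.0 - t) / 0.5))  # fade in/out, no clicks
wavfile.write("intro.wav", RATE, (tone * fade * 32767).astype(np.int16))
```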
- Finally, you can merge all the .wav files and create a podcast!
If you want to merge all the wav files, just run `merge_podcast.py`
This is how I create my podcast ;)
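For the curious, a minimal pydub sketch of what such a merge involves. The name-order globbing and the output name are my assumptions; merge_podcast.py may do it differently:

```python
# merge_sketch.py - concatenate wav files in name order; see merge_podcast.py for the real thing
from pathlib import Path
from pydub import AudioSegment  # needs ffmpeg on the system

podcast = AudioSegment.empty()
for wav in sorted(Path(".").glob("*.wav")):  # assumes name order == playback order
    podcast += AudioSegment.from_wav(str(wav))

podcast.export("podcast.mp3", format="mp3", bitrate="192k")
print(f"podcast length: {len(podcast) / 1000:.1f}s")
```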
This AI podcast was created using:
- Models: Qwen3, Gemma
- TTS: Kokoro-82M by hexgrad
- Voice models: af_bella, am_adam, af_heart, etc.
- Tools: llama.cpp, Hugging Face, PyTorch, ffmpeg
- All processing done locally on Slackware64 Current.
Inspired by open-source, curiosity, and the future of AI.
This software comes with NO WARRANTY, express or implied.
Use at your own risk.
This program is powerful and may interact deeply with your operating system and hardware. It has the potential to cause:
- System instability or crashes
- Hardware stress or overheating
- Data loss or corruption
- Unauthorized changes to system settings
- High resource usage (CPU, GPU, RAM, disk)
This tool is intended for:
- Advanced users
- Developers
- System testers
- Researchers
❌ Not recommended for beginners or mission-critical systems.
By using this software, you acknowledge that:
- You understand the risks involved.
- You are using this software voluntarily and responsibly.
- You have backed up your important data before running it.
- You will test it first in a safe environment (e.g., virtual machine or non-production device).
- The author(s) are not liable for any damage to your hardware, software, data, or system.
💡 You assume full responsibility for any consequences resulting from the use or misuse of this software.
The creator(s) of this project are not responsible for:
- System failures
- Hardware damage
- Data loss
- Security vulnerabilities introduced
- Any indirect or consequential losses
✅ Always:
- Read the documentation first
- Run in a sandbox or VM if unsure
- Monitor system temperature and performance
- Keep backups of critical data
- Review source code (if open) before execution
This software is provided under the MIT License but includes no liability for damages. See the full license for details.
🔔 Final Note:
Just because it runs doesn't mean it's safe.
If you don't know what this program does — do not run it.
