Zord Coder v1 - The ultimate multilingual coding assistant optimized for Android Termux. Built with model merging techniques, nanoGPT philosophy, and blazing-fast GGUF inference.
- Blazing Fast - Optimized GGUF quantization for mobile devices
- Multi-Language - Python, JavaScript, TypeScript, C++, Rust, Go, Java, Bash, and more
- Interactive CLI - Beautiful terminal interface with syntax highlighting
- Streaming Output - Real-time token generation
- Reasoning Mode - Chain-of-thought like DeepSeek-R1
- Termux Ready - Optimized for Android devices
git clone https://github.com/sajadkoder/zordcoder.git
cd zordcoder
pip install -r requirements.txt
python scripts/zord_cli.py --interactive
Note: First run will download the model (~833MB) automatically from HuggingFace.
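The first-run download can be sketched as a simple check-then-fetch. This is an illustrative sketch only, not Zord Coder's actual download code; the path and URL parameter are hypothetical:

```python
import os
import urllib.request

# Hypothetical sketch of the first-run model download; the real zord_cli.py
# internals may differ. MODEL_PATH mirrors the filename used elsewhere in
# this README.
MODEL_PATH = "models/zordcoder-v1-q4_k_m.gguf"

def ensure_model(path=MODEL_PATH, url=None):
    """Download the GGUF file only if it is not already on disk."""
    if os.path.exists(path):
        return path  # cached from a previous run, no download needed
    os.makedirs(os.path.dirname(path), exist_ok=True)
    urllib.request.urlretrieve(url, path)  # ~833MB, one-time cost
    return path
```

Because the file is cached, only the very first run pays the download cost; subsequent runs start immediately.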
# Run the setup script
bash scripts/setup_termux.sh
# Or manual installation
pkg update && pkg install python git
pip install -r requirements.txt
# Download model and run
python3 scripts/zord_cli.py --interactive

# One-shot prompt
python3 scripts/zord_cli.py "Write a Python hello world"

# One-shot prompt with generation options
python3 scripts/zord_cli.py "Explain recursion" \
  --temperature 0.2 \
  --max-tokens 512 \
  --context 2048

| Command | Description |
|---|---|
| `clear` | Clear conversation history |
| `reasoning` | Toggle reasoning mode |
| `stream` | Toggle streaming mode |
| `metrics` | Show performance metrics |
| `history` | Show conversation history |
| `language <lang>` | Set preferred language |
| `help` | Show help message |
| `exit` | Exit the program |
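A command loop like the one behind this table can be sketched as a small state-holding dispatcher. This is a hypothetical illustration of how the interactive commands might toggle session state, not the actual `zord_cli.py` implementation:

```python
# Hypothetical sketch of the interactive command dispatcher; the real
# zord_cli.py internals may differ.
class Session:
    def __init__(self):
        self.history = []
        self.reasoning = False
        self.streaming = True
        self.language = None

    def handle(self, line):
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "clear":
            self.history.clear()
            return "History cleared"
        if cmd == "reasoning":
            self.reasoning = not self.reasoning
            return f"Reasoning mode: {'on' if self.reasoning else 'off'}"
        if cmd == "stream":
            self.streaming = not self.streaming
            return f"Streaming: {'on' if self.streaming else 'off'}"
        if cmd == "language":
            self.language = arg or None
            return f"Preferred language: {self.language}"
        return None  # anything else is sent to the model as a prompt
```

Returning `None` for unrecognized input lets ordinary prompts fall through to the inference engine.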
zordcoder/
├── config/
│ └── merge_config.yaml # Model merging configuration
├── docs/
│ ├── MODEL_SELECTION.md # Model selection strategy
│ ├── CONVERSION.md # GGUF conversion guide
│ └── OPTIMIZATION.md # Performance optimization
├── scripts/
│ ├── setup_termux.sh # Termux installation script
│ ├── train_zord.py # nanoGPT training script
│ └── zord_cli.py # CLI interface
├── src/
│ └── zord_core.py # Core inference engine
├── .gitignore
├── README.md
└── requirements.txt
Minimum:
- Android 7.0+
- 3GB RAM
- 2GB Storage

Recommended:
- Android 10+
- 6GB+ RAM
- 4GB+ Storage
Zord Coder v1 uses state-of-the-art model merging techniques:
- TIES (TrIm, Elect Sign & Merge)
- DARE (Drop And REscale)
- SLERP interpolation
See MODEL_SELECTION.md for details.
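Of the three techniques, SLERP is simple enough to show directly. Below is a minimal pure-Python sketch of spherical linear interpolation between two flattened weight vectors; this is the textbook formula for illustration, not Zord Coder's actual merge code (that is driven by `config/merge_config.yaml` via MergeKit):

```python
import math

def slerp(a, b, t, eps=1e-8):
    """Spherically interpolate between flat weight vectors a and b at t in [0, 1]."""
    na = math.sqrt(sum(x * x for x in a)) + eps
    nb = math.sqrt(sum(x * x for x in b)) + eps
    # Angle between the two weight directions
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b)) / (na * nb)))
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel weights: plain lerp is numerically safer
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    wa = math.sin((1 - t) * theta) / s
    wb = math.sin(t * theta) / s
    return [wa * x + wb * y for x, y in zip(a, b)]
```

Unlike plain linear interpolation, SLERP follows the arc between the two weight directions, which tends to preserve the norm of the merged weights.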
| Metric | Value |
|---|---|
| Tokens/sec | 15-30 (Q4_K_M) |
| Cold Start | ~3 seconds |
| Memory Usage | ~2GB (Q4_K_M) |
| Context Length | 2048 tokens |
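These numbers translate into a quick latency estimate. Assuming the table's throughput and cold-start figures, a full-length reply costs roughly:

```python
# Back-of-envelope latency from the table above: cold start plus
# generation time at the quoted tokens/sec range (assumption: throughput
# stays roughly constant over the reply).
def reply_seconds(max_tokens, tok_per_sec, cold_start=3.0):
    return cold_start + max_tokens / tok_per_sec

worst = reply_seconds(512, 15)  # ~37 s at the low end of Q4_K_M throughput
best = reply_seconds(512, 30)   # ~20 s at the high end
```

So a 512-token answer lands in roughly 20 to 37 seconds on a device at these speeds.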
# Check model file
ls -la models/
# Verify file
file models/zordcoder-v1-q4_k_m.gguf

# Reduce context in config
export ZORD_CONTEXT_LENGTH=1024

See OPTIMIZATION.md for tuning tips.
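On the Python side, an override like `ZORD_CONTEXT_LENGTH` could be read with a safe fallback to the 2048-token default from the performance table. This is a hypothetical sketch of how `zord_core.py` might consume the variable, not its actual code:

```python
import os

# Hypothetical sketch: read the ZORD_CONTEXT_LENGTH override, falling back
# to the 2048-token default when unset or malformed. The 256-token floor
# is an illustrative safety clamp, not a documented behavior.
def context_length(default=2048):
    raw = os.environ.get("ZORD_CONTEXT_LENGTH", "")
    try:
        return max(256, int(raw))
    except ValueError:
        return default
```

Falling back silently on a bad value keeps the CLI usable even if the variable is exported with a typo.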
Contributions are welcome! Please read our contributing guidelines before submitting PRs.
MIT License - See LICENSE for details.
- llama.cpp - GGUF quantization
- nanoGPT - Training philosophy
- MergeKit - Model merging
- DeepSeek-Coder - Base model
- Qwen2.5-Coder - Secondary model
Built with ❤️ by sajad