High-performance, modular zsh configuration optimized for AI development on Apple M3 Silicon with natural language interface via Cursor AI.
- ⚡ 3x Faster Shell Startup - 287ms → 90ms with instant prompt
- 🤖 Natural Language Interface - Control the terminal via Cursor AI
- 🎯 16 AI Models - One-word switching for LM Studio (MLX/GGUF)
- 📊 Real-time M3 Monitoring - GPU/CPU/Neural Engine visibility
- 🔄 Workflow Automation - Single-command Docker/N8N/Node-RED orchestration
- 🧩 Modular Architecture - 8 focused, maintainable configuration files
Traditional Terminal:

```bash
cd /Volumes/MICRO/LM_STUDIO_MODELS/lmstudio-community/Qwen3-8B-MLX-4bit
~/.lmstudio/bin/lms load "lmstudio-community/Qwen3-8B-MLX-4bit" --gpu=1.0
sudo powermetrics --samplers gpu_power -i1000
```

With This Configuration (via Cursor):

```text
You: "Load the coding model and monitor GPU"
Cursor: ✅ Qwen2.5-Coder loaded. GPU: 78% utilization, 8.2W
```
| Metric | Before | After | Improvement |
|---|---|---|---|
| Shell Startup | 287ms | 90ms | 3.2x faster ⚡ |
| Model Switching | Manual, 30-60s | One word, <5s | 6-12x faster 🚀 |
| GPU Monitoring | Not available | Real-time | New capability 📊 |
| Workflow Start | 7 manual steps | 1 command | 7x fewer steps 🔄 |
| Config Maintenance | Monolithic file | 8 modular files | Maintainable 🧩 |
- Hardware: Apple M3, M3 Pro, or M3 Max (M1/M2 compatible with minor tweaks)
- Memory: 16GB+ RAM recommended for AI workloads
- OS: macOS 14.0+ (tested on macOS 26.0 Beta)
- Shell: zsh 5.9+ (default on macOS)
- Tools: Homebrew, Cursor IDE, LM Studio (optional)
```bash
# Clone the repository
git clone https://github.com/AUTOGIO/m3-shell-config.git
cd m3-shell-config

# Run the automated installer
./scripts/install.sh

# Follow the on-screen instructions
```

The installer will:
- ✅ Check system requirements
- ✅ Back up existing configurations
- ✅ Install Homebrew packages
- ✅ Copy configuration files
- ✅ Set up Cursor AI integration
- ✅ Verify the installation
See Installation Guide for step-by-step instructions.
| Document | Description |
|---|---|
| Installation Guide | Complete setup instructions |
| User Guide | Natural language command reference with 50+ examples |
| Quick Reference | Cheat sheet for common commands |
| Troubleshooting | Common issues and solutions |
- Local LLM inference (LM Studio, Ollama)
- Model benchmarking and comparison
- Multi-model orchestration
- Prompt engineering workflows
- Docker container management
- N8N workflow automation
- Node-RED IoT orchestration
- CI/CD pipeline integration
- Cursor IDE integration
- Node.js version management (Mise)
- Fast file navigation (zoxide)
- Git workflow optimization (Delta)
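The developer-experience hookups above typically reduce to one init line per tool. A plausible sketch of what `30-tools.zsh` might contain (an assumption, not the repo's verified contents; each hook is the init command documented in the respective tool's README):

```shell
# Plausible shape of 30-tools.zsh (illustrative, not verified against the repo)
eval "$(zoxide init zsh)"     # smart directory jumping: z <partial-name>
eval "$(mise activate zsh)"   # per-project Node.js/runtime versions
eval "$(fzf --zsh)"           # fuzzy completion and Ctrl-R history search
```

Each `eval` call registers the tool's shell hooks (completions, `chpwd`/`precmd` handlers) in the current session.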
You: "Show me my system status"
→ Displays M3 CPU/GPU/Memory/Power summary

You: "Load the model best for coding"
→ Starts Qwen2.5-Coder-14B-Instruct

You: "Start my AI development environment"
→ Launches LM Studio + N8N + Docker

You: "What models do I have installed?"
→ Lists all 16 MLX/GGUF models with details

You: "Monitor GPU during inference"
→ Real-time GPU utilization dashboard

You: "Add a shortcut to restart N8N"
→ Creates an alias, updates the config, reloads the shell
16 MLX/GGUF models with one-word switching:
- DeepSeek-R1 (8B, MLX-4bit) - SOTA reasoning, chain-of-thought
- Phi-4-mini (MLX-4bit) - Advanced multi-step logic
- Mathstral (7B, GGUF) - Math specialist
- Qwen2.5-Coder (14B, MLX-4bit) - Near GPT-4o coding performance
- SmolLM2 (1.7B, Q8-mlx) - Fast on-device inference
- Qwen3 (1.7B/8B, MLX) - Enhanced reasoning
- Gemma-3 (4B, MLX-4bit) - Text generation
- Arctic Embed (L-v2.0) - Multilingual retrieval
- Qwen3-Embedding (4B/8B, DWQ) - High-quality embeddings
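One-word switching presumably wraps `lms load` behind short names. A hypothetical sketch of the pattern (the shortcut names and model identifiers here are illustrative; the real mappings live in `25-lmstudio.zsh`), echoing the command instead of executing it so it runs without LM Studio installed:

```shell
#!/bin/sh
# Hypothetical one-word model switcher. Each shortcut expands to a full
# LM Studio identifier passed to `lms load` (CLI shown in Quick Start above).
load_model() {
  case "$1" in
    coder) model="lmstudio-community/Qwen2.5-Coder-14B-Instruct-MLX-4bit" ;;
    think) model="lmstudio-community/DeepSeek-R1-8B-MLX-4bit" ;;
    *)     echo "unknown shortcut: $1" >&2; return 1 ;;
  esac
  # Echo rather than exec, so this sketch is runnable anywhere:
  echo "~/.lmstudio/bin/lms load \"$model\" --gpu=1.0"
}
load_model coder
```

In the real config each shortcut would be an alias or a function that executes the `lms` command directly.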
| Component | Technology | Purpose |
|---|---|---|
| Shell | zsh 5.9 + Powerlevel10k | <10ms instant prompt |
| Plugin Manager | Zinit (turbo mode) | Async plugin loading |
| Version Manager | Mise | 60x faster than asdf |
| Modern CLI | eza, bat, fd, ripgrep, fzf | Enhanced file operations |
| Navigation | zoxide | Smart directory jumping |
| Git | Delta | Syntax-highlighted diffs |
| AI Interface | Cursor AI | Natural language commands |
| Monitoring | asitop, powermetrics | M3-native system monitoring |
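Zinit's turbo mode defers plugin loading until after the first prompt is drawn, which is what keeps startup under the instant-prompt budget. A minimal sketch using Zinit's documented `wait`/`lucid` ices (the plugin chosen here is illustrative, not taken from the repo's `40-completions.zsh`):

```shell
# Load a plugin asynchronously, just after the prompt appears (turbo mode).
zinit ice wait'0' lucid
zinit light zsh-users/zsh-autosuggestions
```

`wait'0'` schedules the load for the first idle moment after the prompt; `lucid` suppresses the load-confirmation message.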
```text
~/.zshrc                  # Minimal entry point (60 lines)
~/.zsh.d/
├── 00-env.zsh            # Environment (MLX, LM Studio, Docker)
├── 10-path.zsh           # Deterministic PATH construction
├── 20-aliases.zsh        # Modern CLI aliases
├── 25-lmstudio.zsh       # AI model management (16 models)
├── 28-m3-monitoring.zsh  # Apple Silicon monitoring
├── 30-tools.zsh          # Tool initialization
├── 35-docker.zsh         # Docker/N8N/Node-RED
└── 40-completions.zsh    # Zinit plugins & completions
```
Design Principles:
- Modular - Each file has single responsibility
- Performance-First - Instant prompt with deferred loading
- M3-Optimized - Native Metal/MLX environment variables
- Maintainable - Clear separation of concerns
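The entry point's main job is to source the numbered modules in lexical order, so `00-env.zsh` runs before `10-path.zsh` and so on. A self-contained sketch of that loader pattern, demoed against a temporary directory rather than the real `~/.zsh.d`:

```shell
#!/bin/sh
# Modular-loader pattern: source every *.zsh module in lexical order.
# (Demo uses a throwaway directory; ~/.zshrc would use ~/.zsh.d instead.)
ZSH_D="$(mktemp -d)"                   # stand-in for ~/.zsh.d
printf 'echo "loaded 00-env"\n'  > "$ZSH_D/00-env.zsh"
printf 'echo "loaded 10-path"\n' > "$ZSH_D/10-path.zsh"
for module in "$ZSH_D"/*.zsh; do
  [ -r "$module" ] && . "$module"      # skip unreadable files
done
rm -rf "$ZSH_D"
```

In the real `~/.zshrc`, the Powerlevel10k instant-prompt snippet would precede this loop.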
If anything goes wrong:
```bash
# Automatic rollback to the original configuration
~/Desktop/ROLLBACK_ZSH.sh
```
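The automatic script's behavior is presumably close to the manual steps: restore the newest `~/.zshrc` backup and remove the modular directory. A hedged sketch (the real `ROLLBACK_ZSH.sh` ships with the installer; the backup naming follows the manual commands in this section), written as a function and demoed against a throwaway directory instead of the real `$HOME`:

```shell
#!/bin/sh
# Plausible shape of ROLLBACK_ZSH.sh (an assumption, not the shipped script).
rollback_zsh() {
  home="$1"
  backup="$(ls -t "$home"/.zshrc.backup_* 2>/dev/null | head -n1)"
  if [ -n "$backup" ]; then
    cp "$backup" "$home/.zshrc" && rm -rf "$home/.zsh.d"
    echo "Restored $backup"
  else
    echo "No backup found"
  fi
}

# Demo against a temporary directory rather than the real $HOME:
demo="$(mktemp -d)"
touch "$demo/.zshrc.backup_20251028_120000"
mkdir -p "$demo/.zsh.d"
rollback_zsh "$demo"
rm -rf "$demo"
```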
```bash
# Manual rollback
mv ~/.zshrc.backup_20251028_* ~/.zshrc
rm -rf ~/.zsh.d
exec zsh -l
```

Contributions welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'feat: Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Contribution Ideas:
- Additional AI model integrations
- Support for M1/M2 (should work with minor PATH tweaks)
- Fish/Bash configuration variants
- Windows WSL2 adaptation
- Additional monitoring scripts
This project is licensed under the MIT License - see LICENSE file for details.
TL;DR: Use freely, modify as needed, attribute the source.
AUTOGIO
- GitHub: @AUTOGIO
- Domain: giovannini.us
- Powerlevel10k by romkatv - Instant prompt magic
- Zinit by zdharma-continuum - Turbo mode plugin manager
- Mise by jdx - Fast runtime version manager
- Cursor AI - Natural language IDE
- LM Studio - Local LLM inference platform
- Apple MLX - Apple Silicon ML framework
If this configuration helped you:
- ⭐ Star this repository
- 🐛 Report issues or bugs
- 💡 Suggest improvements
- 📣 Share with others
- Prompt chaining templates
- BTT/Keyboard Maestro deep integration
- Model performance benchmarking suite
- Automated model download scripts
- Project-specific environment profiles
- Cloud sync (iCloud/GitHub)
- Advanced prompt engineering templates
- Obsidian integration
- M1/M2 official support
- Fish shell variant
- Video tutorial series
- Docker containerized version
Status: ✅ Production Ready
Version: 1.0.0
Last Updated: October 29, 2025
Next Release: November 15, 2025 (v1.1.0)
Made with ❤️ by AI developers, for AI developers

Documentation • Report Issues • Discussions