
πŸš€ M3 Apple Silicon Shell Configuration

High-performance, modular zsh configuration optimized for AI development on Apple M3 Silicon with natural language interface via Cursor AI.



✨ Key Features

  • ⚑ 3x Faster Shell Startup - 287ms β†’ 90ms with instant prompt
  • πŸ€– Natural Language Interface - Control terminal via Cursor AI
  • 🎯 16 AI Models - One-word switching for LM Studio (MLX/GGUF)
  • πŸ“Š Real-time M3 Monitoring - GPU/CPU/Neural Engine visibility
  • πŸ”„ Workflow Automation - Single-command Docker/N8N/Node-RED orchestration
  • 🧩 Modular Architecture - 8 focused, maintainable configuration files

🎬 Quick Demo

Traditional Terminal:

```shell
cd /Volumes/MICRO/LM_STUDIO_MODELS/lmstudio-community/Qwen3-8B-MLX-4bit
~/.lmstudio/bin/lms load "lmstudio-community/Qwen3-8B-MLX-4bit" --gpu=1.0
sudo powermetrics --samplers gpu_power -i1000
```

With This Configuration (via Cursor):

You: "Load the coding model and monitor GPU"
Cursor: βœ… Qwen2.5-Coder loaded. GPU: 78% utilization, 8.2W

πŸ“Š Performance Comparison

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Shell startup | 287 ms | 90 ms | 3.2x faster ⚑ |
| Model switching | Manual, 30-60 s | One word, <5 s | 6-12x faster πŸš€ |
| GPU monitoring | Not available | Real-time | New capability πŸ“Š |
| Workflow start | 7 manual steps | 1 command | 7x fewer steps πŸ”„ |
| Config maintenance | Monolithic file | 8 modular files | Maintainable 🧩 |
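
The startup figures above can be checked on your own machine. A simple approach (these commands are general-purpose, not part of this repo):

```shell
# Rough measurement: time a full interactive zsh startup.
# Run it several times; the first run is slower due to cold caches.
time zsh -i -c exit

# More rigorous: hyperfine reports mean and spread (brew install hyperfine).
hyperfine --warmup 3 'zsh -i -c exit'
```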

πŸ–₯️ System Requirements

  • Hardware: Apple M3, M3 Pro, or M3 Max (M1/M2 compatible with minor tweaks)
  • Memory: 16GB+ RAM recommended for AI workloads
  • OS: macOS 14.0+ (tested on macOS 26.0 Beta)
  • Shell: zsh 5.9+ (default on macOS)
  • Tools: Homebrew, Cursor IDE, LM Studio (optional)

πŸš€ Installation

Automated Installation (Recommended)

```shell
# Clone repository
git clone https://github.com/AUTOGIO/m3-shell-config.git
cd m3-shell-config

# Run automated installer
./scripts/install.sh

# Follow on-screen instructions
```

The installer will:

  1. βœ… Check system requirements
  2. βœ… Backup existing configurations
  3. βœ… Install Homebrew packages
  4. βœ… Copy configuration files
  5. βœ… Set up Cursor AI integration
  6. βœ… Verify installation
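
Step 2 (backup) can be sketched as follows. `backup_zsh_config` is a hypothetical helper, not necessarily the name used in `scripts/install.sh`; the timestamp suffix matches the `~/.zshrc.backup_*` pattern referenced in the Emergency Rollback section below.

```shell
# Hypothetical sketch of the backup step: copy any existing zsh
# configuration aside with a timestamp before it is overwritten.
backup_zsh_config() {
  local stamp f backup
  stamp="$(date +%Y%m%d_%H%M%S)"
  for f in "$HOME/.zshrc" "$HOME/.zprofile" "$HOME/.p10k.zsh"; do
    if [ -f "$f" ]; then
      backup="${f}.backup_${stamp}"
      cp -p "$f" "$backup"
      echo "Backed up $f -> $backup"
    fi
  done
}
```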

Manual Installation

See Installation Guide for step-by-step instructions.


πŸ“š Documentation

| Document | Description |
| --- | --- |
| Installation Guide | Complete setup instructions |
| User Guide | Natural language command reference with 50+ examples |
| Quick Reference | Cheat sheet for common commands |
| Troubleshooting | Common issues and solutions |

🎯 Use Cases

AI/ML Development

  • Local LLM inference (LM Studio, Ollama)
  • Model benchmarking and comparison
  • Multi-model orchestration
  • Prompt engineering workflows

DevOps Automation

  • Docker container management
  • N8N workflow automation
  • Node-RED IoT orchestration
  • CI/CD pipeline integration

Frontend Development

  • Cursor IDE integration
  • Node.js version management (Mise)
  • Fast file navigation (zoxide)
  • Git workflow optimization (Delta)

πŸ€– Natural Language Examples

You: "Show me my system status"
     β†’ Displays M3 CPU/GPU/Memory/Power summary

You: "Load the model best for coding"
     β†’ Starts Qwen2.5-Coder-14B-Instruct

You: "Start my AI development environment"
     β†’ Launches LM Studio + N8N + Docker

You: "What models do I have installed?"
     β†’ Lists all 16 MLX/GGUF models with details

You: "Monitor GPU during inference"
     β†’ Real-time GPU utilization dashboard

You: "Add a shortcut to restart N8N"
     β†’ Creates alias, updates config, reloads shell

More Examples β†’
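
Behind a request like "Start my AI development environment" sits a one-shot shell function. This is an illustrative sketch only: the function name, container names, and the assumption that the services run under Docker are mine, not the repo's actual definitions.

```shell
# Hypothetical single-command environment starter (illustrative names).
ai-up() {
  open -a "LM Studio"                       # macOS: launch the inference app
  docker start n8n nodered 2>/dev/null ||   # reuse existing containers...
    docker compose up -d n8n nodered        # ...or create them via compose
  echo "βœ… AI development environment started"
}
```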


πŸ“¦ Supported AI Models

16 MLX/GGUF models with one-word switching:

Reasoning & Math

  • DeepSeek-R1 (8B, MLX-4bit) - SOTA reasoning, chain-of-thought
  • Phi-4-mini (MLX-4bit) - Advanced multi-step logic
  • Mathstral (7B, GGUF) - Math specialist

Coding

  • Qwen2.5-Coder (14B, MLX-4bit) - Near GPT-4o coding performance

General Purpose

  • SmolLM2 (1.7B, Q8-mlx) - Fast on-device inference
  • Qwen3 (1.7B/8B, MLX) - Enhanced reasoning
  • Gemma-3 (4B, MLX-4bit) - Text generation

Embeddings

  • Arctic Embed (L-v2.0) - Multilingual retrieval
  • Qwen3-Embedding (4B/8B, DWQ) - High-quality embeddings

Full Model Documentation β†’
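
The one-word switching described above can be implemented as thin wrapper functions over the LM Studio CLI. The short names and exact model identifiers below are illustrative; `lms load ... --gpu=1.0` mirrors the command shown in the Quick Demo.

```shell
# Illustrative one-word model switchers wrapping the LM Studio CLI.
# Model identifier strings are examples, not the repo's exact list.
LMS="$HOME/.lmstudio/bin/lms"

coder()  { "$LMS" load "lmstudio-community/Qwen2.5-Coder-14B-Instruct-MLX-4bit" --gpu=1.0; }
reason() { "$LMS" load "lmstudio-community/DeepSeek-R1-8B-MLX-4bit" --gpu=1.0; }
smol()   { "$LMS" load "SmolLM2-1.7B-Instruct" --gpu=1.0; }
```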


πŸ› οΈ Technology Stack

| Component | Technology | Purpose |
| --- | --- | --- |
| Shell | zsh 5.9 + Powerlevel10k | <10 ms instant prompt |
| Plugin manager | Zinit (turbo mode) | Async plugin loading |
| Version manager | Mise | 60x faster than asdf |
| Modern CLI | eza, bat, fd, ripgrep, fzf | Enhanced file operations |
| Navigation | zoxide | Smart directory jumping |
| Git | Delta | Syntax-highlighted diffs |
| AI interface | Cursor AI | Natural language commands |
| Monitoring | asitop, powermetrics | M3-native system monitoring |
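
A guarded initialization block, in the spirit of `30-tools.zsh`, might wire these tools into the shell; each line is skipped cleanly if the tool is missing. `mise activate`, `zoxide init`, and `fzf --zsh` are those tools' documented init commands, but their exact use in this repo is an assumption.

```shell
# Guarded tool initialization: each tool hooks into zsh only if installed.
command -v mise   >/dev/null && eval "$(mise activate zsh)"
command -v zoxide >/dev/null && eval "$(zoxide init zsh)"
command -v fzf    >/dev/null && source <(fzf --zsh)
command -v bat    >/dev/null && alias cat='bat --paging=never'
```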

πŸ”§ Configuration Architecture

```
~/.zshrc                  # Minimal entry point (60 lines)
~/.zsh.d/
  β”œβ”€β”€ 00-env.zsh          # Environment (MLX, LM Studio, Docker)
  β”œβ”€β”€ 10-path.zsh         # Deterministic PATH construction
  β”œβ”€β”€ 20-aliases.zsh      # Modern CLI aliases
  β”œβ”€β”€ 25-lmstudio.zsh     # AI model management (16 models)
  β”œβ”€β”€ 28-m3-monitoring.zsh # Apple Silicon monitoring
  β”œβ”€β”€ 30-tools.zsh        # Tool initialization
  β”œβ”€β”€ 35-docker.zsh       # Docker/N8N/Node-RED
  └── 40-completions.zsh  # Zinit plugins & completions
```
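
The entry point implied by this layout is a short loop that sources each module in numeric-prefix order. A minimal sketch (the helper name is mine, not necessarily the repo's):

```shell
# Minimal module loader: source every *.zsh file in lexical order,
# so the numeric prefixes (00-, 10-, 20-, ...) control load order.
load_zsh_modules() {
  local dir="${1:-$HOME/.zsh.d}" module
  [ -d "$dir" ] || return 0
  for module in "$dir"/*.zsh; do
    [ -r "$module" ] && source "$module"
  done
}

load_zsh_modules   # e.g. called from ~/.zshrc
```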

Design Principles:

  • Modular - Each file has single responsibility
  • Performance-First - Instant prompt with deferred loading
  • M3-Optimized - Native Metal/MLX environment variables
  • Maintainable - Clear separation of concerns
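
"Instant prompt with deferred loading" refers to Zinit's turbo mode: the `wait` ice postpones plugin loading until after the first prompt is drawn. A representative snippet (the plugin list here is illustrative, not the repo's exact set):

```shell
# Zinit turbo mode: "wait" defers loading until after the first prompt,
# "lucid" suppresses the load-time messages.
zinit wait lucid for \
    zsh-users/zsh-autosuggestions \
    zdharma-continuum/fast-syntax-highlighting
```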

🚨 Emergency Rollback

If anything goes wrong:

```shell
# Automatic rollback to original configuration
~/Desktop/ROLLBACK_ZSH.sh

# Manual rollback
mv ~/.zshrc.backup_20251028_* ~/.zshrc
rm -rf ~/.zsh.d
exec zsh -l
```

🀝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'feat: Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Contribution Ideas:

  • Additional AI model integrations
  • Support for M1/M2 (should work with minor PATH tweaks)
  • Fish/Bash configuration variants
  • Windows WSL2 adaptation
  • Additional monitoring scripts

πŸ“„ License

This project is licensed under the MIT License - see LICENSE file for details.

TL;DR: Use freely, modify as needed, attribute the source.


πŸ‘€ Author

AUTOGIO


πŸ™ Acknowledgments

  • Powerlevel10k by romkatv - Instant prompt magic
  • Zinit by zdharma-continuum - Turbo mode plugin manager
  • Mise by jdx - Fast runtime version manager
  • Cursor AI - Natural language IDE
  • LM Studio - Local LLM inference platform
  • Apple MLX - Apple Silicon ML framework

⭐ Support This Project

If this configuration helped you:

  • ⭐ Star this repository
  • πŸ› Report issues or bugs
  • πŸ’‘ Suggest improvements
  • πŸ“– Share with others



πŸ—ΊοΈ Roadmap

Phase 2 (Q4 2025)

  • Prompt chaining templates
  • BTT/Keyboard Maestro deep integration
  • Model performance benchmarking suite
  • Automated model download scripts

Phase 3 (Q1 2026)

  • Project-specific environment profiles
  • Cloud sync (iCloud/GitHub)
  • Advanced prompt engineering templates
  • Obsidian integration

Community Requests

  • M1/M2 official support
  • Fish shell variant
  • Video tutorial series
  • Docker containerized version

Vote on features β†’


Status: βœ… Production Ready
Version: 1.0.0
Last Updated: October 29, 2025
Next Release: November 15, 2025 (v1.1.0)


Made with ❀️ by AI developers, for AI developers

Documentation β€’ Report Issues β€’ Discussions
