A beautiful native macOS application (optimized for Apple Silicon M-Series chips) that uses OpenAI Shap-E to generate 3D models from text prompts or images, with optional PBR material generation.
Prometheus is a native macOS app optimized for Apple Silicon (M1, M2, M3, M4) that brings the power of OpenAI's Shap-E 3D generation to your desktop. With a beautiful SwiftUI interface, you can:
- Generate 3D models from text descriptions - Simply describe what you want and get a 3D model
- Convert images to 3D models - Upload an image and transform it into a 3D object
- Generate PBR materials - Optional material generation creates albedo, roughness, metallic, and bump maps for your models
- Export in standard formats - Models are saved as PLY files and USDZ files (for iPhone/Vision Pro compatibility)
- Native Apple Silicon performance - Optimized for M-Series chips with MPS (Metal Performance Shaders) acceleration
The app uses a Python backend powered by OpenAI Shap-E, providing state-of-the-art text-to-3D and image-to-3D generation capabilities. Material generation is powered by MaterialAnything for high-quality PBR textures.
Here are some example 3D models generated with Prometheus:
- 🎨 Beautiful, modern SwiftUI interface
- 📝 Text-to-3D generation (Shap-E)
- 🖼️ Image-to-3D generation (Shap-E)
- 🔬 NeRF (Neural Radiance Fields) - Reconstruct 3D scenes from multiple images (experimental)
- 🎨 PBR Material Generation - Generate albedo, roughness, metallic, and bump maps (optional)
- 🚀 Native Apple Silicon Support - Optimized for M1/M2/M3/M4 chips with MPS acceleration
- 📱 USDZ export for iPhone and Vision Pro compatibility
- 🐍 Python backend integration
- ⚡ Real-time generation status
- 📁 Easy output file management
- macOS 13.0 or later
- Apple Silicon (M1/M2/M3/M4) - Optimized for native performance with MPS acceleration
- Intel Macs are supported but may be slower (CPU mode)
- Xcode 15.0 or later
- Python 3.9, 3.10, or 3.11 (recommended for Shap-E compatibility)
- Note: Python 3.12+ may have compatibility issues
- Git (for installing Shap-E from GitHub)
- OpenAI API key (optional - for Shap-E model downloads)
Recommended: Use Python 3.9, 3.10, or 3.11
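You can confirm your interpreter falls in the supported range before creating the virtual environment. A stdlib-only sketch (the 3.9-3.11 bounds come from the requirements above):

```python
def python_supported(major, minor):
    """Return True if a Python (major, minor) version falls in the
    3.9-3.11 range recommended for Shap-E compatibility."""
    return (3, 9) <= (major, minor) <= (3, 11)

import sys
if not python_supported(sys.version_info.major, sys.version_info.minor):
    print(f"Warning: Python {sys.version_info.major}.{sys.version_info.minor} "
          "may have Shap-E compatibility issues; use 3.9-3.11.")
```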
Clone the repository:

```bash
git clone https://github.com/caraveo/Prometheus.git
cd Prometheus
```

Create a virtual environment:
If you have multiple Python versions installed:
```bash
python3.11 -m venv env  # or python3.10, python3.9
source env/bin/activate
```

Or use the default Python 3:

```bash
python3 -m venv env
source env/bin/activate
```

Option A: Use the automated setup script (recommended)

```bash
./setup.sh
```

Option B: Manual installation
```bash
pip install --upgrade pip
pip install -r requirements.txt
pip install git+https://github.com/openai/shap-e.git
```

Note: Shap-E must be installed from GitHub as it's not available on PyPI.
To enable PBR material generation, download the MaterialAnything models:
```bash
./download_material_models.sh
```

This will download the material estimator and refiner models from HuggingFace (~2-3 GB). Material generation works without these models but will use simplified mode.
Shap-E models will be downloaded automatically on first use. If you want to use OpenAI API:
```bash
export OPENAI_API_KEY="your-api-key-here"
```

- Open Terminal and navigate to the project directory
- Create an Xcode project:

```bash
swift package generate-xcodeproj
```

(Note: `generate-xcodeproj` is deprecated in recent Swift toolchains; you can instead open `Package.swift` directly in Xcode.)

- Open `Prometheus.xcodeproj` in Xcode
- Build and run (⌘R)
Best method - launches as proper macOS app with focus:
```bash
./run.sh
```

This will:
- Build the app
- Create a proper macOS app bundle
- Launch it with proper window focus
Alternative - direct run (may have focus issues):
```bash
swift build
swift run Prometheus
```

- Open Xcode
- File → New → Project
- Choose "macOS" β "App"
- Set Product Name to "Prometheus"
- Choose SwiftUI for Interface
- Add the Swift files to the project
- Build and run
1. Text-to-3D Mode:
   - Select "Text to 3D" mode
   - Enter a descriptive prompt (e.g., "a red sports car", "a wooden chair")
   - (Optional) Toggle "Generate Materials" to create PBR material maps
   - Click "Generate 3D Model"

2. Image-to-3D Mode:
   - Select "Image to 3D" mode
   - Drag and drop an image into the drop zone
   - Optionally add a text prompt for additional guidance
   - (Optional) Toggle "Generate Materials" to create PBR material maps
   - Click "Generate 3D Model"

3. NeRF Mode (Experimental):
   - Select "NeRF (Multi-Image)" mode
   - Select a directory containing multiple images of the same scene from different angles
   - NeRF will reconstruct a 3D scene from the images
   - Note: NeRF requires TensorFlow 1.15 and CUDA, which may not work on Apple Silicon. Consider using modern NeRF implementations for M-Series chips.

4. View Results:
   - Generated models are saved in the `output/` directory
   - Click the folder icon to reveal the file in Finder
   - Models are saved as `.ply` files (compatible with most 3D software)
   - USDZ files are automatically generated for iPhone/Vision Pro compatibility
   - Material maps (if generated) are saved in the `output/materials/` directory
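Under the hood, the SwiftUI frontend shells out to the Python backend script. A minimal sketch of how such an invocation could be assembled (the `--prompt` and `--materials` flag names here are illustrative assumptions, not `shap_e_generator.py`'s actual interface):

```python
from pathlib import Path

def build_generator_command(prompt, generate_materials=False,
                            script=Path("shap_e_generator.py"),
                            python=Path("env/bin/python")):
    """Assemble the argv list the app could pass to the Shap-E backend.
    Flag names are hypothetical; check the script for its real CLI."""
    cmd = [str(python), str(script), "--prompt", prompt]
    if generate_materials:
        cmd.append("--materials")
    return cmd

# The frontend would then run this via subprocess.run(cmd) and stream
# status back to the UI.
```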
```
Prometheus/
├── PrometheusApp.swift          # Main app entry point
├── ContentView.swift            # Main UI view
├── shap_e_generator.py          # Python backend script (Shap-E)
├── material_generator.py        # MaterialAnything material generation module
├── nerf_generator.py            # NeRF (Neural Radiance Fields) integration module
├── download_material_models.sh  # Script to download MaterialAnything models
├── material_anything/           # MaterialAnything repository
├── nerf_repo/                   # NeRF repository (bmild/nerf)
├── requirements.txt             # Python dependencies
├── Package.swift                # Swift package configuration
├── setup.sh                     # Automated setup script
├── run.sh                       # Build and launch script
├── build.sh                     # Build script
├── README.md                    # This file
├── .gitignore                   # Git ignore rules
└── env/                         # Python virtual environment (created during setup)
```
- Clone the repository:

```bash
git clone https://github.com/caraveo/Prometheus.git
cd Prometheus
```

- Run the setup script:

```bash
chmod +x setup.sh
./setup.sh
```
This will:
- Create a Python virtual environment
- Install all required dependencies
- Install Shap-E from GitHub
- Launch the app:

```bash
chmod +x run.sh
./run.sh
```
That's it! The app will build and launch automatically.
- Ensure the virtual environment is created: `python3 -m venv env`
- Make sure you're in the project root directory
- Activate the virtual environment: `source env/bin/activate`
- Reinstall dependencies: `pip install -r requirements.txt`
- Shap-E models are large (~2GB). Ensure you have sufficient disk space
- First generation may take longer as models download
- Check your internet connection
- Apple Silicon (M1/M2/M3/M4): Automatically uses MPS (Metal Performance Shaders) for GPU acceleration
- CUDA: Automatically uses CUDA if available (NVIDIA GPUs)
- CPU: Falls back to CPU mode if no GPU is available (slower but functional)
- Material generation works on all platforms; the full MaterialAnything pipeline requires CUDA
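The MPS → CUDA → CPU fallback order above can be sketched as a small helper (assuming a PyTorch backend; the function degrades to CPU whenever torch or a GPU is unavailable):

```python
def pick_device():
    """Prefer Apple's MPS, then CUDA, then CPU - mirroring the
    fallback order described above."""
    try:
        import torch
        # mps backend exists only in torch >= 1.12, so probe defensively
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # no torch installed: run on CPU
    return "cpu"
```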
- Apple Silicon Performance: Optimized for M-Series chips with native MPS acceleration for faster generation
- First generation may take 5-10 minutes as models download and initialize
- Subsequent generations are faster (typically 1-3 minutes on M-Series chips)
- Generated models are saved as PLY files, compatible with Blender, MeshLab, and other 3D software
- USDZ files are automatically generated for spatial computing (iPhone/Vision Pro)
- Material generation is optional and works in simplified mode without MaterialAnything models
- Full MaterialAnything material generation requires CUDA and additional dependencies (pytorch3d, kaolin)
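Since the models ship as PLY files, basic mesh statistics can be read straight from the ASCII header without any 3D library. A stdlib-only sketch:

```python
def ply_counts(header_text):
    """Parse 'element <name> <count>' lines from a PLY header and
    return {name: count}, e.g. {'vertex': 8, 'face': 12}."""
    counts = {}
    for line in header_text.splitlines():
        parts = line.strip().split()
        if len(parts) == 3 and parts[0] == "element":
            counts[parts[1]] = int(parts[2])
        if line.strip() == "end_header":
            break  # binary or ascii body follows; stop parsing
    return counts
```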
This project uses OpenAI Shap-E, which is subject to OpenAI's terms of use.


