AI Music Studio – Full songs, vocals, mastering & DAW-style creation.
QuillMusic is a next-generation music production platform that combines the power of AI with professional DAW features. Create complete songs from simple text prompts, or dive deep with manual editing tools for full creative control.
- ✅ AI Song Blueprint Generator: Generate song structures, lyrics, and styles from text prompts
- ✅ Fake Render Engine: Development-ready system with mock audio generation
- ✅ Modern React Frontend: Clean, professional UI with dark theme
- ✅ FastAPI Backend: Modular, typed, and ready for real AI models
- ✅ Job Queue System: Redis + RQ for asynchronous processing
- ✅ Comprehensive Tests: Backend tests with pytest
- 🔜 Real Instrumental Generation: Stable Audio 2.0 or MusicGen integration
- 🔜 Vocal Synthesis: AI-powered singing and speech
- 🔜 Professional Mastering: Automated mixing and mastering
- 🔜 Manual Creator / DAW: Full timeline editor with MIDI, mixing, effects
- 🔜 Collaboration Tools: Share and work on projects together
- 🔜 Stem Exports: Download individual tracks (drums, bass, vocals, etc.)
```
QuillMusic/
├── quillmusic/
│   ├── backend/                 # FastAPI backend
│   │   ├── app/
│   │   │   ├── api/             # API routes
│   │   │   ├── core/            # Config and dependencies
│   │   │   ├── models/          # Database models (future)
│   │   │   ├── schemas/         # Pydantic schemas
│   │   │   ├── services/        # Business logic and AI engines
│   │   │   └── workers/         # Background job workers
│   │   ├── tests/               # Pytest tests
│   │   ├── requirements.txt
│   │   ├── Dockerfile
│   │   └── pytest.ini
│   └── frontend/                # React frontend
│       ├── src/
│       │   ├── components/      # React components
│       │   ├── lib/             # Utilities and API client
│       │   ├── pages/           # Page components
│       │   └── types/           # TypeScript types
│       ├── package.json
│       ├── vite.config.ts
│       └── tailwind.config.js
├── docs/                        # Documentation
│   ├── ARCHITECTURE.md          # System architecture
│   ├── UI_FIGMA_BRIEF.md        # Design system
│   ├── ROADMAP.md               # Development roadmap
│   ├── MUSIC_MODELS.md          # AI model research
│   └── PRICING.md               # Pricing strategy
├── docker-compose.yml
└── README.md
```
- Python 3.11+ (for backend)
- Node.js 18+ (for frontend)
One command runs everything!
Just click the Run button!
The `.replit` configuration automatically:
- Starts Redis in daemon mode
- Starts the FastAPI backend on port 8000
- Starts the Vite frontend dev server on port 5000
- The Replit preview shows the React UI
Development branch: claude/unified-dev-runner-013TKA1cg1SCS1zm96jZrfGm
One command to run everything:
```bash
# Clone the repository
git clone https://github.com/AidiJackson/QuillMusic.git
cd QuillMusic

# Checkout the development branch
git checkout claude/unified-dev-runner-013TKA1cg1SCS1zm96jZrfGm

# Install dependencies (first time only)
npm install                                                # Root dev runner (concurrently)
cd quillmusic/backend && pip install -r requirements.txt && cd ../..
cd quillmusic/frontend && npm install && cd ../..

# Start everything together
npm run dev

# That's it! 🎉
# Backend:  http://localhost:8000 (API docs: http://localhost:8000/docs)
# Frontend: http://localhost:5000
```

The unified `npm run dev` command automatically:
- ✅ Starts Redis server in daemon mode
- ✅ Starts FastAPI backend on port 8000 with auto-reload
- ✅ Starts Vite frontend dev server on port 5000
- ✅ Colored output (blue for backend, green for frontend)
- ✅ Vite proxy forwards `/api` requests to the backend seamlessly
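The root dev runner is a thin wrapper around `concurrently`. A minimal sketch of what such a root `package.json` could look like — the script bodies and package name here are illustrative assumptions, not the repository's actual file:

```json
{
  "name": "quillmusic-dev-runner",
  "private": true,
  "scripts": {
    "dev": "concurrently -n backend,frontend -c blue,green \"npm:dev:backend\" \"npm:dev:frontend\"",
    "dev:backend": "cd quillmusic/backend && uvicorn app.main:app --reload --port 8000",
    "dev:frontend": "cd quillmusic/frontend && npm run dev"
  },
  "devDependencies": {
    "concurrently": "^8.2.0"
  }
}
```

The `-n` and `-c` flags give each process the named, colored output prefix described above.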
Click to expand manual setup instructions
```bash
# Navigate to backend
cd quillmusic/backend

# Install dependencies
pip install -r requirements.txt

# Run the backend
uvicorn app.main:app --reload --port 8000

# Run tests
python -m pytest
```

```bash
# Navigate to frontend
cd quillmusic/frontend

# Install dependencies
npm install

# Start development server
npm run dev

# The frontend will be available at http://localhost:5173
```

```bash
# Start services (Redis + Backend)
docker-compose up -d

# The API will be available at http://localhost:8000
# API documentation at http://localhost:8000/docs
```

- Navigate to AI Song Builder
- Enter a prompt describing your song (e.g., "A dreamy synthwave track about late night drives")
- Select genre and mood
- Adjust optional parameters (BPM, key, duration)
- Click Generate Blueprint
The system will create:
- Complete song structure (intro, verses, chorus, etc.)
- Generated lyrics for each section
- Vocal style configuration
- Production notes
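For illustration, a generated blueprint could come back as JSON along these lines. The field names and values below are hypothetical — they sketch the shape of the output described above, not the actual Pydantic schema:

```json
{
  "title": "Midnight Drive",
  "genre": "synthwave",
  "mood": "dreamy",
  "bpm": 96,
  "key": "A minor",
  "structure": ["intro", "verse", "chorus", "verse", "chorus", "bridge", "outro"],
  "sections": [
    { "name": "chorus", "lyrics": "...", "production_notes": "wide pads, side-chained bass" }
  ],
  "vocal_style": { "voice": "airy", "delivery": "melodic" }
}
```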
- After generating a blueprint, click Send to Render Engine
- Navigate to Render Queue
- Check job status by entering the job ID
- Download the audio when ready
Note: Current version uses fake engines. Phase 2+ will integrate real AI models.
The Manual Creator will provide a full DAW interface for:
- Multi-track timeline editing
- MIDI piano roll
- Audio effects and mixing
- Automation
- Collaboration
QuillMusic supports multiple instrumental audio engines for generating music. The available engines are configured via environment variables.
1. **Fake Demo Engine** (Default)
   - No configuration required
   - Generates fake audio URLs for development and testing
   - Always available

2. **Stable Audio Hosted API** (Recommended for Quick Start)
   - Official Stable Audio API - paid but inexpensive
   - Generates real, high-quality music without self-hosting
   - Requires an API key from Stability AI

3. **Stable Audio Open** (Self-Hosted)
   - Open-source Stable Audio model
   - Requires self-hosting the inference server
   - Free, but requires GPU infrastructure

4. **MusicGen** (Self-Hosted)
   - Meta's MusicGen model
   - Requires self-hosting the inference server
   - Free, but requires GPU infrastructure
To use the official Stable Audio API for quick, real music generation:
1. **Get API Credentials**
   - Sign up at Stability AI
   - Navigate to API settings
   - Generate an API key

2. **Set Environment Variables**

   Create a `.env` file in `quillmusic/backend/` with:

   ```bash
   # Enable Stable Audio API
   QUILLMUSIC_INSTRUMENTAL_ENGINES=stable_audio_api
   QUILLMUSIC_DEFAULT_INSTRUMENTAL_MODEL=stable_audio_api

   # Stable Audio API Configuration
   QUILLMUSIC_STABLE_AUDIO_API_BASE_URL=https://api.stableaudio.com
   QUILLMUSIC_STABLE_AUDIO_API_KEY=sk-your-api-key-here
   QUILLMUSIC_STABLE_AUDIO_API_MODEL=stable-audio-1.0
   ```

3. **Use in Instrumental Studio**
   - Navigate to Instrumental Studio
   - Select "Stable Audio (Hosted API)" from the engine dropdown
   - Choose a blueprint or manual project
   - Click Render Instrumental
The system will call the Stable Audio API and return a real audio URL.
To use a self-hosted MusicGen server for free, high-quality instrumental generation:
1. **Deploy MusicGen HTTP Service**

   MusicGen must be exposed as an HTTP service with the following interface:

   **Endpoint:** `POST {BASE_URL}/v1/generate/audio`

   **Request Body (JSON):**

   ```json
   {
     "model": "facebook/musicgen-medium",
     "prompt": "energetic pop track with synths and drums",
     "seconds_total": 30
   }
   ```

   **Response Body (JSON):**

   ```json
   {
     "status": "ready",
     "audio_url": "https://your-cdn.com/generated-audio.wav"
   }
   ```

   **Deployment Options:**
   - RunPod: Deploy MusicGen on RunPod serverless GPU
   - Hugging Face Spaces: Deploy as an API endpoint
   - Self-hosted: Run on your own GPU infrastructure
   - Modal/Replicate: Use serverless platforms
2. **Set Environment Variables**

   Create a `.env` file in `quillmusic/backend/` with:

   ```bash
   # Enable MusicGen engine
   QUILLMUSIC_INSTRUMENTAL_ENGINES=fake,musicgen
   QUILLMUSIC_DEFAULT_INSTRUMENTAL_MODEL=musicgen

   # MusicGen Configuration
   QUILLMUSIC_MUSICGEN_BASE_URL=https://your-musicgen-server.example.com
   QUILLMUSIC_MUSICGEN_MODEL=facebook/musicgen-medium
   ```

   **Notes:**
   - No API key required for self-hosted MusicGen
   - Model options: `facebook/musicgen-small`, `facebook/musicgen-medium`, `facebook/musicgen-large`
   - Ensure your MusicGen service returns publicly accessible audio URLs
3. **Use in Instrumental Studio**
   - Navigate to Instrumental Studio
   - Select "MusicGen (Meta, Free)" from the engine dropdown
   - Choose a blueprint or manual project
   - Click Render Instrumental
The system will call your MusicGen HTTP service and retrieve the generated audio.
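Any client that speaks the interface above will work. A minimal standalone sketch of such a client — the function names are illustrative, not QuillMusic's actual engine code:

```python
import json
import urllib.request


def build_generate_payload(prompt: str, seconds_total: int = 30,
                           model: str = "facebook/musicgen-medium") -> dict:
    """Build the request body expected by the MusicGen HTTP interface."""
    return {"model": model, "prompt": prompt, "seconds_total": seconds_total}


def render_instrumental(base_url: str, prompt: str, seconds_total: int = 30) -> str:
    """POST to /v1/generate/audio and return the generated audio URL."""
    body = json.dumps(build_generate_payload(prompt, seconds_total)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/generate/audio",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    if data.get("status") != "ready":
        raise RuntimeError(f"generation not ready: {data}")
    return data["audio_url"]
```

For example, `render_instrumental("https://your-musicgen-server.example.com", "lo-fi hip hop beat")` would return the `audio_url` from the service's JSON response.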
You can enable multiple engines simultaneously:
```bash
# Enable multiple engines (comma-separated)
QUILLMUSIC_INSTRUMENTAL_ENGINES=fake,stable_audio_api,musicgen

# Stable Audio API
QUILLMUSIC_STABLE_AUDIO_API_BASE_URL=https://api.stableaudio.com
QUILLMUSIC_STABLE_AUDIO_API_KEY=sk-your-api-key

# MusicGen (self-hosted)
QUILLMUSIC_MUSICGEN_BASE_URL=http://localhost:8001
QUILLMUSIC_MUSICGEN_API_KEY=optional-key
QUILLMUSIC_MUSICGEN_MODEL=musicgen-medium
```

All configured engines will appear in the Instrumental Studio engine dropdown.
- If no engine environment variables are set, only the Fake Demo Engine will be available
- The Stable Audio API requires a paid account but is the easiest way to generate real music
- Self-hosted engines require significant GPU resources (see MUSIC_MODELS.md)
- API costs vary by provider - check Stability AI pricing
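The engine list is just a comma-separated value. A hedged sketch of how such a variable can be parsed — the helper name is hypothetical, and the known-engine set is taken only from the ids shown in the examples above:

```python
import os
from typing import Optional

# Engine ids seen in this README's configuration examples (assumption:
# the real backend may accept additional ids).
KNOWN_ENGINES = {"fake", "stable_audio_api", "musicgen"}


def enabled_engines(env: Optional[dict] = None) -> list:
    """Parse QUILLMUSIC_INSTRUMENTAL_ENGINES, defaulting to the fake engine."""
    env = os.environ if env is None else env
    raw = env.get("QUILLMUSIC_INSTRUMENTAL_ENGINES", "fake")
    engines = [e.strip() for e in raw.split(",") if e.strip()]
    unknown = [e for e in engines if e not in KNOWN_ENGINES]
    if unknown:
        raise ValueError(f"unknown engines: {unknown}")
    return engines or ["fake"]
```

With no variable set, only `["fake"]` is enabled, matching the default behaviour described above.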
```bash
cd quillmusic/backend
python -m pytest

# With coverage
python -m pytest --cov=app --cov-report=html

# Run specific test file
python -m pytest tests/test_song_blueprints.py
```

Or run from root:

```bash
npm run test:backend
```

```bash
cd quillmusic/frontend
npm run build

# Preview production build
npm run preview
```

Or build from root:

```bash
npm run build:frontend
```

From the project root, you can run:

- `npm run dev` - Start both backend and frontend (unified dev experience)
- `npm run test:backend` - Run backend tests
- `npm run test:frontend` - Build frontend (validates TypeScript)
- `npm run build:frontend` - Build frontend for production
- `npm run install:all` - Install all dependencies (backend + frontend)
- ARCHITECTURE.md: System architecture and design decisions
- UI_FIGMA_BRIEF.md: UI design system and Figma guide
- ROADMAP.md: Development phases and timeline
- MUSIC_MODELS.md: AI music model research and comparison
- PRICING.md: Pricing tiers and monetization strategy
- FastAPI: Modern Python web framework
- Pydantic: Data validation and settings
- Redis + RQ: Job queue for async processing
- pytest: Testing framework
- Docker: Containerization
- React 18: UI library
- TypeScript: Type-safe JavaScript
- Vite: Fast build tool
- Tailwind CSS: Utility-first CSS framework
- shadcn/ui: Beautiful component library
- React Router: Client-side routing
- Sonner: Toast notifications
- Stable Audio 2.0: Instrumental generation
- MusicGen: Alternative instrumental model
- Bark + RVC: Vocal synthesis
- Matchering: Automated mastering
- **Phase 1 (Complete)**: Skeleton with fake engines, clean architecture, and comprehensive docs.
- **Phase 2**: Integrate Stable Audio 2.0 or MusicGen for real music generation.
- **Phase 3**: Add Bark + RVC for AI-generated vocals.
- **Phase 4**: Professional audio quality with automated mastering.
- **Phase 5**: Full DAW interface with timeline, MIDI editor, mixer, and effects.
- **Phase 6**: Launch with pricing tiers, user management, and scaling infrastructure.
- **Phase 7**: Collaboration, marketplace, integrations, mobile apps, and more.
See ROADMAP.md for detailed plans.
QuillMusic will follow a freemium model:
- Free: 3 blueprints, 2 renders/month, 60s max, watermarked
- Creator ($9.99/mo): 25 blueprints, 15 renders, 5min max, commercial rights
- Pro Studio ($29.99/mo): Unlimited blueprints, 100 renders, DAW access, stems
- Pro+ ($99/mo): Unlimited everything, custom voices, API access
- Enterprise: Custom pricing for businesses
See PRICING.md for full details.
We welcome contributions! Here's how you can help:
- Report Bugs: Open an issue with details and reproduction steps
- Suggest Features: Share your ideas in the discussions
- Submit PRs: Fork, create a feature branch, and submit a pull request
- Improve Docs: Help us make documentation clearer
- Test: Try the platform and provide feedback
```bash
# Fork and clone
git clone https://github.com/your-username/QuillMusic.git
cd QuillMusic

# Create feature branch
git checkout -b feature/your-feature-name

# Make changes and test
# ... code, test, commit ...

# Push and create PR
git push origin feature/your-feature-name
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Stability AI for Stable Audio research
- Meta AI for MusicGen and AudioCraft
- Suno AI for Bark TTS
- shadcn for the amazing UI component library
- The open-source AI music community
- Email: support@quillmusic.ai (planned)
- Discord: Join our community (planned)
- Twitter: @QuillMusicAI (planned)
- GitHub Issues: Report bugs
Phase 1 Complete: QuillMusic is now a fully functional scaffold with:
- Clean, typed backend ready for AI model integration
- Modern, responsive frontend with professional UI
- Job queue system for async processing
- Comprehensive documentation
- Testing infrastructure
Next Steps:
- Integrate real instrumental AI model (Stable Audio 2.0 or MusicGen)
- Set up GPU infrastructure for inference
- Add audio file storage and delivery
- Begin beta testing with early users
Built with ❤️ by the QuillMusic team
Status: Phase 1 Complete | Version: 0.1.0 | Last Updated: 2025
