This is a Next.js app that provides a UI and API for:
- Uploading a MIDI file
- Optionally providing genre + subgenre
- Selecting additional instruments to be AI-composed (each instrument = track)
- Optionally defining song sections with timestamps and per-section instrument selection
- Pressing Compose to run an AI-powered composer and download the resulting MIDI
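A section config pairs a time range with the instruments to compose for it. As a rough sketch, it might look like the following (the field names here are illustrative assumptions, not the app's actual schema):

```typescript
// Hypothetical shape of a song-section config (field names are assumptions).
type Section = {
  name: string;          // e.g. "verse", "chorus"
  startTime: number;     // seconds into the uploaded MIDI
  endTime: number;
  instruments: string[]; // instrument ids to AI-compose for this section
};

const sections: Section[] = [
  { name: "intro",  startTime: 0,  endTime: 15, instruments: ["strings"] },
  { name: "chorus", startTime: 15, endTime: 45, instruments: ["strings", "brass", "drums"] },
];

// Sections are sent to the server as a JSON string alongside the MIDI upload.
console.log(JSON.stringify(sections));
```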
✅ The server-side composer uses OpenAI to generate musical arrangements that layer on top of your MIDI input.
## Tech stack

- Next.js App Router (React)
- Postgres via Prisma
- OpenAI API for AI music composition
- @tonejs/midi for MIDI parsing and generation
## Setup

- Install dependencies:

  ```bash
  npm install
  ```

- Create a Postgres DB and set environment variables:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and set:

  - `DATABASE_URL` – your Postgres connection string
  - `OPENAI_API_KEY` – your OpenAI API key for AI composition

- Generate the Prisma client and run migrations:

  ```bash
  npm run prisma:generate
  npm run prisma:migrate
  ```

- Start the dev server:

  ```bash
  npm run dev
  ```

## API

- `POST /api/jobs` – accepts `multipart/form-data` with:
  - `midiFile` (File)
  - `genre` (string)
  - `subgenre` (string)
  - `instruments` (JSON array of instrument ids)
  - `sections` (JSON array of section configs)
- `POST /api/jobs/:jobId/compose` – runs the AI composer to generate musical arrangements
- `GET /api/jobs/:jobId` – job status + metadata
- `GET /api/jobs/:jobId/download` – streams the output MIDI
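A client would drive these endpoints roughly as follows. This sketch builds the `multipart/form-data` body for `POST /api/jobs`; the genre values, instrument ids, and section fields are illustrative assumptions, as is the `jobId` field in the response body:

```typescript
// Build the multipart body expected by POST /api/jobs.
function buildJobForm(midiFile: Blob): FormData {
  const form = new FormData();
  form.append("midiFile", midiFile, "input.mid");
  form.append("genre", "electronic");
  form.append("subgenre", "synthwave");
  form.append("instruments", JSON.stringify(["bass", "pads"]));
  form.append(
    "sections",
    JSON.stringify([{ name: "drop", startTime: 30, endTime: 60, instruments: ["bass"] }])
  );
  return form;
}

// Send it and return the new job's id (assumes the response carries a jobId).
async function createJob(midiFile: Blob): Promise<string> {
  const res = await fetch("/api/jobs", { method: "POST", body: buildJobForm(midiFile) });
  if (!res.ok) throw new Error(`Job creation failed: ${res.status}`);
  const { jobId } = await res.json();
  return jobId;
}
```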
## Storage

- Original uploads: `./uploads`
- Output files: `./outputs`
In production/serverless, you'd likely move these to object storage (S3/R2/GCS) and store URLs in Postgres.
## Roadmap

- Background job queue (BullMQ / pg-boss) + progress polling for long-running compositions
- Authentication + user libraries
- Fine-tuned composition parameters (temperature, creativity level)
- Support for more AI models (Claude, custom music generation models)
- Real-time composition preview