
magicvoiceai/MagicVoice


Magic Voice (Smart Serve Project)

Official Website: https://magicvoice.online

Magic Voice is an advanced AI voice cloning and TTS (Text-to-Speech) platform. Experience the power of AI voice technology directly on our official website.

This is a multi-component project with:

  • Frontend: Vue 3 application
  • Backend: Spring Boot REST API
  • LLM Service: Python Flask application

Project Structure

smart-serve/
├── frontend/                 # Vue 3 frontend
│   ├── src/
│   │   ├── components/       # Vue components
│   │   ├── views/            # Page views
│   │   ├── assets/           # Static assets
│   │   ├── router/           # Vue Router configuration
│   │   └── store/            # Pinia state management
│   ├── public/
│   ├── package.json          # Dependencies and scripts
│   └── vite.config.js        # Vite configuration
├── backend/                  # Spring Boot backend
│   ├── src/main/java/        # Java source files
│   ├── src/main/resources/   # Configuration files
│   └── pom.xml               # Maven dependencies
└── llm-python/              # Python LLM service
    ├── api/                 # Flask application
    ├── services/            # LLM service logic
    ├── models/              # ML models (if any)
    ├── utils/               # Utility functions
    ├── config/              # Configuration files
    └── requirements.txt     # Python dependencies

Getting Started

Frontend Setup

  1. Navigate to the frontend directory:
     cd frontend
  2. Install dependencies:
     npm install
  3. Start the development server:
     npm run dev

The frontend will be accessible at http://localhost:3000
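Vite's dev server defaults to port 5173, so serving at port 3000 implies a server.port setting in vite.config.js. A minimal sketch of such a config (the actual contents of this project's file are an assumption):

```javascript
// vite.config.js — hypothetical sketch; the project's real config may differ
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  server: {
    port: 3000, // Vite's default is 5173; this matches the URL above
  },
})
```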

Backend Setup

  1. Navigate to the backend directory:
     cd backend
  2. Build the project:
     mvn clean install
  3. Run the application:
     mvn spring-boot:run

The backend will be accessible at http://localhost:8080 (see server.port under Configuration)

LLM Service Setup

  1. Navigate to the llm-python directory:
     cd llm-python
  2. Create a virtual environment:
     python -m venv venv
  3. Activate the virtual environment:
     # On Windows
     venv\Scripts\activate
     # On macOS/Linux
     source venv/bin/activate
  4. Install dependencies:
     pip install -r requirements.txt
  5. Run the application:
     python api/app.py

The LLM service will be accessible at http://localhost:5000
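The entry point api/app.py is not shown in this README; a minimal Flask sketch consistent with the host/port settings below might look like the following (the route names and payload shape are assumptions, not the project's actual API):

```python
# api/app.py — hypothetical minimal sketch, not the project's actual code
import os

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/health")
def health():
    # Simple liveness check for the service
    return jsonify({"status": "ok"})

@app.route("/generate", methods=["POST"])
def generate():
    # Placeholder: the real service would invoke the configured LLM here
    prompt = request.get_json(force=True).get("prompt", "")
    return jsonify({"model": os.getenv("LLM_MODEL_NAME", "gpt2"), "prompt": prompt})

if __name__ == "__main__":
    app.run(host=os.getenv("API_HOST", "0.0.0.0"), port=int(os.getenv("API_PORT", "5000")))
```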

Configuration

Environment variables for each service:

Frontend (environment variables in .env)

Backend (application.properties)

  • server.port=8080 (Port to run the server)
  • spring.datasource.url=jdbc:h2:mem:testdb (Database URL)

LLM Service (.env file)

  • LLM_MODEL_NAME=gpt2 (Default model name)
  • API_HOST=0.0.0.0 (Host for the API)
  • API_PORT=5000 (Port for the API)
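The LLM service variables above can be read at startup with sensible defaults. A standard-library-only sketch (the helper name and module path are hypothetical; loading the .env file itself would additionally require something like python-dotenv):

```python
# config/settings.py — hypothetical helper; keys mirror the .env variables above
import os

def load_llm_config(env=os.environ):
    """Read service settings from environment variables, with defaults."""
    return {
        "model_name": env.get("LLM_MODEL_NAME", "gpt2"),
        "host": env.get("API_HOST", "0.0.0.0"),
        "port": int(env.get("API_PORT", "5000")),  # ports must be integers
    }
```

Passing the environment mapping in explicitly keeps the helper easy to test without mutating os.environ.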
