An AI-powered instructional design system based on the ADDIE model for automated course creation and evaluation.
```bibtex
@misc{yao2025instructionalagentsllmagents,
  title={Instructional Agents: Reducing Teaching Faculty Workload through Multi-Agent Instructional Design},
  author={Yao, Huaiyuan and Xu, Wanpeng and Turnau, Justin and Kellam, Nadia and Wei, Hua},
  year={2025},
  eprint={2508.19611},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2508.19611},
}
```
[2026.1.6] Instructional Agents has been accepted to the EACL 2026 Main Conference!
[2026.1.6] Release v1.0.0 - Initial release with core features - Thanks to all the contributors! ❤️
History releases
[2026.1.4] v0.1.0 - Docker deployment and web interface support
| Feature | Description |
|---|---|
| 🤖 Multi-Agent Collaboration | Multiple specialized LLM agents working together based on the ADDIE instructional design model |
| 📚 Automated Course Generation | Generate complete course materials including syllabus, slides, scripts, and assessments |
| 🎯 Catalog Mode | Use structured catalog files to guide course generation with student profiles and institutional requirements |
| 🤝 Copilot Mode | Interactive mode for providing feedback during generation at each ADDIE phase |
| 📊 Real-time Progress | Monitor generation progress with real-time logs, progress bars, and file updates |
| 🌐 Web Interface | User-friendly web interface for course generation, progress monitoring, and file management |
| 🔌 Multiple Usage Methods | Support for web interface, command line, and RESTful API |
| 📄 LaTeX/PDF Output | Generate professional LaTeX slides and compile to PDF format |
| ✅ Automatic Evaluation | Built-in evaluation system for assessing generated course materials |
This guide will walk you through the complete workflow from setup to viewing results.
Step 1: Environment Setup
- Docker and Docker Compose installed
  - Check installation: `docker --version` and `docker-compose --version`
  - Install: Docker Desktop
- OpenAI API Key
  - Get one from: https://platform.openai.com/api-keys
```bash
# Clone the repository (if not already done)
git clone <repository-url>
cd instructional_agents

# Create environment variables file
cp .env.example .env

# Edit .env file and add your OPENAI_API_KEY
# OPENAI_API_KEY=your_api_key_here
# API_PORT=8000
```

Note: You can also configure the API key directly in the web interface (see Step 2.2). If you skip setting it in `.env`, you'll need to enter it in the frontend.
```bash
# Option 1: Use the start script (recommended)
./start.sh

# Option 2: Start manually
docker-compose up -d

# Verify service is running
curl http://localhost:8000/health
# Should return: {"status":"healthy","version":"1.0.0",...}
```

Tip: If port 8000 is already in use, modify `API_PORT` in your `.env` file.
Step 2: Access Web Interface
Option A: Direct file access (simplest)

```bash
# Open frontend/index.html directly in your browser
open frontend/index.html  # macOS
# or double-click frontend/index.html in your file manager
```

Option B: Local server (recommended for better CORS support)

```bash
# Using Python
cd frontend
python -m http.server 8080
# Then open http://localhost:8080/index.html in your browser
```

- In the web interface, locate the "API Configuration" section at the top
- Enter your OpenAI API Key in the input field
- Click "Save API Key" to save it (stored locally in your browser)
- The status indicator will show "✅ API Key Configured" when successful
Note: Your API key is only stored in your browser's local storage and never sent to any server except OpenAI during course generation.
- Fill in the course configuration form:
  - Course Name (required): e.g., "Introduction to Machine Learning"
  - Model Selection: Choose from GPT-4o Mini (recommended), GPT-4o, or GPT-4 Turbo
  - Experiment Name: Leave as "default" or specify a custom name
  - Copilot Mode: Enable for interactive feedback during generation (optional)
  - Catalog Mode:
    - Select "Not Use" for basic generation
    - Select "Upload Catalog File" to upload a custom catalog JSON
    - Select "Use Default Catalog" to use the default catalog
- Click "Generate Course" to start the task
- Once generation begins, the interface shows the following (see Steps 3 and 4 for details):
  - Progress bar showing completion percentage
  - Current stage information
  - Real-time log stream
Step 3: Monitor Progress and Logs
If you need to view logs outside the web interface:
```bash
# View container logs in real-time
docker-compose logs -f api

# View last 100 lines
docker-compose logs --tail=100 api

# View logs for a specific time range
docker-compose logs --since 30m api
```

Step 4: View Generated Results
Once generation starts, the "Generation Results" section will appear showing:
- File Location:
  - Displays the local path where files are saved
  - Example: `/Users/your_username/PycharmProjects/instructional_agents/exp/your_experiment_name/`
  - Quick actions:
    - 📋 Copy Path: Copy the path to clipboard
    - 📂 Open Directory: Open the directory in Finder/Explorer
- File List (updates incrementally):
  - Files appear as they are generated (no need to wait for completion)
  - Each file shows:
    - File icon based on type (📝 .md, 📄 .tex, 📕 .pdf, 📊 .json)
    - File name and size
    - 🆕 New badge for newly generated files
    - 📥 Download button for immediate download
- File Organization:
  - Files are grouped by directory
  - Foundation files (syllabus, goals, etc.) in the root
  - Chapter materials in `chapter_1/`, `chapter_2/`, etc.
Generated files are saved in the exp/ directory in your project folder:
```bash
# List all experiments
ls exp/

# View a specific experiment's structure
ls -R exp/your_experiment_name/

# Open in Finder (macOS)
open exp/your_experiment_name/

# Open in Explorer (Windows)
explorer exp\your_experiment_name\

# View course syllabus
cat exp/your_experiment_name/result_syllabus_design.md

# View generated slides PDF
open exp/your_experiment_name/chapter_1/slides.pdf
```

File Structure:
```
exp/{experiment_name}/
├── result_instructional_goals.md    # Learning objectives
├── result_resource_assessment.md    # Resource assessment
├── result_target_audience.md        # Target audience analysis
├── result_syllabus_design.md        # Course syllabus (⭐ important)
├── result_assessment_planning.md    # Assessment planning
├── result_final_exam_project.md     # Final project design
├── processed_chapters.json          # Chapter metadata
├── statistics.json                  # Generation statistics
│
├── chapter_1/                       # Chapter 1 materials
│   ├── slides.tex                   # LaTeX source
│   ├── slides.pdf                   # Compiled PDF slides (⭐ ready to use)
│   ├── script.md                    # Presentation script
│   ├── assessment.md                # Assessment materials
│   └── statistics_slides_chapter_1.json  # Chapter statistics
│
├── chapter_2/                       # Chapter 2 materials
│   └── ...
└── ...
```
Tip: Files are generated incrementally. You can download or view them as soon as they appear, without waiting for the entire generation to complete.
For detailed file descriptions, see Generated Files Guide.
Step 5: Next Steps
See Documentation section below for detailed guides and references.
For developers who want to run the system locally without Docker:
- Python 3.11+
- pip
- LaTeX (for PDF generation)
  - macOS: `brew install --cask mactex`
  - Ubuntu: `sudo apt-get install texlive-full`
  - Windows: Install MiKTeX

Install Python dependencies:

```bash
pip install -r requirements.txt
```

Option A: Using config.json
```json
{
  "OPENAI_API_KEY": "your_openai_api_key_here"
}
```

Option B: Using environment variable

```bash
export OPENAI_API_KEY=your_api_key_here
```

Start the server:

```bash
# Start the API server
python api_server.py

# Or use uvicorn directly with auto-reload
uvicorn api_server:app --host 0.0.0.0 --port 8000 --reload
```

The API will be available at http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
The easiest way to use the system. See Step 2 above for detailed instructions.
Features:
- 📝 Visual course configuration form
- 📊 Real-time progress monitoring
- 📁 Result file browsing and download
- 📤 Catalog file upload and management
- 📜 Real-time log streaming
Entry Point: run.py – Main workflow entry point
```bash
# Simple course generation
python run.py "Introduction to Machine Learning"

# With specific model
python run.py "Data Structures" --model gpt-4o-mini

# With experiment name
python run.py "Web Development" --exp web_dev_v1

# Interactive copilot mode
python run.py "Database Systems" --copilot

# Use catalog mode
python run.py "Software Engineering" --catalog

# Use specific catalog file
python run.py "AI Fundamentals" --catalog ai_catalog

# Combine catalog and copilot
python run.py "Educational Psychology" --copilot --catalog edu_psy
```

Command Line Arguments:
```
python run.py <course_name> [OPTIONS]

Required:
  course_name        Name of the course to design

Options:
  --copilot          Enable interactive copilot mode
  --catalog [name]   Use structured data from catalog/ directory
                     (optional: specify catalog name without '.json')
  --model MODEL      OpenAI model to use (default: gpt-4o-mini)
  --exp EXP_NAME     Experiment name for saving output (default: exp1)
```

API Server: api_server.py – RESTful API service
```bash
# Start API server first (if not using Docker)
python api_server.py

# Generate a course
curl -X POST http://localhost:8000/api/course/generate \
  -H "Content-Type: application/json" \
  -H "X-OpenAI-API-Key: your_api_key_here" \
  -d '{
    "course_name": "Introduction to Machine Learning",
    "model_name": "gpt-4o-mini",
    "exp_name": "ml_intro_v1"
  }'

# Check task status
curl http://localhost:8000/api/course/status/{task_id}

# Get result files
curl http://localhost:8000/api/course/results/{task_id}/files

# Download a file
curl http://localhost:8000/api/course/results/{task_id}/download/chapter_1/slides.pdf \
  --output slides.pdf
```

For complete API documentation, see API Documentation.
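The same workflow can also be scripted. The sketch below uses only the Python standard library; the endpoints and headers are taken from the curl examples above, but the response field names (`task_id`, `status`) are assumptions about the JSON shape, so verify them against http://localhost:8000/docs before relying on them.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"

def build_generate_request(course_name, model_name="gpt-4o-mini",
                           exp_name="exp1", api_key="your_api_key_here"):
    """Construct the POST request for /api/course/generate
    (endpoint and headers as in the curl examples)."""
    payload = json.dumps({
        "course_name": course_name,
        "model_name": model_name,
        "exp_name": exp_name,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/api/course/generate",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-OpenAI-API-Key": api_key,
        },
        method="POST",
    )

def submit_and_check(course_name, api_key):
    """Submit a generation task, then query its status once.

    Assumes the generate response contains a 'task_id' field --
    an assumption about the API, not documented here.
    """
    req = build_generate_request(course_name, api_key=api_key)
    with urllib.request.urlopen(req) as resp:
        task_id = json.loads(resp.read())["task_id"]
    status_url = f"{API_BASE}/api/course/status/{task_id}"
    with urllib.request.urlopen(status_url) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(submit_and_check("Introduction to Machine Learning", "sk-..."))
```

Polling the status endpoint in a loop (with a sleep between requests) would turn this into a simple blocking client.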
| Module | Description | Usage |
|---|---|---|
| Course Generation | Generate complete course materials based on ADDIE model | Web interface, CLI (run.py), or RESTful API |
| Catalog Mode | Use structured catalog files for guided generation | --catalog flag or upload in web interface |
| Copilot Mode | Interactive feedback during generation | --copilot flag in CLI or enable in web interface |
| Evaluation | Automatic assessment of generated materials | python evaluate.py --exp <exp_name> |
| Web Interface | Visual interface for course generation | Open frontend/index.html in browser |
| API Server | RESTful API for programmatic access | python api_server.py or Docker |
Catalog files provide structured input data to guide the course generation process. They include:
- Student profiles and backgrounds
- Instructor preferences and style
- Course structure requirements
- Assessment design preferences
- Teaching constraints
- Institutional requirements
Using Catalogs:
```bash
# Use default catalog
python run.py "Software Engineering" --catalog

# Use a specific catalog file (without .json extension)
python run.py "AI Fundamentals" --catalog ai_catalog
# System looks for: catalog/ai_catalog.json

# Upload catalog via web interface:
# select "Upload Catalog File" and upload your JSON file
```

See API Documentation for catalog format details.
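To give a sense of the shape, a catalog file might look like the sketch below. Every field name here is illustrative, not the project's actual schema; consult the API Documentation for the real format.

```json
{
  "course_structure": {
    "weeks": 14,
    "lectures_per_week": 2
  },
  "student_profile": {
    "level": "undergraduate",
    "background": "introductory programming experience"
  },
  "instructor_preferences": {
    "teaching_style": "project-based",
    "assessment_design": "weekly quizzes plus a final project"
  },
  "institutional_requirements": {
    "accreditation_notes": "course must align with program outcomes"
  }
}
```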
Interactive mode that prompts for feedback after each phase of the ADDIE workflow:
- Analysis phase: Review and provide feedback on learning goals, resource assessment, target audience
- Design phase: Review and refine syllabus design, assessment planning, final project
- Development phase: Review and adjust chapter materials as they're generated
```bash
python run.py "Advanced Algorithms" --copilot --exp algo_course_v2
```

Entry Point: evaluate.py – Automatic assessment and scoring
```bash
# Evaluate a specific experiment
python evaluate.py --exp web_dev_v1
```

Evaluation results are saved in the eval/{experiment_name}/ directory.
For long-running tasks, run in the background:
```bash
# Run in background with log file
nohup python run.py "Advanced Machine Learning" --exp ml_advanced > logs/ml_course.log 2>&1 &

# Monitor progress
tail -f logs/ml_course.log

# Check process status
ps aux | grep "python run.py"
```

Full workflow with catalog and evaluation:

```bash
# Step 1: Generate course using catalog
python run.py "Python Fundamentals" \
  --catalog python_catalog \
  --model gpt-4o \
  --exp py_course_v1

# Step 2: Evaluate results
python evaluate.py --exp py_course_v1

# Step 3: Review generated materials
open exp/py_course_v1/result_syllabus_design.md
open exp/py_course_v1/chapter_1/slides.pdf
```

Interactive copilot session:

```bash
python run.py "Advanced Algorithms" --copilot --exp algo_course_v2
# You'll be prompted for feedback after each phase:
# - Analysis: feedback on goals, resources, audience
# - Design: feedback on syllabus, assessments
# - Development: feedback on chapter materials
```

| API Documentation | Docker Deployment | Generated Files Guide |
|---|---|---|
| Complete API reference and endpoints | Docker setup and deployment guide | Detailed description of generated files |
| Workflow Documentation | Development Guide | |
|---|---|---|
| System workflow and agent collaboration details | Development and debugging documentation | |
How to configure API key?
Checklist
- Get OpenAI API key from https://platform.openai.com/api-keys
- Configure in `.env` file or web interface

Solutions
- Option 1: Set in `.env` file: `OPENAI_API_KEY=your_key_here`
- Option 2: Configure in web interface (stored in browser local storage only)
Port 8000 already in use?
Problem
Starting the service shows "port already in use" error.
Solution
```bash
# macOS/Linux: Find and kill the process
lsof -i :8000
kill -9 <PID>

# Or change port in .env file
API_PORT=8001
```

How to use catalog files?
Checklist
- Catalog files should be in JSON format
- Place catalog files in the `catalog/` directory

Solutions
- Default catalog: Use `--catalog` without a value to use `catalog/default_catalog.json`
- Custom catalog: Use `--catalog my_catalog` to use `catalog/my_catalog.json`
- Web interface: Upload the catalog file directly in the web interface
Where are generated files saved?
Answer
Generated files are saved in exp/{experiment_name}/ directory:
- Foundation files (syllabus, goals, etc.) in the root
- Chapter materials in `chapter_1/`, `chapter_2/`, etc.
- Files are generated incrementally and can be downloaded as soon as they appear
Web interface cannot connect to backend?
Checklist
- Confirm backend is running (visit http://localhost:8000/docs or http://localhost:8000/health)
- Check browser console for error messages
- Verify API address configuration
Solution
- Docker: Ensure the Docker container is running: `docker-compose ps`
- Local: Ensure the API server is running: `python api_server.py`
- Check that the port matches (default: 8000)
What models are supported?
Answer
Currently supports OpenAI models:
- GPT-4o Mini (recommended, cost-effective)
- GPT-4o
- GPT-4 Turbo
Configure via model selection in web interface or --model flag in CLI.
MIT License