salt-community/CodeQuest

CodeQuest — AI-Driven Story-Based Coding Platform

CodeQuest is a web-based learning platform where users learn C# through an adaptive, AI-generated narrative adventure. Code is the game mechanic, the story is the motivation, and AI is the content engine.

Vision

The goal is to create an experience that feels like an adventure, not a task list. The backend acts as a deterministic judge — AI generates scenarios and feedback, but the server objectively validates all code.

Core Principles

  1. Story drives engagement — every coding challenge is embedded in a narrative
  2. Code solves the situation — the player writes C# to progress the story
  3. Objective validation — the server (not AI) determines right/wrong
  4. AI adapts difficulty — problem complexity scales with the player's SkillScore
  5. Server is source of truth — judged outcomes are stored server-side; expected output is never sent to the client
  6. No level structure — progression is fluid, not staged
  7. Adaptive progression — SkillScore (0–100) controls everything

Game Loop

Each game segment follows this cycle:

Story → Problem → Code Solution → Evaluation → Feedback → New Story

Step 1 — Scenario Generation

The first scenario is generated by Azure OpenAI using the player's SkillScore. Subsequent scenarios continue the narrative using the ChapterSummary from the previous chapter to maintain story continuity. The AI returns a structured JSON response containing a title, story text, problem description, method signature, and a summary for the next chapter. The session and scenario are persisted in PostgreSQL via EF Core. If the AI call fails, a built-in fallback scenario is used.
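
For illustration, the structured response might look like the sketch below. The field names and values here are assumptions for readability, not the project's exact contract; check the DTOs in CodeQuest.Domain/Dtos for the real shape.

```json
{
  "title": "The Rusted Gate",
  "storyText": "The gate will only open for the right item from your pack...",
  "problemDescription": "Return the item named \"sword\" from the backpack.",
  "methodSignature": "string GetSword(List<string> backpack)",
  "chapterSummary": "The hero retrieved the sword and approached the gate."
}
```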

Step 2 — Presentation

The frontend displays the story text, problem description, and a method skeleton using the Monaco Editor. Concrete input is shown clearly to the player. Expected output is never shown to the client.

public string GetSword(List<string> backpack)
{
    // your code here
}
backpack = ["apple", "sword", "rope"]

Step 3 — Submit

The player writes only the method body and submits it to the backend.

Step 4 — Server Validation

The backend:

  1. Wraps the method body in a complete class
  2. Compiles with Roslyn
  3. If compile error → returns diagnostics (CompileError)
  4. If it compiles → executes with the concrete input
  5. If a runtime exception occurs → returns error message (RuntimeError)
  6. Compares the return value against the expected output
  7. Sets the outcome: Success or Failure
  8. Persists the judged outcome in the session

All validation happens server-side. The client is purely UX. The judged outcome is stored server-side and used when advancing to the next chapter.
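As a sketch of step 1, wrapping might look something like the following. The class name, template, and using list are assumptions for illustration, not the project's actual code:

```csharp
// Hypothetical sketch of step 1: wrapping the submitted method body in a
// complete, compilable class before handing it to Roslyn. The class name
// "Submission" and the usings are illustrative, not the project's template.
string WrapSubmission(string methodSignature, string methodBody) =>
    $$"""
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Submission
    {
        public {{methodSignature}}
        {
            {{methodBody}}
        }
    }
    """;
```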

Note on execution hardening: User code is compiled with Roslyn and executed in-process via reflection. Timeout and memory limit options (CodeExecutionOptions) are defined in the configuration but are not yet enforced at runtime. There is no process-level sandboxing or isolation at this stage. Production-grade hardening — including execution timeouts, memory limits, and an isolated code runner service — is not yet implemented.
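For context, a minimal Task-based timeout wrapper could look like the sketch below. This is an assumption about future hardening, not existing project code, and it illustrates why it is insufficient on its own: the user code keeps running in the background after a timeout, so real hardening needs process-level isolation.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical sketch of enforcing an execution timeout around an
// in-process invocation. On timeout the task is abandoned, not killed,
// which is why an isolated code runner is still needed.
async Task<object?> RunWithTimeoutAsync(Func<object?> invoke, TimeSpan timeout)
{
    var work = Task.Run(invoke);
    var finished = await Task.WhenAny(work, Task.Delay(timeout));
    if (finished != work)
        throw new TimeoutException("User code exceeded the execution time limit.");
    return await work;
}
```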

Step 5 — Feedback & Next Scenario

After a judged outcome, the player calls the advance endpoint. The backend increments the chapter counter, generates the next scenario via Azure OpenAI (using the current narrative context and outcome), persists the new scenario, and returns it to the client.

Note on incomplete progression features:

  • Pedagogical feedback is currently a placeholder; the AI infrastructure for generating it exists but the progression service does not yet call it.
  • SkillScore is part of the session model and the API response, but recalculation based on outcomes is not yet implemented. The score remains unchanged during the session.
  • StoryBranch exists in the domain model and is included in the session, but the progression logic does not yet update it based on the outcome.

Problem Model (MVP — Instance-Based)

All problems are instance-based:

  • Each problem has a concrete input and a concrete expected output
  • No general algorithm is required
  • Only one correct answer per scenario

Example:

Signature: string GetSword(List<string> backpack)
Input:     ["apple", "sword", "rope"]
Expected:  "sword"
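An illustrative solution (ours, not part of the platform) shows what this means in practice. Because only the return value for the concrete input is compared, even a hardcoded `return "sword";` would be judged Success; the method is written as a standalone function here for brevity.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative player solution for the example scenario above.
// Judging compares only the return value for the concrete input.
string GetSword(List<string> backpack) =>
    backpack.First(item => item == "sword");
```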

SkillScore System

SkillScore is an integer from 0 to 100 that controls problem complexity:

Score range   Complexity
0–20          Simple lists and strings
20–40         Loops and conditionals
40–60         Collections and filtering
60–80         More complex logic
80–100        Multiple parameters and advanced types

Note: SkillScore recalculation based on outcomes is not yet implemented. The value is set at session creation and passed to the AI for scenario generation, but it does not change during play.
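Reading the bands as half-open at the top, the table can be expressed as a lookup like this. It is a sketch of one possible interpretation of the overlapping range boundaries, not project code:

```csharp
// Hypothetical mapping of SkillScore to a complexity band, treating each
// range in the table above as half-open (e.g. 20 falls in "Loops and
// conditionals", not "Simple lists and strings").
string ComplexityFor(int skillScore) => skillScore switch
{
    < 20 => "Simple lists and strings",
    < 40 => "Loops and conditionals",
    < 60 => "Collections and filtering",
    < 80 => "More complex logic",
    _    => "Multiple parameters and advanced types"
};
```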

Story Branching

Each step has two possible outcomes:

Branch    Effect
Success   Optimal narrative progression
Failure   Alternative narrative progression

Both paths always lead forward. The player never gets stuck. The story is consequence-based, not punishment-based.

Note: StoryBranch exists in the domain model and is persisted with the session, but the progression logic does not yet update it based on the outcome.

Architecture

Layer      Technology                  Responsibility
Frontend   React + TypeScript + Vite   Monaco Editor, story rendering, submit
Backend    .NET 10 Web API             Session management, Roslyn compilation, server-side execution, AI orchestration
Database   PostgreSQL + EF Core        Session and scenario persistence
AI         Azure OpenAI (o4-mini)      Story text, problem definition, scenario generation

Project Structure

CodeQuest.Server/
├── CodeQuest.Api/              ← Web API layer
│   ├── Controllers/            ← SessionsController, SubmissionsController, ProgressionController
│   ├── Options/                ← CodeExecutionOptions
│   └── Program.cs
├── CodeQuest.Application/      ← Application/business logic layer
│   ├── Configuration/          ← AzureOpenAiOptions, ExternalApiOptions
│   ├── Data/
│   │   ├── Seed/               ← StoreSeeds (fallback scenario)
│   │   └── Stores/             ← DbSessionStore, DbScenarioStore (active); legacy in-memory stores also present
│   ├── Interfaces/             ← Service interfaces
│   └── Services/               ← Service implementations
└── CodeQuest.Domain/           ← Domain models
    ├── Dtos/                   ← Request/response DTOs
    └── Models/                 ← Enums, GameSession, Scenario

CodeQuest.Client/               ← React + TypeScript + Vite frontend
└── src/
    ├── components/             ← EditorPanel, ScenarioPanel, FeedbackPanel, etc.
    ├── hooks/                  ← useAppState
    └── pages/                  ← LandingPage, StoryPage

API Endpoints

Method   Endpoint                           Description
POST     /sessions                          Create a new game session and generate the first scenario
GET      /sessions/{sessionId}/scenario     Get the current scenario (public DTO — expected output is not included)
POST     /sessions/{sessionId}/submit       Submit a solution for server-side validation
POST     /sessions/{sessionId}/advance      Advance to the next chapter after a judged outcome
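A typical loop against a locally running backend might look like this. The request bodies and field names are illustrative assumptions; the request DTOs in CodeQuest.Domain/Dtos define the exact contract.

```shell
# Create a session; the first scenario is generated as part of this call
curl -X POST http://localhost:5255/sessions

# Fetch the current scenario (expected output is never included)
curl http://localhost:5255/sessions/<sessionId>/scenario

# Submit a method body for server-side validation
# (the "code" field name is an assumption)
curl -X POST http://localhost:5255/sessions/<sessionId>/submit \
  -H "Content-Type: application/json" \
  -d '{"code": "return backpack[1];"}'

# Advance to the next chapter after a judged outcome
curl -X POST http://localhost:5255/sessions/<sessionId>/advance
```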

UX Principles

  • Story first, code second
  • Input is shown clearly
  • Expected output is never shown
  • Compile errors are shown immediately
  • Feedback is shown before the next story segment
  • The flow should feel fast and responsive

The player should feel: "I'm solving the situation with code."

MVP Scope

Implemented and working:

  • C# only
  • Instance-based problems
  • Session and scenario persisted in PostgreSQL via EF Core
  • AI-generated scenarios via Azure OpenAI
  • Server-side code execution via Roslyn (in-process)
  • Objective outcome judgment (Success, Failure, CompileError, RuntimeError)
  • Judged outcome stored server-side; client receives only the validation result
  • End-to-end game loop: create session → generate scenario → submit → judge → persist outcome → advance
  • Monaco Editor (no IntelliSense)

In progress / not yet fully implemented:

  • AI-generated pedagogical feedback (infrastructure present; progression returns a placeholder string)
  • SkillScore recalculation based on outcome
  • StoryBranch update based on outcome
  • Execution timeout and memory limit enforcement
  • Process-level sandboxing / isolated code runner

Not included:

  • Full RPG mechanics (no inventory, health, or permanent attributes)
  • General algorithm platform
  • Multiplayer
  • Level-based course structure

Getting Started

Prerequisites

Database

There are two ways to provide the database for local development. User secrets take priority over appsettings.Development.json, so whichever option you choose, the backend will pick up the right connection string automatically.

Option A — Local PostgreSQL via Docker Compose (default)

appsettings.Development.json already contains a connection string pointing to a local PostgreSQL container. Start it from the repository root:

docker compose up -d

This starts a PostgreSQL 16 container (codequest-postgres) on port 5432 with the following defaults:

Setting    Value
Host       localhost
Port       5432
Database   codequest_db
Username   codequest
Password   codequest

To stop the container:

docker compose down

Option B — Shared Neon PostgreSQL via user secrets (recommended if you have access)

The project has a shared PostgreSQL instance hosted on Neon. If you have been given the connection string, store it as a user secret so it overrides the local default:

cd CodeQuest.Server/CodeQuest.Api
dotnet user-secrets set "ConnectionStrings:DefaultConnection" "<your-neon-connection-string>"

No Docker container is needed when using this option.

Migrations

Install the EF Core CLI tool if you haven't already:

dotnet tool install --global dotnet-ef

Apply all pending migrations to create or update the database schema:

cd CodeQuest.Server
dotnet ef database update --project CodeQuest.Application --startup-project CodeQuest.Api

To add a new migration after making model changes:

cd CodeQuest.Server
dotnet ef migrations add <MigrationName> --project CodeQuest.Application --startup-project CodeQuest.Api

Backend

Configure credentials using user secrets. At minimum you need the Azure OpenAI keys; add the database connection string only if you are using the Neon instance instead of the local Docker container (see Option B above).

cd CodeQuest.Server/CodeQuest.Api

# Azure OpenAI (required)
dotnet user-secrets set "AzureOpenAi:Endpoint" "https://<your-resource>.openai.azure.com/"
dotnet user-secrets set "AzureOpenAi:ApiKey" "<your-api-key>"
dotnet user-secrets set "AzureOpenAi:DeploymentName" "<your-deployment-name>"

# Database connection string (only needed when using Neon instead of Docker)
dotnet user-secrets set "ConnectionStrings:DefaultConnection" "<your-neon-connection-string>"

Then build and run the API:

cd CodeQuest.Server
dotnet build
dotnet run --project CodeQuest.Api

The API will be available at http://localhost:5255. An interactive API reference (Scalar) is served at http://localhost:5255/scalar in development mode.

Frontend

cd CodeQuest.Client
npm install
npm run dev

The Vite dev server proxies /sessions requests to the backend at http://localhost:5255, so no additional configuration is needed for local development.
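The proxy configuration is presumably along these lines in vite.config.ts. This is an assumed shape for illustration only; the repository's actual file is authoritative.

```typescript
// Hypothetical sketch of the dev proxy wiring; the real vite.config.ts
// in CodeQuest.Client may differ.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      // Forward /sessions requests to the local backend during development
      '/sessions': 'http://localhost:5255',
    },
  },
});
```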

API Base URL

The frontend currently uses a hardcoded relative path (/sessions), which works via the Vite dev proxy during local development. For production deployments where the frontend and backend are served from different origins, the frontend API base URL would need to be made configurable (e.g., via a VITE_API_BASE_URL environment variable). This is not yet implemented.

Notes for Cloud Deployment

This project is an MVP in active development. The following summarises what is already in place and what still needs to be added before it is cloud-ready.

Already in place:

  • PostgreSQL-backed persistence (sessions and scenarios via EF Core)
  • Shared Neon PostgreSQL instance (usable by all developers via user secrets)
  • Docker Compose for local database setup
  • EF Core migrations for schema management
  • Vite dev proxy for local frontend–backend integration

Not yet in place:

  • CORS configuration (required for cross-origin frontend/API deployments)
  • Health check endpoint
  • Backend Dockerfile
  • CI/CD pipeline
  • Azure Container Apps (or equivalent) deployment configuration
  • Production API base URL support in the frontend (VITE_API_BASE_URL)
  • Isolated code runner service (currently user code runs in-process)

License

This project is part of the Salt community.
