CodeQuest is a web-based learning platform where users learn C# through an adaptive, AI-generated narrative adventure. Code is the game mechanic, the story is the motivation, and AI is the content engine.
The goal is to create an experience that feels like an adventure, not a task list. The backend acts as a deterministic judge — AI generates scenarios and feedback, but the server objectively validates all code.
- Story drives engagement — every coding challenge is embedded in a narrative
- Code solves the situation — the player writes C# to progress the story
- Objective validation — the server (not AI) determines right/wrong
- AI adapts difficulty — problem complexity scales with the player's SkillScore
- Server is source of truth — judged outcomes are stored server-side; expected output is never sent to the client
- No level structure — progression is fluid, not staged
- Adaptive progression — SkillScore (0–100) controls everything
Each game segment follows this cycle:
Story → Problem → Code Solution → Evaluation → Feedback → New Story
The first scenario is generated by Azure OpenAI using the player's SkillScore. Subsequent scenarios continue the narrative using the ChapterSummary from the previous chapter to maintain story continuity. The AI returns a structured JSON response containing a title, story text, problem description, method signature, and a summary for the next chapter. The session and scenario are persisted in PostgreSQL via EF Core. If the AI call fails, a built-in fallback scenario is used.
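The structured JSON response described above can be sketched as a TypeScript interface. This is a hypothetical shape based on the fields listed in this document; the exact property names used by the server are assumptions.

```typescript
// Hypothetical shape of the AI's structured scenario response; property
// names are assumptions, the fields themselves come from the description above.
interface ScenarioResponse {
  title: string;
  storyText: string;
  problemDescription: string;
  methodSignature: string;
  chapterSummary: string; // carried into the next chapter for continuity
}

// Example instance, mirroring the GetSword scenario used later in this document.
const example: ScenarioResponse = {
  title: 'The Locked Gate',
  storyText: 'You reach a gate that only opens for a blade...',
  problemDescription: 'Return the sword from your backpack.',
  methodSignature: 'string GetSword(List<string> backpack)',
  chapterSummary: 'The player searched their backpack at the gate.',
};
```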
The frontend displays the story text, problem description, and a method skeleton using the Monaco Editor. Concrete input is shown clearly to the player. Expected output is never shown to the client.
```csharp
public string GetSword(List<string> backpack)
{
    // your code here
}
```

backpack = ["apple", "sword", "rope"]
The player writes only the method body and submits it to the backend.
The backend:
- Wraps the method body in a complete class
- Compiles with Roslyn
- If compile error → returns diagnostics (CompileError)
- If it compiles → executes with the concrete input
- If a runtime exception occurs → returns the error message (RuntimeError)
- Compares the return value against the expected output
- Sets the outcome: Success or Failure
- Persists the judged outcome in the session
All validation happens server-side. The client is purely UX. The judged outcome is stored server-side and used when advancing to the next chapter.
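The judging steps above can be modelled as a small decision function. The real pipeline compiles C# with Roslyn; this TypeScript sketch only illustrates the decision logic, with a hypothetical `RunResult` input type. The four outcome names come from this document.

```typescript
// Illustrative sketch of the server-side judging decision described above.
// RunResult is a hypothetical summary of the compile/execute steps.
type Outcome = 'Success' | 'Failure' | 'CompileError' | 'RuntimeError';

interface RunResult {
  compiled: boolean;     // did the wrapped class compile?
  threw: boolean;        // did execution raise a runtime exception?
  returnValue?: string;  // the method's return value, if it ran
}

function judge(run: RunResult, expected: string): Outcome {
  if (!run.compiled) return 'CompileError'; // diagnostics go back to the client
  if (run.threw) return 'RuntimeError';     // exception message goes back
  // Instance-based problems: compare against the single expected output.
  return run.returnValue === expected ? 'Success' : 'Failure';
}
```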
Note on execution hardening: User code is compiled with Roslyn and executed in-process via reflection. Timeout and memory limit options (CodeExecutionOptions) are defined in the configuration but are not yet enforced at runtime. There is no process-level sandboxing or isolation at this stage. Production-grade hardening — including execution timeouts, memory limits, and an isolated code runner service — is not yet implemented.
After a judged outcome, the player calls the advance endpoint. The backend increments the chapter counter, generates the next scenario via Azure OpenAI (using the current narrative context and outcome), persists the new scenario, and returns it to the client.
Note on incomplete progression features:
- Pedagogical feedback is currently a placeholder; the AI infrastructure for generating it exists but the progression service does not yet call it.
- SkillScore is part of the session model and the API response, but recalculation based on outcomes is not yet implemented. The score remains unchanged during the session.
- StoryBranch exists in the domain model and is included in the session, but the progression logic does not yet update it based on the outcome.
All problems are instance-based:
- Each problem has a concrete input and a concrete expected output
- No general algorithm is required
- Only one correct answer per scenario
Example:
Signature: string GetSword(List<string> backpack)
Input: ["apple", "sword", "rope"]
Expected: "sword"
SkillScore is an integer between 0–100 that controls problem complexity:
| Score Range | Complexity |
|---|---|
| 0–20 | Simple lists and strings |
| 20–40 | Loops and conditionals |
| 40–60 | Collections and filtering |
| 60–80 | More complex logic |
| 80–100 | Multiple parameters and advanced types |
Note: SkillScore recalculation based on outcomes is not yet implemented. The value is set at session creation and passed to the AI for scenario generation, but it does not change during play.
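The table above can be read as a simple tiering function. The sketch below is illustrative only; the boundary handling (inclusive upper bounds) is an assumption, since the ranges in the table overlap at their endpoints.

```typescript
// Hypothetical mapping from SkillScore (0–100) to the complexity tiers
// in the table above; inclusive upper bounds are an assumption.
function complexityTier(skillScore: number): string {
  if (skillScore <= 20) return 'Simple lists and strings';
  if (skillScore <= 40) return 'Loops and conditionals';
  if (skillScore <= 60) return 'Collections and filtering';
  if (skillScore <= 80) return 'More complex logic';
  return 'Multiple parameters and advanced types';
}
```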
Each step has two possible outcomes:
| Branch | Effect |
|---|---|
| Success | Optimal narrative progression |
| Failure | Alternative narrative progression |
Both paths always lead forward. The player never gets stuck. The story is consequence-based, not punishment-based.
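Since the progression logic does not yet update StoryBranch, the mapping is not implemented server-side; this is a sketch of what it could look like, using the two branches from the table above.

```typescript
// Hypothetical sketch: both outcomes advance the story, the outcome
// only selects which narrative path is taken. Not yet implemented server-side.
type StepOutcome = 'Success' | 'Failure';
type StoryBranch = 'Optimal' | 'Alternative';

function nextBranch(outcome: StepOutcome): StoryBranch {
  // The player never gets stuck: every outcome maps to a forward branch.
  return outcome === 'Success' ? 'Optimal' : 'Alternative';
}
```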
Note: StoryBranch exists in the domain model and is persisted with the session, but the progression logic does not yet update it based on the outcome.
| Layer | Technology | Responsibility |
|---|---|---|
| Frontend | React + TypeScript + Vite | Monaco Editor, story rendering, submit |
| Backend | .NET 10 Web API | Session management, Roslyn compilation, server-side execution, AI orchestration |
| Database | PostgreSQL + EF Core | Session and scenario persistence |
| AI | Azure OpenAI (o4-mini) | Story text, problem definition, scenario generation |
```
CodeQuest.Server/
├── CodeQuest.Api/                 ← Web API layer
│   ├── Controllers/               ← SessionsController, SubmissionsController, ProgressionController
│   ├── Options/                   ← CodeExecutionOptions
│   └── Program.cs
├── CodeQuest.Application/         ← Application/business logic layer
│   ├── Configuration/             ← AzureOpenAiOptions, ExternalApiOptions
│   ├── Data/
│   │   ├── Seed/                  ← StoreSeeds (fallback scenario)
│   │   └── Stores/                ← DbSessionStore, DbScenarioStore (active); legacy in-memory stores also present
│   ├── Interfaces/                ← Service interfaces
│   └── Services/                  ← Service implementations
└── CodeQuest.Domain/              ← Domain models
    ├── Dtos/                      ← Request/response DTOs
    └── Models/                    ← Enums, GameSession, Scenario

CodeQuest.Client/                  ← React + TypeScript + Vite frontend
└── src/
    ├── components/                ← EditorPanel, ScenarioPanel, FeedbackPanel, etc.
    ├── hooks/                     ← useAppState
    └── pages/                     ← LandingPage, StoryPage
```
| Method | Endpoint | Description |
|---|---|---|
| POST | /sessions | Create a new game session and generate the first scenario |
| GET | /sessions/{sessionId}/scenario | Get the current scenario (public DTO — expected output is not included) |
| POST | /sessions/{sessionId}/submit | Submit a solution for server-side validation |
| POST | /sessions/{sessionId}/advance | Advance to the next chapter after a judged outcome |
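A thin frontend client for these endpoints could look like the sketch below. The base URL matches the local development setup described later; the request body shape for submissions is an assumption, not the documented contract.

```typescript
// Hypothetical thin client for the endpoints above. The submission body
// shape ({ code }) is an assumption; the routes come from the table.
const base = 'http://localhost:5255';

const createSessionUrl = () => `${base}/sessions`;
const scenarioUrl = (id: string) => `${base}/sessions/${id}/scenario`;
const submitUrl = (id: string) => `${base}/sessions/${id}/submit`;
const advanceUrl = (id: string) => `${base}/sessions/${id}/advance`;

// Example: submit a method body for server-side validation.
async function submitSolution(id: string, code: string): Promise<unknown> {
  const res = await fetch(submitUrl(id), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code }),
  });
  return res.json();
}
```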
- Story first, code second
- Input is shown clearly
- Expected output is never shown
- Compile errors are shown immediately
- Feedback is shown before the next story segment
- The flow should feel fast and responsive
The player should feel: "I'm solving the situation with code."
Implemented and working:
- C# only
- Instance-based problems
- Session and scenario persisted in PostgreSQL via EF Core
- AI-generated scenarios via Azure OpenAI
- Server-side code execution via Roslyn (in-process)
- Objective outcome judgment (Success, Failure, CompileError, RuntimeError)
- Judged outcome stored server-side; client receives only the validation result
- End-to-end game loop: create session → generate scenario → submit → judge → persist outcome → advance
- Monaco Editor (no IntelliSense)
In progress / not yet fully implemented:
- AI-generated pedagogical feedback (infrastructure present; progression returns a placeholder string)
- SkillScore recalculation based on outcome
- StoryBranch update based on outcome
- Execution timeout and memory limit enforcement
- Process-level sandboxing / isolated code runner
Not included:
- Full RPG mechanics (no inventory, health, or permanent attributes)
- General algorithm platform
- Multiplayer
- Level-based course structure
- .NET 10 SDK
- Node.js (for the frontend)
- Docker (for the database)
- An Azure OpenAI resource with a deployed model
There are two ways to provide the database for local development. User secrets take priority over appsettings.Development.json, so whichever option you choose, the backend will pick up the right connection string automatically.
appsettings.Development.json already contains a connection string pointing to a local PostgreSQL container. Start it from the repository root:
```bash
docker compose up -d
```

This starts a PostgreSQL 16 container (codequest-postgres) on port 5432 with the following defaults:
| Setting | Value |
|---|---|
| Host | localhost |
| Port | 5432 |
| Database | codequest_db |
| Username | codequest |
| Password | codequest |
To stop the container:
```bash
docker compose down
```

The project has a shared PostgreSQL instance hosted on Neon. If you have been given the connection string, store it as a user secret so it overrides the local default:
```bash
cd CodeQuest.Server/CodeQuest.Api
dotnet user-secrets set "ConnectionStrings:DefaultConnection" "<your-neon-connection-string>"
```

No Docker container is needed when using this option.
Install the EF Core CLI tool if you haven't already:
```bash
dotnet tool install --global dotnet-ef
```

Apply all pending migrations to create or update the database schema:

```bash
cd CodeQuest.Server
dotnet ef database update --project CodeQuest.Application --startup-project CodeQuest.Api
```

To add a new migration after making model changes:

```bash
cd CodeQuest.Server
dotnet ef migrations add <MigrationName> --project CodeQuest.Application --startup-project CodeQuest.Api
```

Configure credentials using user secrets. At minimum you need the Azure OpenAI keys; add the database connection string only if you are using the Neon instance instead of the local Docker container (see Option B above).
```bash
cd CodeQuest.Server/CodeQuest.Api

# Azure OpenAI (required)
dotnet user-secrets set "AzureOpenAi:Endpoint" "https://<your-resource>.openai.azure.com/"
dotnet user-secrets set "AzureOpenAi:ApiKey" "<your-api-key>"
dotnet user-secrets set "AzureOpenAi:DeploymentName" "<your-deployment-name>"

# Database connection string (only needed when using Neon instead of Docker)
dotnet user-secrets set "ConnectionStrings:DefaultConnection" "<your-neon-connection-string>"
```

Then build and run the API:

```bash
cd CodeQuest.Server
dotnet build
dotnet run --project CodeQuest.Api
```

The API will be available at http://localhost:5255. An interactive API reference (Scalar) is served at http://localhost:5255/scalar in development mode.
```bash
cd CodeQuest.Client
npm install
npm run dev
```

The Vite dev server proxies /sessions requests to the backend at http://localhost:5255, so no additional configuration is needed for local development.
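The dev proxy described above would typically be declared in vite.config.ts; a minimal sketch, assuming the default Vite proxy shorthand and the backend port from this document:

```typescript
// vite.config.ts — minimal sketch of the dev proxy described above.
// Forwards /sessions requests from the Vite dev server to the backend.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      '/sessions': 'http://localhost:5255',
    },
  },
});
```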
The frontend currently uses a hardcoded relative path (/sessions), which works via the Vite dev proxy during local development. For production deployments where the frontend and backend are served from different origins, the frontend API base URL would need to be made configurable (e.g., via a VITE_API_BASE_URL environment variable). This is not yet implemented.
This project is an MVP in active development. The following summarises what is already in place and what still needs to be added before it is cloud-ready.
Already in place:
- PostgreSQL-backed persistence (sessions and scenarios via EF Core)
- Shared Neon PostgreSQL instance (usable by all developers via user secrets)
- Docker Compose for local database setup
- EF Core migrations for schema management
- Vite dev proxy for local frontend–backend integration
Not yet in place:
- CORS configuration (required for cross-origin frontend/API deployments)
- Health check endpoint
- Backend Dockerfile
- CI/CD pipeline
- Azure Container Apps (or equivalent) deployment configuration
- Production API base URL support in the frontend (VITE_API_BASE_URL)
- Isolated code runner service (currently user code runs in-process)
This project is part of the Salt community.