A tiny sandbox to explore how an agent interprets tasks, applies rules, and changes behavior as signals drift.
This project is part of my Applied Intelligence Systems Series, exploring how intelligent systems behave beneath the UI layer — from signal ingestion to rule evaluation to action selection and feedback.
The goal of this sandbox is to provide a simple, interactive way to see how different variables affect an agent’s behavior:
- Task description
- Rules and constraints
- Context and input signals
- Noise / drift in the environment
- Execution trace and outcomes
The simulation is intentionally small and easy to extend.
The first version will include:
- Input fields for task, rules, and context
- A simple agent “reasoning” trace (steps the agent takes)
- Visual comparison of behavior under two different rule sets or contexts
- Basic drift controls (e.g., add noise, change a constraint, toggle a rule)
- Simple action flow: Task → Interpret → Decide → Act → Log
```
    [Task + Context + Rules]
               |
               v
      Input Normalization
    (clean, validate, shape)
               |
               v
      Interpretation Layer
   (what does this task mean?)
               |
               v
      Policy & Rule Engine
  (check constraints, priorities)
               |
               v
       Decision Selection
   (choose next action or plan)
               |
               v
        Execution Step
  (apply action to environment)
               |
               v
      Behavior Log / Trace
   (what the agent did and why)
```
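The flow above can be sketched as a chain of small functions. This is a minimal sketch, not the sandbox's actual implementation: the stage names mirror the diagram, while the keyword-based intent matching and the rule/trace shapes are illustrative assumptions.

```javascript
// Minimal pipeline sketch: each stage is a plain function,
// and the trace records what happened at every step.
function runAgent(task, context, rules) {
  const trace = [];

  // Input Normalization: clean and shape the raw inputs
  const input = { task: task.trim().toLowerCase(), context, rules };
  trace.push({ stage: "normalize", input });

  // Interpretation Layer: decide what the task means (toy keyword match)
  const intent = input.task.includes("sort") ? "sort-items" : "unknown";
  trace.push({ stage: "interpret", intent });

  // Policy & Rule Engine: check constraints before acting
  const blocked = rules.some(r => r.deny === intent);
  trace.push({ stage: "policy", blocked });

  // Decision Selection + Execution Step: refuse if blocked, else act
  const action = blocked ? "refuse" : intent;
  trace.push({ stage: "act", action });

  // Behavior Log / Trace: return the action alongside the full trace
  return { action, trace };
}

const result = runAgent("Sort the inbox", { user: "demo" }, [{ deny: "delete-all" }]);
console.log(result.action); // "sort-items"
```

Because every stage appends to the trace, the returned object already answers "what did the agent do and why" without any extra instrumentation.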
Agent behavior often looks “mysterious,” but real systems depend on:
- Clear task definitions
- Transparent rules and constraints
- Stable, well-structured inputs
- Explicit traces of what the agent decided and why
- Awareness of how drift in signals changes outcomes
This sandbox provides a small, understandable way to visualize these concepts without building a full agent framework.
Even though it's minimal, each part corresponds to real architecture:
In production systems, tasks arrive with partial context (user state, environment, permissions). Mis-specified tasks or missing context cause surprising behavior.
Policies, guardrails, and business rules limit what an agent is allowed to do. Real engines implement this as policy checks, allow/deny lists, and safety filters.
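An allow/deny check like that can be sketched in a few lines. The rule shape here is an illustrative assumption, not a real policy-engine API:

```javascript
// Toy policy check: evaluate allow/deny lists before any action runs.
function checkPolicy(action, policy) {
  // Deny rules win outright
  if (policy.deny.includes(action)) {
    return { allowed: false, reason: "denied by rule" };
  }
  // A non-empty allow list acts as a whitelist
  if (policy.allow.length > 0 && !policy.allow.includes(action)) {
    return { allowed: false, reason: "not on allow list" };
  }
  return { allowed: true, reason: "ok" };
}

const policy = { allow: ["summarize", "sort"], deny: ["delete"] };
console.log(checkPolicy("sort", policy).allowed);   // true
console.log(checkPolicy("delete", policy).allowed); // false
```

Returning a reason along with the verdict matters: it is what lets the behavior log explain *why* an action was refused, not just that it was.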
Before acting, intelligent systems interpret inputs: “what does this mean?”
This is where prompt parsing, schema mapping, or semantic understanding lives.
Agents often have multiple possible actions. Selection might depend on priority, cost, risk, or external constraints.
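One common way to model that selection is a simple score over the candidates. The scoring formula and weights below are illustrative assumptions, not a standard:

```javascript
// Toy decision selection: score each candidate action by priority
// minus its cost and risk, then pick the highest-scoring one.
function selectAction(candidates) {
  return candidates
    .map(c => ({ ...c, score: c.priority - c.cost - c.risk }))
    .reduce((best, c) => (c.score > best.score ? c : best));
}

const chosen = selectAction([
  { name: "retry",    priority: 3, cost: 1, risk: 1 }, // score 1
  { name: "escalate", priority: 5, cost: 2, risk: 1 }, // score 2
  { name: "ignore",   priority: 1, cost: 0, risk: 0 }, // score 1
]);
console.log(chosen.name); // "escalate"
```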
A good system exposes why and how decisions were made. Traces are critical for debugging, audits, and governance.
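A trace entry can be as small as a timestamp, a stage, a decision, and a reason. The entry shape below is an illustrative assumption:

```javascript
// Toy trace entry: record what the agent decided and why,
// so behavior is auditable after the fact.
function logDecision(trace, stage, decision, reason) {
  trace.push({ ts: Date.now(), stage, decision, reason });
  return trace;
}

const trace = [];
logDecision(trace, "policy", "blocked", "action on deny list");
logDecision(trace, "select", "fallback", "highest-score action blocked");
console.log(trace.map(e => `${e.stage}: ${e.decision} (${e.reason})`).join("\n"));
```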
When inputs, rules, or context drift slightly, behavior can shift dramatically.
This sandbox makes that shift visible at a small scale.
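A minimal sketch of that effect, assuming a numeric signal and a threshold rule (both values are illustrative): a small nudge near the threshold flips the decision entirely.

```javascript
// Toy drift: nudge a numeric signal with noise and watch a threshold rule flip.
function decide(signal, threshold = 0.5) {
  return signal >= threshold ? "act" : "wait";
}

// Uniform noise in ±magnitude; rng is injectable so drift is reproducible
function addNoise(signal, magnitude, rng = Math.random) {
  return signal + (rng() * 2 - 1) * magnitude;
}

const clean = 0.48;
console.log(decide(clean));                         // "wait"
console.log(decide(addNoise(clean, 0.1, () => 1))); // "act": drift flipped the decision
```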
This tool is a legible micro-version of how agent-like systems work under the hood.
Main repo:
https://github.com/rtfenter/Applied-Intelligence-Systems-Series
MVP planned.
This sandbox will focus on core mechanics required to demonstrate agent behavior under different rules and contexts, not on building a full production agent framework.
Everything will run client-side.
To run locally (once files are added):
- Clone the repo
- Open `index.html` in your browser
That’s it — static HTML + JS, no backend required.