- Overview
- What Is MCP?
- Quick Start
- Installation
- Basic Usage
- Tools Overview
- Example: Agent-in-the-Loop (ReBAC Feature)
- Documentation
- Development
- Versioning
- License
## Overview

A local Model Context Protocol (MCP) server that gives AI agents structured task planning, execution tracking, and guided research workflows.
TaskFlow MCP helps agents turn vague goals into concrete, trackable work. It provides a persistent task system plus research and reasoning tools so agents can plan, execute, and verify tasks without re‑sending long context every time.
- Lower token use: retrieve structured task summaries instead of restating context.
- Smarter workflows: dependency‑aware planning reduces rework.
- Better handoffs: tasks, notes, and research state persist across sessions.
- More reliable execution: schemas validate tool inputs.
- Auditability: clear task history, verification, and scores.
## What Is MCP?

MCP is a standard way for AI tools to call external capabilities over JSON‑RPC (usually STDIO). This server exposes tools that an agent can invoke to plan work, track progress, and keep context consistent across long sessions.
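To make that concrete, here is a rough sketch of what a single tool call looks like on the wire. The JSON‑RPC 2.0 framing and the `tools/call` method come from the MCP specification; the tool name and argument shape shown here are illustrative assumptions (see the tool list below), and a real session begins with an `initialize` handshake that this sketch omits.

```typescript
// Sketch of one MCP tool call as JSON-RPC 2.0 over STDIO (one JSON object per line).
// The tools/call method is defined by MCP; the tool name and arguments below
// are illustrative, not the server's exact input schema.

// Written by the client to the server's stdin:
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "list_tasks",                   // one of the tools listed below
    arguments: { status: "in_progress" }, // assumed parameter shape
  },
};

// Returned by the server on stdout with the matching id:
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "1 task in progress" }],
  },
};

console.log(JSON.stringify(request), JSON.stringify(response));
```

Most MCP clients build and parse these messages for you; the configuration snippets under Basic Usage are normally all you need.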
## Quick Start

```bash
pnpm install
pnpm build
pnpm start
```

## Installation

```bash
# npm
npm install

# yarn
yarn install

# pnpm
pnpm install
```

## Basic Usage

Start the server locally:

```bash
pnpm start
```

Set the data directory before starting, for example:

```powershell
# PowerShell
$env:DATA_DIR="${PWD}\.mcp-tasks"
```

Use npx to run the MCP server directly from GitHub. Replace <DATA_DIR> with your preferred data path.
Path examples:
- Windows: `<DATA_DIR>=C:\repos\mcp-taskflow\.mcp-tasks`
- macOS/Linux: `<DATA_DIR>=/Users/you/repos/mcp-taskflow/.mcp-tasks`
VS Code–style configuration (`servers` block):

```json
{
  "servers": {
    "mcp-taskflow": {
      "type": "stdio",
      "command": "npx",
      "args": ["mcp-taskflow"],
      "env": {
        "DATA_DIR": "<DATA_DIR>"
      }
    }
  }
}
```

Claude Desktop / Cursor–style configuration (`mcpServers` block):

```json
{
  "mcpServers": {
    "mcp-taskflow": {
      "command": "npx",
      "args": ["mcp-taskflow"],
      "env": { "DATA_DIR": "<DATA_DIR>" }
    }
  }
}
```

TOML-based configuration (for example, Codex CLI):

```toml
[mcp_servers.mcp-taskflow]
type = "stdio"
command = "npx"
args = ["mcp-taskflow"]
env = { DATA_DIR = "<DATA_DIR>" }
startup_timeout_sec = 120
```

## Tools Overview

TaskFlow MCP exposes a focused toolset. Most clients surface these as callable actions for your agent; a minimal client-side sketch follows the list below.
- `plan_task`: turn a goal into a structured plan
- `split_tasks`: split a plan into discrete tasks with dependencies
- `analyze_task`: capture analysis and rationale
- `reflect_task`: record reflections and improvements
- `list_tasks`: list tasks by status
- `get_task_detail`: show full details for a task
- `query_task`: search tasks by keyword or ID
- `create_task`: create a task directly
- `update_task`: update status, notes, dependencies, or metadata
- `delete_task`: remove a task by ID
- `clear_all_tasks`: clear the task list
- `execute_task`: mark a task in progress and generate an execution prompt
- `verify_task`: score and mark a task complete
- `research_mode`: guided research with state tracking
- `process_thought`: capture a structured reasoning step
- `init_project_rules`: create or refresh project rules
- `get_server_info`: get server status and task counts
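For a programmatic view of the same toolset, here is a minimal client sketch using the MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The SDK calls follow its documented client API; the `DATA_DIR` path and the argument shape passed to `plan_task` are assumptions for illustration, not TaskFlow's exact schema.

```typescript
// Minimal MCP client sketch: spawn TaskFlow over STDIO, list tools, call one.
// Assumes @modelcontextprotocol/sdk is installed; tool arguments are guesses.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["mcp-taskflow"],
    env: { DATA_DIR: "/tmp/mcp-tasks" }, // point at your data directory
  });

  const client = new Client(
    { name: "taskflow-example", version: "0.1.0" },
    { capabilities: {} }
  );
  await client.connect(transport); // runs the MCP initialize handshake

  // Discover the tools listed above.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Ask the server to plan a goal (argument shape is an assumption).
  const plan = await client.callTool({
    name: "plan_task",
    arguments: { description: "Add a ReBAC feature" },
  });
  console.log(plan);

  await client.close();
}

main().catch(console.error);
```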
## Example: Agent-in-the-Loop (ReBAC Feature)

Below is a simple, human‑readable script that shows how a user might ask an agent to plan and execute a feature. The agent uses TaskFlow MCP tools behind the scenes, but you don’t need MCP details to follow the flow.
User “I want to add a Relationship‑Based Access Control (ReBAC) system. Create a task list and start working through it.”
Agent “I’ll plan the work, split it into tasks, and track progress as we go.”
Agent (planning summary) “Here’s a structured plan with dependencies and a clear execution order.” (A sketch of how such a plan might be passed to split_tasks follows the list below.)
- Define the ReBAC model
  - Entities, relations, and permission checks
  - Identify critical query patterns
- Design the storage layer
  - Graph storage schema and indexes
  - Query strategy for path checks
- Implement the policy engine
  - Relationship evaluation
  - Cached query results
- Integrate with existing auth
  - Request context mapping
  - Backward compatibility
- Add tests and benchmarks
  - Unit tests for rules
  - Integration tests for policy decisions
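As a rough illustration, the plan above could be handed to `split_tasks` as a payload like the one below. The task names come from the plan; the field names (`name`, `dependencies`) and the linear dependency chain are assumptions about the tool's input, not its documented schema.

```typescript
// Hypothetical split_tasks payload: the five top-level tasks with dependencies.
// Field names and dependency edges are assumed for illustration only.
const tasks = [
  { name: "Define the ReBAC model", dependencies: [] },
  { name: "Design the storage layer", dependencies: ["Define the ReBAC model"] },
  { name: "Implement the policy engine", dependencies: ["Design the storage layer"] },
  { name: "Integrate with existing auth", dependencies: ["Implement the policy engine"] },
  {
    name: "Add tests and benchmarks",
    dependencies: ["Implement the policy engine", "Integrate with existing auth"],
  },
];

// Using a connected client (see the sketch under Tools Overview):
// await client.callTool({ name: "split_tasks", arguments: { tasks } });
console.log(tasks);
```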
Agent (starts work) “I’ll mark the first task as in progress and add notes as I go.”
Progress updates
- Task 1: In progress — “Drafted entity/relationship schema and example checks”
- Task 1: Completed — “Added model doc and validation rules”
- Task 2: In progress — “Evaluating graph storage options”
Task verification example (with scoring and challenges)

Agent “I’ve verified Task 1 and logged a score.” (A sketch of the corresponding verify_task call follows this checklist.)
- Score: 92/100
- Checks passed: model completeness, schema validation, examples included
- Challenges: ambiguous relationship naming in legacy data; resolved by adding a normalization step and a short mapping table
- Next step: start Task 2 with the normalized model in place
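Here is a sketch of what that verification could look like as a tool call. The `verify_task` tool and the 92/100 score come from the example above; the parameter names (`taskId`, `score`, `summary`) are placeholders, not the tool's documented schema, so check docs/API.md for the real interface.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical verify_task call mirroring the dialogue above. `client` is a
// connected MCP client (see the sketch under Tools Overview); parameter names
// here are assumed placeholders.
async function verifyFirstTask(client: Client) {
  const verification = await client.callTool({
    name: "verify_task",
    arguments: {
      taskId: "task-1", // assumed identifier format
      score: 92,        // matches the 92/100 score above
      summary:
        "Model completeness, schema validation, and examples checked; " +
        "legacy relationship names normalized via a mapping table.",
    },
  });
  console.log(verification);
}
```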
Why this helps
- The agent keeps a durable task list and status updates.
- You can stop and resume without losing context.
- Large features become manageable, with explicit dependencies.
## Documentation

| Document | Purpose |
|---|---|
| docs/API.md | Tool overview and API surface |
| docs/ARCHITECTURE.md | High-level architecture |
| docs/PERFORMANCE.md | Benchmarks and performance targets |
| AI_AGENT_QUICK_REFERENCE.md | Agent workflow reference |
| SECURITY.md | Threat model and controls |
| CONTRIBUTING.md | Contribution workflow and changesets |
| CHANGELOG.md | Release notes |
## Development

```bash
pnpm test
pnpm type-check
pnpm lint
```

## Versioning

This project uses Changesets for versioning and release notes. See CONTRIBUTING.md for guidance.
Git-based execution assumes the repository is buildable and includes a valid `bin` entry in `package.json`. For production or shared use, prefer a tagged release published via Changesets.
Typical flow:
- Add a changeset in your PR.
- CI creates a release PR with version bumps and changelog entries.
- Merging the release PR publishes to npm and creates a GitHub release.
Use git-based execution for fast testing; use npm releases for stable installs.
```bash
# pnpm
pnpm dlx git+https://github.com/CalebGerman/mcp-taskflow.git mcp-taskflow

# npx (fallback)
npx git+https://github.com/CalebGerman/mcp-taskflow.git mcp-taskflow
```

Prerequisites:

- The `bin` entry points to `dist/index.js`
- `pnpm build` completes successfully
## License

MIT. See LICENSE.md.
Inspired by:
https://github.com/cjo4m06/mcp-shrimp-task-manager
Also informed by related MCP server patterns and workflows:
https://www.nuget.org/packages/Mcp.TaskAndResearch