A flexible canvas for tasks and notes where you can connect ideas and work alongside AI without waiting.
Two workflow problems led to this: first, when an LLM response opens multiple directions, carrying context across separate threads means a lot of copying and pasting. Second, as AI agents (like OpenClaw or ChatGPT Deep Research) take longer to produce outputs, a chat interface forces you to wait: there is no way to keep working while the model thinks.
Task Hub is a canvas where notes form a directed graph. Connecting two notes carries context from the upstream note to the downstream one. When you ask the AI from a note, the request includes that note and all of its ancestors in topological order; the response is written back to the same note as a clearly marked, collapsible panel. The rest of the canvas stays interactive while the model works.
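The context-assembly step can be sketched roughly like this (the names and data shapes here are illustrative, not the actual Task Hub source): collect every ancestor by walking the reverse edges, then order them so upstream notes always precede the notes that depend on them.

```javascript
// Hypothetical sketch of context assembly. `parents` is a
// reverse-adjacency list: nodeId -> array of upstream nodeIds.
function collectContext(nodeId, parents) {
  // 1. Gather every ancestor reachable through reverse edges.
  const ancestors = new Set();
  const stack = [...(parents[nodeId] || [])];
  while (stack.length) {
    const id = stack.pop();
    if (ancestors.has(id)) continue;
    ancestors.add(id);
    stack.push(...(parents[id] || []));
  }

  // 2. Topologically sort the ancestors via DFS post-order:
  // each node is emitted only after all of its own parents.
  const ordered = [];
  const visited = new Set();
  const visit = (id) => {
    if (visited.has(id)) return;
    visited.add(id);
    for (const p of parents[id] || []) {
      if (ancestors.has(p)) visit(p);
    }
    ordered.push(id);
  };
  for (const id of ancestors) visit(id);

  // Ancestors first, the target note last.
  return [...ordered, nodeId];
}
```

For a diamond graph (a feeds b and c, which both feed d), `collectContext('d', …)` yields an order where `a` comes first and `d` comes last, which is what the prompt builder needs.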
Vanilla JavaScript with CSS transforms for pan and zoom. State is saved to localStorage as JSON. Nodes form a DAG with cycle prevention enforced on the client. AI requests are assembled by walking the reverse-adjacency list and sorting ancestors topologically before sending to the server. An Express backend proxies Claude 3.5 Haiku through AWS Bedrock, applies per-IP rate limiting, and sanitizes both user HTML and AI Markdown before rendering. Hosted on Vercel at task-hub-virid-gamma.vercel.app.
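Client-side cycle prevention for a DAG like this typically comes down to one reachability check before an edge is committed; a minimal sketch, assuming a forward adjacency list (function and field names are hypothetical, not taken from the Task Hub source):

```javascript
// `children` is the forward adjacency list: nodeId -> downstream nodeIds.
// An edge from `from` to `to` is rejected if `from` is already
// reachable from `to`, since the new edge would close a cycle.
function wouldCreateCycle(from, to, children) {
  if (from === to) return true; // self-edge is a trivial cycle
  const stack = [to];
  const seen = new Set();
  while (stack.length) {
    const id = stack.pop();
    if (id === from) return true; // path back to `from` found
    if (seen.has(id)) continue;
    seen.add(id);
    stack.push(...(children[id] || []));
  }
  return false;
}
```

The check is O(nodes + edges) per attempted connection, which is cheap enough to run on every drag-to-connect gesture on a canvas of this size.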
Every node currently routes to the same model. The next step is an agent registry that lets users point individual nodes at their own agents via webhooks — which is where the tool gets personal.