ContextPM

AI-powered product management — from raw feedback to development-ready stories.

ContextPM closes the gap between discovery and execution. Product teams collect feedback from dozens of sources but struggle to synthesize it into clear direction. ContextPM uses a structured AI pipeline to transform unstructured customer signals into validated product decisions, without losing the human judgment that makes those decisions good.

The Pipeline

ContextPM models the full product management workflow as a four-stage pipeline:

Discovery → Recommendation → Idea → Story

Each stage has explicit status cycles, clear ownership, and a defined handoff to the next. Feedback flows in from connectors (Zendesk, Aha!, manual entry). AI clusters and synthesizes it into recommendations. PMs curate recommendations into ideas — partner-facing validation documents with problem statements, proposed solutions, and stakeholder questions. Validated ideas become development stories in user-story format, with assumptions, open questions, and concrete use cases.

The key insight: AI handles synthesis and structure, humans handle judgment and prioritization. The system never auto-promotes — every stage transition is a deliberate PM decision.
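The no-auto-promotion rule can be sketched as a small state machine. This is an illustrative model, not ContextPM's actual code; the stage names come from the pipeline above, while the type and function names are assumptions.

```typescript
// The four pipeline stages, in order. A transition only ever moves one
// stage forward, and only when a PM explicitly approves it.
type Stage = "discovery" | "recommendation" | "idea" | "story";

const NEXT_STAGE: Record<Stage, Stage | null> = {
  discovery: "recommendation",
  recommendation: "idea",
  idea: "story",
  story: null, // terminal stage: development-ready
};

interface Artifact {
  id: string;
  stage: Stage;
}

// There is deliberately no code path that advances stage automatically:
// promotion requires an explicit PM decision as its argument.
function promote(artifact: Artifact, approvedByPm: boolean): Artifact {
  if (!approvedByPm) {
    throw new Error("Stage transitions require an explicit PM decision");
  }
  const next = NEXT_STAGE[artifact.stage];
  if (next === null) {
    throw new Error(artifact.stage + " is the final stage");
  }
  return { id: artifact.id, stage: next };
}
```

Encoding the approval as a required argument (rather than a default) is what makes "the system never auto-promotes" a property of the types, not just a convention.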

How AI Fits In

ContextPM uses Claude (Anthropic) for three distinct generation tasks:

  • Recommendation generation — A three-pass pipeline clusters raw feedback into thematic recommendations, consolidates duplicates, and computes impact. Runs server-side via AWS Step Functions so it survives browser navigation and laptop sleep.
  • Idea brief generation — Converts a recommendation plus its supporting feedback into a structured brief: what we're hearing, what we're thinking, how it helps, what we need from stakeholders.
  • Story brief generation — Transforms a validated idea into a development-ready artifact: story name framed as end state ("Partners can X" not "Create X"), As a/I want/So that statement, assumptions, open questions, and use cases grounded in real feedback.
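The three-pass shape of recommendation generation can be sketched as a chain of pure functions. In the real pipeline the clustering and consolidation passes are Claude calls orchestrated by Step Functions; here each pass is a trivial stand-in (exact-theme clustering, case-insensitive deduplication, impact as feedback volume) so only the data flow is shown. All names are illustrative.

```typescript
interface Feedback { id: string; theme: string }
interface Recommendation { theme: string; feedbackIds: string[]; impact: number }

// Pass 1: cluster raw feedback into thematic groups.
function cluster(items: Feedback[]): Record<string, string[]> {
  const byTheme: Record<string, string[]> = {};
  for (const item of items) {
    if (!byTheme[item.theme]) byTheme[item.theme] = [];
    byTheme[item.theme].push(item.id);
  }
  return byTheme;
}

// Pass 2: consolidate duplicate clusters (here: case-insensitive theme match).
function consolidate(clusters: Record<string, string[]>): Record<string, string[]> {
  const merged: Record<string, string[]> = {};
  for (const theme of Object.keys(clusters)) {
    const key = theme.toLowerCase();
    merged[key] = (merged[key] ?? []).concat(clusters[theme]);
  }
  return merged;
}

// Pass 3: compute impact (here: simply the volume of supporting feedback).
function computeImpact(clusters: Record<string, string[]>): Recommendation[] {
  return Object.keys(clusters).map(theme => ({
    theme,
    feedbackIds: clusters[theme],
    impact: clusters[theme].length,
  }));
}
```

Keeping the passes as separate stages is also what lets Step Functions checkpoint between them.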

AI calls are proxied through a backend Lambda — API keys never reach the browser. PMs can iteratively refine any generated artifact with notes and regenerate.
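A minimal sketch of that proxy pattern, assuming a Lambda-style handler. The endpoint and header names follow Anthropic's public Messages API; the event shape, environment variable names, and injected `fetch` (used here so the handler is testable without network access) are assumptions, not ContextPM's actual implementation.

```typescript
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json(): Promise<unknown> }>;

// The API key and model id live only in the Lambda's environment;
// neither ever reaches the browser.
async function handler(
  event: { prompt: string },
  doFetch: FetchLike = fetch as unknown as FetchLike,
): Promise<unknown> {
  const apiKey = process.env.ANTHROPIC_API_KEY ?? "";
  const modelId = process.env.MODEL_ID ?? ""; // configured per deployment

  const res = await doFetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: modelId,
      max_tokens: 1024,
      messages: [{ role: "user", content: event.prompt }],
    }),
  });
  return res.json();
}
```

The browser only ever talks to AppSync, which invokes this handler; the `x-api-key` header is attached server-side.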

Architecture

```mermaid
graph TB
    subgraph Sources
        Z[Zendesk]
        A[Aha!]
        M[Manual Entry]
    end

    subgraph Frontend
        UI[React · Mantine UI]
    end

    subgraph AWS
        API[AppSync GraphQL]
        AUTH[Cognito Auth]
        SF[Step Functions]
        FN[Lambda Functions]
        DB[(DynamoDB)]
    end

    CLAUDE[Claude API]

    Z & A & M --> UI
    UI <--> API
    UI <--> AUTH
    API <--> DB
    API --> SF
    SF --> FN
    FN --> CLAUDE
    FN --> DB
```

Tech Stack

| Layer | Technology |
| --- | --- |
| Frontend | React 19, TypeScript, Mantine v8 |
| Routing | React Router v7 |
| Auth | AWS Cognito |
| API | AWS AppSync (GraphQL) |
| Database | DynamoDB |
| AI | Claude API via server-side Lambda proxy |
| Orchestration | AWS Step Functions |
| Infrastructure | AWS Amplify Gen 2 (code-first CDK) |

Design Decisions

Multi-tenant isolation via DynamoDB secondary indexes

Decision: Every data model is scoped to an orgId with secondary indexes, rather than using row-level security or separate tables per tenant.

Rationale: Secondary indexes on orgId give efficient single-tenant queries without the operational overhead of per-tenant infrastructure. Combined with server-side org validation on every mutation, this provides strong isolation with minimal complexity.

Trade-off: No cross-org queries by design. Analytics across tenants would require a separate read model — acceptable for a product where orgs should never see each other's data.
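The two halves of this design — scoped reads via the index, validated writes on every mutation — can be sketched as follows. The table, index, and attribute names are illustrative assumptions; the query object mirrors the shape of a DynamoDB Query input without importing the AWS SDK.

```typescript
interface OrgScopedQuery {
  TableName: string;
  IndexName: string;
  KeyConditionExpression: string;
  ExpressionAttributeValues: Record<string, string>;
}

// Reads: every query goes through a GSI partitioned on orgId, so a
// single-tenant listing never scans other tenants' items.
function queryByOrg(table: string, orgId: string): OrgScopedQuery {
  return {
    TableName: table,
    IndexName: "byOrg", // hypothetical GSI with orgId as partition key
    KeyConditionExpression: "orgId = :orgId",
    ExpressionAttributeValues: { ":orgId": orgId },
  };
}

// Writes: server-side validation on every mutation rejects any payload
// that targets a different org than the caller's session.
function assertSameOrg(sessionOrgId: string, payloadOrgId: string): void {
  if (sessionOrgId !== payloadOrgId) {
    throw new Error("Cross-org access denied");
  }
}
```

Because the orgId is both the index partition key and the write-time invariant, isolation holds even if a client sends a crafted request.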

Step Functions for long-running AI pipelines

Decision: Recommendation generation runs as a Step Functions state machine with explicit cancellation polling, not as a synchronous API call or background browser task.

Rationale: A three-pass AI pipeline processing hundreds of feedback items can take minutes. Browser-based execution dies on navigation, laptop sleep, or network interruption. Step Functions survive all of these, provide native retry with backoff, and produce a durable execution log.

Trade-off: Adds architectural complexity — cancellation requires explicit polling between batches since Step Functions doesn't natively support cancelling Map state iterations. Worth it for reliability.
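The between-batch cancellation check can be sketched as a loop with an injected flag. In the real system the flag would live in DynamoDB and each batch would be a Lambda invocation inside the state machine; both are plain functions here so the control flow stands alone. Names are illustrative.

```typescript
// Step Functions cannot abort a Map state mid-iteration, so the pipeline
// polls an explicit cancellation flag before starting each batch.
async function runBatches<T>(
  batches: T[][],
  processBatch: (batch: T[]) => Promise<void>,
  isCancelled: () => Promise<boolean>,
): Promise<"completed" | "cancelled"> {
  for (const batch of batches) {
    if (await isCancelled()) {
      return "cancelled"; // already-processed batches keep their results
    }
    await processBatch(batch);
  }
  return "completed";
}
```

The trade-off is visible in the sketch: cancellation latency is one batch, since a batch already in flight always runs to completion.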

Preservation-based reprocessing

Decision: When rerunning recommendation generation, starred and promoted recommendations survive. Everything else is deleted and regenerated.

Rationale: PMs invest real effort curating recommendations — starring good ones, promoting them to ideas. A full rerun shouldn't destroy that work. But non-curated recommendations should be regenerable as new feedback arrives.

Trade-off: Creates a mixed state where some recommendations are from different pipeline runs. Acceptable because the PM has explicitly blessed the preserved ones.
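The preservation rule reduces to a single partition over the curation flags. A minimal sketch, with illustrative field names:

```typescript
interface Rec { id: string; starred: boolean; promoted: boolean }

// On rerun: anything the PM has curated (starred or promoted) survives;
// everything else is deleted and regenerated from the latest feedback.
function partitionForRerun(recs: Rec[]): { preserved: Rec[]; toDelete: Rec[] } {
  const preserved = recs.filter(r => r.starred || r.promoted);
  const toDelete = recs.filter(r => !r.starred && !r.promoted);
  return { preserved, toDelete };
}
```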

Mantine with zero custom CSS

Decision: All styling uses Mantine component props and theme tokens. No CSS modules, no styled-components, no inline pixel values.

Rationale: Enforcing this constraint eliminates an entire category of tech debt. Every spacing value, color, and typography choice flows through the theme, making visual consistency automatic and redesigns mechanical.

Trade-off: Occasionally awkward when Mantine's prop API doesn't perfectly match a design intention. Minor layout compromises over introducing escape hatches.
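The constraint means every visual value resolves through named theme tokens. A sketch of the idea, using a plain object with Mantine-like fields (no `@mantine/core` import, so the exact shape is an assumption):

```typescript
// All spacing, color, and radius choices live in one theme object.
const theme = {
  primaryColor: "indigo",
  defaultRadius: "md",
  spacing: { xs: "0.5rem", sm: "0.75rem", md: "1rem", lg: "1.5rem", xl: "2rem" },
} as const;

// Components reference tokens by name — e.g. <Stack gap="md" p="lg"> —
// never raw pixel values, so a redesign is a single theme edit.
function resolveSpacing(token: keyof typeof theme.spacing): string {
  return theme.spacing[token];
}
```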

Current Status

ContextPM is in active development and being dogfooded internally — we use it to manage its own roadmap. The core pipeline (Discovery → Recommendation → Idea → Story) is functional with real customer data, and we're iterating on AI prompt quality and connector coverage.


Built by David
