
docs: Add contributor guidance problem domain #7

Merged
ralphbean merged 2 commits into fullsend-ai:main from adambkaplan:contributor-guidance-problem on Mar 17, 2026

Conversation

@adambkaplan
Contributor

Adds a new problem document exploring how to make Konflux contribution rules clear to both human contributors and AI coding assistants.

The core challenge: humans learn contribution norms through mentorship, Slack conversations, and institutional memory, while AI agents (Claude, GitHub Copilot, Cursor) only have access to what's written in the repository. This creates a "verbosity gap" — AI needs all institutional knowledge explicit and written down, but making documentation that verbose can overwhelm human contributors.

The document explores five approaches for bridging this gap, including making implicit knowledge explicit, using verbose CLAUDE.md files for agent-specific context, and layered documentation with progressive disclosure. It emphasizes the "no agent required" principle: using AI must not be required to contribute to Konflux.

@adambkaplan adambkaplan requested a review from a team as a code owner March 10, 2026 16:33
Comment thread docs/problems/contributor-guidance.md Outdated
Comment thread docs/problems/contributor-guidance.md Outdated
Comment thread docs/problems/contributor-guidance.md Outdated
What tests need to pass before a PR is mergeable? What's the difference between a CI failure that's a legitimate block vs. a known flaky test?

- Humans can ask: "CI failed but I see this test is flaky — should I retry?" or "How do I run the integration tests locally?"
- AI assistants need this documented: "Known flaky tests: X, Y, Z — these can be retried. To run integration tests locally: [commands]"
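The bullet above hints that such knowledge can even be made machine-readable. A minimal sketch, assuming a hypothetical in-repo registry (all test names invented), of how an agent could decide whether a CI failure is retryable:

```python
# Hypothetical registry of known flaky tests; in a real repo this could
# live in CONTRIBUTING.md or a small data file. Test names are invented.
KNOWN_FLAKY_TESTS = {
    "TestIntegrationTimeout": "network-dependent; safe to retry twice",
    "TestParallelCache": "race under load; safe to retry once",
}

def is_retryable(failed_test: str) -> bool:
    """True if the failed test is documented as flaky (retry allowed)."""
    return failed_test in KNOWN_FLAKY_TESTS

print(is_retryable("TestIntegrationTimeout"))  # True: documented flaky
print(is_retryable("TestAuthFlow"))            # False: legitimate block
```

Either a human or an agent can consult the same registry, which keeps a single source of truth for retry decisions.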
Contributor

Is this targeting flaky tests in the inner or outer loop? I would imagine that a better consideration would be to remove flaky tests because they likely don't provide a good signal (or at least they provide a confusing one).

Contributor Author

Updated to be more general - discuss "how to contribute tests"

Comment thread docs/problems/contributor-guidance.md Outdated

For exploratory contributions ("I think this might fix the issue but I'm not sure"), what's the low-friction path to get feedback?

- Draft PRs are the GitHub-native answer
Contributor

Some people use draft PRs to indicate that they haven't looked at the content/changes yet. This breaks down if we are aiming to go fully agentic. What do we see as the intended use case of these?

Contributor Author

I think draft PRs for agents is an open question. There are also two categories of "draft" that I have observed:

  • GitHub Drafts which typically don't have CI checks enabled
  • WIP pull requests that do have CI checks enabled, but otherwise trigger an automation that blocks merge.

I don't even know if "draft" makes sense for a fully agentic flow.


This isn't a separate CLAUDE.md for agents vs CONTRIBUTING.md for humans. It's making the existing CONTRIBUTING.md and architecture docs more complete and explicit, so both humans and agents can learn from them.

**Pros:** Single source of truth. Benefits humans too (especially new contributors). No special formats or duplication.
Contributor

Should this be sources because we are mentioning multiple ones?

Contributor Author

Rephrased to "root source of truth"

Comment thread docs/problems/contributor-guidance.md Outdated

**Pros:** Single source of truth. Benefits humans too (especially new contributors). No special formats or duplication.

**Cons:** Requires discipline to capture institutional knowledge as it emerges. Risk of becoming encyclopedic and overwhelming. Needs curation to stay relevant.
Contributor

If we are looking at an agent-assisted flow, are there criteria that we can use to suggest whether we need to update any of this documentation to capture contributing guidelines which were not clear?

Comment thread docs/problems/contributor-guidance.md Outdated

### Approach 3: Layered documentation with progressive disclosure

Not all contributors need to know all rules. Structure guidance by contribution complexity:
Contributor

Is this structure proposed for specific files?

Contributor Author

Updated to specifically reference CONTRIBUTING.md

Comment thread docs/problems/contributor-guidance.md Outdated
@gbenhaim
Contributor

What is the value of an outside contributor when we have a team of agents that write all the code? In this case, what we need is reviewers we can count on, since producing code isn't a challenge anymore.

@adambkaplan
Contributor Author

What is the value of an outside contributor when we have a team of agents that write all the code?

  • This is almost a philosophical question - what does it mean to be an open source contributor when the code is written by bots? Are we all project managers or product owners now?
  • A cynical take on having human code contributors alongside bots: passionate contributors are cheaper than any agent token budget.
  • A more charitable take: there will always be an N% of feature requests or bug reports that our current agent prompts + LLMs can't handle. Human intervention will be needed from time to time.

@twaugh
Contributor

twaugh commented Mar 11, 2026

What is the value of an outside contributor when we have a team of agents that write all the code?

  • This is almost a philosophical question - what does it mean to be an open source contributor when the code is written by bots? Are we all project managers or product owners now?

I submitted another PR tackling some of these topics: #9

adambkaplan added a commit to adambkaplan/konflux-ci-fullsend that referenced this pull request Mar 11, 2026
Incorporates review feedback from PR fullsend-ai#7 to resolve conflicts with
other problem documents and clarify several points.

Major changes:

1. Fix conflict with codebase-context.md (arewm, line 88):
   - Revised Approach 2 to align with ETH Zürich research
   - Changed from "verbose CLAUDE.md" to "minimal CLAUDE.md with
     references to CONTRIBUTING.md"
   - Emphasizes 60-line limit and single source of truth in
     CONTRIBUTING.md
   - CLAUDE.md now provides only non-obvious context plus navigation

2. Remove zero trust violation (arewm & ralphbean, line 166):
   - Deleted exemption for agent-generated PRs
   - All PRs treated equally regardless of source
   - Aligns with security-threat-model.md zero trust principle
   - Simpler implementation, better auditability

3. Add security perspective to "no agent required" (ralphbean, line 197):
   - New subsection on equal treatment from security perspective
   - "System shouldn't grant preferential treatment to input simply
     because it appears to be from an agent"
   - Reinforces both accessibility AND security principles

Minor clarifications:

- Line 100: Revise flaky tests - should be fixed/removed, not documented
- Line 106: Draft PRs serve same purpose regardless of who opens them
- Line 124/126: Grammar fix and documentation feedback loop
- Line 149: Clarify layered structure is conceptual, not file-specific

All reviewer comments addressed while maintaining document structure
presenting multiple viable approaches.

Signed-off-by: Adam Kaplan <adkaplan@redhat.com>
Assisted-by: Claude <noreply@anthropic.com>
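The 60-line cap on CLAUDE.md described in change 1 could be enforced mechanically. A sketch of such a guard, assuming the limit and file name from the commit message (the enforcement itself is hypothetical, not part of this PR):

```python
import sys

LIMIT = 60  # cap taken from the commit message above

def check_claude_md(text: str, limit: int = LIMIT) -> bool:
    """Return True if the CLAUDE.md content stays within the line cap."""
    n = len(text.splitlines())
    if n > limit:
        print(f"CLAUDE.md has {n} lines (limit {limit}); "
              "move detail into CONTRIBUTING.md instead", file=sys.stderr)
        return False
    return True

# A minimal CLAUDE.md that only points at the real source of truth.
minimal = "# CLAUDE.md\nSee CONTRIBUTING.md for all contribution rules.\n"
print(check_claude_md(minimal))  # True
```

Running a check like this in CI would keep CLAUDE.md as navigation only, with CONTRIBUTING.md holding the detail.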
@adambkaplan adambkaplan force-pushed the contributor-guidance-problem branch 3 times, most recently from 0de0dda to 0b9ae7d on March 16, 2026 17:22
Adds a new problem document exploring how to make Konflux contribution
rules clear to both human contributors and AI coding assistants.

The core challenge: humans learn contribution norms through mentorship,
Slack conversations, and institutional memory, while AI agents (Claude,
GitHub Copilot, Cursor) only have access to what's written in the
repository. This creates a "verbosity gap" — AI needs all institutional
knowledge explicit and written down, but making documentation that
verbose can overwhelm human contributors. We must find ways to document
code requirements, norms, and "head knowledge" systematically such that
they are comprehensible to humans and agents.

This is distinct from the "human factors" problem, which addresses
higher philosophical challenges of having agents be the primary
code authors in an open source project. It potentially relates to or
overlaps with the "codebase context" problem, which focuses on how to
provide code context to AI agents specifically.

The following key principles are set for solving the contributor
guidance problem:

- AI agents are not required to contribute source code
- The system should treat human and agent-authored code equally

The document explores five approaches for bridging this gap, including
making implicit knowledge explicit, using verbose CLAUDE.md files for
agent-specific context, and layered documentation with progressive
disclosure. These approaches can be refined in follow-up experiments.

Signed-off-by: Adam Kaplan <adkaplan@redhat.com>
Assisted-by: Claude <noreply@anthropic.com>
@adambkaplan adambkaplan force-pushed the contributor-guidance-problem branch from 0b9ae7d to bb8a9b4 on March 16, 2026 17:32
@adambkaplan
Contributor Author

@arewm @ralphbean updated to incorporate the broader "human factors" problem and "codebase context" problem - both of which are related. I also cut out a bunch of Claude generated content that was repetitive/overly verbose.

The prescribed "fixes" are now "proposed solutions" that are candidates for future experiments.

@ralphbean ralphbean merged commit 41993a3 into fullsend-ai:main Mar 17, 2026
waynesun09 added a commit that referenced this pull request Mar 31, 2026
Document the end-to-end SDLC pipeline experiment using GitHub Actions
as the coordination layer for four AI agents (triage, implementation,
review, fix). The repo is the coordinator — no orchestrator agent.

Includes:
- Pipeline, review agent, fix agent, and context passing diagrams
- Pluggable reviewer architecture and identity model
- Security layers (Model Armor, token isolation, fail-closed scanning)
- Two verified demos: first-pass approval and review/fix loop
- Timing data and fix agent telemetry from PR #7
- 13 tracked issues with concise write-ups
- 58s demo video and PR timeline screenshot

Test repo: nonflux/integration-service (public playground fork)

Signed-off-by: Wayne Sun <gsun@redhat.com>
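The label-driven coordination this commit describes can be pictured as a small state machine. A sketch under assumed stage names (the real workflow labels are not shown in this PR):

```python
# Hypothetical label state machine for the triage -> implement -> review
# -> fix pipeline described above. Label names are invented.
TRANSITIONS = {
    "needs-triage": "needs-implementation",
    "needs-implementation": "needs-review",
    "needs-fix": "needs-review",  # a fix always loops back to review
}

def next_stage(label: str, review_approved: bool = False) -> str:
    """Advance one pipeline step; review either approves or loops to fix."""
    if label == "needs-review":
        return "approved" if review_approved else "needs-fix"
    return TRANSITIONS.get(label, label)

print(next_stage("needs-triage"))        # needs-implementation
print(next_stage("needs-review", True))  # approved
print(next_stage("needs-review", False)) # needs-fix
```

The repository itself holds the state (as labels), which matches the "repo is the coordinator" principle above: no orchestrator agent is needed.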
waynesun09 added a commit that referenced this pull request Apr 12, 2026
Seven specialized agents for working on the fullsend project:

- fullsend-architect (opus): architectural coherence guardian; knows all
  ADRs, five execution layers, story dependencies, repo-as-coordinator invariant
- go-developer (sonnet): CLI specialist; forge abstraction, layered config,
  multi-role GitHub App model, known gaps in PR #132
- doc-architect (sonnet): problem doc and ADR writer; design-exploration
  conventions, org-agnostic authoring rules
- stage-prompt-designer (opus): designs/reviews stage agent prompts;
  triage/implement/review/fix constraints, injection surface rules,
  known failure modes from live operation (Issues #4, #5, #010a)
- security-reviewer (opus): applies fullsend threat model; prompt injection,
  ADR 0017 credential isolation, sandbox integrity, workflow file protection
- workflow-engineer (sonnet): GitHub Actions and dispatch layer; label state
  machine, slash commands, concurrency groups, fixes for Issues #1 #003b
  #4 #5 #7 #9 #010a
- e2e-integrator (opus): full flow tracing; integration gap analysis, demo
  readiness checklist, sprint prioritization across stories

Also adds .claude/AGENTS.md with usage guide and team composition patterns.

5 participants