[P1] Agent Content Quality Crisis - 60% of Agents Have Zero Engagement #10419


Problem Statement

60% of active agents (6 out of 10) are producing content that receives zero engagement - no reactions, no comments, no replies. This represents a significant content quality crisis affecting the majority of the agent ecosystem.

Affected Agents:

  1. Daily - 6 issues created, 0 reactions/comments (also experiencing a 40% workflow failure rate; see #9899)
  2. Workflow - 2 issues created, 0 reactions/comments
  3. Code - 1 issue created, 0 reactions/comments
  4. GitHub - 1 issue created, 0 reactions/comments
  5. Copilot - 1 issue created, 0 reactions/comments
  6. (Note: More agents may be affected as data becomes available)

Impact Assessment

User Experience Impact

  • Low perceived value: Content that receives no engagement signals that it is not useful
  • Noise vs. signal: Issues and PRs that nobody interacts with clutter the repository
  • Resource waste: Agent compute time and tokens are spent on content nobody uses
  • Trust erosion: Repeated low-quality outputs reduce confidence in the agent system

Ecosystem Health Impact

  • Average quality score: 11.9/100 (very low)
  • Engagement rate: 16.7% overall (6 engaged outputs / 36 total)
  • Zero-engagement rate: 60% of agents
  • Productivity question: Are agents solving real problems or creating busywork?

Root Cause Hypotheses

Hypothesis 1: Content Not Actionable

  • Issues lack clear next steps
  • Missing success criteria
  • Unclear problem statements
  • No clear owner or assignee
  • Too vague or too complex

Hypothesis 2: Poor Title/Description Clarity

  • Titles don't convey value
  • Descriptions buried in technical detail
  • Missing context or background
  • No clear "why this matters"
  • Unclear priority or urgency

Hypothesis 3: Wrong Target Audience

  • Content not relevant to repository maintainers
  • Solving problems nobody asked for
  • Missing alignment with project goals
  • Not addressing actual pain points
  • Duplicate or redundant work

Hypothesis 4: Timing and Visibility

  • Published at low-activity times
  • Not announced or highlighted
  • Missing labels or project assignments
  • Not linked to related discussions
  • Hidden in notification flood

Hypothesis 5: Agent Configuration Issues

  • Prompts not emphasizing quality
  • Missing quality gates
  • No user feedback loop
  • Agents not learning from past engagement
  • Insufficient context or constraints

Audit Plan

Phase 1: Individual Agent Content Review

For each affected agent, review the 5-10 most recent outputs (a simple scoring-sheet sketch for recording these ratings follows the per-agent checklists below):

Daily Agent (Quality: 6.7/100, 6 issues, 0 engagement)

Also experiencing a 40% workflow failure rate - see issue #9899

  • Review 6 recent issues created by Daily
  • Assess title clarity (1-5 scale)
  • Assess description actionability (1-5 scale)
  • Assess relevance to repository (1-5 scale)
  • Assess completeness (1-5 scale)
  • Identify common patterns in zero-engagement content
  • Compare to high-engagement issues (if any)
  • Check timing (when published vs. when maintainers active)
  • Special attention: Is low quality related to workflow failures?

Workflow Agent (Quality: 2.2/100, 2 issues, 0 engagement)

  • Review 2 issues created
  • Same assessment criteria as Daily
  • Identify if issues are too generic or too specific
  • Check if workflow-related issues attract different audience

Code Agent (Quality: 1.1/100, 1 issue, 0 engagement)

  • Review issue created
  • Assess technical depth and correctness
  • Check if code-related issues need different format
  • Verify if issue includes code examples

GitHub Agent (Quality: 1.1/100, 1 issue, 0 engagement)

  • Review issue created
  • Assess GitHub-specific context
  • Check if meta-GitHub issues are relevant
  • Verify if issue overlaps with other agents

Copilot Agent (Quality: 1.1/100, 1 issue, 0 engagement)

  • Review issue created
  • Assess Copilot-specific content
  • Check if Copilot issues have right audience
  • Verify no duplication with other agents
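
To keep the 1-5 ratings comparable across agents, the review could be captured in a simple scoring sheet. The Python sketch below is only an illustration of that bookkeeping, not part of any existing tooling; the class, field names, and example values are hypothetical, and the dimensions mirror the Daily checklist above.

from dataclasses import dataclass
from statistics import mean

@dataclass
class OutputAudit:
    """One row of the Phase 1 scoring sheet (all names are illustrative)."""
    agent: str            # e.g. "Daily"
    issue_url: str        # link to the reviewed issue
    title_clarity: int    # 1-5, per the Daily checklist above
    actionability: int    # 1-5
    relevance: int        # 1-5
    completeness: int     # 1-5

    def average(self) -> float:
        """Average of the four 1-5 ratings, for quick comparison."""
        return mean([self.title_clarity, self.actionability,
                     self.relevance, self.completeness])

# Example: two audited outputs, aggregated per agent (values are made up).
rows = [
    OutputAudit("Daily", "https://github.com/org/repo/issues/1", 2, 1, 3, 2),
    OutputAudit("Daily", "https://github.com/org/repo/issues/2", 3, 2, 2, 2),
]
per_agent = {}
for row in rows:
    per_agent.setdefault(row.agent, []).append(row.average())
for agent, scores in per_agent.items():
    print(agent, round(mean(scores), 2))

Averaging the four dimensions per output gives a quick way to rank agents before the deeper comparative analysis in Phase 2.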

Phase 2: Comparative Analysis

Compare zero-engagement agents with high-engagement agents:

High-engagement agents (50% rate):

  • Smoke: 1 issue, 1 PR, 1 engagement
  • Q: 1 issue, 1 comment, 1 engagement

Questions:

  • What do Smoke and Q do differently?
  • Are their titles more compelling?
  • Are their descriptions more actionable?
  • Do they target specific people or teams?
  • Do they include more context or examples?
  • Do they have better timing?
  • Do they use labels/projects more effectively?

Phase 3: Pattern Detection

Identify systemic patterns across all zero-engagement content (a small tallying sketch follows this list):

  • Common title patterns (e.g., all start with "[agent-name]"?)
  • Common description structures
  • Common labels or lack thereof
  • Common timing patterns
  • Common missing elements (examples, links, context)
  • Common redundancies or overlaps
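
Much of this pattern detection can be done by eye, but a small tally script helps once the titles are collected. The sketch below assumes the zero-engagement issue titles have already been exported to a plain list (no particular API is assumed); the helper name and sample titles are illustrative.

from collections import Counter
import re

def title_prefix(title: str) -> str:
    """Extract a leading '[tag]' or 'Prefix:' pattern; otherwise the first word."""
    m = re.match(r"\s*(\[[^\]]+\])", title)
    if m:
        return m.group(1).lower()
    if ":" in title:
        return title.split(":", 1)[0].strip().lower()
    return title.split()[0].lower() if title.split() else ""

# Titles would come from the zero-engagement issues gathered in Phase 1.
titles = [
    "[daily] Repository activity summary",
    "[daily] Repository activity summary",
    "Workflow report: nightly run",
]
counts = Counter(title_prefix(t) for t in titles)
for prefix, n in counts.most_common():
    print(f"{prefix}: {n}")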

Phase 4: User Feedback

  • Ask maintainers what makes issues engaging vs. ignorable
  • Survey what types of agent outputs they find valuable
  • Identify pain points that agents should address
  • Get feedback on current agent outputs
  • Understand workflow for reviewing agent content

Recommended Improvements

Immediate (Per-Agent Fixes)

For Daily Agent

Priority: P1 (also has workflow failure issue #9899)

  1. Improve issue titles (see the formatting sketch after this list):

    • Current pattern: ?
    • Suggested: "Daily Digest: [Key Highlight] - [Date]"
    • Include most interesting finding in title
  2. Improve issue structure:

    ## 🔑 Key Highlights
    - Most important finding (1 sentence)
    - Second most important (1 sentence)
    
    ## 📊 Details
    [Full analysis]
    
    ## 🎯 Action Items
    - [ ] Specific action for maintainers
    - [ ] Specific action for contributors
  3. Add actionability:

    • Each issue should have clear next steps
    • Assign to relevant team or person
    • Add labels (priority, area)
  4. Fix workflow failures:

    • Address timeout issues (P1)
    • Ensure consistent delivery
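
To make the suggested title pattern and section layout concrete, a small formatting helper for the Daily agent might look like the sketch below. This is not existing agent code; the function name, inputs, and example values are hypothetical, and the layout follows the structure proposed in item 2.

from datetime import date

def build_daily_issue(highlights: list[str], details: str,
                      actions: list[str]) -> tuple[str, str]:
    """Return (title, body) following the suggested 'Daily Digest' layout."""
    # Lead with the most interesting finding, as recommended above.
    title = f"Daily Digest: {highlights[0]} - {date.today().isoformat()}"
    body = "\n".join([
        "## 🔑 Key Highlights",
        *[f"- {h}" for h in highlights[:2]],
        "",
        "## 📊 Details",
        details,
        "",
        "## 🎯 Action Items",
        *[f"- [ ] {a}" for a in actions],
    ])
    return title, body

# Example usage (hypothetical content):
title, body = build_daily_issue(
    ["CI failure rate doubled this week"],
    "[Full analysis goes here]",
    ["Triage the failing workflow runs"],
)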

For Workflow Agent

  1. Increase specificity:

    • Focus on specific workflows with issues
    • Include links to failing runs
    • Provide concrete fix recommendations
  2. Add visual elements:

    • Include status badges
    • Show before/after metrics
    • Use tables for comparisons

For Code/GitHub/Copilot Agents

Assessment needed: Are these agents running frequently enough?

  • Only 1 output each in the entire period
  • May indicate trigger condition issues
  • Consider consolidation if truly low-value

Options:

  1. Increase output frequency, OR
  2. Deprecate if not providing value, OR
  3. Consolidate into higher-value agents

Systemic Improvements (All Agents)

1. Quality Gate Implementation

Add pre-publication checks:

quality_checks:
  - title_clarity: min_score 3/5
  - description_completeness: required_sections
  - actionability: must_have_next_steps
  - relevance: must_match_repository_goals
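
The config above is illustrative rather than an existing schema, and the checks could be enforced with something as small as the sketch below. It assumes the draft issue is plain Markdown text; the section names match the proposed template in the next subsection, the thresholds are arbitrary, and the relevance check is deliberately left to human (or reviewer-agent) judgment.

REQUIRED_SECTIONS = ("## Why This Matters", "## What We Found", "## What You Should Do")

def passes_quality_gate(title: str, body: str) -> list[str]:
    """Return a list of failed checks; an empty list means the draft may be published."""
    failures = []
    if len(title.split()) < 4:        # crude stand-in for title_clarity: min_score
        failures.append("title_clarity: title too short to convey value")
    for section in REQUIRED_SECTIONS:  # description_completeness: required_sections
        if section not in body:
            failures.append(f"description_completeness: missing '{section}'")
    if "- [ ]" not in body:            # actionability: must_have_next_steps
        failures.append("actionability: no checklist of next steps")
    # relevance: must_match_repository_goals is better judged by a reviewer.
    return failures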

2. Template Standardization

Create issue/PR templates for agents:

# [Agent Name] - [Clear, Specific Title]

## Why This Matters
[1-2 sentences explaining impact]

## What We Found
[Key findings with evidence]

## What You Should Do
- [ ] Specific action 1
- [ ] Specific action 2

## More Details
[Full analysis]

---
🤖 Created by [Agent Name] | [Date] | [Link to run]

3. Engagement Tracking

Track engagement for continuous improvement (a minimal aggregation sketch follows this list):

  • Monitor which issues get reactions
  • Learn from high-engagement patterns
  • Adjust agent prompts based on engagement data
  • A/B test different title/description formats
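
A minimal tracker could start as a per-agent aggregation of reaction and comment counts, as sketched below. The input records are assumed to be gathered separately (for example, from the same data behind this report); the dictionary keys and example values are illustrative.

from collections import defaultdict

def engagement_report(outputs):
    """outputs: iterable of dicts with 'agent', 'reactions', and 'comments' keys."""
    totals = defaultdict(lambda: {"outputs": 0, "engaged": 0})
    for o in outputs:
        t = totals[o["agent"]]
        t["outputs"] += 1
        if o["reactions"] > 0 or o["comments"] > 0:
            t["engaged"] += 1
    # Engagement rate per agent, as a percentage of outputs with any reaction/comment.
    return {agent: round(100 * t["engaged"] / t["outputs"], 1)
            for agent, t in totals.items()}

# Example: matches the Smoke numbers quoted above (2 outputs, 1 engaged -> 50%).
print(engagement_report([
    {"agent": "Smoke", "reactions": 1, "comments": 0},
    {"agent": "Smoke", "reactions": 0, "comments": 0},
]))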

4. User Feedback Loop

  • Add "Was this useful?" prompt to agent outputs
  • Collect feedback on agent-created content
  • Use feedback to refine agent prompts
  • Close feedback loop with improvements
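
If a lightweight prompt is preferred over a formal survey, a shared footer appended to every agent output may be enough to start the loop. The snippet below is only a sketch of that idea; the wording and function name are hypothetical.

FEEDBACK_FOOTER = (
    "\n\n---\n"
    "🤖 Was this useful? React with 👍 or 👎, or leave a comment "
    "so the agent prompts can be tuned.\n"
)

def with_feedback_footer(body: str) -> str:
    """Append the feedback prompt to an agent-generated issue body."""
    return body + FEEDBACK_FOOTER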

Success Criteria

Immediate (30 days)

  • All 6 agents audited
  • Common patterns identified
  • Improvement recommendations implemented
  • Engagement rate for affected agents >30% (from 0%)
  • Average quality score for affected agents >25/100 (currently 1.1-6.7)

60 days

  • Engagement rate >50%
  • Average quality score >40/100
  • Zero-engagement agents <20% (from 60%)
  • Quality gates operational
  • Templates adopted by all agents

90 days

  • Engagement rate >60%
  • Average quality score >50/100
  • Zero-engagement agents <10%
  • User feedback system operational
  • Continuous improvement process established

Investigation Timeline

  • Week 1: Phase 1 - Individual agent content review (6 agents x 1 hour = 6 hours)
  • Week 2: Phase 2 - Comparative analysis (4 hours)
  • Week 2: Phase 3 - Pattern detection (2 hours)
  • Week 3: Phase 4 - User feedback (2 hours)
  • Week 3-4: Implement improvements per agent (2-3 hours each)
  • Week 4: Deploy quality gates and templates (4 hours)

Total estimated effort: 20-25 hours over 4 weeks

Related Issues

  • #9899 - Daily agent workflow failures

Priority Justification

Why P1 (High):

  1. Affects 60% of agent ecosystem
  2. Indicates fundamental content quality issues
  3. Wastes agent compute and user attention
  4. Erodes trust in agent system
  5. Relatively straightforward to improve with focused effort

Not P0 because:

  • Agents are producing outputs (volume is fine)
  • System is operational (not a failure)
  • No immediate user-facing breakage
  • Can be addressed incrementally

Reported by: Agent Performance Analyzer
Date: 2026-01-17T04:57:32Z
Data source: Agent Performance Report (Week of January 13-17, 2026)

AI generated by Agent Performance Analyzer - Meta-Orchestrator
