## Problem
The current `create-expert` generates functional experts, but not *usable* experts:

### What's Generated Today
- ✅ Main expert with good functionality
- ✅ Error handling and security
- ❌ NO setup automation
- ❌ NO demo/test mode
- ❌ NO troubleshooting support
### Real User Experience
```shell
# User tries research-assistant
npx perstack run research-assistant "query"
# ❌ Error: BRAVE_API_KEY is required
# User is stuck - no guidance on how to fix
```

**Expected:** Works immediately with demo data, OR setup is automated.
**Actual:** Manual configuration required, no guidance provided.
## Solution: Generate Expert Ecosystems
Transform `create-expert` from generating single experts to generating expert ecosystems:

```toml
# Instead of just this:
[experts."research-assistant"]

# Generate this ecosystem:
[experts."research-assistant"]        # Main expert
[experts."research-assistant-setup"]  # Automated setup wizard
[experts."research-assistant-demo"]   # Demo mode (no API key needed)
[experts."research-assistant-doctor"] # Troubleshooting assistant
```

## Changes Required
### 1. Extend Property Extraction

Add usability properties to `property-extractor`:
```typescript
interface UsabilityProperty {
  category: 'usability'
  name: 'Zero-Config' | 'Setup-Automation' | 'Error-Guidance' | 'Self-Service-Troubleshooting'
  acceptance: string
  testStrategy: 'automated' | 'manual'
}

const USABILITY_PROPERTIES = [
  {
    name: 'Zero-Config',
    description: 'User can try expert immediately without setup',
    acceptance: 'Demo mode works with test data, OR setup is fully automated',
    test: 'Run with demo data - should succeed without any configuration'
  },
  {
    name: 'Setup-Automation',
    description: 'If external deps needed, setup is automated',
    acceptance: 'Setup expert exists, completes in < 2 minutes, handles errors gracefully',
    test: 'Run setup expert - should detect missing deps and configure them'
  },
  {
    name: 'Error-Guidance',
    description: 'All errors include actionable next steps',
    acceptance: '100% of errors have "To fix: ..." guidance',
    test: 'Trigger common errors - verify clear guidance provided'
  }
]
```

### 2. Enhance Expert-Builder → Ecosystem-Builder
Current `expert-builder` instruction:

> Use editTextFile to APPEND the new Expert definition to perstack.toml

New `ecosystem-builder` instruction:
## Ecosystem Generation Strategy

1. **Analyze Dependencies**
   - Detect external API requirements (e.g., BRAVE_API_KEY)
   - Identify required tools/packages
   - Determine if demo mode is feasible

2. **Generate Ecosystem**

   ALWAYS generate:
   - Main expert
   - Demo expert (with test data OR guided walkthrough)

   IF external dependencies exist:
   - Setup expert (automated configuration)
   - Doctor expert (troubleshooting)
3. **Implementation Patterns**
Setup Expert Template:
```toml
[experts."<name>-setup"]
description = "Automated setup for <name>"
instruction = '''
1. Check if <DEPENDENCY> exists
2. If missing:
   - Inform user: "You need <DEPENDENCY> from <URL>"
   - Ask for value
   - Validate format
   - Save to .env
3. Verify configuration with test
4. Confirm: "✓ Setup complete! Try: npx perstack run <name> ..."
'''
```

Demo Expert Template:

```toml
[experts."<name>-demo"]
description = "Interactive demo with sample data"
instruction = '''
Run a demonstration using built-in test data.
Show user what the expert can do without requiring setup.
'''
```

Doctor Expert Template:
```toml
[experts."<name>-doctor"]
description = "Diagnose and fix common issues"
instruction = '''
Run diagnostics:
1. Check environment variables
2. Verify network connectivity
3. Test API keys (if applicable)
4. Validate dependencies

For each issue found, provide:
- What's wrong
- Why it matters
- How to fix (exact commands)
'''
```
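If the doctor expert's checks were ever implemented as a deterministic script rather than an LLM instruction, the environment-variable step might look like the sketch below. All names here (`Diagnostic`, `checkEnv`) are illustrative, not an existing perstack API:

```typescript
// Illustrative sketch only: one doctor-style check, returning the
// what's-wrong / why-it-matters / how-to-fix triple described above.
interface Diagnostic {
  check: string
  ok: boolean
  problem?: string // what's wrong
  why?: string     // why it matters
  fix?: string     // how to fix (exact commands)
}

function checkEnv(
  requiredVars: string[],
  env: Record<string, string | undefined>,
): Diagnostic[] {
  return requiredVars.map((name) => {
    if (env[name]) {
      return { check: `env:${name}`, ok: true }
    }
    return {
      check: `env:${name}`,
      ok: false,
      problem: `${name} is not set`,
      why: `The expert cannot call its external API without ${name}`,
      fix: `To fix: run \`npx perstack run <name>-setup\` or add ${name} to .env`,
    }
  })
}
```

Note that every failing diagnostic carries a `fix` string starting with "To fix:", matching the Error-Guidance acceptance criterion.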
### 3. Add Usability-Manager
**New expert to manage usability PDCA:**
```toml
[experts."usability-manager"]
description = "Ensures expert ecosystem is usable"
delegates = ["expert-tester"]
instruction = '''
## Your Role
Verify the expert ecosystem is production-ready from a UX perspective.
## Usability PDCA
### Plan
Define usability test scenarios:
1. Fresh User Test: Can someone with zero knowledge succeed?
2. Setup Test: If setup needed, does setup expert work flawlessly?
3. Demo Test: Can user see value without any setup?
4. Error Recovery Test: Do errors lead to solutions?
### Do
Delegate to expert-tester with stage "usability":
- Test demo expert (should work without setup)
- Test setup expert (should complete successfully)
- Test main expert (should work after setup)
- Test doctor expert (should diagnose issues)
- Trigger intentional errors (should see guidance)
### Check
Verify usability properties:
- [ ] Demo expert works without configuration
- [ ] Setup expert (if exists) completes in < 2 minutes
- [ ] All errors include "To fix: ..." guidance
- [ ] Doctor expert can diagnose 90% of common issues
- [ ] Time to first success < 5 minutes
### Act
If any property fails:
- If demo missing/broken: Fix demo expert
- If setup broken: Fix setup automation
- If errors unclear: Add actionable guidance
- If doctor missing: Generate doctor expert
- Loop back to Do
## Exit Condition
All usability properties pass → return success to parent.
'''
```
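The Plan–Do–Check–Act loop in the instruction above could be sketched as a driver like this. In practice the loop is run by the LLM, not by code; `runUsabilityTests` and `fixFailures` are hypothetical stand-ins for the Do and Act steps:

```typescript
// Minimal sketch of the usability PDCA loop, under the assumptions above.
type PropertyResult = { name: string; passed: boolean }

function pdcaLoop(
  runUsabilityTests: () => PropertyResult[],       // Do
  fixFailures: (failed: PropertyResult[]) => void, // Act
  maxIterations = 5,
): boolean {
  for (let i = 0; i < maxIterations; i++) {
    const results = runUsabilityTests()                 // Do
    const failed = results.filter((r) => !r.passed)     // Check
    if (failed.length === 0) return true                // Exit: all properties pass
    fixFailures(failed)                                 // Act, then loop back to Do
  }
  return false // did not converge within the iteration budget
}
```

The bounded iteration count is an assumption added here; without some cap, a property that can never pass would loop forever.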
### 4. Update Create-Expert Workflow

Current workflow:

```
create-expert
├── property-extractor
├── expert-builder
├── happy-path-manager
├── unhappy-path-manager
├── adversarial-manager
└── report-generator
```

New workflow:

```
create-expert
├── property-extractor (functional + usability properties)
├── ecosystem-builder (main + setup + demo + doctor)
├── functional-manager (combines happy/unhappy/adversarial)
├── usability-manager (fresh user + setup + demo + error recovery)
└── report-generator (includes usability metrics)
```
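The core decision the ecosystem-builder step makes, which experts to emit for a given main expert, can be sketched as follows. `ExpertSpec` and `planEcosystem` are illustrative names, not an existing API:

```typescript
// Sketch of the ecosystem-builder decision logic (illustrative only).
interface ExpertSpec {
  name: string
  kind: 'main' | 'demo' | 'setup' | 'doctor'
}

function planEcosystem(name: string, externalDeps: string[]): ExpertSpec[] {
  // Always generate the main expert and a demo expert.
  const plan: ExpertSpec[] = [
    { name, kind: 'main' },
    { name: `${name}-demo`, kind: 'demo' },
  ]
  // Setup and doctor experts only make sense when external
  // dependencies (API keys, tools) must be configured.
  if (externalDeps.length > 0) {
    plan.push({ name: `${name}-setup`, kind: 'setup' })
    plan.push({ name: `${name}-doctor`, kind: 'doctor' })
  }
  return plan
}
```

For example, an expert with a `BRAVE_API_KEY` dependency yields all four experts, while a purely local expert yields only the main and demo pair.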
### 5. Update Expert-Tester

Add a usability test stage:

```typescript
if (stage === "usability") {
  testCases = [
    {
      name: "Demo Mode",
      command: "npx perstack run <name>-demo --workspace .",
      expectation: "Should complete without errors, no setup required"
    },
    {
      name: "Setup Automation",
      command: "npx perstack run <name>-setup --workspace .",
      expectation: "Should detect missing config and guide user through setup"
    },
    {
      name: "Error Guidance",
      command: "npx perstack run <name> 'query' --workspace .",
      expectation: "If error occurs, should include 'To fix: ...' guidance"
    },
    {
      name: "Troubleshooting",
      command: "npx perstack run <name>-doctor --workspace .",
      expectation: "Should diagnose issues and provide fixes"
    }
  ]
}
```

## Success Criteria
After this change, when a user runs:

```shell
npx create-expert --description "A web researcher using Brave Search"
```

They should get:

```toml
[experts."web-researcher"]        # Main functionality
[experts."web-researcher-demo"]   # Try it immediately
[experts."web-researcher-setup"]  # Automated BRAVE_API_KEY setup
[experts."web-researcher-doctor"] # Troubleshooting
```

And the user experience should be:
```shell
# First try - see what it does
npx perstack run web-researcher-demo
# ✓ Shows demo with test data

# Set it up
npx perstack run web-researcher-setup
# ✓ Guides through BRAVE_API_KEY setup
# ✓ Validates configuration
# ✓ Confirms ready to use

# Use it for real
npx perstack run web-researcher "TypeScript features"
# ✓ Works!

# If issues occur
npx perstack run web-researcher-doctor
# ✓ Diagnoses and suggests fixes
```

Time to first success: < 5 minutes.
## Testing Plan

- [ ] Generate expert with external dependency (e.g., Brave Search)
  - Verify setup expert is created
  - Verify demo expert is created
  - Verify doctor expert is created
- [ ] Run usability-manager PDCA
  - Verify all usability properties pass
- [ ] Fresh user simulation
  - Start with no configuration
  - Follow generated instructions only
  - Measure time to first success
## Related

This addresses the core issue found in testing: `research-assistant` was generated but unusable without manual `BRAVE_API_KEY` configuration. No setup guidance was provided.

**Current Rating:** B-- (functional but not usable)
**Target Rating:** A (production-ready ecosystem)