308 changes: 308 additions & 0 deletions .cursor/test_monitoring_rule.mdc
@@ -0,0 +1,308 @@
# Automated Test Monitoring and Immediate Bug Fixing Rule

**CRITICAL**: Implement automated test monitoring with immediate error detection and a bug-fixing workflow.

## Terminal Monitoring Requirements

### After Every Test Run
1. **Always Read Terminal Output**
- Automatically capture and read all terminal output after test execution
- Parse output for error messages, warnings, and failures
- Monitor for stack traces, assertion failures, and exception details
- Track test execution time and performance metrics
- Capture both stdout and stderr streams

2. **Test Command Monitoring**
- Monitor pytest executions: `pytest`, `python -m pytest`, `pytest -v`
- Monitor unittest executions: `python -m unittest`, `python test_*.py`
- Monitor integration tests and end-to-end tests
- Monitor custom test runners and scripts
- Monitor CI/CD pipeline test executions

3. **Output Analysis Patterns**
- Failed test indicators: `FAILED`, `ERROR`, `AssertionError`, `Exception`
- Warning patterns: `WARNING`, `DeprecationWarning`, `UserWarning`
- Coverage issues: `coverage`, missing lines, low coverage percentages
- Performance issues: timeouts, slow tests, memory usage
- Import errors, module not found, dependency issues
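
The indicator strings listed above can be encoded as a small pattern table. A minimal sketch, assuming plain regular-expression matching against each output line (the patterns shown are illustrative, not exhaustive):

```python
import re

# Illustrative mapping of output patterns to issue categories (not exhaustive)
OUTPUT_PATTERNS = {
    "failure": [r"\bFAILED\b", r"\bERROR\b", r"AssertionError", r"\bException\b"],
    "warning": [r"\bWARNING\b", r"DeprecationWarning", r"UserWarning"],
    "import": [r"ImportError", r"ModuleNotFoundError", r"No module named"],
}

def classify_line(line):
    """Return the categories whose patterns match a single output line."""
    return [
        category
        for category, patterns in OUTPUT_PATTERNS.items()
        if any(re.search(p, line) for p in patterns)
    ]

print(classify_line("FAILED tests/test_api.py::test_login - AssertionError"))
# ['failure']
```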

## Error Detection and Interruption

### Immediate Test Interruption Triggers
1. **Critical Errors** - Stop immediately and fix:
- Syntax errors in test files
- Import errors preventing test discovery
- Missing dependencies or configuration issues
- Database connection failures
- API endpoint unreachable errors
- Authentication/authorization failures

2. **Test Failures** - Interrupt and analyze:
- Assertion failures with unexpected values
   - Exceptions raised in test methods
- Timeout errors in async tests
- Mock/stub configuration errors
- Data validation failures

3. **Infrastructure Failures** - Halt and resolve:
- Test database setup failures
- Test fixture loading errors
- Test environment configuration issues
- Resource allocation failures
- Permission denied errors

### Error Classification System
```
CRITICAL (Stop All Tests):
- Syntax errors, import errors, missing dependencies
- Database/service connection failures
- Environment setup failures

HIGH (Stop Current Test Suite):
- Multiple test failures in same module
- Resource allocation failures
- Configuration errors

MEDIUM (Complete Suite, Then Fix):
- Individual test assertion failures
- Minor timeout issues
- Warning accumulation

LOW (Log and Continue):
- Performance variations
- Non-critical warnings
- Coverage fluctuations
```
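
One way to apply this classification automatically is to map error text to a severity level. A minimal sketch, assuming simple regular-expression rules; the specific patterns behind each level are assumptions and should be tuned per project:

```python
import re

# Severity levels from the classification above, ordered from most to least severe
SEVERITY_RULES = [
    ("CRITICAL", [r"SyntaxError", r"ImportError", r"ModuleNotFoundError",
                  r"(?i)connection (refused|failed)"]),
    ("HIGH", [r"(?i)permission denied", r"(?i)fixture .* error"]),
    ("MEDIUM", [r"AssertionError", r"(?i)timeout"]),
    ("LOW", [r"Warning\b"]),
]

def classify_severity(error_text):
    """Return the first (most severe) level whose patterns match the error text."""
    for level, patterns in SEVERITY_RULES:
        if any(re.search(p, error_text) for p in patterns):
            return level
    return "LOW"

print(classify_severity("E   ModuleNotFoundError: No module named 'requests'"))  # CRITICAL
```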

## Immediate Bug Fixing Workflow

### Step 1: Error Capture and Analysis
1. **Parse Terminal Output**
```bash
# Capture and parse test output
pytest --tb=long --verbose 2>&1 | tee test_output.log
grep -E "(FAILED|ERROR|Exception)" test_output.log
```

2. **Extract Error Details** (see the parsing sketch after this list)
- Identify failing test name and location
- Capture full stack trace
- Extract assertion failure details
- Identify error type and message
- Locate relevant code lines

3. **Contextual Information Gathering**
- Read the failing test file
- Examine the code under test
- Check related test fixtures and data
- Review recent code changes
- Analyze test dependencies
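
A sketch of the error-detail extraction described in step 2. It assumes the pytest short-summary format (`FAILED path::test - reason`) and reads the `test_output.log` produced by the `tee` command above; real output may need additional patterns:

```python
import re

# Pytest "short test summary" lines typically look like:
#   FAILED tests/test_api.py::TestLogin::test_bad_password - AssertionError: ...
SUMMARY_LINE = re.compile(r"^(FAILED|ERROR)\s+(\S+?)(?:\s+-\s+(.*))?$")

def extract_failures(output_text):
    """Pull (status, test id, reason) records out of captured pytest output."""
    failures = []
    for line in output_text.splitlines():
        match = SUMMARY_LINE.match(line.strip())
        if match:
            status, test_id, reason = match.groups()
            failures.append({"status": status, "test": test_id, "reason": reason or ""})
    return failures

with open("test_output.log") as fh:  # log produced by the tee command above
    for failure in extract_failures(fh.read()):
        print(f"{failure['status']}: {failure['test']} ({failure['reason']})")
```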

### Step 2: Immediate Debugging Actions
1. **Code Investigation**
- Read the failing test method completely
- Examine the implementation being tested
   - Check for recent changes in git history (a helper sketch follows this list)
- Verify test data and fixtures
- Analyze mock configurations

2. **Root Cause Analysis**
- Identify if issue is in test or implementation
- Check for environment-specific problems
- Verify data integrity and test assumptions
- Analyze timing and concurrency issues
- Review dependency versions and compatibility

3. **Quick Fix Implementation**
- Fix obvious syntax errors immediately
- Correct import statements and dependencies
- Update test assertions for changed behavior
- Fix mock configurations and test data
- Resolve environment setup issues
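
For the git-history check in step 1, a small helper can surface the most recent commits that touched the module under test. A sketch using the standard `git log` command; the file path shown is hypothetical:

```python
import subprocess

def recent_changes(path, count=5):
    """List the most recent commits that touched a file, to spot suspect changes."""
    result = subprocess.run(
        ["git", "log", f"-{count}", "--oneline", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# Hypothetical module path used for illustration
for entry in recent_changes("src/payments/processor.py"):
    print(entry)
```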

### Step 3: Verification and Continuation
1. **Immediate Re-test**
```bash
# Re-run specific failing test
pytest path/to/failing_test.py::TestClass::test_method -v

# Re-run related tests
pytest path/to/failing_test.py -v

   # Re-run the full suite after critical fixes
pytest
```

2. **Fix Validation** (a validation sketch follows this list)
- Confirm the specific test now passes
- Verify no new tests are broken
- Check for any performance degradation
- Validate fix doesn't introduce regressions
- Ensure test coverage is maintained

3. **Documentation and Learning**
- Log the error and fix for future reference
- Update test documentation if needed
- Improve error handling if applicable
- Add additional test cases to prevent regression
- Share knowledge with team if relevant
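
A sketch of the re-test and validation loop, driving the same pytest commands shown above. The test id and file path are hypothetical placeholders:

```python
import subprocess

def rerun(target):
    """Re-run a single test (or test file) and report whether it passed."""
    result = subprocess.run(["pytest", target, "-v"])
    return result.returncode == 0

def validate_fix(failing_test, test_file):
    """Confirm the fixed test passes, then check its module for new breakage."""
    if not rerun(failing_test):
        return "fix did not resolve the failure"
    if not rerun(test_file):
        return "fix broke other tests in the module"
    return "fix validated"

# Hypothetical test id used for illustration
print(validate_fix("tests/test_auth.py::TestLogin::test_token_refresh", "tests/test_auth.py"))
```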

## Automated Implementation

### Terminal Output Monitoring Script
```python
import re
import subprocess

def monitor_test_execution(test_command):
    """Run the given test command, stream its output, and interrupt on errors."""
    process = subprocess.Popen(
        test_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1,
    )

    # Output patterns that indicate a failure worth interrupting for
    error_patterns = [
        r'FAILED.*::(.*)',
        r'ERROR.*::(.*)',
        r'(.*Error:.*)',
        r'(.*Exception:.*)',
        r'ImportError:.*',
        r'ModuleNotFoundError:.*',
    ]

    output_lines = []
    for line in process.stdout:
        output_lines.append(line.rstrip())
        print(line, end='')  # Real-time output

        # Check for critical errors and stop the run as soon as one appears
        for pattern in error_patterns:
            if re.search(pattern, line, re.IGNORECASE):
                process.terminate()
                process.wait()
                analyze_and_fix_error(output_lines, line)
                return False

    return process.wait() == 0

def analyze_and_fix_error(output_lines, error_line):
    """Analyze the captured output and attempt an immediate fix."""
    # Implementation for error analysis and fixing
    pass
```
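
A possible invocation of the monitor, assuming pytest is the test runner (the exact command list is an example):

```python
# Example invocation: stream a verbose pytest run through the monitor above
if monitor_test_execution(["pytest", "-v", "--tb=long"]):
    print("All tests passed")
else:
    print("Run interrupted; see analysis output above")
```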

### Integration with Development Workflow
1. **Pre-commit Hooks**
- Run quick tests before committing
- Monitor for immediate failures
- Prevent commits with failing tests

2. **IDE Integration**
- Configure IDE to run tests on save
- Display test results in IDE
- Highlight failing tests immediately

3. **Continuous Monitoring**
- Run tests on file changes
- Monitor test health continuously
- Alert on test failures immediately
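
The continuous-monitoring idea can be sketched with a plain polling loop over file modification times; dedicated watchers (e.g. `watchdog` or `pytest-watch`) do this better, so treat this as a dependency-free illustration only:

```python
import subprocess
import time
from pathlib import Path

def snapshot(root="."):
    """Map every Python file under root to its last-modified time."""
    return {p: p.stat().st_mtime for p in Path(root).rglob("*.py") if p.is_file()}

def watch_and_test(root=".", interval=2.0):
    """Re-run the test suite whenever a Python file changes (polling sketch)."""
    last = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if current != last:
            last = current
            subprocess.run(["pytest", "-x", "-q"])  # stop at first failure

# watch_and_test()  # blocks; run in a dedicated terminal
```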

## Test Environment Management

### Environment Validation
1. **Pre-test Checks** (see the validation sketch after this list)
- Verify test database connectivity
- Check required services are running
- Validate test data availability
   - Confirm environment variables are set
- Verify dependency versions

2. **Resource Management**
- Ensure adequate system resources
- Monitor memory and CPU usage
- Check disk space for test artifacts
- Validate network connectivity
- Confirm permissions and access

3. **Cleanup and Reset**
- Clean test databases between runs
- Reset test fixtures and state
- Clear temporary files and cache
- Restore environment configurations
- Validate clean starting state
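
A sketch of the pre-test environment validation. The environment-variable names, service host/port, and disk threshold are placeholders to adapt per project:

```python
import os
import shutil
import socket

# Placeholder values; substitute the project's real services and variables
REQUIRED_ENV_VARS = ["DATABASE_URL", "TEST_API_KEY"]
REQUIRED_SERVICES = [("localhost", 5432)]  # e.g. the test database
MIN_FREE_DISK_GB = 1

def validate_test_environment():
    """Return a list of problems that should block the test run."""
    problems = []
    for name in REQUIRED_ENV_VARS:
        if not os.environ.get(name):
            problems.append(f"environment variable {name} is not set")
    for host, port in REQUIRED_SERVICES:
        try:
            socket.create_connection((host, port), timeout=2).close()
        except OSError:
            problems.append(f"cannot reach {host}:{port}")
    if shutil.disk_usage(".").free < MIN_FREE_DISK_GB * 1024**3:
        problems.append("insufficient disk space for test artifacts")
    return problems

print(validate_test_environment())
```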

## Reporting and Metrics

### Test Execution Reporting
1. **Real-time Feedback**
- Display test progress and results
- Show error details immediately
- Provide fix suggestions
- Track fix success rates
- Monitor test execution time

2. **Error Tracking** (a logging sketch follows this list)
- Log all errors and fixes
- Track error patterns and frequency
- Identify recurring issues
- Monitor fix effectiveness
- Generate error reports

3. **Performance Monitoring**
- Track test execution speed
- Monitor resource usage
- Identify slow tests
- Track coverage changes
- Monitor test stability
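
A sketch of the error-tracking log described above, using a JSON-lines journal; the file name and record fields are assumptions:

```python
import json
import time
from collections import Counter
from pathlib import Path

LOG_FILE = Path("test_error_log.jsonl")  # assumed location for the error journal

def record_error(test_id, error_type, fix_applied, fixed):
    """Append one error/fix record so patterns can be analyzed later."""
    entry = {
        "timestamp": time.time(),
        "test": test_id,
        "error_type": error_type,
        "fix": fix_applied,
        "fixed": fixed,
    }
    with LOG_FILE.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

def error_frequency():
    """Count how often each error type has appeared."""
    if not LOG_FILE.exists():
        return Counter()
    with LOG_FILE.open() as fh:
        return Counter(json.loads(line)["error_type"] for line in fh if line.strip())

record_error("tests/test_auth.py::test_login", "AssertionError", "updated expected status code", True)
print(error_frequency())
```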

## Emergency Procedures

### When Automated Fixes Fail
1. **Manual Intervention Required**
- Escalate to human developer
- Provide detailed error context
- Suggest investigation areas
- Preserve error state for analysis
- Document complex failures

2. **System Recovery**
- Reset test environment
- Restore known good state
- Skip problematic tests temporarily
- Isolate failing components
- Implement workarounds

3. **Team Notification**
- Alert team of critical failures
- Share error details and context
- Coordinate fix efforts
- Document resolution process
- Update team knowledge base
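
Team notification could be as simple as posting to a chat webhook. A sketch using only the standard library; the endpoint URL and payload shape are placeholders for whatever alerting channel the team actually uses:

```python
import json
import urllib.request

WEBHOOK_URL = "https://example.com/team-webhook"  # placeholder endpoint

def notify_team(message):
    """POST a short alert to a chat webhook (endpoint format is an assumption)."""
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

# notify_team("CRITICAL: test suite halted by import errors on main")
```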

## Implementation Checklist

- [ ] Set up terminal output monitoring
- [ ] Configure error pattern recognition
- [ ] Implement automatic test interruption
- [ ] Create error analysis system
- [ ] Set up immediate fix workflow
- [ ] Configure re-test automation
- [ ] Implement fix validation
- [ ] Set up error reporting
- [ ] Create performance monitoring
- [ ] Configure team notifications

## Success Metrics

- **Error Detection Time**: < 5 seconds after test failure
- **Fix Implementation Time**: < 2 minutes for common errors
- **Re-test Cycle Time**: < 30 seconds
- **Fix Success Rate**: > 80% for automated fixes
- **False Positive Rate**: < 5%
- **Test Suite Recovery Time**: < 5 minutes for critical failures