Pattern Learning Explained
How pattern learning works in SuperLocalMemory V2 - Multi-dimensional identity extraction with confidence scoring, all processed locally for privacy.
Pattern learning is SuperLocalMemory's ability to automatically detect your coding preferences and style by analyzing the memories you save. It learns what frameworks you prefer, how you write code, what testing approaches you use, and more.
Based on published research: Identity pattern learning from interactions with adaptive confidence scoring, inspired by MemoryBank (Zhong et al., AAAI 2024, arXiv:2305.10250), MACLA (Forouzandeh et al., Dec 2025, arXiv:2512.18950), and Hindsight (Latimer et al., Dec 2025, arXiv:2512.12818).
Example:
After saving 50 memories, SuperLocalMemory learns:
Your Coding Identity:
- Framework preference: React (73% confidence)
- Style: Performance over readability (58% confidence)
- Testing: Jest + React Testing Library (65% confidence)
- API style: REST over GraphQL (81% confidence)
- Language: Python for backends (65% confidence)
Why this matters: Your AI assistant can automatically match your preferences without you re-explaining them every session.
Pattern learning analyzes six categories of patterns:
1. Frameworks
What it detects:
- Frontend: React, Vue, Angular, Svelte, Next.js, Nuxt, etc.
- Backend: FastAPI, Flask, Django, Express, NestJS, etc.
- Mobile: React Native, Flutter, SwiftUI, etc.
How it works:
1. Scans memories for framework mentions
2. Counts the frequency of each framework
3. Calculates confidence = (mentions of X / total framework mentions)
Example:
- React: 15 mentions
- Vue: 3 mentions
- Angular: 2 mentions
Total: 20 mentions
React confidence: 15/20 = 75%
Output:
Framework preference: React (75% confidence)
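As a rough sketch of the counting step described above (the keyword set and helper function are illustrative, not the actual pattern_learner.py internals):

```python
from collections import Counter

# Illustrative keyword set -- the real detector's framework list may differ.
FRAMEWORKS = {"react", "vue", "angular", "svelte", "next.js", "fastapi", "flask", "django"}

def framework_confidence(memories: list[str]) -> dict[str, float]:
    """Return each framework's share of all framework mentions."""
    counts = Counter()
    for text in memories:
        for word in text.lower().split():
            word = word.strip(".,;:!?")
            if word in FRAMEWORKS:
                counts[word] += 1
    total = sum(counts.values())
    return {fw: n / total for fw, n in counts.items()} if total else {}

# 15 React, 3 Vue, and 2 Angular mentions give React a confidence of 15/20 = 0.75
```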
2. Languages
What it detects:
- Python, JavaScript, TypeScript, Go, Rust, Java, C#, etc.
- Context-aware (API vs frontend vs backend)
Example:
Memories analyzed:
- "Use Python for REST APIs" → Python + backend context
- "TypeScript for React components" → TypeScript + frontend context
- "Python data processing pipeline" → Python + data context
Result:
- Language: Python for backends (73% confidence)
- Language: TypeScript for frontend (65% confidence)
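A minimal sketch of how a detected language could be paired with its usage context, assuming simple keyword lists (every name below is illustrative):

```python
LANGUAGES = {"python", "typescript", "javascript", "go", "rust", "java"}
CONTEXT_KEYWORDS = {
    "backend": {"api", "rest", "backend", "server", "endpoint"},
    "frontend": {"react", "component", "frontend", "ui"},
    "data": {"data", "pipeline", "etl"},
}

def language_contexts(memory: str) -> list[tuple[str, str]]:
    """Return (language, context) pairs detected in a single memory."""
    words = {w.strip(".,;:!?").lower() for w in memory.split()}
    langs = words & LANGUAGES
    contexts = [c for c, keys in CONTEXT_KEYWORDS.items() if words & keys]
    return [(lang, ctx) for lang in langs for ctx in contexts]

print(language_contexts("Use Python for REST APIs"))          # [('python', 'backend')]
print(language_contexts("TypeScript for React components"))   # [('typescript', 'frontend')]
```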
3. Architecture
What it detects:
- Microservices vs monolith
- Serverless vs traditional servers
- Event-driven architecture
- REST vs GraphQL
- SQL vs NoSQL
Example:
Memories:
- "Split user service into microservice"
- "Avoid monolith, use microservices"
- "Microservices for scalability"
Result:
Architecture preference: Microservices (58% confidence)
4. Security approaches
What it detects:
- JWT vs sessions vs OAuth
- API keys vs certificates
- Authentication patterns
- Authorization strategies
Example:
Memories:
- "JWT tokens expire after 24h"
- "Use JWT for API authentication"
- "JWT refresh token strategy"
Result:
Security: JWT tokens (81% confidence)
5. Coding style
What it detects:
- Performance vs readability
- TDD vs pragmatic testing
- Functional vs OOP
- Strict typing vs dynamic
Example:
Memories:
- "Optimize for performance"
- "Cache aggressively for speed"
- "Performance is critical here"
- "Readable code is important"
Result:
Style: Performance over readability (60% confidence)
6. Domain terminology
What it detects:
- Project-specific terms
- Industry vocabulary (fintech, healthcare, e-commerce)
- Team conventions
- Internal acronyms
Example:
Memories in fintech project:
- "KYC verification flow"
- "AML compliance check"
- "Transaction reconciliation"
Result:
Domain: Fintech (KYC, AML, reconciliation)
How confidence is scored
SuperLocalMemory v2.4.0 replaced the frequency-based formula with a Bayesian Beta-Binomial posterior grounded in the MACLA framework (Forouzandeh et al., Dec 2025, arXiv:2512.18950).
Formula:
posterior_mean = (alpha + evidence_count) / (alpha + beta + evidence_count + log2(total_memories))

How it works:
- Alpha/Beta priors are pattern-specific: framework preferences (α=2, β=3), coding style (α=1, β=4), terminology (α=2, β=3), testing approach (α=1, β=5)
- Log-scaled competition: The denominator grows with log2(total_memories), not the raw count, so adding memories doesn't crush existing confidence scores
- Recency bonus: Patterns observed in the last 7 days get up to a +0.05 boost (decays linearly over 30 days)
- Distribution bonus: Patterns with high consistency get up to +0.03
- Hard cap at 0.95: No pattern can reach 100%; epistemic humility is built in
Why this matters:
Old formula: 500 memories, 10 React observations → 2% confidence (too low, unusable)
MACLA formula: Same data → 55% confidence (calibrated, actionable)
The Bayesian approach gives meaningful confidence from the start and converges toward the true proportion as evidence accumulates.
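A minimal Python sketch of this posterior, using the priors and bonus caps listed above; the exact shape of the recency decay is an interpretation of "up to +0.05, decaying over 30 days", and the function name is illustrative:

```python
import math

# Pattern-specific priors from the list above
PRIORS = {
    "framework": (2, 3),
    "coding_style": (1, 4),
    "terminology": (2, 3),
    "testing": (1, 5),
}

def macla_confidence(pattern_type: str, evidence: int, total_memories: int,
                     days_since_seen: float = 999.0, distribution_bonus: float = 0.0) -> float:
    alpha, beta = PRIORS[pattern_type]
    posterior = (alpha + evidence) / (alpha + beta + evidence + math.log2(max(total_memories, 2)))
    if days_since_seen < 30:                        # recency bonus, fading to zero at 30 days
        posterior += 0.05 * (1 - days_since_seen / 30)
    posterior += min(distribution_bonus, 0.03)      # distribution bonus, capped at +0.03
    return min(posterior, 0.95)                     # hard cap: no pattern reaches 100%

# 10 React observations across 500 memories land around 0.5-0.55
print(round(macla_confidence("framework", 10, 500, days_since_seen=2), 2))
```

Under these priors a single early observation starts well below 100% (for a framework pattern in 50 memories, roughly (2+1)/(2+3+1+log2(50)) ≈ 26%), which is what keeps early confidence calibrated instead of jumping straight to a 1/1 = 100% frequency score.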
The earlier frequency-based scoring (replaced by the MACLA formula in v2.4.0):
confidence = (pattern_mentions / category_total_mentions)

With recency weighting:
recent_boost = 1.2 if last_seen < 7_days else 1.0
confidence = (pattern_mentions / category_total_mentions) * recent_boost

With statistical significance:
if pattern_mentions < 3:
    confidence *= 0.5  # Low confidence if too few samples

| Confidence | Meaning | Threshold |
|---|---|---|
| >80% | Very strong preference | Always report |
| 60-80% | Strong preference | Always report |
| 40-60% | Moderate preference | Report if >50% |
| 30-40% | Weak preference | Report only if significant |
| <30% | Too weak to report | Filtered out |
Default reporting threshold: 50%
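A tiny sketch of the reporting filter this table implies (function name illustrative):

```python
def reportable(patterns: dict[str, float], threshold: float = 0.5) -> dict[str, float]:
    """Keep only patterns at or above the reporting threshold (default 50%)."""
    return {name: conf for name, conf in patterns.items() if conf >= threshold}

print(reportable({"React": 0.73, "Vue": 0.15, "REST": 0.81}))  # Vue is filtered out
```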
Scenario: Framework preferences
Memories:
- React: 15 mentions (last: 2 days ago)
- Vue: 3 mentions (last: 45 days ago)
- Angular: 2 mentions (last: 90 days ago)
Calculations:
React confidence:
Base: 15 / 20 = 75%
Recency boost: 1.2 (last seen < 7 days)
Final: 75% × 1.2 = 90% (capped at 100%)
Vue confidence:
Base: 3 / 20 = 15%
Recency: 1.0 (last seen > 7 days)
Final: 15% (below threshold, not reported)
Output:
Framework preference: React (90% confidence)
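The same arithmetic as a short helper (illustrative, mirroring the frequency and recency rules above):

```python
def frequency_confidence(mentions: int, category_total: int, days_since_seen: int) -> float:
    """Base share of category mentions, boosted 20% if seen within the last 7 days."""
    base = mentions / category_total
    recency_boost = 1.2 if days_since_seen < 7 else 1.0
    return min(base * recency_boost, 1.0)  # capped at 100%

print(round(frequency_confidence(15, 20, 2), 2))   # React: 0.75 * 1.2 -> 0.9
print(round(frequency_confidence(3, 20, 45), 2))   # Vue: 0.15, below the 50% threshold
```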
Patterns update automatically, triggered on every remember operation:
slm remember "We use FastAPI for REST APIs" --tags python,backend

What happens:
- Content saved to database
- Pattern learner extracts entities
- Updates pattern frequency counts
- Recalculates confidence scores
- Updates learned_patterns table
No manual action required.
You can also force an update manually:

# Force pattern update
python3 ~/.claude-memory/pattern_learner.py update

When to use:
- After bulk imports
- After database restore
- When patterns seem stale
To view your learned identity context:

# Get identity context (confidence threshold: 0.5)
python3 ~/.claude-memory/pattern_learner.py context 0.5

Output:
Your Coding Identity:
Framework Preferences:
- React (73% confidence)
- FastAPI (68% confidence)
Language Preferences:
- Python for backends (65% confidence)
- TypeScript for frontend (58% confidence)
Architecture Patterns:
- Microservices (58% confidence)
- REST over GraphQL (81% confidence)
Security Approaches:
- JWT tokens (81% confidence)
Coding Style:
- Performance over readability (58% confidence)
- Async/await preferred (72% confidence)
Testing Preferences:
- Jest + React Testing Library (65% confidence)
- Pytest for Python (71% confidence)
Identity context is a formatted text summary of your learned patterns that can be injected into AI assistant prompts.
Your Coding Identity (learned from 247 memories):
- Framework preference: React (73% confidence)
- Backend: FastAPI (68% confidence)
- Style: Performance-focused (58% confidence)
- Testing: Jest + Pytest (65% confidence)
- API style: REST over GraphQL (81% confidence)
- Security: JWT tokens (81% confidence)
Based on this, when writing code:
1. Use React for frontend
2. Use FastAPI for APIs
3. Optimize for performance
4. Write tests with Jest/Pytest
5. Design REST APIs
6. Use JWT for auth
Manual injection:
# Get context
context=$(python3 ~/.claude-memory/pattern_learner.py context 0.5)
# Use with Claude
echo -e "$context\n\nNow help me build a new API endpoint."

Automatic injection (Cursor/Claude Desktop):
- MCP server automatically includes identity context
- No manual action needed
Aider integration:
# aider-smart wrapper includes context automatically
aider-smart

How patterns evolve over time:

Scenario 1: New preference emerges
Month 1: React (90% confidence)
Month 2: React (85%), Vue (15%)
Month 3: React (75%), Vue (25%)
Month 4: React (55%), Vue (45%)
Pattern learning adapts: "Shifting from React to Vue"
Scenario 2: Temporary spike
Week 1-4: Python (90%)
Week 5: JavaScript spike (10 mentions in 1 week)
Week 6: Back to Python
Pattern learning recognizes: "JavaScript was temporary, Python is core"
Recent patterns weighted more heavily:
if last_seen < 7_days:
    weight = 1.2  # 20% boost
elif last_seen < 30_days:
    weight = 1.0
else:
    weight = 0.8  # 20% penalty

Prevents stale patterns from dominating.
Old patterns gradually fade:
if last_seen > 180_days:
    confidence *= 0.5  # Reduce confidence by half

Ensures current preferences dominate.
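Combining the recency weighting and long-term decay into one helper might look like this (a sketch of the rules above, not the exact implementation):

```python
def time_adjusted_confidence(confidence: float, days_since_seen: int) -> float:
    """Apply the recency weight and age decay described above."""
    if days_since_seen < 7:
        confidence *= 1.2    # 20% boost for recently seen patterns
    elif days_since_seen >= 30:
        confidence *= 0.8    # 20% penalty for stale patterns
    if days_since_seen > 180:
        confidence *= 0.5    # patterns unseen for 6+ months fade
    return min(confidence, 1.0)

print(round(time_adjusted_confidence(0.75, 2), 2))    # 0.9  -- fresh pattern
print(round(time_adjusted_confidence(0.75, 200), 2))  # 0.3  -- fading pattern
```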
No data leaves your machine:
- All pattern learning happens locally
- No external API calls
- No telemetry
- No cloud sync
Stored in SQLite database:
CREATE TABLE learned_patterns (
id INTEGER PRIMARY KEY,
category TEXT NOT NULL,
pattern TEXT NOT NULL,
confidence REAL NOT NULL,
frequency INTEGER NOT NULL,
last_seen TEXT NOT NULL,
created_at TEXT NOT NULL
);

Location: ~/.claude-memory/memory.db
Access control: Standard filesystem permissions
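Because everything is a plain SQLite file, you can inspect learned patterns directly. A minimal read-only sketch, assuming the path and schema shown above:

```python
import sqlite3
from pathlib import Path

db = sqlite3.connect(Path.home() / ".claude-memory" / "memory.db")
rows = db.execute(
    "SELECT category, pattern, confidence, frequency, last_seen "
    "FROM learned_patterns WHERE confidence >= ? ORDER BY confidence DESC",
    (0.5,),
)
for category, pattern, confidence, frequency, last_seen in rows:
    print(f"{category}: {pattern} ({confidence:.0%}, {frequency} mentions, last seen {last_seen})")
db.close()
```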
Without pattern learning:
You: "Help me build an API"
AI: "Sure! Which framework? Which language? REST or GraphQL?"
You: *explains preferences again*
With pattern learning:
You: "Help me build an API"
AI: [Reads identity context: FastAPI, Python, REST, JWT]
AI: "I'll create a FastAPI REST endpoint with JWT auth"
Scenario: Multiple team members using SuperLocalMemory
# Share learned patterns
slm remember "Team uses React + TypeScript" --tags team-standard
slm remember "Team prefers REST over GraphQL" --tags team-standard
slm remember "Team uses Jest for testing" --tags team-standard
# Pattern learning ensures consistent recommendations

Use profiles for different projects:
# Work project (React + FastAPI)
slm switch-profile work
slm remember "Work project uses React + FastAPI"
# Personal project (Vue + Flask)
slm switch-profile personal
slm remember "Personal project uses Vue + Flask"
# Each profile learns separate patterns

Because pattern learning knows your typical patterns, deviations can be flagged:
AI: "You typically use JWT auth, but this endpoint uses sessions.
Was this intentional or should I fix it?"
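One way such a consistency check could work, shown purely as an illustrative sketch (the function, threshold, and data shape are assumptions, not SuperLocalMemory's actual API):

```python
from typing import Optional

def flag_deviation(detected: str, learned: dict[str, float], threshold: float = 0.6) -> Optional[str]:
    """Warn when new code deviates from a strongly learned preference."""
    preferred = max(learned, key=learned.get)
    if learned[preferred] >= threshold and detected != preferred:
        return (f"You typically use {preferred} ({learned[preferred]:.0%} confidence), "
                f"but this code uses {detected}. Was this intentional?")
    return None

print(flag_deviation("sessions", {"JWT": 0.81, "sessions": 0.12}))
```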
You can tune the reporting threshold when generating identity context:

# High confidence only (80%+)
python3 ~/.claude-memory/pattern_learner.py context 0.8
# Low confidence included (30%+)
python3 ~/.claude-memory/pattern_learner.py context 0.3

# View all learned patterns (raw)
python3 ~/.claude-memory/pattern_learner.py list

Output:
Category: frameworks
React: 73% (15 mentions, last: 2 days ago)
Vue: 15% (3 mentions, last: 45 days ago)
Category: languages
Python: 65% (22 mentions, last: 1 day ago)
TypeScript: 58% (18 mentions, last: 3 days ago)
Category: architecture
microservices: 58% (12 mentions, last: 5 days ago)
REST: 81% (27 mentions, last: 1 day ago)
# Clear all learned patterns
python3 ~/.claude-memory/pattern_learner.py reset
# Confirmation required
Are you sure? This will delete all learned patterns. [y/N]: y
✓ Patterns reset successfully

Problem: No patterns detected
Cause: Not enough memories with relevant content
Solution:
# Check memory count
slm status
# Need at least 20-30 memories for meaningful patterns
# Add more memories about your preferences
slm remember "I prefer React for frontend"
slm remember "I use Python for backend APIs"
slm remember "I prefer performance over readability"

Problem: Wrong or outdated patterns reported
Cause: Conflicting or outdated memories
Solution:
# Review learned patterns
python3 ~/.claude-memory/pattern_learner.py list
# Delete outdated memories
sqlite3 ~/.claude-memory/memory.db \
"DELETE FROM memories WHERE created_at < date('now', '-180 days');"
# Force pattern update
python3 ~/.claude-memory/pattern_learner.py update

Problem: Confidence scores too low
Cause: Not enough samples or conflicting signals
Solution:
# Add more memories about your preferences
slm remember "I always use React for frontend" --tags preference
slm remember "React is my go-to framework" --tags preference
# Or lower confidence threshold
python3 ~/.claude-memory/pattern_learner.py context 0.3

Best practices
Write explicit preference memories.

Good:
slm remember "I prefer React over Vue for this project" --tags preference

Poor:
slm remember "Used React" --tags todo

Tag intentional preferences:

slm remember "Team standard: Use TypeScript" --tags team-standard,preference

Benefits:
- Easier to find later
- Higher confidence (tagged = intentional)
# Switched from React to Vue
slm remember "Migrated from React to Vue" --tags migration
slm remember "Now using Vue for all frontend" --tags preference
# Force pattern update
python3 ~/.claude-memory/pattern_learner.py update

# Work profile learns work patterns
slm switch-profile work
# Personal profile learns personal patterns
slm switch-profile personal

Pattern update performance:

| Memories | Update Time |
|---|---|
| 100 | ~0.5s |
| 1,000 | ~2s |
| 5,000 | ~10s |
| 10,000 | ~20s |
Pattern storage:
- ~100 bytes per pattern
- Typical: 50-200 patterns
- Total: 5-20 KB (negligible)
Related guides:
- Quick Start Tutorial - First-time setup
- Knowledge Graph Guide - Graph features
- Multi-Profile Workflows - Profile management
- Why Local Matters - Privacy benefits
- CLI Cheatsheet - Command reference
Created by Varun Pratap Bhardwaj, Solution Architect • SuperLocalMemory V2
SuperLocalMemory V2.4.1: Your AI Finally Remembers You
GitHub • Support • @varun369
100% local. 100% private. 100% free.