feat: Memory, Mesh Network with Security, Messaging, MCP Server #27
Add three major features to SkillKit:
@skillkit/memory - CozoDB-backed semantic memory with embeddings
- 384-dim embeddings via @xenova/transformers (all-MiniLM-L6-v2)
- Memory tiers (warm/long) with promotion logic based on access patterns
- HNSW semantic search for context retrieval
- Storage: ~/.skillkit/agents/{agentId}/memory.db
@skillkit/mesh - Peer mesh network for multi-machine agent distribution
- Local LAN discovery via UDP multicast (239.255.255.250)
- Tailscale VPN support for secure remote connections
- Bidirectional peer registration with health checks
- HTTP and WebSocket transports
- Storage: ~/.skillkit/hosts.json
@skillkit/messaging - Inter-agent messaging system
- Inbox/sent/archived storage with JSON persistence
- Message router for local and cross-host delivery
- Priority levels (low/normal/high/urgent)
- Message types (request/response/notification/update)
- Storage: ~/.skillkit/messages/{inbox|sent|archived}/{agentId}/
CLI Commands:
- skillkit mesh: init, add, remove, list, health, discover, status
- skillkit message: send, inbox, read, reply, archive, forward, sent, status
All packages bumped to version 2.0.0
```ts
  _agentId?: string,
  _category?: string,
  _tier?: string
```
🔴 semanticSearch filter parameters are unused, causing incomplete search results
The semanticSearch method in CozoAdapter accepts agentId, category, and tier filter parameters but completely ignores them (prefixed with underscores). The calling code in search.ts passes these filters and expects database-level filtering, but instead fetches limit * 2 unfiltered results and applies filtering in application code.
Problematic Code

In packages/memory/src/database/cozo-adapter.ts:188-212:

```ts
async semanticSearch(
  embedding: number[],
  limit = 10,
  _agentId?: string,   // UNUSED
  _category?: string,  // UNUSED
  _tier?: string       // UNUSED
): Promise<Array<{ id: string; score: number }>> {
  const params: Record<string, unknown> = { embedding, limit };
  // Query does not use the filter parameters at all
```

In packages/memory/src/embeddings/search.ts:29-35, the filters are passed but ignored:

```ts
const results = await this.adapter.semanticSearch(
  queryEmbedding,
  limit * 2, // Only fetches 2x limit
  agentId,
  category,
  tier
);
```

Impact
If a user searches for memories with agentId: 'agent-1' and limit: 10, the system fetches 20 unfiltered results. If more than half belong to other agents, the user receives fewer than 10 results even though matching results exist in the database.
Recommendation: Implement actual database-level filtering by incorporating the agentId, category, and tier parameters into the CozoDB query, or increase the fetch multiplier significantly and document the limitation.
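If the stopgap is the larger fetch multiplier, the post-filtering in search.ts can at least keep widening the candidate window until enough matches survive. A minimal sketch, assuming the adapter and filter shapes shown above (the `searchFiltered` helper and loop bounds are illustrative, not the PR's actual code):

```ts
// Hypothetical helper: grow the candidate pool until `limit` filtered
// results are collected or the index runs out of candidates.
async function searchFiltered(
  adapter: CozoAdapter,
  queryEmbedding: number[],
  limit: number,
  matches: (id: string) => Promise<boolean> // applies agentId/category/tier checks
): Promise<Array<{ id: string; score: number }>> {
  let multiplier = 2;
  for (;;) {
    const candidates = await adapter.semanticSearch(queryEmbedding, limit * multiplier);
    const filtered: Array<{ id: string; score: number }> = [];
    for (const candidate of candidates) {
      if (await matches(candidate.id)) filtered.push(candidate);
      if (filtered.length === limit) return filtered;
    }
    // Fewer rows than requested means the index is exhausted; also cap growth.
    if (candidates.length < limit * multiplier || multiplier >= 16) return filtered;
    multiplier *= 2;
  }
}
```

Database-level filtering remains the better fix, since this can still re-scan many candidates in the worst case.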
```ts
export async function addKnownHost(host: Host): Promise<void> {
  const hostsFile = await loadHostsFile();

  const existingIndex = hostsFile.knownHosts.findIndex(h => h.id === host.id);
  if (existingIndex >= 0) {
    hostsFile.knownHosts[existingIndex] = host;
  } else {
    hostsFile.knownHosts.push(host);
  }

  await saveHostsFile(hostsFile);
}
```
🔴 Race condition in hosts file operations can cause data loss
Multiple concurrent calls to addKnownHost, removeKnownHost, or updateKnownHost can cause data loss due to an unprotected read-modify-write pattern.
Race Condition Sequence

1. Process A calls `addKnownHost(hostA)` and reads the hosts file with hosts [].
2. Process B calls `addKnownHost(hostB)` and reads the hosts file with hosts [].
3. Process A writes the hosts file with [hostA].
4. Process B writes the hosts file with [hostB]; hostA is lost.
This is particularly problematic during network discovery (packages/mesh/src/discovery/local.ts:157) where addKnownHost is called for each discovered host, potentially in rapid succession or concurrently.
Affected Code
packages/mesh/src/config/hosts-config.ts:66-77:
```ts
export async function addKnownHost(host: Host): Promise<void> {
  const hostsFile = await loadHostsFile(); // Read
  // ... modify ...
  await saveHostsFile(hostsFile); // Write - no locking
}
```

Recommendation: Implement file locking using a library like proper-lockfile, or use an atomic write pattern with a lock file to prevent concurrent modifications.
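A minimal sketch of the locking route, assuming `proper-lockfile` (its `lock()` resolves to a release function; the retry options shown are illustrative, and `Host`, `HOSTS_FILE_PATH`, `loadHostsFile`, and `saveHostsFile` come from the PR):

```ts
import lockfile from 'proper-lockfile';

export async function addKnownHost(host: Host): Promise<void> {
  // Serialize the read-modify-write; concurrent callers wait and retry.
  const release = await lockfile.lock(HOSTS_FILE_PATH, {
    retries: { retries: 5, minTimeout: 50 },
    realpath: false, // the hosts file may not exist yet
  });
  try {
    const hostsFile = await loadHostsFile();
    const existingIndex = hostsFile.knownHosts.findIndex(h => h.id === host.id);
    if (existingIndex >= 0) {
      hostsFile.knownHosts[existingIndex] = host;
    } else {
      hostsFile.knownHosts.push(host);
    }
    await saveHostsFile(hostsFile);
  } finally {
    await release();
  }
}
```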
Actionable comments posted: 16
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/mesh.ts`:
- Around line 108-114: The code parses a port from this.arg into the local
variable port (using parseInt) before persisting hosts via addKnownHost;
validate that parsed port is a finite integer within 1–65535 and handle invalid
cases (e.g., if parseInt yields NaN or out-of-range) by falling back to
DEFAULT_PORT or returning an error to the caller. Update the port determination
logic around the symbols DEFAULT_PORT and randomUUID so you only pass a
validated port to addKnownHost (or abort with a user-friendly message), using
Number.isInteger/Number.isFinite checks and explicit range checks to prevent
invalid host entries.
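A minimal sketch of that validation (the `parsePort` helper is hypothetical; `DEFAULT_PORT` is the PR's own constant):

```ts
// Returns a validated port, falling back to the default when no value is given.
function parsePort(raw: string | undefined, fallback: number): number {
  if (!raw) return fallback;
  const port = Number.parseInt(raw, 10);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid port "${raw}": expected an integer between 1 and 65535`);
  }
  return port;
}
```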
In `@packages/memory/src/database/cozo-adapter.ts`:
- Around line 35-46: The close() method currently nullifies this.db without
releasing native resources; call this.db.close() (await it if it returns a
Promise) before setting this.db = null and this.initialized = false to properly
release the CozoDb resources; locate the close() method in cozo-adapter (and
related use of this.db), update it to invoke this.db.close() first, and leave
reset() as-is (it already uses this.db for dropSchema and initializeSchema).
In `@packages/memory/src/database/schema.ts`:
- Around line 97-115: The probe in initializeSchema uses a single positional
binding which mismatches multi-column tables and always throws; replace the
probe so it matches table arity or uses the relation-inspection operator. Update
the probe inside initializeSchema (where tables =
['memories','memory_vec','memory_links']) to either use named-column matching
like ?[id] := *${table}{id: _} :limit 1 or call ::columns ${table} to check
existence, leaving the subsequent CREATE_*_TABLE branches unchanged.
- Around line 3-95: The schema strings (MEMORY_SCHEMA, CREATE_MEMORIES_TABLE,
CREATE_MEMORY_VEC_TABLE, CREATE_MEMORY_LINKS_TABLE) currently lack proper
primary-key separators causing all columns to be treated as keys; update each
table definition to place the "=> id" primary-key marker after the id field
(e.g., after "id: String,") so upserts and deletes work by id only; ensure each
of the three create table string constants (and the combined MEMORY_SCHEMA)
include "=> id" and leave the HNSW index string (CREATE_HNSW_INDEX) unchanged.
In `@packages/memory/src/embeddings/encoder.ts`:
- Around line 16-36: initializeEncoder currently leaves the module-level
initPromise set when initialization fails, which causes future calls to await
the rejected promise and never retry; update initializeEncoder (and the inner
async initializer that sets initPromise) to catch errors from the pipeline
initialization, ensure initPromise is cleared (set to undefined/null) on
failure, and rethrow the error so callers see it; also ensure embeddingPipeline
is only assigned on success and not left null on error, referencing the
initPromise and embeddingPipeline variables and the initializeEncoder function
to locate where to add the try/catch and the reset logic.
- Around line 122-129: disposeEncoder currently only nulls references and must
call embeddingPipeline.dispose() to free model/GPU resources; update the
disposeEncoder function so that if embeddingPipeline is non-null it awaits
embeddingPipeline.dispose() before setting embeddingPipeline = null and
initPromise = null (referencing the embeddingPipeline and initPromise variables
and the disposeEncoder function).
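A minimal sketch covering both encoder prompts above (the model id matches the PR description; the variable names follow the prompts, and `dispose()` is the method the second prompt calls for):

```ts
import { pipeline } from '@xenova/transformers';

let embeddingPipeline: any = null;
let initPromise: Promise<void> | null = null;

export async function initializeEncoder(): Promise<void> {
  if (embeddingPipeline) return;
  if (!initPromise) {
    initPromise = (async () => {
      try {
        embeddingPipeline = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
      } catch (err) {
        initPromise = null; // clear the rejected promise so the next call retries
        throw err;
      }
    })();
  }
  return initPromise;
}

export async function disposeEncoder(): Promise<void> {
  if (embeddingPipeline) {
    await embeddingPipeline.dispose(); // free model/GPU resources
  }
  embeddingPipeline = null;
  initPromise = null;
}
```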
In `@packages/mesh/src/discovery/local.ts`:
- Around line 137-166: In handleMessage, validate the parsed DiscoveryMessage
before creating or persisting a Host: ensure message.hostId is a non-empty
string, message.port is a finite integer within 1–65535 (or undefined if
optional), and required string fields like hostName/version are present and of
correct type; prefer rinfo.address over message.address for host.address to
reduce spoofing; if any validation fails simply return without updating
discoveredHosts, calling addKnownHost, or invoking this.options.onDiscover; keep
the existing announce behavior for valid 'query' messages and perform these
checks at the top of handleMessage immediately after parsing.
- Around line 31-64: The start() method leaves the socket open on membership
setup failure, calls announce() inside setInterval without handling promise
rejections, and sets multicast TTL to 128 which may leak beyond the LAN; fix by
wrapping the bind callback membership configuration (this.socket!.addMembership,
setBroadcast, setMulticastTTL) in try/catch and on error close and cleanup
this.socket before rejecting, change setMulticastTTL(128) to a LAN-safe value
(1) via this.socket!.setMulticastTTL(1), and ensure periodic and initial
announce() calls are invoked with proper error handling (await announce() in a
try/catch and in setInterval invoke announce().catch(...) or wrap call in async
IIFE to log errors and avoid unhandled promise rejections), and store/clear
announceInterval so it is cleaned up when socket/starting fails or stop() is
called.
In `@packages/mesh/src/peer/health.ts`:
- Around line 120-122: The setInterval callback currently calls the async method
this.checkAll() without awaiting, which allows overlapping runs; modify the
health scheduler to guard against concurrent executions by tracking a running
flag or active promise (e.g., add a boolean like this.isChecking or a
this.currentCheck Promise) and in the setInterval handler return early if a
check is already in progress, or make the callback async and await
this.checkAll() while ensuring reentrancy is prevented; update the symbols
this.interval, setInterval callback, and the this.checkAll() invocations
accordingly so only one health check runs at a time.
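A minimal sketch of the reentrancy guard (the `isChecking` field name is hypothetical):

```ts
private isChecking = false;

start(intervalMs: number): void {
  this.interval = setInterval(() => {
    if (this.isChecking) return; // a previous check is still in flight
    this.isChecking = true;
    this.checkAll()
      .catch(err => console.error('Health check failed:', err))
      .finally(() => {
        this.isChecking = false;
      });
  }, intervalMs);
}
```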
In `@packages/mesh/src/peer/registry.ts`:
- Around line 110-160: resolvePeerAddress currently returns null when a host
qualifier is provided but not found in getKnownHosts; add a local-host fallback
in the hostName branch: when hosts.find(...) yields no host, call
getLocalHostConfig(), compare hostName to localConfig.id and localConfig.name
(case-insensitive for name), and if it matches construct the same Host object
shape used later (id, name, address '127.0.0.1', port, status 'online',
lastSeen) then lookup the peer against this host by calling
this.getPeersByHost(localConfig.id).find(...) using the existing
agentName/agentId/aliases checks; return { host, peer } if found, otherwise
return null. Ensure you reference resolvePeerAddress, getKnownHosts,
getLocalHostConfig, and getPeersByHost when implementing the change.
In `@packages/mesh/src/transport/websocket.ts`:
- Around line 15-89: The close handler currently always calls
scheduleReconnect(), causing an explicit disconnect() to trigger reconnection;
add a manual-close guard (e.g., a private boolean manualClose = false) on the
WebSocketTransport class, set manualClose = true in disconnect(), reset
manualClose = false when a successful connection/open occurs, and update the
socket.on('close') handler and/or scheduleReconnect() to bail out if manualClose
is true so deliberate disconnects do not auto-reconnect; ensure reconnectTimer
handling remains unchanged.
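A minimal sketch of the manual-close guard (the class shape is inferred from the prompt, not the PR's actual code):

```ts
private manualClose = false;

disconnect(): void {
  this.manualClose = true;
  if (this.reconnectTimer) clearTimeout(this.reconnectTimer);
  this.socket?.close();
}

private scheduleReconnect(): void {
  if (this.manualClose) return; // a deliberate disconnect must not reconnect
  // ...existing reconnect/backoff logic...
}

// In the 'open' handler, re-arm reconnection for future unexpected drops:
// this.manualClose = false;
```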
In `@packages/messaging/src/index.ts`:
- Around line 45-47: The three storage instances (InboxStorage, SentStorage,
ArchivedStorage) are all given the same config.storagePath which causes
collisions; update the construction in packages/messaging/src/index.ts so that
when config.storagePath is provided you append a type-specific subdirectory
(e.g., 'inbox', 'sent', 'archived') before passing it into each storage
constructor (use path.join to combine config.storagePath with the subfolder so
existing agentId handling inside InboxStorage/SentStorage/ArchivedStorage
remains unchanged). Ensure you adjust the three calls creating new InboxStorage,
SentStorage, and ArchivedStorage to each receive the respective subpath.
In `@packages/messaging/src/storage/archived.ts`:
- Around line 143-145: The getMessagePath method currently uses messageId
directly which allows path traversal; update getMessagePath to validate/sanitize
messageId before joining: reject or throw if messageId contains path separators
or ".." (or only allow a strict whitelist like [A-Za-z0-9-_]+/UUID/hex), or
replace it with path.basename(messageId) and ensure the basename equals the
original; after computing the path with join, resolve it and assert the resolved
path starts with path.resolve(this.basePath) to guarantee it cannot escape
basePath. Reference: getMessagePath and this.basePath.
In `@packages/messaging/src/storage/inbox.ts`:
- Around line 150-152: The getMessagePath method currently concatenates
messageId into a filesystem path allowing path traversal; update getMessagePath
to validate/sanitize messageId (e.g., reject or normalize inputs containing path
separators, dots, or suspicious characters or enforce an allowlist regex like
alphanumeric + -/_), then construct the path and verify the resolved absolute
path is inside this.basePath (use path.resolve on both and ensure the message
path startsWith the resolved basePath) before returning; reference the
getMessagePath function and the messageId and basePath variables when making
these checks.
In `@packages/messaging/src/storage/sent.ts`:
- Around line 116-118: Guard against path traversal in getMessagePath by
validating/sanitizing the messageId before joining with basePath: inside
getMessagePath (and any callers), ensure messageId does not contain path
separators or traversal segments (e.g. check for path.sep, '/' or '..') or
alternatively compute const safe = basename(messageId) and throw if safe !==
messageId; if invalid throw a clear error. After validation, continue to use
join(this.basePath, `${messageId}.json`) so a crafted ID cannot escape basePath.
In `@packages/messaging/src/transport/remote.ts`:
- Around line 27-31: The got POST call in remote.ts using got.post(...) with
retry: { limit: this.options.retries } won't retry POSTs in got v14; update the
retry config passed to got (the options object in the got.post call) to include
retry.methods that explicitly lists 'POST' (e.g. retry: { limit:
this.options.retries, methods:
['GET','PUT','HEAD','DELETE','OPTIONS','TRACE','POST'] }) so POSTs will be
retried, and ensure the backend consumer handles idempotency/deduplication using
message.id when retries are enabled.
🟡 Minor comments (18)
packages/memory/src/database/cozo-adapter.ts (1)

188-212: ⚠️ Potential issue | 🟡 Minor
Unused filter parameters in `semanticSearch`.

The parameters `_agentId`, `_category`, and `_tier` are accepted but not used in the query. This could lead to incorrect search results if callers expect filtering to occur at the database level. Either implement the filtering in the query or remove the parameters and document that filtering happens post-query.

packages/memory/src/stores/memory-store.ts (1)

73-97: ⚠️ Potential issue | 🟡 Minor
Inconsistent timestamp between database update and returned object.

The `get` method calls `new Date().toISOString()` twice: once for the database update (line 79) and once for the returned object (line 93). These will have slightly different values, causing a mismatch between what's stored and what's returned.

Proposed fix:

```diff
 async get(id: string): Promise<PersistentMemory | null> {
   const row = await this.adapter.getMemory(id);
   if (!row) return null;
+  const now = new Date().toISOString();
   await this.adapter.updateMemory(id, {
     accessCount: row.accessCount + 1,
-    lastAccessedAt: new Date().toISOString(),
+    lastAccessedAt: now,
   });
   return {
     id: row.id,
     agentId: row.agentId,
     category: row.category as MemoryCategory,
     tier: row.tier as MemoryTier,
     content: row.content,
     embedding: [],
     reinforcementScore: row.reinforcementScore,
     createdAt: row.createdAt,
     updatedAt: row.updatedAt,
     accessCount: row.accessCount + 1,
-    lastAccessedAt: new Date().toISOString(),
+    lastAccessedAt: now,
     tags: row.tags,
     metadata: row.metadata,
   };
 }
```

packages/memory/src/stores/memory-store.ts (1)

179-209: ⚠️ Potential issue | 🟡 Minor
`reinforce` and `weaken` may return `null` despite the non-null assertion.

After updating the score, these methods call `this.get(id)`, which may return `null` if the memory was deleted between the update and the get. The type assertion `as Promise<PersistentMemory>` masks this potential issue.

Proposed fix (re-fetch and validate):

```diff
 async reinforce(id: string, amount = 0.1): Promise<PersistentMemory> {
   const existing = await this.adapter.getMemory(id);
   if (!existing) {
     throw new Error(`Memory ${id} not found`);
   }
   const newScore = Math.min(1, existing.reinforcementScore + amount);
   await this.adapter.updateMemory(id, {
     reinforcementScore: newScore,
     updatedAt: new Date().toISOString(),
   });
-  return this.get(id) as Promise<PersistentMemory>;
+  const result = await this.get(id);
+  if (!result) {
+    throw new Error(`Memory ${id} not found after update`);
+  }
+  return result;
 }
```

Apply the same pattern to `weaken`.

packages/memory/src/stores/memory-store.ts (1)

109-112: ⚠️ Potential issue | 🟡 Minor
Truthy checks prevent clearing optional fields to empty values.

Using `if (input.category)` etc. prevents setting `tags` to an empty array `[]` or `metadata` to `{}`. Consider using explicit `undefined` checks instead.

Proposed fix:

```diff
-    if (input.category) updates.category = input.category;
-    if (input.tier) updates.tier = input.tier;
-    if (input.tags) updates.tags = input.tags;
-    if (input.metadata) updates.metadata = input.metadata;
+    if (input.category !== undefined) updates.category = input.category;
+    if (input.tier !== undefined) updates.tier = input.tier;
+    if (input.tags !== undefined) updates.tags = input.tags;
+    if (input.metadata !== undefined) updates.metadata = input.metadata;
```

packages/mesh/src/peer/registry.ts (1)

21-43: ⚠️ Potential issue | 🟡 Minor
Validate `hostId` to avoid silent mismatches in local registration.

`registerLocalPeer` ignores the provided `registration.hostId`, which can hide configuration mistakes. Consider validating and failing fast.

🛡️ Suggested guard:

```diff
 async registerLocalPeer(registration: PeerRegistration): Promise<Peer> {
   const localConfig = await getLocalHostConfig();
+  if (registration.hostId !== localConfig.id) {
+    throw new Error(
+      `Host ID mismatch: expected ${localConfig.id}, got ${registration.hostId}`
+    );
+  }
```

packages/mesh/src/transport/websocket.ts (1)

164-168: ⚠️ Potential issue | 🟡 Minor
Fix lint error: avoid implicit returns in `forEach` callbacks (server-side).

The callback returns the `handler()` call result, which is unused and not idiomatic. Both the client-side (line 56) and server-side (line 167) occurrences need the same fix for consistency.

🧹 Lint-safe adjustment:

```diff
-    this.messageHandlers.forEach(handler => handler(message, socket));
+    this.messageHandlers.forEach(handler => {
+      handler(message, socket);
+    });
```

Apply the same fix at line 56 for the client-side handler.

packages/mesh/src/transport/websocket.ts (1)

53-58: ⚠️ Potential issue | 🟡 Minor
Fix lint error: both `forEach` callbacks have implicit returns that violate `useIterableCallbackReturn`.

Biome's recommended rules include `useIterableCallbackReturn`, which forbids returning values from `forEach` callbacks. Wrap both handler calls in braces to prevent implicit returns. Both occurrences at lines 56 and 167 need updating:

🧹 Lint-safe adjustment:

```diff
-    this.messageHandlers.forEach(handler => handler(message, this.socket!));
+    this.messageHandlers.forEach(handler => {
+      handler(message, this.socket!);
+    });
```

(Also apply the same fix at line 167.)

packages/messaging/src/transport/remote.ts (1)

34-41: ⚠️ Potential issue | 🟡 Minor
Treat any 2xx as successful delivery.

Line 34 only recognizes 200/201; 202/204 will be reported as failures even when the server accepted the message.

✅ Suggested change:

```diff
-    if (response.statusCode === 200 || response.statusCode === 201) {
+    if (response.statusCode >= 200 && response.statusCode < 300) {
```

packages/messaging/src/message/router.ts (1)

41-50: ⚠️ Potential issue | 🟡 Minor
Return a clear error when a remote recipient lacks host info.

Lines 41-50 can call `deliverRemote` with `host` undefined when `to` is not local and lacks `@host`. Return an explicit error instead of passing an invalid host downstream.

✅ Suggested change:

```diff
   await this.config.sentStorage.save(message);
   if (recipient.isLocal) {
     return this.deliverLocal(message, recipient.agentId);
   }
+  if (!recipient.host) {
+    return {
+      messageId: message.id,
+      delivered: false,
+      error: 'Remote recipient missing host (expected agent@host:port)',
+      via: 'remote',
+    };
+  }
   return this.deliverRemote(message, recipient.host!, recipient.agentId);
```

packages/messaging/src/storage/sent.ts (1)

120-124: ⚠️ Potential issue | 🟡 Minor
`filter.from` is ignored in `matchesFilter`.

Line 120 skips `from`, so `list({ from })` won't filter. This is inconsistent with `MessageFilter`.

✅ Suggested change:

```diff
 private matchesFilter(message: Message, filter: MessageFilter): boolean {
+  if (filter.from && message.from !== filter.from) return false;
   if (filter.to && message.to !== filter.to) return false;
```

packages/messaging/src/transport/remote.ts (1)

63-76: ⚠️ Potential issue | 🟡 Minor
Validate host/port parsing in `deliverToAgent`.

Lines 73-76 can yield an empty host or `NaN` port, producing invalid URLs. Return a clear error when parsing fails.

🛡️ Suggested guard:

```diff
   const [, hostPart] = parts;
   const [host, portStr] = hostPart.split(':');
-  const port = portStr ? parseInt(portStr, 10) : 9876;
+  if (!host) {
+    return {
+      messageId: message.id,
+      delivered: false,
+      error: 'Invalid agent address format. Expected: agentId@host:port',
+      via: 'remote',
+    };
+  }
+  const port = portStr ? Number.parseInt(portStr, 10) : 9876;
+  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
+    return {
+      messageId: message.id,
+      delivered: false,
+      error: 'Invalid agent address format. Expected: agentId@host:port',
+      via: 'remote',
+    };
+  }
```

packages/mesh/src/peer/health.ts (1)

27-34: ⚠️ Potential issue | 🟡 Minor
IPv6 address handling may break URL construction.

If `host.address` contains an IPv6 address (e.g., `::1` or `fe80::1`), the URL construction `http://${host.address}:${host.port}/health` will produce an invalid URL like `http://::1:8080/health`. IPv6 addresses must be bracketed: `http://[::1]:8080/health`.

🛠️ Proposed fix:

```diff
 try {
-  const url = `http://${host.address}:${host.port}/health`;
+  const address = host.address.includes(':') ? `[${host.address}]` : host.address;
+  const url = `http://${address}:${host.port}/health`;
   const response = await got.get(url, {
```

packages/cli/src/commands/mesh.ts (1)

250-252: ⚠️ Potential issue | 🟡 Minor
Validate timeout input to avoid NaN behavior.

Invalid values become `NaN` and propagate to health checks/discovery. It's better to reject invalid inputs explicitly.

✅ Suggested validation:

```diff
-  const timeout = this.timeout ? parseInt(this.timeout, 10) : undefined;
+  const timeout = this.timeout ? Number(this.timeout) : undefined;
+  if (this.timeout && (!Number.isFinite(timeout) || timeout <= 0)) {
+    console.error(chalk.red(`Invalid timeout: ${this.timeout}`));
+    return 1;
+  }
```

```diff
-  const timeout = this.timeout ? parseInt(this.timeout, 10) : 5000;
+  const timeout = this.timeout ? Number(this.timeout) : 5000;
+  if (this.timeout && (!Number.isFinite(timeout) || timeout <= 0)) {
+    console.error(chalk.red(`Invalid timeout: ${this.timeout}`));
+    return 1;
+  }
```

Also applies to: 291-292

packages/mesh/src/__tests__/index.test.ts (1)

3-9: ⚠️ Potential issue | 🟡 Minor
Confirm MESH_VERSION intent vs. release version.

This test hard-codes `'1.0.0'`. With the monorepo bump to 2.0.0, please confirm whether `MESH_VERSION` is meant to track the package/protocol version; if so, update the constant (and this assertion) to avoid drift. If it's intentionally decoupled, consider documenting that in code to prevent future confusion.

packages/mesh/src/discovery/local.ts (1)

204-215: ⚠️ Potential issue | 🟡 Minor
Ensure `discoverOnce` always stops the socket.

If `query()` or the timeout wait throws, the socket can remain open. A `try/finally` keeps cleanup reliable.

✅ Cleanup on failure:

```diff
 export async function discoverOnce(timeout = 5000): Promise<Host[]> {
   const discovery = new LocalDiscovery();
   await discovery.start();
-  await discovery.query();
-
-  await new Promise(resolve => setTimeout(resolve, timeout));
-
-  const hosts = discovery.getDiscoveredHosts();
-  await discovery.stop();
-
-  return hosts;
+  try {
+    await discovery.query();
+    await new Promise(resolve => setTimeout(resolve, timeout));
+    return discovery.getDiscoveredHosts();
+  } finally {
+    await discovery.stop();
+  }
 }
```

packages/cli/src/commands/message.ts (1)

99-105: ⚠️ Potential issue | 🟡 Minor
Unsafe type casts bypass validation.

The casts `(this.priority as any)` and `(this.type as any)` bypass TypeScript's type checking, potentially passing invalid values to the messaging service.

Proposed fix:

```diff
+const VALID_PRIORITIES = ['low', 'normal', 'high', 'urgent'] as const;
+const VALID_TYPES = ['request', 'response', 'notification', 'update'] as const;
+// In sendMessage, before service.send:
+const priority = this.priority && VALID_PRIORITIES.includes(this.priority as any)
+  ? this.priority as typeof VALID_PRIORITIES[number]
+  : 'normal';
+const type = this.type && VALID_TYPES.includes(this.type as any)
+  ? this.type as typeof VALID_TYPES[number]
+  : 'notification';
 const result = await service.send({
   to,
   subject: this.subject || 'Message from SkillKit',
   body: this.body,
-  priority: (this.priority as any) || 'normal',
-  type: (this.type as any) || 'notification',
+  priority,
+  type,
 });
```

Also applies to: 103-104

packages/cli/src/commands/message.ts (1)

137-139: ⚠️ Potential issue | 🟡 Minor
Validate that limit is a positive integer.

`parseInt` on invalid input like `"abc"` returns `NaN`, which would be passed to the service. Add validation.

Proposed fix:

```diff
 if (this.limit) {
-  filter.limit = parseInt(this.limit, 10);
+  const parsedLimit = parseInt(this.limit, 10);
+  if (Number.isNaN(parsedLimit) || parsedLimit <= 0) {
+    console.error(chalk.red('Error: --limit must be a positive integer'));
+    return 1;
+  }
+  filter.limit = parsedLimit;
 }
```

packages/mesh/src/discovery/tailscale.ts (1)

112-134: ⚠️ Potential issue | 🟡 Minor
Off-by-one in Magic DNS suffix stripping.

When stripping the Magic DNS suffix, the code uses `hostname.slice(0, -status.magicDNSSuffix.length - 1)`, assuming there's a dot separator. However, the `endsWith` check doesn't verify the dot, so if `magicDNSSuffix` is `"example.com"` and the hostname is `"fooexample.com"`, the slice would be incorrect.

Proposed fix:

```diff
-  if (status.magicDNSSuffix && hostname.endsWith(status.magicDNSSuffix)) {
-    const shortName = hostname.slice(0, -status.magicDNSSuffix.length - 1);
+  const dnsWithDot = status.magicDNSSuffix ? `.${status.magicDNSSuffix}` : null;
+  if (dnsWithDot && hostname.endsWith(dnsWithDot)) {
+    const shortName = hostname.slice(0, -dnsWithDot.length);
     for (const peer of status.peers) {
       if (peer.hostname.toLowerCase() === shortName.toLowerCase()) {
         return peer.tailscaleIP;
       }
     }
   }
```
🧹 Nitpick comments (22)
packages/memory/src/stores/memory-store.ts (2)

211-243: Missing validation for source and target memory existence before linking.

The `link` method doesn't verify that both `sourceId` and `targetId` refer to existing memories, which could create orphaned links.

Proposed fix:

```diff
 async link(
   sourceId: string,
   targetId: string,
   relationshipType: MemoryLink['relationshipType'],
   strength = 1.0
 ): Promise<MemoryLink> {
+  const [source, target] = await Promise.all([
+    this.adapter.getMemory(sourceId),
+    this.adapter.getMemory(targetId),
+  ]);
+  if (!source) throw new Error(`Source memory ${sourceId} not found`);
+  if (!target) throw new Error(`Target memory ${targetId} not found`);
+
   const id = randomUUID();
   const now = new Date().toISOString();
```

245-270: `getStats` returns hardcoded values for `avgReinforcementScore`, `oldestMemory`, and `newestMemory`.

These fields are always `0` and `null` respectively, which doesn't reflect actual data. Consider implementing these or documenting the limitation.

packages/memory/src/embeddings/encoder.ts (1)
63-85: Sequential encoding in `encodeBatch` may be slow for large batches.

Processing texts one by one in a loop foregoes potential batch optimizations in the transformer pipeline. Consider using the pipeline's native batching if available.
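A hedged sketch of native batching: `@xenova/transformers` pipelines accept an array of inputs and return a stacked tensor (the pooling options shown are the usual ones for sentence embeddings; exact shapes depend on the model):

```ts
// texts: string[]; embeddingPipeline: the initialized feature-extraction pipeline.
const output = await embeddingPipeline(texts, { pooling: 'mean', normalize: true });

// For all-MiniLM-L6-v2, output.dims is [texts.length, 384]; split per row.
const [rows, dims] = output.dims;
const embeddings: number[][] = [];
for (let i = 0; i < rows; i++) {
  embeddings.push(Array.from(output.data.slice(i * dims, (i + 1) * dims)));
}
```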
packages/memory/src/embeddings/search.ts (4)

1-9: `includeSimilar` and `maxSimilarPerResult` options are defined but never used.

The `SemanticSearchOptions` interface defines these fields, but they're not referenced in any method implementation. Either implement the functionality or remove the unused options to avoid confusion:

```diff
 export interface SemanticSearchOptions extends MemorySearchOptions {
   minScore?: number;
-  includeSimilar?: boolean;
-  maxSimilarPerResult?: number;
 }
```

14-57: Double-fetching memories and redundant filtering in `search`.

The method passes `agentId`, `category`, and `tier` to `adapter.semanticSearch` (lines 32-34), then re-checks the same filters after fetching each memory (lines 46-48). If the adapter already filters correctly, the post-fetch checks are redundant. If it doesn't, the adapter signature is misleading.

95-101: Unsafe type cast in `searchByCategory`.

`category as any` bypasses type safety. Use proper type narrowing or accept `MemoryCategory` directly.

Proposed fix:

```diff
 async searchByCategory(
   query: string,
-  category: string,
+  category: MemoryCategory,
   limit = 10
 ): Promise<MemorySearchResult[]> {
-  return this.search(query, { category: category as any, limit });
+  return this.search(query, { category, limit });
 }
```

Import `MemoryCategory` from types if needed.

59-93: Use the stored embedding instead of re-encoding in `findSimilar`.

Embeddings are already stored in the `memory_vec` table (via `insertEmbedding`), but `getMemory()` doesn't retrieve them since `MemoryRow` lacks an embedding field. The method at line 69 unnecessarily re-encodes the content, which is wasteful and may produce different results if the encoder changes. Consider extending `MemoryRow` to include the embedding field and retrieving it from the database.

packages/memory/src/tiers/tier-manager.ts (2)

24-49: Hardcoded promotion threshold (0.7) ignores the config's `reinforcementScoreThreshold`.

Line 26 uses a hardcoded `threshold = 0.7`, but the class has a config with `reinforcementScoreThreshold`. Consider using the config value for consistency.

Proposed fix:

```diff
 evaluateForPromotion(memory: PersistentMemory): TierEvaluationResult {
   const score = this.calculatePromotionScore(memory);
-  const threshold = 0.7;
+  const threshold = this.config.reinforcementScoreThreshold;
   const shouldPromote = memory.tier === 'warm' && score >= threshold;
```

51-77: The `reinforcementScore` calculation can exceed 1.0 before the weighted sum.

Line 54: `memory.reinforcementScore / this.config.reinforcementScoreThreshold` can be > 1 if `reinforcementScore` exceeds the threshold. While it's clamped at line 73, this inconsistency with `accessScore` (clamped at line 52) is confusing.

Proposed fix for consistency:

```diff
-  const reinforcementScore = memory.reinforcementScore / this.config.reinforcementScoreThreshold;
+  const reinforcementScore = Math.min(1, memory.reinforcementScore / this.config.reinforcementScoreThreshold);
```

Then remove the redundant `Math.min(1, ...)` at line 73.

packages/memory/src/database/cozo-adapter.ts (1)

174-186: Non-atomic delete: memory and embedding deletions are separate operations.

If `deleteMemory` succeeds but `deleteEmbedding` fails, the system is left in an inconsistent state with an orphaned embedding. Consider wrapping both in a transaction if CozoDB supports it.

packages/messaging/package.json (1)

29-33: Align tooling devDependencies with workspace root versions.

The devDependencies here are outdated compared to the root workspace (tsup, typescript, and vitest are all several patch versions behind), which can lead to inconsistent build/test behavior. Most other packages in the workspace are aligned with the root versions; aligning this package would maintain consistency.

♻️ Proposed alignment:

```diff
 "devDependencies": {
-  "tsup": "^8.0.1",
-  "typescript": "^5.3.3",
-  "vitest": "^1.2.0"
+  "tsup": "^8.3.5",
+  "typescript": "^5.7.2",
+  "vitest": "^2.1.8"
 },
```

packages/messaging/src/message/router.ts (1)

120-126: Set `readAt` when unarchiving to keep status consistent.

Line 124 marks status as `read` but doesn't set `readAt`, which can leave read metadata inconsistent.

🕒 Suggested tweak:

```diff
 message.status = 'read';
+message.readAt = new Date().toISOString();
 await this.config.inboxStorage.save(message);
```

packages/messaging/src/message/builder.ts (1)

9-20: Consider resetting state or preventing reuse after `build()`.

The builder retains its internal state after `build()` is called. If a consumer calls `build()` multiple times, the same `id` and `createdAt` will be reused, producing duplicate message identifiers.

🛠️ Option: reset state in `build()` or throw on reuse:

```diff
 build(): Message {
   if (!this.message.from) {
     throw new Error('Message must have a "from" field');
   }
   if (!this.message.to) {
     throw new Error('Message must have a "to" field');
   }
   if (!this.message.subject) {
     throw new Error('Message must have a subject');
   }
   if (this.message.body === undefined) {
     throw new Error('Message must have a body');
   }
-  return this.message as Message;
+  const result = this.message as Message;
+  // Reset for potential reuse with a fresh ID
+  this.message = {
+    id: randomUUID(),
+    type: 'notification',
+    priority: 'normal',
+    status: 'unread',
+    createdAt: new Date().toISOString(),
+  };
+  return result;
 }
```

packages/messaging/src/__tests__/index.test.ts (1)

1-9: Consider expanding test coverage for the messaging public API.

This smoke test verifies that exports are defined, which is useful. However, the messaging package has substantial functionality (MessagingService, createMessage, createReply, createForward, and the storage classes) that would benefit from unit tests validating actual behavior.

Would you like me to help generate more comprehensive unit tests for the messaging package?

packages/mesh/src/discovery/tailscale.ts (3)

25-68: Consider adding a timeout and size limit for the exec call.

The `execAsync` call has no timeout, which could hang indefinitely if the Tailscale daemon is unresponsive. Additionally, unbounded stdout could cause memory issues with malformed output.

Proposed fix:

```diff
 export async function getTailscaleStatus(): Promise<TailscaleStatus> {
   try {
-    const { stdout } = await execAsync('tailscale status --json');
+    const { stdout } = await execAsync('tailscale status --json', {
+      timeout: 10000,
+      maxBuffer: 1024 * 1024,
+    });
     const status = JSON.parse(stdout);
```

43: Avoid the `any` type for peer entries.

The explicit `as [string, any][]` cast loses type safety. Consider defining a minimal interface for the raw Tailscale peer response.

Proposed fix:

```diff
+interface RawTailscalePeer {
+  HostName?: string;
+  TailscaleIPs?: string[];
+  Online?: boolean;
+  OS?: string;
+  LastSeen?: string;
+}

-  for (const [id, peer] of Object.entries(status.Peer) as [string, any][]) {
+  for (const [id, peer] of Object.entries(status.Peer) as [string, RawTailscalePeer][]) {
```

70-77: Add a timeout to prevent hanging.

Same concern as `getTailscaleStatus`: if `tailscale version` hangs, this will block indefinitely.

Proposed fix:

```diff
 export async function isTailscaleAvailable(): Promise<boolean> {
   try {
-    await execAsync('tailscale version');
+    await execAsync('tailscale version', { timeout: 5000 });
     return true;
   } catch {
     return false;
   }
 }
```

packages/mesh/src/index.ts (1)

57-63: Dynamic imports are redundant since these modules are already statically exported.

`initializeHostsFile` (line 14) and `getPeerRegistry` (line 35) are already statically exported at the top of this file, so the dynamic imports add overhead without any tree-shaking or lazy-loading benefit.

Proposed fix:

```diff
-export async function initializeMesh(): Promise<void> {
-  const { initializeHostsFile } = await import('./config/index.js');
-  const { getPeerRegistry } = await import('./peer/index.js');
-
-  await initializeHostsFile();
-  await getPeerRegistry();
-}
+export async function initializeMesh(): Promise<void> {
+  await initializeHostsFile();
+  await getPeerRegistry();
+}
```

packages/mesh/src/config/hosts-config.ts (2)

15-26: Silent fallback on parse errors may mask corruption.

When `JSON.parse` fails, the code silently returns a fresh default file. This could cause data loss if the file exists but is temporarily malformed (e.g., a partial write). Consider logging a warning or backing up the corrupted file.

Proposed fix:

```diff
 try {
   const content = await readFile(HOSTS_FILE_PATH, 'utf-8');
   return JSON.parse(content) as HostsFile;
 } catch {
+  console.warn(`Warning: Could not parse ${HOSTS_FILE_PATH}, creating new hosts file`);
   return createDefaultHostsFile();
 }
```

28-32: Mutation of input parameter.

`saveHostsFile` mutates the incoming `hostsFile` object by setting `lastUpdated`. This side effect could surprise callers.

Proposed fix:

```diff
 export async function saveHostsFile(hostsFile: HostsFile): Promise<void> {
   await mkdir(dirname(HOSTS_FILE_PATH), { recursive: true });
-  hostsFile.lastUpdated = new Date().toISOString();
-  await writeFile(HOSTS_FILE_PATH, JSON.stringify(hostsFile, null, 2), 'utf-8');
+  const toSave = { ...hostsFile, lastUpdated: new Date().toISOString() };
+  await writeFile(HOSTS_FILE_PATH, JSON.stringify(toSave, null, 2), 'utf-8');
 }
```

packages/cli/src/commands/message.ts (2)

181-217: Extract the duplicated message ID resolution logic.

The pattern of resolving partial message IDs via `service.getMessage()` followed by `service.getInbox().find(m => m.id.startsWith(id))` is repeated in `readMessage`, `replyMessage`, `archiveMessage`, and `forwardMessage`. Extract it into a helper method.

Proposed helper:

```ts
private async resolveMessageId(service: MessagingService, partialId: string): Promise<string | null> {
  const message = await service.getMessage(partialId);
  if (message) return partialId;
  const messages = await service.getInbox();
  const found = messages.find(m => m.id.startsWith(partialId));
  return found?.id ?? null;
}
```

Then in each handler:

```diff
-  let messageId = id;
-  const message = await service.getMessage(id);
-  if (!message) {
-    const messages = await service.getInbox();
-    const found = messages.find(m => m.id.startsWith(id));
-    if (found) {
-      messageId = found.id;
-    } else {
-      console.error(chalk.red(`Message not found: ${id}`));
-      return 1;
-    }
-  }
+  const messageId = await this.resolveMessageId(service, id);
+  if (!messageId) {
+    console.error(chalk.red(`Message not found: ${id}`));
+    return 1;
+  }
```

Also applies to: 251-302, 304-345, 347-400

443-486: Hardcoded limit of 1000 may cause performance issues.

`showStatus` fetches up to 1000 sent and archived messages just to count them. If the messaging service supports a count-only API, that would be more efficient. Consider adding a `getCount()` method to the messaging service to avoid loading all messages into memory just for counting.
```ts
async close(): Promise<void> {
  if (this.db) {
    this.db = null;
    this.initialized = false;
  }
}

async reset(): Promise<void> {
  if (!this.db) throw new Error('Database not initialized');
  await dropSchema(this.db);
  await initializeSchema(this.db);
}
```
🧩 Analysis chain

🌐 Web query: cozo-node CozoDb close dispose method

💡 Result: In cozo-node, CozoDb exposes a `close()` method (not `dispose()`), and you should call it when you're done to release native resources:

```js
const { CozoDb } = require("cozo-node");
const db = new CozoDb();
try {
  const res = await db.run("?[] <- [[1,2,3]]");
  console.log(res);
} finally {
  db.close(); // important: releasing native resources
}
```

`close()` is synchronous (`close(): void`), and simply dropping the JS reference (e.g., `delete db`) is not sufficient. Source: the npm package docs for cozo-node (Basic API showing `close(): void` and the note about releasing native resources).
Call `this.db.close()` before nullifying the reference to properly release native resources.

The cozo-node library requires explicitly calling `close()` on the CozoDb instance; simply setting `this.db = null` leaves native resources unreleased.

```ts
async close(): Promise<void> {
  if (this.db) {
    this.db.close();
    this.db = null;
    this.initialized = false;
  }
}
```
```ts
this.inbox = new InboxStorage(config.agentId, config.storagePath);
this.sent = new SentStorage(config.agentId, config.storagePath);
this.archived = new ArchivedStorage(config.agentId, config.storagePath);
```
Storage path collision when custom storagePath is provided.
When config.storagePath is provided, it's passed directly to all three storage constructors. Each storage class uses this as its basePath, meaning inbox, sent, and archived messages would all be written to the same directory, causing filename collisions and data corruption.
The individual storage classes default to ~/.skillkit/messages/{inbox|sent|archived}/{agentId}, but this custom path logic doesn't preserve that structure.
🐛 Proposed fix: Append storage type subdirectory
```diff
+import { join } from 'node:path';
+
 export class MessagingService {
   private config: MessagingServiceConfig;
   private inbox: InboxStorage;
   private sent: SentStorage;
   private archived: ArchivedStorage;
   private router: MessageRouter;

   constructor(config: MessagingServiceConfig) {
     this.config = config;
-    this.inbox = new InboxStorage(config.agentId, config.storagePath);
-    this.sent = new SentStorage(config.agentId, config.storagePath);
-    this.archived = new ArchivedStorage(config.agentId, config.storagePath);
+    const basePath = config.storagePath;
+    this.inbox = new InboxStorage(config.agentId, basePath ? join(basePath, 'inbox', config.agentId) : undefined);
+    this.sent = new SentStorage(config.agentId, basePath ? join(basePath, 'sent', config.agentId) : undefined);
+    this.archived = new ArchivedStorage(config.agentId, basePath ? join(basePath, 'archived', config.agentId) : undefined);
```
+ this.archived = new ArchivedStorage(config.agentId, basePath ? join(basePath, 'archived', config.agentId) : undefined);🤖 Prompt for AI Agents
In `@packages/messaging/src/index.ts` around lines 45 - 47, The three storage
instances (InboxStorage, SentStorage, ArchivedStorage) are all given the same
config.storagePath which causes collisions; update the construction in
packages/messaging/src/index.ts so that when config.storagePath is provided you
append a type-specific subdirectory (e.g., 'inbox', 'sent', 'archived') before
passing it into each storage constructor (use path.join to combine
config.storagePath with the subfolder so existing agentId handling inside
InboxStorage/SentStorage/ArchivedStorage remains unchanged). Ensure you adjust
the three calls creating new InboxStorage, SentStorage, and ArchivedStorage to
each receive the respective subpath.
```ts
private getMessagePath(messageId: string): string {
  return join(this.basePath, `${messageId}.json`);
}
```
Guard against path traversal via messageId.
Line 144 uses messageId directly in the path. If message IDs are user-controlled, ../ or path separators could escape basePath.
🔒 Suggested safeguard
```diff
 private getMessagePath(messageId: string): string {
+  if (messageId.includes('..') || messageId.includes('/') || messageId.includes('\\')) {
+    throw new Error('Invalid messageId');
+  }
   return join(this.basePath, `${messageId}.json`);
 }
```
```ts
private getMessagePath(messageId: string): string {
  return join(this.basePath, `${messageId}.json`);
}
```
Guard against path traversal via messageId.
Line 151 uses messageId directly in the path; a crafted ID could escape basePath.
🔒 Suggested safeguard
```diff
 private getMessagePath(messageId: string): string {
+  if (messageId.includes('..') || messageId.includes('/') || messageId.includes('\\')) {
+    throw new Error('Invalid messageId');
+  }
   return join(this.basePath, `${messageId}.json`);
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| private getMessagePath(messageId: string): string { | |
| return join(this.basePath, `${messageId}.json`); | |
| } | |
| private getMessagePath(messageId: string): string { | |
| if (messageId.includes('..') || messageId.includes('/') || messageId.includes('\\')) { | |
| throw new Error('Invalid messageId'); | |
| } | |
| return join(this.basePath, `${messageId}.json`); | |
| } |
🤖 Prompt for AI Agents
In `@packages/messaging/src/storage/inbox.ts` around lines 150 - 152, The
getMessagePath method currently concatenates messageId into a filesystem path
allowing path traversal; update getMessagePath to validate/sanitize messageId
(e.g., reject or normalize inputs containing path separators, dots, or
suspicious characters or enforce an allowlist regex like alphanumeric + -/_),
then construct the path and verify the resolved absolute path is inside
this.basePath (use path.resolve on both and ensure the message path startsWith
the resolved basePath) before returning; reference the getMessagePath function
and the messageId and basePath variables when making these checks.
```ts
private getMessagePath(messageId: string): string {
  return join(this.basePath, `${messageId}.json`);
}
```
Guard against path traversal via messageId.
Line 117 uses messageId directly in the path; a crafted ID could escape basePath.
🔒 Suggested safeguard
```diff
 private getMessagePath(messageId: string): string {
+  if (messageId.includes('..') || messageId.includes('/') || messageId.includes('\\')) {
+    throw new Error('Invalid messageId');
+  }
   return join(this.basePath, `${messageId}.json`);
 }
```
}🤖 Prompt for AI Agents
In `@packages/messaging/src/storage/sent.ts` around lines 116 - 118, Guard against
path traversal in getMessagePath by validating/sanitizing the messageId before
joining with basePath: inside getMessagePath (and any callers), ensure messageId
does not contain path separators or traversal segments (e.g. check for path.sep,
'/' or '..') or alternatively compute const safe = basename(messageId) and throw
if safe !== messageId; if invalid throw a clear error. After validation,
continue to use join(this.basePath, `${messageId}.json`) so a crafted ID cannot
escape basePath.
```ts
const response = await got.post(url, {
  json: message,
  timeout: { request: this.options.timeout },
  retry: { limit: this.options.retries },
  throwHttpErrors: false,
```
🌐 Web query: got v14 retry mechanism POST requests default behavior idempotent methods

💡 Result: In Got v14, the built-in retry logic is intentionally biased toward idempotent HTTP methods:

- Default behavior: Got does not retry `POST` requests, so a failed `got.post(...)` (network error, certain status codes, etc.) will not be retried unless you opt in.
- Default "idempotent" retry methods: `GET`, `PUT`, `HEAD`, `DELETE`, `OPTIONS`, `TRACE` (these are the methods Got retries by default).

Retrying POST anyway (only if your POST is made idempotent) requires explicitly adding POST to `retry.methods`:

```js
import got from 'got';

await got.post('https://api.example.com/things', {
  json: {name: 'x'},
  headers: {'idempotency-key': '...'}, // if your API supports it
  retry: {
    methods: ['GET', 'PUT', 'HEAD', 'DELETE', 'OPTIONS', 'TRACE', 'POST'],
    limit: 2
  }
});
```

If you provide a custom `retry.calculateDelay`, note that Got can allow it to override retry rules unless you enable enforcement (`enforceRetryRules`).

Sources: https://www.jsdocs.io/package/got (v14.6.6 docs) • https://wdk-docs.github.io/got-docs/7-retry/ • https://www.npmjs.com/package/got/v/11.0.1
POST requests will not retry with current configuration in got v14.
The code sets retry: { limit: this.options.retries } on line 30, but got v14 only retries idempotent methods (GET, PUT, HEAD, DELETE, OPTIONS, TRACE) by default. POST requests are not retried regardless of the limit value, making the retries option ineffective for this delivery mechanism.
To enable POST retries, configure retry.methods explicitly:
♻️ Fix POST retry configuration
```diff
-  retry: { limit: this.options.retries },
+  retry: {
+    limit: this.options.retries,
+    methods: ['POST'],
+    statusCodes: [408, 429, 500, 502, 503, 504],
+  },
```

Ensure the backend implements message deduplication (via `message.id`) if POST retries are enabled, to prevent duplicate message delivery.
Implements comprehensive enhancement plan with 9 phases:

Phase 1: @skillkit/resources Package
- Bundled agents (9): architect, build-error-resolver, code-reviewer, doc-updater, e2e-runner, planner, refactor-cleaner, security-reviewer, tdd-guide
- Command templates (10): /tdd, /plan, /e2e, /learn, /build-fix, etc.
- Operational profiles (4): dev, review, research, security
- Guidelines (5): security, code-style, testing, git, performance
- Hook templates (12): typescript-check, eslint-check, security-scan, etc.

Phases 2-3: Pattern Learning & Confidence Scoring
- skillkit learn: extract patterns from git history
- skillkit pattern status|feedback|approve|reject|export|import|cluster
- Confidence evolution: success +0.05, failure -0.10

Phase 4: Session State Persistence
- skillkit session status|start|load|list|note|complete|in-progress
- Cross-session context preservation in ~/.skillkit/sessions/

Phase 5: Workflow Pipelines
- skillkit workflow pipeline list|<name>
- Built-in: feature, bugfix, refactor, security-audit, documentation

Phase 6: Operational Profiles
- skillkit profile list|<name>|create|remove
- Mode switching for different work contexts

Phase 7: Guidelines System
- skillkit guideline list|show|enable|disable|create|remove
- Always-on coding standards

Phase 8: Hook Templates
- skillkit hook template list|apply|show
- Pre-built automation patterns

Phase 9: Slash Command Templates
- skillkit command available|install <id>
- Install bundled command templates
```typescript
if (options.since) {
  args.push(`--since="${options.since}"`);
}
if (options.until) {
  args.push(`--until="${options.until}"`);
}
if (options.branch) {
  args.push(options.branch);
}
if (options.author) {
  args.push(`--author="${options.author}"`);
}

const output = runGitCommand(`git ${args.join(' ')}`, repoPath);
```
🔴 Shell injection risk in git history analyzer due to string-concatenated git command
analyzeGitHistory/getGitCommits builds a shell command by concatenating user-provided option strings into git log ... and passes it to execSync.
Actual vs expected:
- Expected: user-supplied values like `--since`, `--until`, and `--author` should be passed to git without being interpreted by the shell.
- Actual: values are embedded into a single command string with quotes and executed via the shell, allowing injection if an attacker can control these option values.
Impact:
- Potential arbitrary command execution when these APIs are used with untrusted input (e.g., a CLI flag or config value).
The command is constructed as a string:

- Options are interpolated directly: `` args.push(`--since="${options.since}"`) `` etc. (packages/core/src/learning/git-analyzer.ts:50-61)
- Then executed via `execSync(command, ...)` (packages/core/src/learning/git-analyzer.ts:22-27)
- The full command is built as `` runGitCommand(`git ${args.join(' ')}`, repoPath) `` (packages/core/src/learning/git-analyzer.ts:63)
If options.since contains something like "; rm -rf ~; echo ", it can break out of quoting depending on shell parsing.
Recommendation: Avoid shell parsing by using execFileSync('git', args, { cwd, ... }) (or spawnSync) and pass each argument as a separate array element. Do not embed quotes in the args (--since=${value} is sufficient when not using a shell).
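A minimal sketch of that shape (the option names mirror the snippet above; the wrapper itself is illustrative, not the package's code):

```typescript
import { execFileSync } from 'node:child_process';

interface GitLogOptions {
  since?: string;
  until?: string;
  author?: string;
  branch?: string;
}

// Each value becomes its own argv entry, so the shell never parses it and
// metacharacters inside the options cannot inject commands.
function runGitLog(repoPath: string, options: GitLogOptions): string {
  const args = ['log'];
  if (options.since) args.push(`--since=${options.since}`);
  if (options.until) args.push(`--until=${options.until}`);
  if (options.author) args.push(`--author=${options.author}`);
  if (options.branch) args.push(options.branch);

  return execFileSync('git', args, { cwd: repoPath, encoding: 'utf-8' });
}
```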
```typescript
export function getInstalledAgentIds(searchDirs?: string[]): string[] {
  const dirs = searchDirs || [
    join(process.cwd(), '.claude', 'agents'),
    join(homedir(), '.claude', 'agents'),
  ];

  const installed = new Set<string>();

  for (const dir of dirs) {
    if (existsSync(dir)) {
      try {
        const files = require('node:fs').readdirSync(dir);
        for (const file of files) {
          if (file.endsWith('.md')) {
            installed.add(file.replace('.md', ''));
          }
        }
      } catch {
        continue;
      }
```
🔴 Runtime failure in ESM package due to use of require() in @skillkit/resources
@skillkit/resources is an ESM package ("type": "module"), but getInstalledAgentIds() uses require('node:fs'). In ESM, require is not defined by default.
Actual vs expected:
- Expected: `getInstalledAgentIds()` works when the package is imported as ESM.
- Actual: calling it throws `ReferenceError: require is not defined`, breaking `getAvailableAgents()`/`isAgentInstalled()` and any CLI commands that rely on them.

Impact:

- Breaks `skillkit agent available` and related features at runtime.
The ESM-only usage is here (packages/resources/src/agents/index.ts:135):

```typescript
const files = require('node:fs').readdirSync(dir);
```

This function is used by:

- `getAvailableAgents()`, which calls `getInstalledAgentIds()` (packages/resources/src/agents/index.ts:150-153)
- `isAgentInstalled()`, which calls `getInstalledAgentIds()` (packages/resources/src/agents/index.ts:155-156)
Recommendation: Replace require('node:fs').readdirSync with an ESM import at top-level (e.g., import { readdirSync } from 'node:fs') or use await import('node:fs') / createRequire(import.meta.url) if you truly need CommonJS interop.
Was this helpful? React with 👍 or 👎 to provide feedback.
Adds @skillkit/mcp-memory package - an MCP (Model Context Protocol) server
that exposes SkillKit's persistent memory system to Claude, Cursor, and
other MCP-compatible AI tools.
Tools:
- memory_store: Store memories with semantic embeddings
- memory_search: Semantic search through memories
- memory_recall: Recall by category/tier/tags
- memory_get: Get memory by ID
- memory_forget: Delete a memory
- memory_link: Create relationships between memories
- memory_reinforce: Strengthen/weaken memories
- memory_stats: Get memory statistics
Resources:
- memory://stats: Current statistics (JSON)
- memory://recent: Recent memories (JSON)
Usage with Claude Desktop:
```json
{
"mcpServers": {
"skillkit-memory": {
"command": "npx",
"args": ["@skillkit/mcp-memory"],
"env": { "SKILLKIT_AGENT_ID": "claude-desktop" }
}
}
}
```
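For reference, MCP clients invoke these tools through the protocol's standard `tools/call` request; the argument fields below are illustrative rather than the server's exact input schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_store",
    "arguments": {
      "content": "This repo uses pnpm, not npm",
      "category": "convention",
      "tags": ["tooling"]
    }
  }
}
```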
Actionable comments posted: 14
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/learn.ts`:
- Around line 213-215: The declared Option.String `category` (the --category, -c
option) is unused in `execute()`; either remove the `category` option
declaration or apply it to filter the candidate patterns before they are
processed/displayed. If implementing filtering, read `category` inside
`execute()`, validate it against the allowed values (error_fix, refactor,
workaround, debugging, convention), then restrict the `patterns` (or whatever
collection is passed to the display/selection logic) via a simple filter like
patterns.filter(p => p.category === category) and proceed with the existing
flow; if removing, delete the `category` Option declaration and any help text
references to avoid dead CLI options.
In `@packages/cli/src/commands/primer.ts`:
- Around line 109-111: The commits value passed into analyzeGitHistory may be
NaN or negative; validate and sanitize this.commits before calling
analyzeGitHistory: parse the input with parseInt, if result is NaN use a safe
default (e.g., 100), clamp the value to a minimum of 1 (and optionally a
reasonable maximum like 1000), and then pass that sanitized integer in the
commits option to analyzeGitHistory so analyzeGitHistory always receives a valid
positive integer.
- Around line 98-144: The runLearn flow currently has no error handling, so
exceptions from analyzePrimer, analyzeGitHistory, or addPattern will crash the
CLI; wrap the core logic of runLearn in a try/catch (like
runAnalysis/runGenerate) so any thrown error is caught, log the error (use the
same logger used elsewhere or console.error) with a clear message referencing
runLearn, and return a non-zero exit code (e.g., return 1) in the catch block;
ensure the try covers calls to analyzePrimer, analyzeGitHistory, and the loops
that call addPattern so all failures are handled.
In `@packages/cli/src/commands/profile.ts`:
- Around line 150-162: The code incorrectly casts this.name to the narrow union
ProfileName in execute() and elsewhere, which breaks when creating/removing
arbitrary custom profiles; update the type system so custom profile identifiers
are not constrained to the literal union — either (a) broaden ProfileName to
string | 'dev' | 'review' | 'research' | 'security' or (b) introduce a separate
CustomProfileName = string and adjust OperationalProfile.name to accept string
(or a union type), then change signatures that accept profile names (e.g.
removeCustomProfile) to accept the broader type and remove the forced cast in
execute(); ensure callers and type annotations for OperationalProfile,
execute(), and removeCustomProfile are updated consistently.
In `@packages/core/src/guidelines/manager.ts`:
- Around line 184-199: loadGuidelineConfig currently returns
DEFAULT_GUIDELINE_CONFIG (or shallow-merges parsed YAML into it), which allows
later mutations to mutate the shared default; change loadGuidelineConfig to
always return a fresh, deep-cloned config object instead of returning
DEFAULT_GUIDELINE_CONFIG directly—e.g., create a deep clone of
DEFAULT_GUIDELINE_CONFIG (using structuredClone or a JSON deep-copy) then merge
parsed yaml.parse(content) into that clone, ensuring arrays (like any guideline
lists) are copied/replaced rather than referenced; also return a deep clone on
the error/no-file paths so getConfigPath, loadGuidelineConfig and
DEFAULT_GUIDELINE_CONFIG never expose shared mutable references.
In `@packages/core/src/guidelines/types.ts`:
- Around line 14-21: DEFAULT_GUIDELINE_CONFIG currently lists category names but
code in manager.ts filters against guideline IDs (g.id); update
DEFAULT_GUIDELINE_CONFIG.enabledGuidelines so it contains actual guideline IDs
or simply initialize it to an empty array to avoid mismatches—edit the export
DEFAULT_GUIDELINE_CONFIG in packages/core/src/guidelines/types.ts, replace
['security','code-style','testing','git'] with either the canonical guideline id
strings used across Guideline objects or with [] so that manager.ts filtering by
g.id works correctly.
In `@packages/core/src/learning/extractor.ts`:
- Around line 315-336: mergePatterns currently treats patterns with the same
problem text as duplicates which can wrongly deduplicate distinct solutions;
update the identity check in mergePatterns so findIndex uses only the unique id
(change the predicate to p.id === pattern.id) and keep the existing
confidence-replacement logic; if IDs can be missing in your data, add a clear
fallback (e.g., generate/require IDs or use a composite key of id+solution) but
do not rely on problem text alone.
In `@packages/core/src/learning/feedback.ts`:
- Around line 25-26: The current logic in recordSuccess/recordFailure always
sets change to 'increased'/'decreased' even when evolving.confidence is clamped
at bounds; capture the previous confidence (oldConfidence), compute the new
confidence with the clamping logic (e.g., newConfidence = Math.min(0.95, old +
0.05) or Math.max(0.1, old - 0.05)), assign evolving.confidence = newConfidence,
and then set change based on comparing newConfidence to oldConfidence
('increased' | 'decreased' | 'unchanged'); update both recordSuccess and
recordFailure to use this pattern so the change field accurately reflects
whether the value actually changed.
- Around line 118-120: inferAction currently returns
pattern.solution.split('\n')[0] which can be empty when the solution starts with
newlines; update inferAction (and reference LearnedPattern.pattern.solution) to
split pattern.solution by '\n', iterate to find the first non-empty line after
trimming whitespace and return that, and only if none found return the fallback
'Apply learned solution'.
In `@packages/core/src/learning/git-analyzer.ts`:
- Line 46: The current git log format uses '|||', which can appear in commit
messages and break parsing; update the format string to use a safe delimiter
such as the null byte (e.g. replace
'--format=%H|||%h|||%an|||%aI|||%s|||%b|||END_COMMIT' with a null-byte based
sequence like using '\x00' between fields and before END_COMMIT), and update the
parser that splits the git output (the split call currently using '|||',
referenced on the split line around line 74) to split on the new null-byte
delimiter (and the trailing END_COMMIT marker) so commits are parsed safely.
In `@packages/core/src/learning/types.ts`:
- Around line 101-108: DEFAULT_LEARNING_CONFIG currently sets outputPath to the
shell-tilde string '~/.skillkit/learned/' which won't be expanded by Node.js;
update the default to an absolute path by expanding the tilde at initialization
(use homedir() to build the path) or replace the literal tilde with a
pre-expanded path string so consumers get a valid home-directory-based path;
change DEFAULT_LEARNING_CONFIG.outputPath accordingly (refer to
DEFAULT_LEARNING_CONFIG, LearningConfig, and the use of homedir()).
In `@packages/resources/src/agents/manifest.ts`:
- Around line 3-13: The BUNDLED_AGENTS manifest entries are missing the
disallowedTools property present in the templates; update the objects in the
BUNDLED_AGENTS array (e.g., the entry with id 'architect' and the entry for
'code-reviewer') to include disallowedTools: ['Edit', 'Write', 'NotebookEdit']
(or the exact restriction list from the front-matter) so the BundledAgent
entries reflect the same tool restrictions as the templates; ensure the
BundledAgent type/interface accepts disallowedTools and add the property
consistently to all affected entries.
In `@packages/resources/src/hooks/index.ts`:
- Around line 32-41: formatHookCommand builds a RegExp from variable keys and
currently uses the raw key which can treat regex metacharacters as special;
update formatHookCommand to escape regex special characters in each key (the key
from vars in the for loop) before constructing new RegExp(`\\{\\{${key}\\}\\}`,
'g') so substitutions are literal; keep using the global flag and leave
template.command/template.variables/variables logic intact.
In `@packages/resources/src/hooks/manifest.ts`:
- Around line 15-63: The pre-commit hook commands should avoid running with
empty file lists and `secret-detection` should exclude deleted files; update the
'eslint-check' hook to compute staged files first (e.g. FILES=$(git diff
--cached --name-only --diff-filter=d | grep -E "\.(js|ts|jsx|tsx)$")) and only
run npx eslint --fix $FILES if FILES is non-empty, and update the
'secret-detection' hook to use --diff-filter=d and similarly guard execution by
computing a FILES variable (e.g. FILES=$(git diff --cached --name-only
--diff-filter=d) && [ -n "$FILES" ] && npx secretlint $FILES) so neither
'eslint-check' nor 'secret-detection' run with empty args and deleted files are
excluded (note: reference hooks by id: 'eslint-check', 'secret-detection';
'prettier-format' already uses --diff-filter=d).
🧹 Nitpick comments (32)
packages/resources/templates/agents/security-reviewer.md (1)

115-123: Minor markdown formatting issues flagged by linter. The table at lines 116-118 should be surrounded by blank lines per MD058, and the table columns have alignment inconsistencies (MD060).

📝 Proposed fix for markdown formatting

```diff
 ### Secrets Detected
+
 | Secret Type | Location | Action |
 |-------------|----------|--------|
 | API Key | .env.example | Remove or use placeholder |
+
 ### Recommendations
```

packages/resources/src/commands/index.ts (1)

7-9: Avoid exposing a mutable shared array. Returning the internal list lets callers mutate global state. Consider returning a copy.

♻️ Suggested change

```diff
 export function getCommandTemplates(): CommandTemplate[] {
-  return COMMAND_TEMPLATES;
+  return [...COMMAND_TEMPLATES];
 }
```

packages/cli/src/commands/guideline.ts (1)
223-235: Consider validating priority input.

`parseInt(this.priority)` will return `NaN` if the user provides a non-numeric string (e.g., `--priority high`). This could result in storing `NaN` as the priority value.

Also, the `enableGuideline(this.id)` call on line 235 may be redundant since the guideline object already has `enabled: true` set on line 229. Consider if both are necessary.

♻️ Proposed fix to validate priority

```diff
-      priority: this.priority ? parseInt(this.priority) : 5,
+      priority: this.priority ? (parseInt(this.priority, 10) || 5) : 5,
```

packages/resources/src/agents/index.ts (1)

124-148: Inconsistent import pattern with `require('node:fs')`.

Line 135 uses `require('node:fs').readdirSync(dir)` instead of importing `readdirSync` from 'node:fs' at the top of the file with the other fs imports. This is inconsistent with the ESM import style used elsewhere in the file.

♻️ Proposed fix to use consistent imports

```diff
-import { readFileSync, existsSync, mkdirSync, writeFileSync } from 'node:fs';
+import { readFileSync, existsSync, mkdirSync, writeFileSync, readdirSync } from 'node:fs';
```

Then update line 135:

```diff
-      const files = require('node:fs').readdirSync(dir);
+      const files = readdirSync(dir);
```

packages/cli/src/commands/agent.ts (2)
726-728: Fragile string matching for skip detection.

The check `result.message.includes('already exists')` relies on the exact error message format from `installBundledAgent`. If that message changes in `@skillkit/resources`, this logic will silently break, causing skipped agents to be reported as errors. Consider having `installBundledAgent` return a structured result with an explicit `reason` or `code` field (e.g., `'already_exists'`) instead of parsing the message string.

834-839: Duplicate helper function.

`formatCategoryName` (lines 834-839) has an identical implementation to `formatAgentName` (lines 634-639). Consider consolidating these into a single `titleCase` or `formatHyphenatedName` helper to reduce duplication.

♻️ Proposed consolidation

```diff
-function formatAgentName(name: string): string {
-  return name
-    .split('-')
-    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
-    .join(' ');
-}
+function toTitleCase(hyphenatedName: string): string {
+  return hyphenatedName
+    .split('-')
+    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
+    .join(' ');
+}

-function formatCategoryName(category: string): string {
-  return category
-    .split('-')
-    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
-    .join(' ');
-}
+// Use toTitleCase in place of both formatAgentName and formatCategoryName
```

packages/core/src/learning/config.ts (2)
83-98: Non-atomic read-modify-write pattern.

`addPattern` performs load → modify → save without file locking. If two processes call `addPattern` concurrently, one's changes may be lost. While unlikely in typical CLI usage, this could cause data loss in scripted batch scenarios. Consider documenting this limitation or implementing file-based locking for the pattern store.

118-134: Repeated store loads for query functions.

Each query function (`getAllPatterns`, `getPatternsByCategory`, `getApprovedPatterns`) loads the entire store from disk. For multiple queries in sequence, this is inefficient. Consider exposing the store object to allow callers to load once and query multiple times.

packages/core/src/learning/extractor.ts (2)
261-261: Pattern ID may collide in rapid succession.

The ID format `session-${Date.now()}-${messageIndex}` can produce duplicates if `createPattern` is called multiple times within the same millisecond for the same message index (e.g., if patterns are extracted in parallel or in a tight loop). Consider adding a random suffix or using a UUID library for guaranteed uniqueness.

🔧 Suggested fix

```diff
-  const id = `session-${Date.now()}-${messageIndex}`;
+  const id = `session-${Date.now()}-${messageIndex}-${Math.random().toString(36).slice(2, 8)}`;
```

288-291: Duplicate `truncate` helper.

This `truncate` function is identical to the one in packages/cli/src/commands/agent.ts (lines 596-599). Consider extracting to a shared utility module.

packages/core/src/learning/git-analyzer.ts (2)
22-28: Silent error suppression may hide legitimate issues.

`runGitCommand` returns an empty string for all errors, including permission issues or corrupted repositories. Consider at least logging errors in debug mode or distinguishing between "not a git repo" vs. other failures.

304-307: Framework detection produces false positives.

Checking if `file.path.includes('next')` or `includes('react')` will match any path containing these substrings (e.g., nextstep/, react-native-config/, or even prereact/). This could incorrectly identify frameworks. Consider more specific patterns like checking for configuration files (next.config.js, package.json dependencies) or using word boundaries.

packages/core/src/learning/feedback.ts (1)

25-25: Asymmetric confidence adjustment may be intentional but worth documenting.

Success increases confidence by +0.05 (capped at 0.95), while failure decreases by -0.1 (floored at 0.1). This 2:1 penalty ratio means a pattern needs two successes to recover from one failure. If this is intentional (to be conservative), consider adding a comment explaining the rationale.

Also applies to: 52-52
packages/core/src/learning/generator.ts (3)

51-69: `getMostCommonCategory` returns a default when called with an empty array.

If `patterns` is empty, the function returns `'error_fix'` as the default. This shouldn't happen given the `minPatterns` check in `generateSkillFromPatterns`, but the implicit default could mask bugs if called directly elsewhere.

98-98: `replace('_', ' ')` only replaces the first underscore.

While current `PatternCategory` values have at most one underscore (error_fix), using `replaceAll` would be more robust for future categories.

♻️ Suggested fix

```diff
-  lines.push(`description: Learned ${category.replace('_', ' ')} patterns from project history`);
+  lines.push(`description: Learned ${category.replaceAll('_', ' ')} patterns from project history`);
```

Apply similarly to lines 115 and 180.

Also applies to: 115-115, 180-180

140-154: `saveGeneratedSkill` doesn't handle write errors.

If `writeFileSync` throws (e.g., disk full, permission denied), the error propagates uncaught. Consider wrapping in try/catch and returning a result object or re-throwing with more context.

🔧 Suggested improvement

```diff
 export function saveGeneratedSkill(
   skill: LearnedSkillOutput,
   outputDir?: string
-): string {
+): { success: true; path: string } | { success: false; error: string } {
   const dir = outputDir || join(homedir(), '.skillkit', 'skills', 'learned');

-  if (!existsSync(dir)) {
-    mkdirSync(dir, { recursive: true });
-  }
+  try {
+    if (!existsSync(dir)) {
+      mkdirSync(dir, { recursive: true });
+    }

-  const filepath = join(dir, skill.filename);
-  writeFileSync(filepath, skill.content);
+    const filepath = join(dir, skill.filename);
+    writeFileSync(filepath, skill.content);

-  return filepath;
+    return { success: true, path: filepath };
+  } catch (error) {
+    return { success: false, error: error instanceof Error ? error.message : 'Unknown error' };
+  }
 }
```

packages/resources/src/index.ts (1)

7-7: Consider sourcing `RESOURCES_VERSION` from package metadata to avoid drift.

Manual updates can desync from package.json during releases.

packages/core/src/profiles/types.ts (1)

1-22: Avoid duplicate profile type definitions across packages.

`ProfileName`/`OperationalProfile` are duplicated here and in packages/resources/src/profiles/types.ts. This is a drift risk; consider centralizing in a shared package or re-exporting one source.

packages/resources/src/profiles/manifest.ts (1)

15-53: Centralize tool identifier definitions for type safety.

Tool IDs in `preferredTools`/`avoidTools` are free-form strings with no central registry or validation. While current usage is consistent (PascalCase: 'Edit', 'Write', 'Bash', 'Read', 'Grep', 'Glob', 'WebSearch', 'WebFetch'), a mismatch in casing or names would silently fail at runtime. Create a shared `const` or `enum` (e.g., `TOOL_IDS`) to enforce consistency and enable compile-time checking.

packages/cli/src/commands/workflow/pipeline.ts (2)
4-8: Consider removing redundant `BUILTIN_PIPELINES` import.

`BUILTIN_PIPELINES` (Line 4) is only used in the `execute()` method at Line 36, where `getBuiltinPipelines()` could be used instead for consistency with the rest of the code. This would simplify the import surface.

Suggested import cleanup

```diff
 import {
-  BUILTIN_PIPELINES,
   getBuiltinPipeline,
   getBuiltinPipelines,
   type AgentPipeline,
 } from '@skillkit/core';
```

Then at Line 36:

```diff
-    for (const p of BUILTIN_PIPELINES) {
+    for (const p of getBuiltinPipelines()) {
```

81-102: `runPipeline` is marked `async` but contains no `await`.

The method doesn't perform any asynchronous operations currently. This is fine if async execution is planned for future implementation, but if not, consider making it synchronous for clarity.

packages/core/src/profiles/manager.ts (1)

9-85: Duplicated `BUILTIN_PROFILES` definition.

This `BUILTIN_PROFILES` array is identical to the one defined in packages/resources/src/profiles/manifest.ts. Maintaining two copies creates a risk of drift and violates DRY. Consider importing from `@skillkit/resources` instead, or extracting to a shared location that both packages can consume.

Option: Import from resources package

```diff
-const BUILTIN_PROFILES: OperationalProfile[] = [
-  {
-    name: 'dev',
-    // ... 75+ lines of profile definitions
-  },
-];
+import { BUILTIN_PROFILES } from '@skillkit/resources';
```

This assumes the dependency graph allows core to depend on resources. If not, consider moving the canonical definition to core and having resources re-export it.
packages/core/src/session/state-file.ts (1)

110-169: `parseSessionFile` returns a potentially incomplete `SessionFile` on malformed input.

If the session file is corrupted or incomplete, the function returns a `SessionFile` with empty strings for required fields like `date`, `agent`, `projectPath`, and `startedAt`. Callers may not expect these to be empty.

Consider either:

- Returning `null` when required fields are missing (consistent with `loadSessionFile`'s contract)
- Adding validation before returning

Option: Validate required fields

```diff
 function parseSessionFile(content: string): SessionFile {
   // ... existing parsing logic ...

+  // Validate required fields
+  if (!session.date || !session.agent || !session.projectPath) {
+    throw new Error('Invalid session file: missing required fields');
+  }
+
   return session;
 }
```

The caller at Line 44 already catches exceptions and returns `null`, so this would provide consistent error handling.

packages/cli/src/commands/session.ts (3)
Suggested fix
- const limit = this.limit ? parseInt(this.limit) : 10; + const limit = this.limit ? parseInt(this.limit, 10) : 10;
280-292: Task removal frominProgressbypassesupdateSessionFilelogic.The code correctly removes a completed task from
inProgress, but it mutates theupdatedobject directly after callingupdateSessionFile. This works, but the removal logic could be encapsulated inupdateSessionFilefor cleaner semantics.Additionally, Line 284 checks
session.inProgress(the original), which is correct, but consider adding a comment for clarity sinceupdatedwas just created fromsession.
246-260: Hardcoded default agent name.When no session exists, the code auto-creates one with a hardcoded
'claude-code'agent (also at Lines 277 and 309). Consider extracting this to a constant or making it configurable via environment variable.packages/cli/src/commands/learn.ts (6)
78-81: Missing validation for `--commits` option.

If the user passes a non-numeric value (e.g., `--commits abc`), `parseInt` returns `NaN`, which will be passed to `analyzeGitHistory`. Consider validating the parsed value.

🛡️ Suggested validation

```diff
-    const result = analyzeGitHistory(projectPath, {
-      commits: this.commits ? parseInt(this.commits) : 100,
+    const commitCount = this.commits ? parseInt(this.commits, 10) : 100;
+    if (this.commits && isNaN(commitCount)) {
+      console.log(chalk.red('Invalid --commits value. Must be a number.'));
+      return 1;
+    }
+
+    const result = analyzeGitHistory(projectPath, {
+      commits: commitCount,
       since: this.since,
     });
```

96-96: Unused function call.

`loadPatternStore()` is called but its return value is discarded; it only reads and returns a `PatternStore` object with no side effects. This line can be removed.

🧹 Remove dead code

```diff
-    loadPatternStore();

     let added = 0;
```

277-285: Consider enforcing mutual exclusivity of `--success` and `--failure`.

If both flags are specified, `--success` silently takes precedence. This could confuse users who accidentally pass both. Consider warning or erroring when both are provided.

🛡️ Add mutual exclusivity check

```diff
   async execute(): Promise<number> {
+    if (this.success && this.failure) {
+      console.log(chalk.yellow('Cannot specify both --success and --failure'));
+      return 1;
+    }
+
     if (!this.success && !this.failure) {
       console.log(chalk.yellow('Please specify --success or --failure'));
       return 1;
     }
```

378-385: Invalid format values are silently treated as JSON.

If a user passes an unrecognized format (e.g., `--format xml`), the code defaults to JSON without warning. Consider validating the format option.

🛡️ Add format validation

```diff
     const format = this.format || 'json';
+    if (format !== 'json' && format !== 'report') {
+      console.log(chalk.yellow(`Unknown format '${format}'. Using 'json'.`));
+    }

     let content: string;
```

463-464: `replace` only substitutes the first underscore.

If a category contains multiple underscores, only the first is replaced. Use `replaceAll` or a regex with the global flag.

🔧 Use replaceAll

```diff
-      console.log(chalk.bold(`${category.replace('_', ' ')} (${categoryPatterns.length}):`));
+      console.log(chalk.bold(`${category.replaceAll('_', ' ')} (${categoryPatterns.length}):`));
```

473-475: Missing validation for `--min-confidence` option.

Similar to `--commits`, `parseFloat` on an invalid string returns `NaN`, which could produce unexpected behavior in skill generation.

🛡️ Suggested validation

```diff
     if (this.generate && categoryPatterns.length >= 1) {
+      const minConf = this.minConfidence ? parseFloat(this.minConfidence) : 0.5;
+      if (this.minConfidence && isNaN(minConf)) {
+        console.log(chalk.red('Invalid --min-confidence value. Must be a number.'));
+        return 1;
+      }
       const skill = generateSkillFromPatterns(categoryPatterns, {
-        minConfidence: this.minConfidence ? parseFloat(this.minConfidence) : 0.5,
+        minConfidence: minConf,
       });
```
```typescript
category = Option.String('--category,-c', {
  description: 'Filter by category (error_fix, refactor, workaround, debugging, convention)',
});
```
Unused --category option.
The category option is declared but never used in execute(). Users would expect filtering by category, but patterns are not filtered. Either implement the filtering logic or remove the unused option.
🔧 Option 1: Remove unused option

```diff
-  category = Option.String('--category,-c', {
-    description: 'Filter by category (error_fix, refactor, workaround, debugging, convention)',
-  });
-
   json = Option.Boolean('--json,-j', false, {
```

🔧 Option 2: Implement filtering (if intended)

```diff
   async execute(): Promise<number> {
-    const stats = getPatternStats();
+    let stats = getPatternStats();
+
+    // If category filtering is needed, you'd need to modify getPatternStats
+    // or filter patterns before computing stats
```

Also applies to: 221-253
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/learn.ts` around lines 213 - 215, The declared
Option.String `category` (the --category, -c option) is unused in `execute()`;
either remove the `category` option declaration or apply it to filter the
candidate patterns before they are processed/displayed. If implementing
filtering, read `category` inside `execute()`, validate it against the allowed
values (error_fix, refactor, workaround, debugging, convention), then restrict
the `patterns` (or whatever collection is passed to the display/selection logic)
via a simple filter like patterns.filter(p => p.category === category) and
proceed with the existing flow; if removing, delete the `category` Option
declaration and any help text references to avoid dead CLI options.
```typescript
private async runLearn(projectPath: string): Promise<number> {
  console.log(chalk.cyan('Analyzing codebase and extracting patterns...\n'));

  const analysis = analyzePrimer(projectPath);
  if (this.verbose) {
    this.printAnalysis(analysis);
    console.log();
  }

  console.log(chalk.bold('Analyzing git history for patterns...\n'));

  const gitResult = analyzeGitHistory(projectPath, {
    commits: this.commits ? parseInt(this.commits) : 100,
  });

  console.log(`  Commits analyzed: ${gitResult.commitCount}`);
  console.log(`  Languages: ${gitResult.languages.join(', ') || 'N/A'}`);
  console.log(`  Frameworks: ${gitResult.frameworks.join(', ') || 'N/A'}`);
  console.log();

  if (gitResult.patterns.length === 0) {
    console.log(chalk.yellow('No learnable patterns found.'));
    return 0;
  }

  console.log(chalk.bold(`Extracted ${gitResult.patterns.length} patterns:\n`));

  for (const pattern of gitResult.patterns.slice(0, 10)) {
    const confidence = chalk.blue(`${(pattern.confidence * 100).toFixed(0)}%`);
    console.log(`  ${chalk.dim('○')} ${pattern.title} [${pattern.category}] ${confidence}`);
    addPattern(pattern);
  }

  if (gitResult.patterns.length > 10) {
    console.log(chalk.dim(`  ... and ${gitResult.patterns.length - 10} more saved`));
    for (const pattern of gitResult.patterns.slice(10)) {
      addPattern(pattern);
    }
  }

  console.log();
  console.log(chalk.green(`✓ Saved ${gitResult.patterns.length} patterns`));
  console.log(chalk.dim('View with: skillkit learn --show'));
  console.log(chalk.dim('Approve with: skillkit pattern approve <id>'));

  return 0;
}
```
Add error handling to the learn flow. Unhandled exceptions in lines 101–135 will crash the CLI and bypass a clean exit code. Mirror the try/catch used in runAnalysis/runGenerate.
🛡️ Suggested fix
```diff
 private async runLearn(projectPath: string): Promise<number> {
   console.log(chalk.cyan('Analyzing codebase and extracting patterns...\n'));
-
-  const analysis = analyzePrimer(projectPath);
-  if (this.verbose) {
-    this.printAnalysis(analysis);
-    console.log();
-  }
-
-  console.log(chalk.bold('Analyzing git history for patterns...\n'));
-
-  const gitResult = analyzeGitHistory(projectPath, {
-    commits: this.commits ? parseInt(this.commits) : 100,
-  });
+  try {
+    const analysis = analyzePrimer(projectPath);
+    if (this.verbose) {
+      this.printAnalysis(analysis);
+      console.log();
+    }
+
+    console.log(chalk.bold('Analyzing git history for patterns...\n'));
+
+    const gitResult = analyzeGitHistory(projectPath, {
+      commits: this.commits ? parseInt(this.commits) : 100,
+    });
   console.log(`  Commits analyzed: ${gitResult.commitCount}`);
   console.log(`  Languages: ${gitResult.languages.join(', ') || 'N/A'}`);
   console.log(`  Frameworks: ${gitResult.frameworks.join(', ') || 'N/A'}`);
   console.log();
@@
   console.log(chalk.dim('Approve with: skillkit pattern approve <id>'));
   return 0;
+  } catch (error) {
+    console.error(chalk.red('Learning failed:'), error instanceof Error ? error.message : error);
+    return 1;
+  }
 }
```

🤖 Prompt for AI Agents
In `@packages/cli/src/commands/primer.ts` around lines 98 - 144, The runLearn flow
currently has no error handling, so exceptions from analyzePrimer,
analyzeGitHistory, or addPattern will crash the CLI; wrap the core logic of
runLearn in a try/catch (like runAnalysis/runGenerate) so any thrown error is
caught, log the error (use the same logger used elsewhere or console.error) with
a clear message referencing runLearn, and return a non-zero exit code (e.g.,
return 1) in the catch block; ensure the try covers calls to analyzePrimer,
analyzeGitHistory, and the loops that call addPattern so all failures are
handled.
```typescript
const gitResult = analyzeGitHistory(projectPath, {
  commits: this.commits ? parseInt(this.commits) : 100,
});
```
Validate --commits input before passing it to analysis. parseInt can yield NaN or a negative number; this silently propagates into analyzeGitHistory. Consider a safe default + clamp.
✅ Suggested validation
```diff
-    const gitResult = analyzeGitHistory(projectPath, {
-      commits: this.commits ? parseInt(this.commits) : 100,
-    });
+    const parsedCommits = this.commits ? Number.parseInt(this.commits, 10) : 100;
+    const commitCount = Number.isFinite(parsedCommits) && parsedCommits > 0 ? parsedCommits : 100;
+    const gitResult = analyzeGitHistory(projectPath, {
+      commits: commitCount,
+    });
```

📝 Committable suggestion

```typescript
const parsedCommits = this.commits ? Number.parseInt(this.commits, 10) : 100;
const commitCount = Number.isFinite(parsedCommits) && parsedCommits > 0 ? parsedCommits : 100;
const gitResult = analyzeGitHistory(projectPath, {
  commits: commitCount,
});
```
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/primer.ts` around lines 109 - 111, The commits
value passed into analyzeGitHistory may be NaN or negative; validate and
sanitize this.commits before calling analyzeGitHistory: parse the input with
parseInt, if result is NaN use a safe default (e.g., 100), clamp the value to a
minimum of 1 (and optionally a reasonable maximum like 1000), and then pass that
sanitized integer in the commits option to analyzeGitHistory so
analyzeGitHistory always receives a valid positive integer.
```typescript
async execute(): Promise<number> {
  if (isBuiltinProfile(this.name as ProfileName)) {
    console.log(chalk.red(`Cannot create profile: ${this.name} is a built-in profile`));
    return 1;
  }

  const profile: OperationalProfile = {
    name: this.name as ProfileName,
    description: this.description,
    focus: this.focus || this.description,
    behaviors: [],
    priorities: [],
  };
```
🧩 Analysis chain

🏁 Scripts executed:

```sh
# Find ProfileName type definition and usage
rg -n "type ProfileName|interface ProfileName" --type=ts

# Check the profile.ts file around the mentioned lines and ProfileName usage
rg -n "ProfileName" packages/cli/src/commands/profile.ts -B2 -A2

# Find the isBuiltinProfile function implementation
rg -n "isBuiltinProfile" --type=ts -B2 -A5

# Check how custom profiles are created and stored
rg -n "custom.*profile|CustomProfile" --type=ts -i
```
Type safety issue: ProfileName type doesn't accommodate arbitrary custom profile names.
The code casts this.name to ProfileName (which is 'dev' | 'review' | 'research' | 'security' | 'custom'), but custom profiles can have arbitrary names like 'my-profile'. This forced cast indicates a type system mismatch—ProfileName is a union of literal values, but custom profiles are not constrained by this union.
The removeCustomProfile function signature also accepts ProfileName, but it's called with arbitrary custom profile names. Consider refactoring the type system to use string for custom profile names or introducing a separate CustomProfileName type distinct from ProfileName.
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/profile.ts` around lines 150 - 162, The code
incorrectly casts this.name to the narrow union ProfileName in execute() and
elsewhere, which breaks when creating/removing arbitrary custom profiles; update
the type system so custom profile identifiers are not constrained to the literal
union — either (a) broaden ProfileName to string | 'dev' | 'review' | 'research'
| 'security' or (b) introduce a separate CustomProfileName = string and adjust
OperationalProfile.name to accept string (or a union type), then change
signatures that accept profile names (e.g. removeCustomProfile) to accept the
broader type and remove the forced cast in execute(); ensure callers and type
annotations for OperationalProfile, execute(), and removeCustomProfile are
updated consistently.
```typescript
function getConfigPath(): string {
  return join(homedir(), '.skillkit', 'guidelines.yaml');
}

export function loadGuidelineConfig(): GuidelineConfig {
  const path = getConfigPath();

  if (!existsSync(path)) {
    return DEFAULT_GUIDELINE_CONFIG;
  }

  try {
    const content = readFileSync(path, 'utf-8');
    return { ...DEFAULT_GUIDELINE_CONFIG, ...yaml.parse(content) };
  } catch {
    return DEFAULT_GUIDELINE_CONFIG;
```
Avoid mutating DEFAULT_GUIDELINE_CONFIG via shared references.
loadGuidelineConfig returns the default object directly (and shallow-merges arrays), so later mutations (e.g., enableGuideline, addCustomGuideline) can mutate the shared defaults for the entire process. This can make defaults drift across calls/tests.
🛠️ Proposed fix: clone/normalize config on load
```diff
 function getConfigPath(): string {
   return join(homedir(), '.skillkit', 'guidelines.yaml');
 }

+function cloneGuidelineConfig(config: GuidelineConfig): GuidelineConfig {
+  return {
+    enabledGuidelines: [...config.enabledGuidelines],
+    customGuidelines: [...config.customGuidelines],
+  };
+}
+
 export function loadGuidelineConfig(): GuidelineConfig {
   const path = getConfigPath();

   if (!existsSync(path)) {
-    return DEFAULT_GUIDELINE_CONFIG;
+    return cloneGuidelineConfig(DEFAULT_GUIDELINE_CONFIG);
   }

   try {
     const content = readFileSync(path, 'utf-8');
-    return { ...DEFAULT_GUIDELINE_CONFIG, ...yaml.parse(content) };
+    const parsed = (yaml.parse(content) ?? {}) as Partial<GuidelineConfig>;
+    return {
+      enabledGuidelines: Array.isArray(parsed.enabledGuidelines)
+        ? [...parsed.enabledGuidelines]
+        : [...DEFAULT_GUIDELINE_CONFIG.enabledGuidelines],
+      customGuidelines: Array.isArray(parsed.customGuidelines)
+        ? [...parsed.customGuidelines]
+        : [...DEFAULT_GUIDELINE_CONFIG.customGuidelines],
+    };
   } catch {
-    return DEFAULT_GUIDELINE_CONFIG;
+    return cloneGuidelineConfig(DEFAULT_GUIDELINE_CONFIG);
   }
 }
```

📝 Committable suggestion

```typescript
function getConfigPath(): string {
  return join(homedir(), '.skillkit', 'guidelines.yaml');
}

function cloneGuidelineConfig(config: GuidelineConfig): GuidelineConfig {
  return {
    enabledGuidelines: [...config.enabledGuidelines],
    customGuidelines: [...config.customGuidelines],
  };
}

export function loadGuidelineConfig(): GuidelineConfig {
  const path = getConfigPath();

  if (!existsSync(path)) {
    return cloneGuidelineConfig(DEFAULT_GUIDELINE_CONFIG);
  }

  try {
    const content = readFileSync(path, 'utf-8');
    const parsed = (yaml.parse(content) ?? {}) as Partial<GuidelineConfig>;
    return {
      enabledGuidelines: Array.isArray(parsed.enabledGuidelines)
        ? [...parsed.enabledGuidelines]
        : [...DEFAULT_GUIDELINE_CONFIG.enabledGuidelines],
      customGuidelines: Array.isArray(parsed.customGuidelines)
        ? [...parsed.customGuidelines]
        : [...DEFAULT_GUIDELINE_CONFIG.customGuidelines],
    };
  } catch {
    return cloneGuidelineConfig(DEFAULT_GUIDELINE_CONFIG);
  }
}
```
🤖 Prompt for AI Agents
In `@packages/core/src/guidelines/manager.ts` around lines 184 - 199,
loadGuidelineConfig currently returns DEFAULT_GUIDELINE_CONFIG (or
shallow-merges parsed YAML into it), which allows later mutations to mutate the
shared default; change loadGuidelineConfig to always return a fresh, deep-cloned
config object instead of returning DEFAULT_GUIDELINE_CONFIG directly—e.g.,
create a deep clone of DEFAULT_GUIDELINE_CONFIG (using structuredClone or a JSON
deep-copy) then merge parsed yaml.parse(content) into that clone, ensuring
arrays (like any guideline lists) are copied/replaced rather than referenced;
also return a deep clone on the error/no-file paths so getConfigPath,
loadGuidelineConfig and DEFAULT_GUIDELINE_CONFIG never expose shared mutable
references.
```typescript
const args: string[] = [
  'log',
  `--max-count=${limit}`,
  '--format=%H|||%h|||%an|||%aI|||%s|||%b|||END_COMMIT',
```
Delimiter ||| could appear in commit messages.
The format string uses ||| as a delimiter, but this sequence could legitimately appear in commit messages or bodies, causing incorrect parsing. Consider using a less common delimiter like \x00 (null byte) or a multi-character sequence that's extremely unlikely in text.
🔧 Suggested fix

```diff
-    '--format=%H|||%h|||%an|||%aI|||%s|||%b|||END_COMMIT',
+    '--format=%H%x00%h%x00%an%x00%aI%x00%s%x00%b%x00END_COMMIT',
```

Then update the split on line 74:

```diff
-    const parts = headerLine.split('|||');
+    const parts = headerLine.split('\x00');
```
In `@packages/core/src/learning/git-analyzer.ts` at line 46, The current git log
format uses '|||', which can appear in commit messages and break parsing; update
the format string to use a safe delimiter such as the null byte (e.g. replace
'--format=%H|||%h|||%an|||%aI|||%s|||%b|||END_COMMIT' with a null-byte based
sequence like using '\x00' between fields and before END_COMMIT), and update the
parser that splits the git output (the split call currently using '|||',
referenced on the split line around line 74) to split on the new null-byte
delimiter (and the trailing END_COMMIT marker) so commits are parsed safely.
```typescript
export const DEFAULT_LEARNING_CONFIG: LearningConfig = {
  minSessionLength: 10,
  sensitivity: 'medium',
  autoApprove: false,
  outputPath: '~/.skillkit/learned/',
  categories: ['error_fix', 'refactor', 'workaround', 'debugging', 'convention'],
  ignorePatterns: ['trivial_typo', 'one_off_fix', 'wip', 'merge'],
};
```
The tilde (~) in outputPath won't be expanded automatically.
The default outputPath: '~/.skillkit/learned/' uses a shell-style tilde, but Node.js file system APIs don't expand ~ to the home directory. This will create a literal directory named ~ in the current working directory.
The code in config.ts uses homedir() directly for path construction, but consumers using this default value directly would encounter this issue.
🔧 Suggested fix
Either document that consumers must expand the path, or remove the tilde:
```diff
 export const DEFAULT_LEARNING_CONFIG: LearningConfig = {
   minSessionLength: 10,
   sensitivity: 'medium',
   autoApprove: false,
-  outputPath: '~/.skillkit/learned/',
+  outputPath: '.skillkit/learned/', // Relative to project, or use homedir() at runtime
   categories: ['error_fix', 'refactor', 'workaround', 'debugging', 'convention'],
   ignorePatterns: ['trivial_typo', 'one_off_fix', 'wip', 'merge'],
 };
```

Or handle tilde expansion in consuming code.
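A sketch of the expansion a consumer would need (this helper is illustrative, not an existing export):

```typescript
import { homedir } from 'node:os';
import { join } from 'node:path';

// Expands a leading "~" or "~/" the way a shell would; "~user" forms are
// left untouched. Node's fs APIs otherwise treat "~" as a literal name.
function expandTilde(p: string): string {
  return p === '~' || p.startsWith('~/') ? join(homedir(), p.slice(1)) : p;
}

// expandTilde('~/.skillkit/learned/') -> '/home/<user>/.skillkit/learned/'
```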
🤖 Prompt for AI Agents
In `@packages/core/src/learning/types.ts` around lines 101 - 108,
DEFAULT_LEARNING_CONFIG currently sets outputPath to the shell-tilde string
'~/.skillkit/learned/' which won't be expanded by Node.js; update the default to
an absolute path by expanding the tilde at initialization (use homedir() to
build the path) or replace the literal tilde with a pre-expanded path string so
consumers get a valid home-directory-based path; change
DEFAULT_LEARNING_CONFIG.outputPath accordingly (refer to
DEFAULT_LEARNING_CONFIG, LearningConfig, and the use of homedir()).
```typescript
export const BUNDLED_AGENTS: BundledAgent[] = [
  {
    id: 'architect',
    name: 'Architect',
    description: 'Software architecture specialist for system design, scalability, and technical decision-making',
    category: 'planning',
    model: 'opus',
    tools: ['Read', 'Grep', 'Glob'],
    tags: ['architecture', 'design', 'planning', 'scalability'],
    version: '1.0.0',
  },
```
Inconsistency: disallowedTools missing from manifest entries.
The architect and code-reviewer templates define disallowedTools: [Edit, Write, NotebookEdit] in their front-matter, but these restrictions are not reflected in their corresponding manifest entries. This could lead to agents having more permissive tool access than intended.
🔧 Proposed fix to add disallowedTools
```diff
 {
   id: 'architect',
   name: 'Architect',
   description: 'Software architecture specialist for system design, scalability, and technical decision-making',
   category: 'planning',
   model: 'opus',
   tools: ['Read', 'Grep', 'Glob'],
+  disallowedTools: ['Edit', 'Write', 'NotebookEdit'],
   tags: ['architecture', 'design', 'planning', 'scalability'],
   version: '1.0.0',
 },
 {
   id: 'code-reviewer',
   name: 'Code Reviewer',
   description: 'Expert code review specialist. Reviews code for quality, security, and maintainability',
   category: 'review',
   model: 'sonnet',
   tools: ['Read', 'Grep', 'Glob', 'Bash'],
+  disallowedTools: ['Edit', 'Write', 'NotebookEdit'],
   tags: ['review', 'quality', 'best-practices'],
   version: '1.0.0',
 },
```

Also applies to: 24-33
🤖 Prompt for AI Agents
In `@packages/resources/src/agents/manifest.ts` around lines 3 - 13, The
BUNDLED_AGENTS manifest entries are missing the disallowedTools property present
in the templates; update the objects in the BUNDLED_AGENTS array (e.g., the
entry with id 'architect' and the entry for 'code-reviewer') to include
disallowedTools: ['Edit', 'Write', 'NotebookEdit'] (or the exact restriction
list from the front-matter) so the BundledAgent entries reflect the same tool
restrictions as the templates; ensure the BundledAgent type/interface accepts
disallowedTools and add the property consistently to all affected entries.
```typescript
export function formatHookCommand(template: HookTemplate, variables?: Record<string, string>): string {
  let command = template.command;
  const vars = { ...template.variables, ...variables };

  for (const [key, value] of Object.entries(vars)) {
    command = command.replace(new RegExp(`\\{\\{${key}\\}\\}`, 'g'), value);
  }

  return command;
}
```
Escape regex special characters in variable keys.
The static analysis flagged potential ReDoS (CWE-1333), but the actual risk is low since the pattern \{\{key\}\} lacks nested quantifiers. However, if a variable key contains regex metacharacters (e.g., $, ., *), it could cause unexpected substitution behavior.
🛡️ Proposed fix to escape special regex characters

```diff
 export function formatHookCommand(template: HookTemplate, variables?: Record<string, string>): string {
   let command = template.command;
   const vars = { ...template.variables, ...variables };

   for (const [key, value] of Object.entries(vars)) {
-    command = command.replace(new RegExp(`\\{\\{${key}\\}\\}`, 'g'), value);
+    const escapedKey = key.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
+    command = command.replace(new RegExp(`\\{\\{${escapedKey}\\}\\}`, 'g'), value);
   }

   return command;
 }
```

🧰 Tools
🪛 ast-grep (0.40.5)
[warning] 36-36: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(\\{\\{${key}\\}\\}, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🤖 Prompt for AI Agents
In `@packages/resources/src/hooks/index.ts` around lines 32 - 41,
formatHookCommand builds a RegExp from variable keys and currently uses the raw
key which can treat regex metacharacters as special; update formatHookCommand to
escape regex special characters in each key (the key from vars in the for loop)
before constructing new RegExp(`\\{\\{${key}\\}\\}`, 'g') so substitutions are
literal; keep using the global flag and leave
template.command/template.variables/variables logic intact.
```typescript
export function getInstalledAgentIds(searchDirs?: string[]): string[] {
  const dirs = searchDirs || [
    join(process.cwd(), '.claude', 'agents'),
    join(homedir(), '.claude', 'agents'),
  ];

  const installed = new Set<string>();

  for (const dir of dirs) {
    if (existsSync(dir)) {
      try {
        const files = require('node:fs').readdirSync(dir);
        for (const file of files) {
          if (file.endsWith('.md')) {
            installed.add(file.replace('.md', ''));
          }
        }
      } catch {
        continue;
      }
    }
  }
```
🔴 Bundled agent listing uses CommonJS require() in ESM module and crashes
@skillkit/resources is declared as ESM ("type": "module"), but getInstalledAgentIds() calls require('node:fs'), which is not defined in ESM.
Actual: calling getInstalledAgentIds(), getAvailableAgents(), or isAgentInstalled() throws at runtime (ReferenceError: require is not defined), breaking skillkit agent available / agent install flows.
Expected: these functions should work in ESM without crashing.
Where it happens:

```typescript
// ESM file
const files = require('node:fs').readdirSync(dir);
```

Recommendation: Replace `require('node:fs').readdirSync` with an ESM import (`import { readdirSync } from 'node:fs'`) or use the already-imported `node:fs` functions.
```typescript
constructor(config: MessagingServiceConfig) {
  this.config = config;

  this.inbox = new InboxStorage(config.agentId, config.storagePath);
  this.sent = new SentStorage(config.agentId, config.storagePath);
  this.archived = new ArchivedStorage(config.agentId, config.storagePath);

  this.router = new MessageRouter({
    localAgentId: config.agentId,
    inboxStorage: this.inbox,
    sentStorage: this.sent,
    archivedStorage: this.archived,
  });
}

async initialize(): Promise<void> {
  await this.inbox.initialize();
  await this.sent.initialize();
  await this.archived.initialize();
}

async send(input: MessageCreateInput): Promise<MessageDeliveryResult> {
  const message = _createMessage(this.config.agentId, input);
  return this.router.route(message);
}
```
🔴 MessagingService never configures remote delivery, so most sends fail
MessagingService constructs a MessageRouter but never sets a remote delivery handler. The router always saves to sent storage and then:
- Treats any recipient that is not the sender itself as remote (because only `localAgentId` is registered by default)
- Returns `delivered: false` with `error: 'No remote delivery handler configured'`
Actual: MessagingService.send() cannot deliver messages to other agents unless external code calls service.getRouter().setRemoteHandler(...) and/or registerLocalAgent(...). The CLI (skillkit message send ...) simply creates the service and calls send(), so messages will not be delivered in normal usage.
Expected: local delivery should work at least for same-machine agents (or the service should wire up RemoteTransport/mesh routing by default).
Code path
- Router only has localAgentId registered: packages/messaging/src/index.ts:49-55
- Router routes non-local recipients to deliverRemote(...): packages/messaging/src/message/router.ts:41-51
- deliverRemote fails when remoteHandler is unset: packages/messaging/src/message/router.ts:89-96
Recommendation: In MessagingService.initialize() (or constructor), configure a default remote handler using RemoteTransport + mesh host resolution, and/or register a shared LocalTransport so multiple local agent inboxes can be reached.
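As a hedged sketch of that wiring, assuming the getRouter()/setRemoteHandler API named above exists as described; the POST /messages endpoint and the delivery-result shape are hypothetical stand-ins:

```ts
const service = new MessagingService({ agentId: 'agent-a' });
await service.initialize();

// Hypothetical default remote handler using Node 18+'s built-in fetch.
service.getRouter().setRemoteHandler(async (message, hostAddress) => {
  const res = await fetch(`http://${hostAddress}/messages`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(message),
  });
  // Result shape assumed for illustration only.
  return { delivered: res.ok, messageId: message.id };
});
```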
```ts
async route(message: Message): Promise<MessageDeliveryResult> {
  const recipient = this.parseRecipient(message.to);

  await this.config.sentStorage.save(message);

  if (recipient.isLocal) {
    return this.deliverLocal(message, recipient.agentId);
  }

  return this.deliverRemote(message, recipient.host!, recipient.agentId);
}
```
🟡 MessageRouter passes host string (may include port) into remote handler as hostAddress
parseRecipient() treats to values like agentId@host:port by splitting on @ and returning host as the entire RHS (e.g. "host:9876"). deliverRemote() then calls this.remoteHandler(message, hostAddress) with that raw string.
Actual: a remote handler that expects a host/IP (without port) will receive host:port and likely construct invalid URLs like http://host:9876:9876/... or fail DNS resolution.
Expected: router should either parse out host+port separately, or define the handler contract to accept the full host:port.
Where it happens
- parseRecipient: packages/messaging/src/message/router.ts:144-147
- deliverRemote invocation: packages/messaging/src/message/router.ts:50-51
Recommendation: Change parseRecipient() to parse agentId@host:port into {host, port} (or store hostAddress as host only), and update the remote handler signature accordingly.
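A sketch of the suggested parsing; the RecipientInfo shape is assumed for illustration and IPv6 hosts are not handled:

```ts
interface RecipientInfo {
  agentId: string;
  isLocal: boolean;
  host?: string;
  port?: number;
}

function parseRecipient(to: string): RecipientInfo {
  const at = to.indexOf('@');
  if (at === -1) return { agentId: to, isLocal: true };

  // Split "agentId@host:port" into parts instead of passing "host:port" through.
  const agentId = to.slice(0, at);
  const [host, portStr] = to.slice(at + 1).split(':');
  return {
    agentId,
    isLocal: false,
    host,
    port: portStr ? Number.parseInt(portStr, 10) : undefined,
  };
}
```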
```ts
async reinforce(id: string, amount = 0.1): Promise<PersistentMemory> {
  const existing = await this.adapter.getMemory(id);
  if (!existing) {
    throw new Error(`Memory ${id} not found`);
  }

  const newScore = Math.min(1, existing.reinforcementScore + amount);

  await this.adapter.updateMemory(id, {
    reinforcementScore: newScore,
    updatedAt: new Date().toISOString(),
  });

  return this.get(id) as Promise<PersistentMemory>;
}
```
🟡 Memory reinforcement can write negative reinforcementScore
MemoryStore.reinforce() clamps the upper bound to 1 but does not clamp the lower bound to 0. Since the public API allows negative reinforcement amounts (e.g. MCP schema allows amount down to -1), calling reinforce(id, -1) can persist a negative reinforcementScore.
Actual: reinforcementScore can become negative in storage, which can skew tier promotion/demotion logic (e.g., TierManager uses memory.reinforcementScore / threshold).
Expected: reinforcement scores should remain within a defined range (likely [0, 1]).
Where it happens
```ts
const newScore = Math.min(1, existing.reinforcementScore + amount);
```

Recommendation: Clamp both bounds: Math.max(0, Math.min(1, existing.reinforcementScore + amount)), or route negative amounts through weaken() consistently.
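For illustration, the clamp as a small standalone helper; the [0, 1] range follows the bounds described above:

```ts
// Keep scores inside [0, 1] regardless of the sign of `amount`.
function clamp01(value: number): number {
  return Math.max(0, Math.min(1, value));
}

// e.g. reinforce(id, -1) on a score of 0.2 now yields 0 instead of -0.8.
console.log(clamp01(0.2 + -1)); // 0
```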
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@packages/mcp-memory/README.md`:
- Around line 59-130: The fenced code blocks in README examples (e.g.,
memory_store, memory_search, memory_recall, memory_get, memory_forget,
memory_link, memory_reinforce, memory_stats) are missing language identifiers;
update each triple-backtick fence to include a language (suggest js or json) so
markdownlint stops flagging them and tooling can properly syntax-highlight the
examples—locate the blocks around the functions named above and add the
appropriate language token immediately after the opening ``` for each block.
In `@packages/mcp-memory/src/index.ts`:
- Around line 23-30: Concurrent calls to getMemoryStore can race and create
multiple MemoryStore instances; fix by introducing a memoized initialization
promise (e.g., memoryStoreInitPromise) alongside memoryStore so getMemoryStore
first returns memoryStore if set, otherwise if memoryStoreInitPromise exists
await and return it, and if neither exists set memoryStoreInitPromise =
createMemoryStore(AGENT_ID, DB_PATH), await it, assign the resolved value to
memoryStore, clear memoryStoreInitPromise, and return memoryStore; update
references to memoryStore and getMemoryStore accordingly to ensure only a single
store is created.
- Around line 376-393: The catch block in the 'memory_reinforce' case currently
converts any error from store.reinforce into a "Memory not found" response;
change this to either (A) verify existence first by calling store.get or
store.exists with input.id and return the not-found response only if that check
fails, then call store.reinforce, or (B) catch only a specific NotFound/NotExist
error thrown by store.reinforce and rethrow or surface any other errors; update
the catch to return a clear error payload (or rethrow) for real storage failures
instead of masking them. Ensure you modify the 'memory_reinforce' case handling
and touches to store.reinforce and any existence-check helper so only genuine
not-found situations produce the "Memory not found: ${input.id}" message.
- Around line 484-493: The module currently calls main() at module scope which
starts the StdioServerTransport as a side effect on import; wrap the invocation
in a "run when executed directly" guard so it only runs as a CLI: replace the
unconditional main().catch(...) with a conditional block such as if
(require.main === module) { main().catch((error) => { console.error('Fatal
error:', error); process.exit(1); }); } (or use if (import.meta?.main) for ESM
builds) so server, main(), and StdioServerTransport are exported safely without
auto-starting on import.
🧹 Nitpick comments (1)
packages/mcp-memory/src/index.ts (1)
279-294: Limit is applied before filters in recall.
Filtering after limiting can under-return results. If the intent is “limit after filters,” consider filtering first (or fetching more) and then slicing to the requested limit.
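A filter-then-limit sketch with stand-in types; the category/tier/limit names mirror the tool inputs described above and are assumptions about the real shapes:

```ts
interface RecallInput {
  category?: string;
  tier?: string;
  limit?: number;
}

function recallFilterThenLimit<T extends { category: string; tier: string }>(
  candidates: T[],
  input: RecallInput
): T[] {
  // Apply the filters first so the limit counts only matching rows.
  const filtered = candidates.filter(
    (m) =>
      (!input.category || m.category === input.category) &&
      (!input.tier || m.tier === input.tier)
  );
  return filtered.slice(0, input.limit ?? 10);
}
```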
- Add cryptographic identity layer (Ed25519 keypairs, fingerprints)
- Add message encryption (XChaCha20-Poly1305) and signing
- Add secure keystore with AES-256-GCM at-rest encryption
- Add JWT authentication and challenge-response handshake
- Add secure WebSocket (WSS) and HTTPS transports
- Add signed discovery messages with trust modes (open/signed/trusted-only)
- Add TLS certificate management (self-signed)
- Add CLI commands: mesh security init/status, mesh peer trust/revoke/list
- Add security presets: development, signed, secure, strict
- Align all package versions to 1.7.11

Security stack: @noble/ed25519, @noble/curves, @noble/ciphers, @noble/hashes, jose
```ts
export function verifySecureMessage(
  message: SecureTransportMessage
): { valid: boolean; error?: string } {
  if (!message.signature || !message.senderPublicKey || !message.senderFingerprint) {
    return { valid: false, error: 'Missing signature fields' };
  }

  const computedFingerprint = PeerIdentity.computeFingerprint(
    Buffer.from(message.senderPublicKey, 'hex')
  );

  if (computedFingerprint !== message.senderFingerprint) {
    return { valid: false, error: 'Fingerprint mismatch' };
  }

  return { valid: true };
```
🔴 Secure HTTP message verification does not verify signatures (auth bypass)
verifySecureMessage() in secure HTTP transport only checks that the claimed senderPublicKey hashes to the claimed senderFingerprint. It does not verify the cryptographic signature over the message contents.
Actual: an attacker can send an arbitrary payload with a self-consistent {senderPublicKey, senderFingerprint} pair and any signature value, and verifySecureMessage() returns { valid: true }.
Expected: the verifier should validate the signature over the canonical message bytes using the sender public key (and typically also verify timestamp/nonce freshness).
Where
```ts
// Only checks fingerprint match; never verifies message.signature
const computedFingerprint = PeerIdentity.computeFingerprint(
  Buffer.from(message.senderPublicKey, 'hex')
);
...
return { valid: true };
```

packages/mesh/src/transport/secure-http.ts:221-236
Recommendation: Verify message.signature over the message body with PeerIdentity.verify(...) (and reject if invalid). Consider also validating nonce/timestamp to prevent replay.
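A sketch of the fuller check, assuming PeerIdentity.verify(signature, data, publicKey) exists as referenced above and that the signer serialized the base message fields with plain JSON.stringify (both assumptions):

```ts
export async function verifySecureMessage(
  message: SecureTransportMessage
): Promise<{ valid: boolean; error?: string }> {
  if (!message.signature || !message.senderPublicKey || !message.senderFingerprint) {
    return { valid: false, error: 'Missing signature fields' };
  }

  const publicKey = Buffer.from(message.senderPublicKey, 'hex');
  if (PeerIdentity.computeFingerprint(publicKey) !== message.senderFingerprint) {
    return { valid: false, error: 'Fingerprint mismatch' };
  }

  // Verify the signature over the base message, not just the key fingerprint.
  // The exact set of signed fields must match what the sender signed.
  const { signature, senderPublicKey, senderFingerprint, nonce, ...base } = message;
  const valid = await PeerIdentity.verify(
    Buffer.from(signature, 'hex'),
    new TextEncoder().encode(JSON.stringify(base)),
    publicKey
  );
  return valid ? { valid: true } : { valid: false, error: 'Invalid signature' };
}
```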
```ts
async getStats(agentId?: string): Promise<MemoryStats> {
  const stats = await this.adapter.getStats(agentId);

  const byCategory: Record<MemoryCategory, number> = {
    fact: stats.byCategory.fact ?? 0,
    decision: stats.byCategory.decision ?? 0,
    preference: stats.byCategory.preference ?? 0,
    pattern: stats.byCategory.pattern ?? 0,
    insight: stats.byCategory.insight ?? 0,
    reasoning: stats.byCategory.reasoning ?? 0,
  };

  const byTier: Record<MemoryTier, number> = {
    warm: stats.byTier.warm ?? 0,
    long: stats.byTier.long ?? 0,
  };

  return {
    totalMemories: stats.total,
    byCategory,
    byTier,
    avgReinforcementScore: 0,
    oldestMemory: null,
    newestMemory: null,
  };
```
🟡 Memory statistics always report avgReinforcementScore=0 and no oldest/newest memory
MemoryStore.getStats() returns avgReinforcementScore: 0 and oldestMemory/newestMemory: null regardless of actual stored data.
Actual: memory_stats and memory://stats in the MCP server will misleadingly report 0 average reinforcement and no oldest/newest even when memories exist.
Expected: these fields should reflect real statistics (or be omitted if unsupported).
Where
```ts
return {
  totalMemories: stats.total,
  ...,
  avgReinforcementScore: 0,
  oldestMemory: null,
  newestMemory: null,
};
```

packages/memory/src/stores/memory-store.ts:245-269
Recommendation: Compute these values from stored rows (e.g., query avg/min/max) or remove the fields from the API if not supported yet.
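One way to compute the missing fields from loaded rows; the MemoryRow shape is a stand-in for whatever the adapter actually returns:

```ts
interface MemoryRow {
  reinforcementScore: number;
  createdAt: string; // ISO timestamp
}

function summarize(rows: MemoryRow[]) {
  const avgReinforcementScore =
    rows.length === 0
      ? 0
      : rows.reduce((sum, m) => sum + m.reinforcementScore, 0) / rows.length;

  const sorted = [...rows].sort(
    (a, b) => Date.parse(a.createdAt) - Date.parse(b.createdAt)
  );
  return {
    avgReinforcementScore,
    oldestMemory: sorted[0]?.createdAt ?? null,
    newestMemory: sorted.at(-1)?.createdAt ?? null,
  };
}
```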
- Add Mesh Network feature card with P2P description
- Add Security feature card with Ed25519/TLS details
- Add Mesh command group with security commands
- cli/mesh: validate port input (1-65535 range check)
- memory/cozo-adapter: call db.close() before nullifying reference
- memory/schema: fix primary key definitions (id => columns)
- resources/hooks: guard pre-commit hooks against empty file lists
- mcp-memory: prevent double-initialization with memoized promise
- mcp-memory: don't mask reinforce failures as "not found"
- mcp-memory: guard main() to prevent auto-start on import
- mcp-memory: add js language identifiers to README code blocks
- mesh/test: update MESH_VERSION test expectation to 1.7.11
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/cli/package.json (1)
3-3: ⚠️ Potential issue | 🟡 Minor: Version not bumped to 2.0.0.
The PR summary indicates all packages are bumped to 2.0.0, but this package remains at 1.7.11. Ensure the version is updated to align with the major release.
apps/skillkit/package.json (1)
3-3: ⚠️ Potential issue | 🟡 Minor: Version not bumped to 2.0.0.
Same issue as packages/cli/package.json: the version remains at 1.7.11 instead of the expected 2.0.0 for this major release.
🤖 Fix all issues with AI agents
In `@packages/mcp-memory/package.json`:
- Line 3: Update the package version for mcp-memory by changing the "version"
field in package.json from "1.7.11" to "2.0.0" so it matches the v2.0.0 release
across packages; locate the version property in package.json (the "version" key
for the mcp-memory package) and update its value to "2.0.0", then run your usual
lockfile/package management commands (e.g., npm/yarn/pnpm install or update) to
persist changes.
In `@packages/mesh/src/crypto/identity.ts`:
- Around line 117-125: The deriveSharedSecret implementation uses an Ed25519
private key directly with x25519.scalarMult, which is incorrect; convert Ed25519
keys to X25519 first using edwardsToMontgomeryPriv for this.keypair.privateKey
and edwardsToMontgomeryPub for the peer public key (after hexToBytes in
deriveSharedSecretHex) and then call x25519.scalarMult on the converted keys;
update deriveSharedSecret, deriveSharedSecretHex and any other places that call
x25519.scalarMult with Ed25519 keys (e.g., the analogous functions in
encryption.ts) to perform these conversions and import the conversion utilities
(edwardsToMontgomeryPriv/edwardsToMontgomeryPub) from `@noble/curves/ed25519`.
In `@packages/mesh/src/crypto/keystore.ts`:
- Around line 40-49: The constructor in Keystore (constructor(config:
KeystoreConfig)) currently falls back to generateMachineKey() for
this.passphrase which is predictable; change the logic to require an explicit
config.encryptionKey for secure presets (i.e., throw if not provided when a
secure mode flag is set or when useMachineKey is false) and, if you still want
an automatic fallback, replace generateMachineKey() with generation of a
cryptographically secure random secret (high-entropy) and persist it to a
protected storage (OS keychain or a file created with mode 0o600) instead of
using hostname/user-derived values; update references to passphrase,
generateMachineKey, encryptionKey and useMachineKey accordingly so the code
enforces explicit keys for secure setups and stores any auto-generated key in a
protected location.
- Around line 73-90: The catch block around reading/parsing/decrypting the
keypair is currently swallowing all errors and is regenerating identity on any
failure; change the logic so only a missing-file error triggers identity
generation: use access(this.keypairPath) (or catch its error) to detect ENOENT
and call PeerIdentity.generate() + saveIdentity() only when the file does not
exist, while letting JSON.parse, decryptObject, or other runtime errors
(including wrong passphrase) propagate or be rethrown so the caller sees the
failure; keep the existing use of readFile, isEncryptedFile, decryptObject,
PeerIdentity.fromSerialized, and saveIdentity to implement this behavior.
In `@packages/mesh/src/crypto/signatures.ts`:
- Around line 125-146: isSignedDataExpired currently parses signed.timestamp
into a Date but doesn't handle parsing failures (NaN), so malformed timestamps
are treated as not expired; update isSignedDataExpired to detect invalid
timestamps by checking Number.isNaN(timestamp) (or isNaN) after computing const
timestamp = new Date(signed.timestamp).getTime() and return true if it's NaN
(i.e., treat as expired), otherwise proceed with the existing now - timestamp >
maxAgeMs check; reference function isSignedDataExpired and the
SignedData.signed.timestamp field when making the change.
In `@packages/mesh/src/crypto/storage.ts`:
- Around line 107-121: In decryptData, validate encrypted.algorithm and
encrypted.kdf before deriving the key and calling decrypt: check that
encrypted.algorithm equals the cipher your decrypt() expects (e.g. the AES-GCM
identifier used in your code) and that encrypted.kdf matches the KDF expected by
deriveKey; if either mismatches, throw a clear Error indicating unsupported
algorithm/kdf and include the offending values. Update decryptData to perform
these checks (referencing decryptData, deriveKey, and decrypt) so you only call
deriveKey/decrypt when algorithm/kdf are supported and produce an actionable
error otherwise.
- Around line 140-151: The encrypted output is written without explicit
permissions in encryptFile, which can leave the file world/group-readable;
change encryptFile to create the output directory with restrictive mode (e.g.,
0o700) when calling mkdir and write the encrypted JSON with an explicit file
mode of 0o600 (use writeFile with the options object). If the file may already
exist, ensure you set its permissions to 0o600 (e.g., via chmod) after writing.
Reference: encryptFile, encryptObject, mkdir, writeFile.
In `@packages/mesh/src/discovery/secure-local.ts`:
- Around line 37-56: The code allows security mode 'trusted-only' to run without
a keystore, which downgrades security; update SecureLocalDiscovery so that in
the constructor or initialize() you enforce a keystore when this.securityMode
=== 'trusted-only' by either instantiating a default keystore (e.g., create a
new Keystore if one is available in your codebase) and assigning it to
this.keystore before calling loadOrCreateIdentity(), or throw an explicit error
to fail-fast; ensure you reference and act on this.securityMode, this.keystore,
initialize(), loadOrCreateIdentity(), and PeerIdentity.generate() so identity
creation/trust checks cannot be bypassed when DEFAULT_SECURITY_CONFIG or
options.security.discovery.mode is 'trusted-only'.
In `@packages/mesh/src/security/tls.ts`:
- Around line 180-187: The loadCertificate function currently fabricates
validity dates (notBefore/notAfter); instead, parse the real validity period
from the certificate data and return those real Date values. If cert is an X.509
PEM, use a certificate parser (e.g., Node's crypto.X509Certificate or a library
like node-forge) to read validFrom/validTo and convert to Date for
notBefore/notAfter; if the stored cert is JSON with embedded date strings, parse
those fields (e.g., new Date(storedCert.validFrom) / new
Date(storedCert.validTo)) and return them. Update the returned object (cert,
key, fingerprint, notBefore, notAfter, subject) in loadCertificate to use the
parsed dates instead of new Date() and new Date(Date.now() + ...).
- Around line 84-119: createSimpleCert currently emits base64-encoded JSON
wrapped in PEM headers which is not a valid X.509 cert; replace its
implementation to generate a real self-signed X.509 certificate (PEM) and
matching private key using a proper library (e.g., node-forge or selfsigned) or
by invoking OpenSSL, include subject, issuer, serialNumber, notBefore/notAfter,
publicKey/altNames and return the PEM cert (and provide PEM key where needed);
remove the JSON-to-PEM logic in createSimpleCert and ensure callers (e.g.,
secure-websocket.ts expecting a real cert for https.createServer) receive a real
certificate and key in proper PEM format.
In `@packages/mesh/src/transport/secure-http.ts`:
- Around line 221-237: The verifySecureMessage function currently only checks
senderFingerprint; update verifySecureMessage to be async and additionally
verify the signature over the serialized base TransportMessage using the
senderPublicKey and message.signature (e.g., decode hex to Buffer and use a
proper public-key signature verify routine). Keep the existing fingerprint check
(using PeerIdentity.computeFingerprint(Buffer.from(message.senderPublicKey,
'hex'))) and return { valid: false, error: 'Invalid signature' } if the crypto
verification fails; return { valid: true } only when both fingerprint and
signature verification succeed. Ensure SecureTransportMessage and the base
TransportMessage serialization used for verification match the signer format.
- Around line 17-70: The got client currently hardcodes rejectUnauthorized:
false in SecureHttpTransport constructor, disabling certificate verification;
update the got.extend call to set https.rejectUnauthorized based on
this.security.transport.tls (access via this.security.transport.tls) so that TLS
modes like 'self-signed' or 'ca-signed' default to true (only override when
explicitly configured otherwise), preserving the existing protocol selection
logic in SecureHttpTransport and ensuring this.client uses the computed
rejectUnauthorized value.
In `@packages/mesh/src/transport/secure-websocket.ts`:
- Around line 157-159: The Biome lint rule flags implicit returns inside forEach
callbacks; update the forEach usages that currently call handlers implicitly
(e.g., this.messageHandlers.forEach((handler) => handler(message, this.socket!,
senderFingerprint)) and the similar forEach at the second occurrence) to use a
block-bodied arrow function instead: wrap the callback body with braces and call
the handler inside (no returned value). This means changing the callback from an
expression-bodied arrow to a statement-bodied arrow so the call is executed
without an implicit return.
- Around line 135-155: The message handler currently accepts
SecureTransportMessage and plaintext without verifying signatures or enforcing
encryption; update the logic in the handler that uses this.encryption,
decryptToObject, SecureTransportMessage, senderFingerprint and signature to: if
this.encryption is set, require raw.ciphertext else immediately drop/close the
connection; when ciphertext is present use this.encryption.decryptToObject and
then verify the decrypted senderFingerprint matches the expected identity and
validate any signature fields before dispatch; if no encryption is configured
but raw.signature exists, verify the signature/fingerprint on the
SecureTransportMessage (id/type/from/to/payload/timestamp) and reject messages
with invalid signatures; on any verification failure reject the message and
close the connection to prevent spoofing/downgrade.
- Around line 262-286: The code allows falling back to signature/plaintext when
this.options.security.transport.encryption === 'required' but this.encryption is
null; update the send logic to explicitly enforce required encryption: at the
start of the branch that checks this.options.security.transport.encryption (or
at the top of send), throw or reject with a clear error if transport.encryption
=== 'required' and this.encryption is falsy so we never proceed to the
signed/plaintext branches; reference this.encryption,
this.options.security.transport.encryption, send() and fullMessage when adding
the guard and error.
- Around line 91-94: The WebSocket TLS verification is hardcoded off; update the
wsOptions construction in secure-websocket.ts to honor a new configuration on
this.options (e.g. rejectUnauthorized: boolean defaulting to true, and optional
ca: string|Buffer or fingerprint options) instead of setting rejectUnauthorized:
false; wire those fields into the WebSocket.ClientOptions (use
this.options.rejectUnauthorized, this.options.ca or this.options.certFingerprint
as provided) so callers can supply a CA bundle, certificate pinning/fingerprint,
or explicitly disable verification for testing while production defaults remain
strict; ensure any existing uses of the SecureWebSocket constructor or factory
pass/merge the new options.
In `@packages/mesh/src/types.ts`:
- Around line 120-124: The test expectation needs to be updated to match the
current MESH_VERSION constant: change the expected value in
packages/mesh/src/__tests__/index.test.ts from '1.0.0' to '1.7.11' so it matches
the exported MESH_VERSION; update any assertion referring to MESH_VERSION (e.g.,
comparisons or snapshot values) to use the new '1.7.11' string or import
MESH_VERSION from types.ts and assert against that constant.
In `@packages/resources/package.json`:
- Line 3: Update the "version" field in packages/resources/package.json from
"1.7.11" to "2.0.0" so the package major version matches the PR objectives;
modify the value assigned to the "version" key in that file accordingly.
🧹 Nitpick comments (6)
packages/mesh/src/security/tls.ts (2)
213-230: Server context defaults to rejectUnauthorized: false.
Setting rejectUnauthorized: false disables certificate validation by default. While this may be necessary for self-signed certs in development, it should be documented clearly. Consider adding a comment or making this configurable based on security mode.

267-272: Singleton ignores certPath on subsequent calls.
getTLSManager(certPath) only uses certPath on first initialization. Subsequent calls with a different path will return the manager configured with the original path, which could lead to confusion.

🔧 Suggested fix

```diff
 export function getTLSManager(certPath?: string): TLSManager {
-  if (!globalTLSManager) {
+  if (!globalTLSManager || (certPath && globalTLSManager['certPath'] !== certPath)) {
     globalTLSManager = new TLSManager(certPath);
   }
   return globalTLSManager;
 }
```

packages/mesh/src/security/config.ts (1)
121-151: Consider using preset matching for describeSecurityLevel.
The function uses hardcoded conditions that duplicate the preset definitions. If presets evolve, this could drift out of sync.
♻️ Alternative approach using preset comparison
```ts
export function describeSecurityLevel(config: MeshSecurityConfig): string {
  for (const [name, preset] of Object.entries(SECURITY_PRESETS)) {
    if (
      config.discovery.mode === preset.discovery.mode &&
      config.transport.encryption === preset.transport.encryption &&
      config.transport.tls === preset.transport.tls &&
      config.transport.requireAuth === preset.transport.requireAuth
    ) {
      const labels: Record<string, string> = {
        development: 'development (no security)',
        signed: 'signed (partial security)',
        secure: 'secure (recommended)',
        strict: 'strict (maximum security)',
      };
      return labels[name] ?? name;
    }
  }
  return 'custom';
}
```

packages/mesh/src/crypto/encryption.ts (2)
19-24: Consider explicit salt for HKDF.
While an undefined salt is valid (HKDF uses a zero-filled salt), being explicit improves clarity. The info parameter 'skillkit-mesh-v1' is good for domain separation.

♻️ Optional: Explicit salt

```diff
+const EMPTY_SALT = new Uint8Array(32);
+
 constructor(sharedSecret: Uint8Array) {
-  this.key = hkdf(sha256, sharedSecret, undefined, 'skillkit-mesh-v1', 32);
+  this.key = hkdf(sha256, sharedSecret, EMPTY_SALT, 'skillkit-mesh-v1', 32);
 }
```
55-58: decryptToObject could throw on invalid JSON.
JSON.parse will throw a SyntaxError if the decrypted plaintext is not valid JSON. Consider wrapping in try-catch or documenting that callers must handle this.

🛡️ Defensive approach

```diff
 decryptToObject<T = unknown>(encrypted: EncryptedMessage): T {
   const plaintext = this.decryptToString(encrypted);
-  return JSON.parse(plaintext) as T;
+  try {
+    return JSON.parse(plaintext) as T;
+  } catch (e) {
+    throw new Error('Failed to parse decrypted data as JSON', { cause: e });
+  }
 }
```

Also applies to: 127-136
packages/mesh/src/discovery/secure-local.ts (1)
318-333: Optional: dedupe getLocalIPAddress with the local discovery helper.
The same logic exists in
packages/mesh/src/discovery/local.ts; a shared utility would prevent drift.
```diff
@@ -0,0 +1,50 @@
+{
+  "name": "@skillkit/mcp-memory",
+  "version": "1.7.11",
```
Bump this package to v2.0.0 to match the release.
The PR objectives describe a v2.0.0 release across packages, but this package is still set to 1.7.11, which will misalign published versions and tooling expectations.
✅ Proposed fix
- "version": "1.7.11",
+ "version": "2.0.0",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "version": "1.7.11", | |
| "version": "2.0.0", |
🤖 Prompt for AI Agents
In `@packages/mcp-memory/package.json` at line 3, Update the package version for
mcp-memory by changing the "version" field in package.json from "1.7.11" to
"2.0.0" so it matches the v2.0.0 release across packages; locate the version
property in package.json (the "version" key for the mcp-memory package) and
update its value to "2.0.0", then run your usual lockfile/package management
commands (e.g., npm/yarn/pnpm install or update) to persist changes.
```ts
deriveSharedSecret(peerPublicKey: Uint8Array): Uint8Array {
  const x25519PrivateKey = this.keypair.privateKey;
  return x25519.scalarMult(x25519PrivateKey, peerPublicKey);
}

deriveSharedSecretHex(peerPublicKeyHex: string): Uint8Array {
  const peerPublicKey = hexToBytes(peerPublicKeyHex);
  return this.deriveSharedSecret(peerPublicKey);
}
```
🧩 Analysis chain
🏁 Script executed:
# First, find the identity.ts file and examine its full context
find . -name "identity.ts" -path "*/crypto/*" | head -5

Repository: rohitg00/skillkit
Length of output: 99
🏁 Script executed:
# Search for keypair initialization in the identity.ts file
rg -A 20 "keypair.*=" packages/mesh/src/crypto/identity.ts | head -50

Repository: rohitg00/skillkit
Length of output: 661
🏁 Script executed:
# Check package.json to see the exact version of `@noble/curves` being used
rg "@noble/curves" -A 2 packages/mesh/package.jsonRepository: rohitg00/skillkit
Length of output: 154
🌐 Web query:
@noble/curves Ed25519 X25519 scalar multiplication key compatibility
💡 Result:
Ed25519 and X25519 use the same underlying Curve25519, but in different forms (Edwards vs Montgomery) and with different key derivations, so their keys are not directly compatible for scalar multiplication/ECDH.
What this means in @noble/curves
x25519.scalarMult(priv, pub) (aka x25519.getSharedSecret) expects:
- a 32-byte X25519 public key (Montgomery-u).
It is not the same as multiplying with an Ed25519 public key. [1]
- An Ed25519 “private key” in noble is typically a 32-byte seed used for EdDSA; EdDSA derives its signing scalar via hashing, which differs from X25519’s scalar handling. So you can’t just reuse the bytes as an X25519 scalar. [2]
The supported way: convert Ed25519 ⇄ X25519
@noble/curves/ed25519 provides conversion helpers:
- edwardsToMontgomeryPriv(edwardsPriv) → X25519 private key
- edwardsToMontgomeryPub(edwardsPub) → X25519 public key
These exist specifically for Ed25519 → X25519 conversion. [2][1]
```ts
import { ed25519, x25519, edwardsToMontgomeryPriv, edwardsToMontgomeryPub } from '@noble/curves/ed25519';

// Starting from an Ed25519 keypair
const edPriv = ed25519.utils.randomPrivateKey(); // 32-byte Ed25519 seed
const edPub = ed25519.getPublicKey(edPriv); // 32-byte Edwards public key

// Convert to X25519-compatible keys
const xPriv = edwardsToMontgomeryPriv(edPriv);
const xPub = edwardsToMontgomeryPub(edPub);

// Now X25519 operations are compatible
const shared = x25519.getSharedSecret(xPriv, xPub); // == x25519.scalarMult(xPriv, xPub)
```

Practical guidance
- If you want ECDH/key exchange, generate/use X25519 keys (x25519.utils.randomPrivateKey(), x25519.getPublicKey()).
- Only use the Ed25519→X25519 conversion when you must interoperate from existing Ed25519 keys (and understand the protocol/security implications of reusing identity keys across signing + DH). [1][2]
Sources: @noble/curves docs/examples for X25519 aliases and Ed25519↔X25519 conversion utilities. [1][2]
🏁 Script executed:
# Verify if the conversion utilities are already imported or used in identity.ts
cat -n packages/mesh/src/crypto/identity.ts | head -30

Repository: rohitg00/skillkit
Length of output: 1109
🏁 Script executed:
# Search for any x25519 or edwardsToMontgomery imports in the codebase
rg "edwardsToMontgomery|x25519" packages/mesh/src/crypto/ -nRepository: rohitg00/skillkit
Length of output: 807
Use X25519-converted keys for ECDH operations.
Line 118 uses an Ed25519 private key directly with x25519.scalarMult(). Ed25519 and X25519 keys are incompatible despite sharing the same underlying curve—Ed25519 keys are 32-byte seeds requiring different scalar derivation than X25519's RFC 7748 clamped scalars. Using them directly produces invalid shared secrets and breaks secure transport handshakes.
Import the conversion utilities and convert keys before ECDH:
```ts
import { edwardsToMontgomeryPriv, edwardsToMontgomeryPub } from '@noble/curves/ed25519';

const x25519PrivateKey = edwardsToMontgomeryPriv(this.keypair.privateKey);
return x25519.scalarMult(x25519PrivateKey, edwardsToMontgomeryPub(peerPublicKey));
```
Also check packages/mesh/src/crypto/encryption.ts (lines 75–76, 112) which has the same pattern.
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/identity.ts` around lines 117 - 125, The
deriveSharedSecret implementation uses an Ed25519 private key directly with
x25519.scalarMult, which is incorrect; convert Ed25519 keys to X25519 first
using edwardsToMontgomeryPriv for this.keypair.privateKey and
edwardsToMontgomeryPub for the peer public key (after hexToBytes in
deriveSharedSecretHex) and then call x25519.scalarMult on the converted keys;
update deriveSharedSecret, deriveSharedSecretHex and any other places that call
x25519.scalarMult with Ed25519 keys (e.g., the analogous functions in
encryption.ts) to perform these conversions and import the conversion utilities
(edwardsToMontgomeryPriv/edwardsToMontgomeryPub) from `@noble/curves/ed25519`.
```ts
constructor(config: KeystoreConfig = {}) {
  this.path = config.path || DEFAULT_KEYSTORE_PATH;

  if (config.encryptionKey) {
    this.passphrase = config.encryptionKey;
  } else if (config.useMachineKey !== false) {
    this.passphrase = generateMachineKey();
  } else {
    throw new Error('Encryption key or machine key required');
  }
```
Default machine-derived passphrase is weak.
Line 45 falls back to generateMachineKey() (hostname/user hash). This is predictable and undermines at-rest security. Consider requiring an explicit encryptionKey for secure presets or generating a high-entropy secret stored in a protected location (OS keychain or 0o600 file).
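A sketch of the random-secret fallback with owner-only file permissions; the path handling and hex encoding are illustrative, not the package's current behavior:

```ts
import { randomBytes } from 'node:crypto';
import { existsSync, readFileSync, writeFileSync } from 'node:fs';

function loadOrCreateKeystoreSecret(path: string): string {
  if (existsSync(path)) {
    return readFileSync(path, 'utf-8').trim();
  }
  // 256 bits of entropy instead of a hostname/user-derived value.
  const secret = randomBytes(32).toString('hex');
  writeFileSync(path, secret, { mode: 0o600 }); // owner read/write only
  return secret;
}
```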
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/keystore.ts` around lines 40 - 49, The constructor
in Keystore (constructor(config: KeystoreConfig)) currently falls back to
generateMachineKey() for this.passphrase which is predictable; change the logic
to require an explicit config.encryptionKey for secure presets (i.e., throw if
not provided when a secure mode flag is set or when useMachineKey is false) and,
if you still want an automatic fallback, replace generateMachineKey() with
generation of a cryptographically secure random secret (high-entropy) and
persist it to a protected storage (OS keychain or a file created with mode
0o600) instead of using hostname/user-derived values; update references to
passphrase, generateMachineKey, encryptionKey and useMachineKey accordingly so
the code enforces explicit keys for secure setups and stores any auto-generated
key in a protected location.
```ts
try {
  await access(this.keypairPath);
  const content = await readFile(this.keypairPath, 'utf-8');
  const encrypted = JSON.parse(content);

  if (isEncryptedFile(encrypted)) {
    const serialized = await decryptObject<SerializedIdentity>(
      encrypted,
      this.passphrase
    );
    this.identity = PeerIdentity.fromSerialized(serialized);
  } else {
    this.identity = PeerIdentity.fromSerialized(encrypted as SerializedIdentity);
  }
} catch {
  this.identity = await PeerIdentity.generate();
  await this.saveIdentity();
}
```
Avoid silent identity rotation on decrypt/parse errors.
Line 73–90 regenerates a new identity on any error. A wrong passphrase or corrupted file will silently change the fingerprint and break trust with peers. Only generate a new identity when the file is missing; otherwise surface the error.
Suggested fix
```diff
- } catch {
-   this.identity = await PeerIdentity.generate();
-   await this.saveIdentity();
- }
+ } catch (err) {
+   const code = (err as NodeJS.ErrnoException | undefined)?.code;
+   if (code !== 'ENOENT') throw err;
+   this.identity = await PeerIdentity.generate();
+   await this.saveIdentity();
+ }
```

🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/keystore.ts` around lines 73 - 90, The catch block
around reading/parsing/decrypting the keypair is currently swallowing all errors
and is regenerating identity on any failure; change the logic so only a
missing-file error triggers identity generation: use access(this.keypairPath)
(or catch its error) to detect ENOENT and call PeerIdentity.generate() +
saveIdentity() only when the file does not exist, while letting JSON.parse,
decryptObject, or other runtime errors (including wrong passphrase) propagate or
be rethrown so the caller sees the failure; keep the existing use of readFile,
isEncryptedFile, decryptObject, PeerIdentity.fromSerialized, and saveIdentity to
implement this behavior.
```ts
export function isSignedDataExpired(
  signed: SignedData<unknown>,
  maxAgeMs: number = 5 * 60 * 1000
): boolean {
  const timestamp = new Date(signed.timestamp).getTime();
  const now = Date.now();
  return now - timestamp > maxAgeMs;
}

export function extractSignerFingerprint(
  signed: SignedData<unknown>
): string | null {
  try {
    const publicKey = hexToBytes(signed.senderPublicKey);
    const computed = PeerIdentity.computeFingerprint(publicKey);
    if (computed === signed.senderFingerprint) {
      return signed.senderFingerprint;
    }
    return null;
  } catch {
    return null;
```
Treat malformed timestamps as expired.
Date parsing failures yield NaN, which currently evaluates to “not expired.” Treat invalid timestamps as expired to avoid accepting malformed signed payloads.
🛠️ Suggested fix
```diff
 export function isSignedDataExpired(
   signed: SignedData<unknown>,
   maxAgeMs: number = 5 * 60 * 1000
 ): boolean {
-  const timestamp = new Date(signed.timestamp).getTime();
+  const timestamp = Date.parse(signed.timestamp);
+  if (!Number.isFinite(timestamp)) return true;
   const now = Date.now();
   return now - timestamp > maxAgeMs;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export function isSignedDataExpired(
  signed: SignedData<unknown>,
  maxAgeMs: number = 5 * 60 * 1000
): boolean {
  const timestamp = Date.parse(signed.timestamp);
  if (!Number.isFinite(timestamp)) return true;
  const now = Date.now();
  return now - timestamp > maxAgeMs;
}

export function extractSignerFingerprint(
  signed: SignedData<unknown>
): string | null {
  try {
    const publicKey = hexToBytes(signed.senderPublicKey);
    const computed = PeerIdentity.computeFingerprint(publicKey);
    if (computed === signed.senderFingerprint) {
      return signed.senderFingerprint;
    }
    return null;
  } catch {
    return null;
  }
}
```
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/signatures.ts` around lines 125 - 146,
isSignedDataExpired currently parses signed.timestamp into a Date but doesn't
handle parsing failures (NaN), so malformed timestamps are treated as not
expired; update isSignedDataExpired to detect invalid timestamps by checking
Number.isNaN(timestamp) (or isNaN) after computing const timestamp = new
Date(signed.timestamp).getTime() and return true if it's NaN (i.e., treat as
expired), otherwise proceed with the existing now - timestamp > maxAgeMs check;
reference function isSignedDataExpired and the SignedData.signed.timestamp field
when making the change.
```ts
if (this.encryption && raw.ciphertext) {
  const decrypted = this.encryption.decryptToObject<TransportMessage>({
    nonce: raw.nonce,
    ciphertext: raw.ciphertext,
  });
  message = decrypted;
  senderFingerprint = raw.senderFingerprint;
} else if (raw.signature) {
  const secure = raw as SecureTransportMessage;
  senderFingerprint = secure.senderFingerprint;
  message = {
    id: secure.id,
    type: secure.type,
    from: secure.from,
    to: secure.to,
    payload: secure.payload,
    timestamp: secure.timestamp,
  };
} else {
  message = raw as TransportMessage;
}
```
Verify signatures (and reject plaintext when encryption is required) before dispatch.
Both client and server handlers accept SecureTransportMessage without validating the signature or fingerprint, and they still accept plaintext when encryption is required. This allows spoofing and downgrade attacks.
Suggested fix (apply to both client and server message handlers)
```diff
-  } else if (raw.signature) {
-    const secure = raw as SecureTransportMessage;
-    senderFingerprint = secure.senderFingerprint;
-    message = {
-      id: secure.id,
-      type: secure.type,
-      from: secure.from,
-      to: secure.to,
-      payload: secure.payload,
-      timestamp: secure.timestamp,
-    };
-  } else {
+  } else if (raw.signature) {
+    const secure = raw as SecureTransportMessage;
+    const reconstructed: TransportMessage = {
+      id: secure.id,
+      type: secure.type,
+      from: secure.from,
+      to: secure.to,
+      payload: secure.payload,
+      timestamp: secure.timestamp,
+    };
+    const valid = await PeerIdentity.verify(
+      hexToBytes(secure.signature),
+      new TextEncoder().encode(JSON.stringify(reconstructed)),
+      hexToBytes(secure.senderPublicKey)
+    );
+    if (!valid) return;
+    senderFingerprint = secure.senderFingerprint;
+    message = reconstructed;
+  } else {
     message = raw as TransportMessage;
   }
```

Also add an early guard when encryption is required but raw.ciphertext is missing, and drop/close the connection in that case.
Also applies to: 449-468
🤖 Prompt for AI Agents
In `@packages/mesh/src/transport/secure-websocket.ts` around lines 135 - 155, The
message handler currently accepts SecureTransportMessage and plaintext without
verifying signatures or enforcing encryption; update the logic in the handler
that uses this.encryption, decryptToObject, SecureTransportMessage,
senderFingerprint and signature to: if this.encryption is set, require
raw.ciphertext else immediately drop/close the connection; when ciphertext is
present use this.encryption.decryptToObject and then verify the decrypted
senderFingerprint matches the expected identity and validate any signature
fields before dispatch; if no encryption is configured but raw.signature exists,
verify the signature/fingerprint on the SecureTransportMessage
(id/type/from/to/payload/timestamp) and reject messages with invalid signatures;
on any verification failure reject the message and close the connection to
prevent spoofing/downgrade.
```ts
this.messageHandlers.forEach((handler) =>
  handler(message, this.socket!, senderFingerprint)
);
```
Avoid implicit returns in forEach callbacks (Biome).
Biome flags the implicit return of the handler call; wrap the callback body with braces.
Suggested fix
```diff
-    this.messageHandlers.forEach((handler) =>
-      handler(message, this.socket!, senderFingerprint)
-    );
+    this.messageHandlers.forEach((handler) => {
+      handler(message, this.socket!, senderFingerprint);
+    });
```

```diff
-    this.messageHandlers.forEach((handler) =>
-      handler(message, socket, senderFingerprint)
-    );
+    this.messageHandlers.forEach((handler) => {
+      handler(message, socket, senderFingerprint);
+    });
```
🧰 Tools
🪛 Biome (2.3.13)
[error] 157-157: This callback passed to forEach() iterable method should not return a value.
Either remove this return or remove the returned value.
(lint/suspicious/useIterableCallbackReturn)
🤖 Prompt for AI Agents
In `@packages/mesh/src/transport/secure-websocket.ts` around lines 157 - 159, The
Biome lint rule flags implicit returns inside forEach callbacks; update the
forEach usages that currently call handlers implicitly (e.g.,
this.messageHandlers.forEach((handler) => handler(message, this.socket!,
senderFingerprint)) and the similar forEach at the second occurrence) to use a
block-bodied arrow function instead: wrap the callback body with braces and call
the handler inside (no returned value). This means changing the callback from an
expression-bodied arrow to a statement-bodied arrow so the call is executed
without an implicit return.
```ts
if (
  this.encryption &&
  this.options.security.transport.encryption === 'required'
) {
  const encrypted = this.encryption.encryptObject(fullMessage);
  dataToSend = JSON.stringify({
    id: fullMessage.id,
    senderFingerprint: this.identity!.fingerprint,
    nonce: encrypted.nonce,
    ciphertext: encrypted.ciphertext,
    timestamp: fullMessage.timestamp,
  });
} else if (this.identity) {
  const signature = await this.identity.signObject(fullMessage);
  const secureMessage: SecureTransportMessage = {
    ...fullMessage,
    signature,
    senderFingerprint: this.identity.fingerprint,
    senderPublicKey: this.identity.publicKeyHex,
    nonce: randomUUID(),
  };
  dataToSend = JSON.stringify(secureMessage);
} else {
  dataToSend = JSON.stringify(fullMessage);
}
```
Enforce required encryption before sending.
If transport.encryption is 'required' but this.encryption is null, send() falls back to signed/plaintext (Line 262+), violating the security contract.
Suggested fix
```diff
 async send(
   message: Omit<TransportMessage, 'id' | 'timestamp'>
 ): Promise<void> {
 @@
-  let dataToSend: string;
+  if (
+    this.options.security.transport.encryption === 'required' &&
+    !this.encryption
+  ) {
+    throw new Error('Encryption required but no session key established');
+  }
+
+  let dataToSend: string;
```
In `@packages/mesh/src/transport/secure-websocket.ts` around lines 262 - 286, The
code allows falling back to signature/plaintext when
this.options.security.transport.encryption === 'required' but this.encryption is
null; update the send logic to explicitly enforce required encryption: at the
start of the branch that checks this.options.security.transport.encryption (or
at the top of send), throw or reject with a clear error if transport.encryption
=== 'required' and this.encryption is falsy so we never proceed to the
signed/plaintext branches; reference this.encryption,
this.options.security.transport.encryption, send() and fullMessage when adding
the guard and error.
```ts
export const DEFAULT_PORT = 9876;
export const DEFAULT_DISCOVERY_PORT = 9877;
export const HEALTH_CHECK_TIMEOUT = 5000;
export const DISCOVERY_INTERVAL = 30000;
export const MESH_VERSION = '1.7.11';
```
🧩 Analysis chain
🏁 Script executed:
# Check package.json for version information
fd package.json | head -5 | xargs -I {} sh -c 'echo "=== {} ===" && head -20 {}'

Repository: rohitg00/skillkit
Length of output: 2900
🏁 Script executed:
# Check current version in packages/mesh
cat packages/mesh/package.json 2>/dev/null | grep -A 2 '"version"'

Repository: rohitg00/skillkit
Length of output: 178
🏁 Script executed:
# Search for how MESH_VERSION is used in the codebase
rg "MESH_VERSION" --type ts --type tsx -B 2 -A 2Repository: rohitg00/skillkit
Length of output: 88
🏁 Script executed:
# Check if there are any version-related files or constants
fd -e json | xargs grep -l "2.0.0\|1.7.11" 2>/dev/null | head -10

Repository: rohitg00/skillkit
Length of output: 321
🏁 Script executed:
# Search for MESH_VERSION usage in TypeScript files
rg "MESH_VERSION" -t ts -B 3 -A 3Repository: rohitg00/skillkit
Length of output: 6023
🏁 Script executed:
# Also search for versioning strategy or comments about versioning
rg "version|VERSION" packages/mesh/src -A 2 -B 2 | head -50Repository: rohitg00/skillkit
Length of output: 2391
🏁 Script executed:
# Check the mesh package.json to confirm version
cat -n packages/mesh/package.json

Repository: rohitg00/skillkit
Length of output: 1623
Update test expectation for MESH_VERSION. The constant correctly matches the mesh package version (1.7.11), which is independently versioned from the monorepo root (2.0.0). This is intentional for monorepo package management. However, packages/mesh/src/__tests__/index.test.ts expects MESH_VERSION to be '1.0.0' while the actual constant is '1.7.11'—update the test to reflect the current version.
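The updated assertion could import the constant directly, sketched here in vitest style (the test-runner choice and relative import path are assumptions):

```ts
import { describe, expect, it } from 'vitest';
import { MESH_VERSION } from '../types';

describe('mesh constants', () => {
  it('matches the mesh package version', () => {
    expect(MESH_VERSION).toBe('1.7.11');
  });
});
```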
🤖 Prompt for AI Agents
In `@packages/mesh/src/types.ts` around lines 120 - 124, The test expectation
needs to be updated to match the current MESH_VERSION constant: change the
expected value in packages/mesh/src/__tests__/index.test.ts from '1.0.0' to
'1.7.11' so it matches the exported MESH_VERSION; update any assertion referring
to MESH_VERSION (e.g., comparisons or snapshot values) to use the new '1.7.11'
string or import MESH_VERSION from types.ts and assert against that constant.
```diff
@@ -0,0 +1,57 @@
+{
+  "name": "@skillkit/resources",
+  "version": "1.7.11",
```
Version not bumped to 2.0.0.
This new package is set to 1.7.11 but should be 2.0.0 to match the major release version mentioned in the PR objectives.
🤖 Prompt for AI Agents
In `@packages/resources/package.json` at line 3, Update the "version" field in
packages/resources/package.json from "1.7.11" to "2.0.0" so the package major
version matches the PR objectives; modify the value assigned to the "version"
key in that file accordingly.
```ts
const certPem = createSimpleCert({
  publicKey: publicKey as string,
  privateKey: privateKey as string,
  subject: `CN=${hostName},O=SkillKit Mesh,OU=${hostId}`,
  issuer: `CN=${hostName},O=SkillKit Mesh,OU=${hostId}`,
  serialNumber,
  notBefore,
  notAfter,
  altNames: ['localhost', '127.0.0.1', hostName],
});

return {
  cert: certPem,
  key: privateKey as string,
};
}

interface CertParams {
  publicKey: string;
  privateKey: string;
  subject: string;
  issuer: string;
  serialNumber: string;
  notBefore: Date;
  notAfter: Date;
  altNames: string[];
}

function createSimpleCert(params: CertParams): string {
  const base64Encode = (str: string): string =>
    Buffer.from(str).toString('base64');

  const formatDate = (date: Date): string => {
    const y = date.getUTCFullYear().toString().slice(-2);
    const m = (date.getUTCMonth() + 1).toString().padStart(2, '0');
    const d = date.getUTCDate().toString().padStart(2, '0');
    const h = date.getUTCHours().toString().padStart(2, '0');
    const min = date.getUTCMinutes().toString().padStart(2, '0');
    const s = date.getUTCSeconds().toString().padStart(2, '0');
    return `${y}${m}${d}${h}${min}${s}Z`;
  };

  const certInfo = {
    version: 3,
    serialNumber: params.serialNumber,
    subject: params.subject,
    issuer: params.issuer,
    notBefore: formatDate(params.notBefore),
    notAfter: formatDate(params.notAfter),
    publicKey: params.publicKey,
    altNames: params.altNames,
  };

  const certData = JSON.stringify(certInfo);
  const certBase64 = base64Encode(certData);

  const lines: string[] = ['-----BEGIN CERTIFICATE-----'];
  for (let i = 0; i < certBase64.length; i += 64) {
    lines.push(certBase64.slice(i, i + 64));
  }
  lines.push('-----END CERTIFICATE-----');

  return lines.join('\n');
}
```
🔴 TLSManager generates a non-X.509 'certificate' that Node TLS will reject
TLSManager.generateCertificate() claims to generate a self-signed TLS certificate, but it uses createSimpleCert() which base64-encodes a JSON blob and wraps it in -----BEGIN CERTIFICATE----- markers. This is not a valid X.509 certificate, so creating an HTTPS/WSS server with it will fail during TLS handshake.
Actual: consumers like SecureWebSocketServer.start() (uses createHttpsServer({ cert, key })) will error / clients won't connect.
Expected: a real X.509 certificate PEM.
Where it happens
generateSelfSignedCertificate() delegates to createSimpleCert():
```ts
const certPem = createSimpleCert({ ... });
return { cert: certPem, key: privateKey };
```

packages/mesh/src/security/tls.ts:56-70
createSimpleCert() base64-encodes JSON, not an X.509 DER structure:
packages/mesh/src/security/tls.ts:98-118
Recommendation: Use a real X.509 cert generator (e.g., node-forge, OpenSSL via child_process, or a proper ASN.1 implementation). Alternatively, store and use pre-generated cert/key pairs. Do not fabricate PEM with JSON.
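For example, a real self-signed certificate could be produced with node-forge (an assumed new dependency; key size, validity window, and subject fields are illustrative):

```ts
import forge from 'node-forge';

function generateSelfSignedPem(commonName: string): { cert: string; key: string } {
  const keys = forge.pki.rsa.generateKeyPair(2048);
  const cert = forge.pki.createCertificate();
  cert.publicKey = keys.publicKey;
  cert.serialNumber = '01';
  cert.validity.notBefore = new Date();
  cert.validity.notAfter = new Date();
  cert.validity.notAfter.setFullYear(cert.validity.notBefore.getFullYear() + 1);

  const attrs = [{ name: 'commonName', value: commonName }];
  cert.setSubject(attrs);
  cert.setIssuer(attrs); // self-signed: issuer equals subject
  cert.sign(keys.privateKey, forge.md.sha256.create());

  return {
    cert: forge.pki.certificateToPem(cert),
    key: forge.pki.privateKeyToPem(keys.privateKey),
  };
}
```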
Actionable comments posted: 20
🤖 Fix all issues with AI agents
In `@packages/cli/src/commands/mesh.ts`:
- Around line 519-528: The JSON block currently runs after human-readable output
when this.json is true; make JSON mode exclusive by short-circuiting the
human-readable flow: when this.json is true, load identity via
keystore.loadOrCreateIdentity() as used now and print the JSON (with
hasIdentity, identity?.fingerprint, hasCert, securityEnabled using
isSecurityEnabled(DEFAULT_SECURITY_CONFIG), and config: DEFAULT_SECURITY_CONFIG)
then immediately return/exit the function (or wrap the rest of the
human-readable logic in an else), ensuring no additional console output occurs
in JSON mode; update the code around the this.json check in mesh command to
enforce this exclusive behavior.
In `@packages/mcp-memory/src/index.ts`:
- Around line 231-441: The handler calls getMemoryStore() before entering the
try block, so initialization failures escape the handler; move the await
getMemoryStore() call into the try inside
server.setRequestHandler(CallToolRequestSchema, ...) so any errors are caught
and return the MCP error payload, i.e., wrap or relocate the store
initialization into the try that surrounds the switch (refer to getMemoryStore()
and the existing catch block) and ensure subsequent code uses the local store
variable initialized inside that try.
In `@packages/mesh/src/crypto/identity.ts`:
- Around line 112-115: The signObject method uses JSON.stringify which can
produce non-deterministic key order; replace that with a deterministic canonical
serializer (e.g., call the canonicalize function from signatures.ts) so the same
canonical string is produced across engines, then pass the canonical string into
signString; update imports to bring in canonicalize and ensure signObject uses
canonicalize(obj) instead of JSON.stringify(obj).
- Around line 117-120: deriveSharedSecret is using an Ed25519 private key
directly with x25519.scalarMult; convert the Ed25519 private key to X25519
format before scalar multiplication by calling edwardsToMontgomeryPriv (from
`@noble/curves/ed25519`) on this.keypair.privateKey and pass the converted scalar
to x25519.scalarMult(peerPublicKey) instead of the raw Ed25519 private key to
produce a correct shared secret.
In `@packages/mesh/src/crypto/keystore.ts`:
- Around line 40-49: The Keystore constructor currently defaults to a
machine-derived passphrase; change it to require an explicit encryptionKey
unless the caller explicitly opts in by setting config.useMachineKey === true:
in the Keystore constructor (class Keystore, method constructor, and using
KeystoreConfig and generateMachineKey) only call generateMachineKey() when
config.useMachineKey is true (not when it is omitted), otherwise throw an error
if no config.encryptionKey is provided; update the thrown Error message to
clearly state that an explicit encryptionKey is required unless useMachineKey is
explicitly true.
- Around line 121-127: In deleteIdentity(), stop swallowing all errors: catch
the thrown error as e (or err) from unlink(this.keypairPath) and if e.code !==
'ENOENT' rethrow or propagate it so callers see permission or I/O failures; only
ignore the ENOENT case (file not found). Ensure this.identity is set to null
after successful deletion or if ENOENT is encountered, and keep the reference to
unlink and this.keypairPath so the fix is applied in that method.
- Around line 66-90: The loadOrCreateIdentity method currently swallows all
errors and unconditionally generates a new identity; change the error handling
so only a missing-file error (ENOENT) triggers generating and saving a new
identity. Specifically, in loadOrCreateIdentity wrap
access/readFile/JSON.parse/decryptObject/PeerIdentity.fromSerialized so that if
an error is thrown you inspect err.code === 'ENOENT' (or equivalent) and only
then call PeerIdentity.generate() and saveIdentity(); for any other error (JSON
parse, decryptObject failure, deserialization, or other I/O errors) rethrow the
error to surface passphrase/corruption issues. Keep existing calls to
isEncryptedFile, decryptObject, PeerIdentity.fromSerialized, access, readFile
and saveIdentity but ensure non-ENOENT errors are not swallowed.
- Around line 129-145: The loadKeystoreData method currently swallows all errors
and initializes empty KeystoreData, which resets trust state; change it so only
ENOENT (file-not-found) leads to initializing defaults and any other errors
(e.g., JSON.parse failures, permission errors) are re-thrown so
corruption/tampering surface. Specifically, inside loadKeystoreData (use
keystoreDataPath and KeystoreData), catch errors from
access/readFile/JSON.parse, check error.code === 'ENOENT' to set the default
trustedPeers/revokedFingerprints, otherwise throw the error; this preserves
expected behavior of isTrusted/isRevoked by not silently clearing state.
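A minimal sketch of that ENOENT-only fallback (type and field names taken from the prompt above):

// Sketch: treat only a missing file as "first run"; rethrow everything else.
import { readFile } from 'node:fs/promises';

async function loadKeystoreData(keystoreDataPath: string): Promise<KeystoreData> {
  try {
    const raw = await readFile(keystoreDataPath, 'utf8');
    return JSON.parse(raw) as KeystoreData;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'ENOENT') {
      return { trustedPeers: {}, revokedFingerprints: [] };
    }
    throw err; // surface corruption, tampering, or permission errors
  }
}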
In `@packages/mesh/src/crypto/storage.ts`:
- Around line 178-180: The function hashPassphrase currently returns a truncated
SHA-256 (first 16 hex chars) which reduces security; update hashPassphrase to
stop truncating and either return the full SHA-256 hex digest or, if this is
used for key derivation/authentication, replace it with a proper KDF (e.g.,
PBKDF2/scrypt/argon2) with a salt and adequate iterations/work factor; if the
short string is only for display, document that clearly in the function comment
and provide a separate non-sensitive display helper (do not use the truncated
value for cryptographic operations). Ensure changes are applied to the
hashPassphrase implementation and any callers that assume a 16-char string.
- Around line 182-187: The generateMachineKey function currently derives a
predictable key from HOSTNAME/USER; replace this with a secure, persistent
machine-specific secret: change generateMachineKey to first attempt to read a
stored key from a secure location (OS secret store via keytar or a file in the
app data dir with strict permissions), and if missing generate a
cryptographically strong random key (e.g., crypto.randomBytes), persist it for
future runs, and return its hex. Stop using HOSTNAME/USER as entropy and ensure
file persistence uses atomic write + 0600 perms or use keytar/OS credential APIs
for storage; update references to generateMachineKey and remove reliance on
createHash('sha256') over environment variables.
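As a sketch of the persisted-random-key approach (the file path and helper name are illustrative; a hardened version would use atomic writes or the OS keychain):

// Sketch: persist a random per-machine secret instead of deriving from HOSTNAME/USER.
import { randomBytes } from 'node:crypto';
import { readFile, writeFile, mkdir } from 'node:fs/promises';
import { join } from 'node:path';

async function loadOrCreateMachineKey(dir: string): Promise<string> {
  const keyPath = join(dir, 'machine.key');
  try {
    return (await readFile(keyPath, 'utf8')).trim();
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code !== 'ENOENT') throw err;
  }
  const key = randomBytes(32).toString('hex');
  await mkdir(dir, { recursive: true, mode: 0o700 });
  await writeFile(keyPath, key, { mode: 0o600 }); // owner-only permissions
  return key;
}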
In `@packages/mesh/src/index.ts`:
- Around line 145-164: The initializeSecureMesh function currently instantiates
a new SecureKeystore directly which bypasses the global keystore singleton;
replace the direct construction with the singleton getter by importing and
calling getKeystore(...) (passing the same options like { path:
securityConfig?.identityPath }) and then call keystore.loadOrCreateIdentity() on
that returned singleton instead of new SecureKeystore; update references in
initializeSecureMesh to use getKeystore and remove the direct new SecureKeystore
usage so the whole mesh shares the same keystore instance.
In `@packages/mesh/src/security/auth.ts`:
- Around line 44-47: The constructor currently uses randomBytes(32) for
jwtSecret which makes tokens unverifiable across instances; change the logic in
the constructor of the AuthManager (the constructor(identity: PeerIdentity,
jwtSecret?: Uint8Array) and the jwtSecret field) to either load a persisted
secret from the keystore (e.g., call a getJwtSecretFromKeystore(identity) and
use it when present, otherwise persist one) or deterministically derive the
secret from the identity's private key (implement
deriveJwtSecretFromPrivateKey(identity.privateKey) using a KDF/HKDF or
HMAC-SHA256 over the private key to produce 32 bytes) and use that instead of
randomBytes(32) so tokens are verifiable across instances.
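A sketch of the derivation option, reusing the hkdf/sha256 helpers this package already uses for encryption (the info label 'skillkit-mesh-jwt-v1' is an assumption, not an existing constant):

// Sketch: derive a stable 32-byte JWT secret from the identity's private key.
import { hkdf } from '@noble/hashes/hkdf';
import { sha256 } from '@noble/hashes/sha256';

function deriveJwtSecretFromPrivateKey(privateKey: Uint8Array): Uint8Array {
  // Same derivation on every restart, so previously issued tokens stay verifiable.
  return hkdf(sha256, privateKey, undefined, 'skillkit-mesh-jwt-v1', 32);
}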
In `@packages/mesh/src/security/config.ts`:
- Around line 53-59: DEFAULT_SECURITY_CONFIG and getSecurityPreset currently
perform shallow copies so nested objects from SECURITY_PRESETS are shared and
can be mutated by callers; change both to return deep clones of the preset
objects (e.g., use structuredClone or a safe deep-clone helper) when
constructing DEFAULT_SECURITY_CONFIG and in getSecurityPreset(preset:
SecurityPreset) to ensure MeshSecurityConfig instances are independent copies.
- Around line 82-108: The validator validateSecurityConfig should reject configs
that enable authentication over plaintext: add checks in validateSecurityConfig
to push an error when config.auth?.requireAuth === true and config.transport.tls
=== 'none' (e.g., "requireAuth requires TLS"); also add an optional sanity check
that warns/errors when TLS is enabled (config.transport.tls !== 'none') but
config.transport.encryption === 'none' (e.g., "TLS enabled but
transport.encryption is 'none'"); implement these checks alongside the existing
transport/tls/encryption validations so they use the same errors array and
reference config.auth, config.transport.tls and config.transport.encryption.
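The two checks might look like this inside validateSecurityConfig (a sketch; the config shape is assumed from the prompt):

// Sketch: reject auth-over-plaintext and flag TLS-without-encryption.
if (config.auth?.requireAuth === true && config.transport.tls === 'none') {
  errors.push("auth.requireAuth requires TLS (transport.tls must not be 'none')");
}
if (config.transport.tls !== 'none' && config.transport.encryption === 'none') {
  errors.push("TLS enabled but transport.encryption is 'none'");
}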
In `@packages/mesh/src/security/tls.ts`:
- Around line 180-187: The loadCertificate function is returning fabricated
validity dates (notBefore/new Date() and notAfter=one year ahead) instead of
extracting them from the X.509 certificate; update loadCertificate to parse the
certificate's actual validity period and set the notBefore and notAfter fields
from the parsed X.509 validity values (e.g., use a proper X.509/ASN.1 parser or
Node's X509Certificate utilities) so fingerprint, cert, key, notBefore, and
notAfter reflect the real certificate metadata; if parsing isn't possible yet,
persist or accept explicit validity metadata alongside the certificate and
document this limitation in loadCertificate's contract.
- Around line 267-272: The getTLSManager singleton currently ignores the
certPath parameter after the first call; update getTLSManager to handle certPath
consistently: when globalTLSManager is undefined, construct new
TLSManager(certPath) as before, but when globalTLSManager already exists check
its configured cert path (expose or read the instance property on TLSManager)
and if the incoming certPath is different either (a) throw a clear error
explaining that a different certPath cannot be used for the existing singleton,
or (b) recreate the singleton with the new certPath (choose one policy), and
document the behavior. Reference TLSManager, globalTLSManager and getTLSManager
when making the change so callers either get an error or see the singleton
replaced/validated rather than silently ignoring certPath.
- Around line 84-119: The createSimpleCert function currently builds a JSON
blob, base64-encodes it, and wraps it in PEM headers which is not a valid X.509
cert; replace this logic so createSimpleCert returns a real PEM-encoded X.509
certificate (and matching private key) by using a proper certificate library
(e.g., node-forge, `@peculiar/x509`, or selfsigned) to generate a self-signed cert
with the provided CertParams (serialNumber, subject, issuer, notBefore,
notAfter, publicKey/altNames or generate a keypair if needed). Update callers
createServerContext and createClientContext to accept the library-generated PEM
cert/key (or convert formats if library returns DER) and remove the JSON+base64
approach; ensure the generated certificate includes TBSCertificate,
signatureAlgorithm, and signature so Node.js tls.createServer /
tls.createSecureContext will accept it.
In `@packages/mesh/src/transport/secure-http.ts`:
- Around line 62-64: The code currently sets https: { rejectUnauthorized: false
} which disables TLS certificate validation; change this to be configurable
instead of hard-coded: read a flag (e.g., an env var or config option like
allowInsecureTLS or NODE_ENV === 'development') and only set rejectUnauthorized:
false when that flag explicitly permits it (development or explicit self-signed
option), otherwise leave rejectUnauthorized undefined/true to enforce
validation; alternatively provide an option to supply custom CA certificates
(trustedRoots) rather than disabling validation so the transport in
secure-http.ts can support self-signed certs safely.
- Around line 221-237: verifySecureMessage currently only checks presence of
signature fields and fingerprint but never cryptographically verifies the
signature; update verifySecureMessage to call PeerIdentity.verify to validate
message.signature against the sender's public key and the original signed
content (the field that was actually signed, e.g., message.payload or
message.body). Use
PeerIdentity.computeFingerprint(Buffer.from(message.senderPublicKey, 'hex')) as
you already do to check fingerprint first, then call PeerIdentity.verify with
the sender public key (hex -> Buffer), the signature (properly decoded), and the
signed payload; if PeerIdentity.verify returns false, return { valid: false,
error: 'Invalid signature' }, otherwise return { valid: true }.
In `@packages/mesh/src/transport/secure-websocket.ts`:
- Around line 91-94: The wsOptions object in secure-websocket.ts sets
rejectUnauthorized: false which disables TLS certificate validation; change this
to read from a configurable setting (e.g., this.options.rejectUnauthorized or
this.options.securityPreset) and default it to true so certificate validation is
enabled unless explicitly turned off by a security-aware configuration; update
the code that constructs wsOptions (the wsOptions variable in the secure
WebSocket transport) and the transport options interface/constructor to accept
and document the new flag or preset.
🧹 Nitpick comments (14)
packages/mcp-memory/src/index.ts (1)
56-229: Consolidate tool JSON schemas to avoid drift with validators.
The JSON schemas here duplicate the validation rules in packages/mcp-memory/src/types.ts. Consider deriving both from a single shared definition so updates don't desync.
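If the validators in types.ts are zod schemas (as the .parse() calls suggest), one sketch is to derive each tool's inputSchema from the validator with zod-to-json-schema (a new dependency):

// Sketch: generate the MCP tool inputSchema from the zod validator so they can't drift.
import { zodToJsonSchema } from 'zod-to-json-schema';
import { StoreMemoryInputSchema } from './types.js';

const storeMemoryJsonSchema = zodToJsonSchema(StoreMemoryInputSchema, 'StoreMemoryInput');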
packages/mesh/src/__tests__/index.test.ts (1)
7-8: Consider using package.json version or a constant for version assertion.
Hardcoding '1.7.11' in the test creates a maintenance burden: each version bump requires updating this assertion. Consider importing the version from package.json or a shared constant to keep tests in sync automatically.
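A sketch of the package.json approach (VERSION stands in for whatever export the test currently asserts against):

// Sketch: assert against the package's own version instead of a hardcoded string.
import { createRequire } from 'node:module';

const pkg = createRequire(import.meta.url)('../package.json') as { version: string };

it('exposes the package version', () => {
  expect(VERSION).toBe(pkg.version);
});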
packages/mesh/src/crypto/identity.ts (1)
70-73: Fingerprint truncation to 8 bytes (64 bits) is relatively short.
While 64 bits provides reasonable collision resistance for display/identification purposes, it may be insufficient for security-critical fingerprint comparisons (e.g., TOFU pinning). Consider using a longer fingerprint (e.g., 16 or 32 bytes) for security decisions, or document this limitation.
packages/mesh/src/crypto/signatures.ts (1)
20-24: Relying on global crypto may fail in older Node.js versions.
crypto.getRandomValues assumes the Web Crypto API is globally available. While Node.js 19+ exposes globalThis.crypto, older LTS versions (e.g., Node 18 before 18.19) may not. Consider importing from node:crypto or @noble/ciphers/webcrypto for consistency with other modules in this codebase.
🔧 Proposed fix
+import { randomBytes } from '@noble/ciphers/webcrypto';
+
 function generateNonce(): string {
-  const bytes = new Uint8Array(16);
-  crypto.getRandomValues(bytes);
+  const bytes = randomBytes(16);
   return bytesToHex(bytes);
 }
packages/mesh/src/crypto/storage.ts (1)
30-35: Scrypt cost factor (N=16384) may be insufficient for high-value secrets.
While N=2^14 is acceptable for general use, OWASP recommends N=2^16 (65536) or higher for sensitive data protection. Consider making this configurable or increasing the default for production use.
packages/cli/src/commands/mesh.ts (1)
578-584: Empty string passed as publicKey when trusting a peer.
addTrustedPeer is called with an empty string for the publicKey parameter. This may cause issues when the trusted peer list is later used for signature verification or encryption key derivation, as the public key would be missing.
Consider either:
- Requiring the user to provide the public key along with the fingerprint
- Documenting that TOFU (Trust On First Use) will populate the public key on first contact
♻️ Suggested enhancement
+  publicKey = Option.String('--public-key', { description: 'Public key of the peer (hex)' });
+
   private async trustPeer(): Promise<number> {
     const fingerprint = this.subArg;
     if (!fingerprint) {
       console.error(chalk.red('Error: Fingerprint is required'));
-      console.log(chalk.gray('Usage: skillkit mesh peer trust <fingerprint> [--name <name>]'));
+      console.log(chalk.gray('Usage: skillkit mesh peer trust <fingerprint> [--name <name>] [--public-key <key>]'));
       return 1;
     }
     // ...
-    await keystore.addTrustedPeer(fingerprint, '', this.name);
+    await keystore.addTrustedPeer(fingerprint, this.publicKey ?? '', this.name);
+    if (!this.publicKey) {
+      console.log(chalk.yellow('Note: Public key not provided. Will be populated on first contact (TOFU).'));
+    }
packages/mesh/src/crypto/encryption.ts (1)
69-87: Ephemeral private key should be zeroed after deriving shared secret.
The ephemeralPrivateKey at line 74 is sensitive key material that should be explicitly zeroed after deriving the shared secret to minimize the window of exposure in memory.
🛡️ Suggested improvement
 static encrypt(
   message: Uint8Array,
   recipientPublicKey: Uint8Array
 ): PublicKeyEncryptedMessage {
   const ephemeralPrivateKey = randomBytes(32);
   const ephemeralPublicKey = x25519.scalarMultBase(ephemeralPrivateKey);
   const sharedSecret = x25519.scalarMult(ephemeralPrivateKey, recipientPublicKey);
+
+  // Zero the ephemeral private key after use
+  ephemeralPrivateKey.fill(0);
+
   const key = hkdf(sha256, sharedSecret, undefined, 'skillkit-mesh-pk-v1', 32);
+  sharedSecret.fill(0); // Zero shared secret after key derivation
+
   const nonce = randomBytes(24);
   const cipher = xchacha20poly1305(key, nonce);
   const ciphertext = cipher.encrypt(message);
+
+  key.fill(0); // Zero derived key after encryption
   return {
     ephemeralPublicKey: bytesToHex(ephemeralPublicKey),
     nonce: bytesToHex(nonce),
     ciphertext: bytesToHex(ciphertext),
   };
 }
packages/mesh/src/transport/secure-websocket.ts (4)
160-162: Silent error swallowing makes debugging difficult.
Empty catch blocks hide errors. At minimum, consider logging at debug level or emitting an error event for observability.
♻️ Suggested fix
- } catch {
- }
+ } catch (err) {
+   // Log parse errors for debugging malformed messages
+   console.debug('Failed to parse WebSocket message:', err);
+ }
473-475: Silent error swallowing on server message handling.
Same issue as the client side: empty catch blocks hide errors and make debugging difficult.
♻️ Suggested fix
- } catch {
- }
+ } catch (err) {
+   console.debug('Failed to process WebSocket message:', err);
+ }
155-162: Static analysis: forEach callback implicitly returns a value.
The forEach callback doesn't need to return anything. This can be addressed by using a block body or a for...of loop.
♻️ Suggested fix using for...of
- this.messageHandlers.forEach((handler) =>
-   handler(message, this.socket!, senderFingerprint)
- );
+ for (const handler of this.messageHandlers) {
+   handler(message, this.socket!, senderFingerprint);
+ }
468-475: Static analysis: same forEach callback return value issue on server side.
♻️ Suggested fix
- this.messageHandlers.forEach((handler) =>
-   handler(message, socket, senderFingerprint)
- );
+ for (const handler of this.messageHandlers) {
+   handler(message, socket, senderFingerprint);
+ }
packages/mesh/src/discovery/secure-local.ts (2)
273-275: Silent error swallowing in message handler hides parsing/verification failures.
Empty catch blocks make it difficult to diagnose issues with malformed discovery messages or verification failures.
♻️ Suggested fix
- } catch {
- }
+ } catch (err) {
+   // Log for debugging malformed/invalid discovery messages
+   console.debug('Failed to handle discovery message:', err);
+ }
224-231: TOFU auto-trust should be logged for security auditing.
When autoTrustFirst is enabled and a new peer is automatically trusted, this security-relevant event should be logged so administrators can audit which peers were auto-trusted.
🛡️ Suggested enhancement
 if (!isTrusted) {
   if (this.options.security.trust.autoTrustFirst) {
     await this.keystore.addTrustedPeer(
       fingerprint,
       signedMsg.publicKey,
       signedMsg.hostName
     );
+    console.info(`Auto-trusted new peer: ${signedMsg.hostName} (${fingerprint.slice(0, 16)}...)`);
   } else {
     return;
   }
 }
packages/mesh/src/security/auth.ts (1)
179-186: Pending challenges only cleaned up when creating new challenges.
The cleanupExpiredChallenges method is only called from createChallenge. If a server receives many challenges but creates few, expired entries may accumulate. Consider also cleaning up during verifyChallengeResponse.
♻️ Suggested fix
 async verifyChallengeResponse(
   response: AuthChallengeResponse
 ): Promise<AuthResult> {
+  this.cleanupExpiredChallenges();
+
   const pending = this.pendingChallenges.get(response.challenge);
      if (this.json) {
        const identity = hasIdentity ? await keystore.loadOrCreateIdentity() : null;
        console.log(JSON.stringify({
          hasIdentity,
          fingerprint: identity?.fingerprint,
          hasCertificate: hasCert,
          securityEnabled: isSecurityEnabled(DEFAULT_SECURITY_CONFIG),
          config: DEFAULT_SECURITY_CONFIG,
        }, null, 2));
      }
JSON output printed after human-readable output when --json flag is set.
When this.json is true, the code already outputs human-readable information on lines 468-517, then additionally outputs JSON. This appears inconsistent with other methods where JSON mode replaces human-readable output entirely (e.g., lines 200-203, 272-275).
♻️ Suggested fix to make JSON output exclusive
private async showSecurityStatus(): Promise<number> {
try {
const {
SecureKeystore,
TLSManager,
getLocalHostConfig,
describeSecurityLevel,
DEFAULT_SECURITY_CONFIG,
isSecurityEnabled,
} = await import('@skillkit/mesh');
const keystore = new SecureKeystore();
+ const hasIdentity = await keystore.hasIdentity();
+ if (this.json) {
+ const identity = hasIdentity ? await keystore.loadOrCreateIdentity() : null;
+ const localConfig = await getLocalHostConfig();
+ const tlsManager = new TLSManager();
+ const hasCert = await tlsManager.hasCertificate(localConfig.id);
+ console.log(JSON.stringify({
+ hasIdentity,
+ fingerprint: identity?.fingerprint,
+ hasCertificate: hasCert,
+ securityEnabled: isSecurityEnabled(DEFAULT_SECURITY_CONFIG),
+ config: DEFAULT_SECURITY_CONFIG,
+ }, null, 2));
+ return 0;
+ }
+
console.log(chalk.bold('\nMesh Security Status\n'));
-
- const hasIdentity = await keystore.hasIdentity();
  // ... rest of human-readable output
🤖 Prompt for AI Agents
In `@packages/cli/src/commands/mesh.ts` around lines 519 - 528, The JSON block
currently runs after human-readable output when this.json is true; make JSON
mode exclusive by short-circuiting the human-readable flow: when this.json is
true, load identity via keystore.loadOrCreateIdentity() as used now and print
the JSON (with hasIdentity, identity?.fingerprint, hasCert, securityEnabled
using isSecurityEnabled(DEFAULT_SECURITY_CONFIG), and config:
DEFAULT_SECURITY_CONFIG) then immediately return/exit the function (or wrap the
rest of the human-readable logic in an else), ensuring no additional console
output occurs in JSON mode; update the code around the this.json check in mesh
command to enforce this exclusive behavior.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  const store = await getMemoryStore();

  try {
    switch (name) {
      case 'memory_store': {
        const input = StoreMemoryInputSchema.parse(args);
        const memory = await store.create({
          agentId: AGENT_ID,
          content: input.content,
          category: input.category,
          tags: input.tags,
          metadata: input.metadata,
        });
        return {
          content: [
            {
              type: 'text',
              text: `Memory stored successfully.\n\nID: ${memory.id}\nCategory: ${memory.category}\nTier: ${memory.tier}\nTags: ${memory.tags.join(', ') || 'none'}`,
            },
          ],
        };
      }

      case 'memory_search': {
        const input = SearchMemoryInputSchema.parse(args);
        const results = await store.semanticSearch(input.query, {
          agentId: AGENT_ID,
          category: input.category,
          tier: input.tier,
          tags: input.tags,
          limit: input.limit,
          threshold: input.threshold,
        });

        if (results.length === 0) {
          return {
            content: [{ type: 'text', text: 'No memories found matching your query.' }],
          };
        }

        const formatted = results
          .map(
            (r, i) =>
              `${i + 1}. [${r.memory.category}] (score: ${r.score.toFixed(3)})\n ID: ${r.memory.id}\n ${r.memory.content}\n Tags: ${r.memory.tags.join(', ') || 'none'}`
          )
          .join('\n\n');

        return {
          content: [
            {
              type: 'text',
              text: `Found ${results.length} memories:\n\n${formatted}`,
            },
          ],
        };
      }

      case 'memory_recall': {
        const input = RecallMemoryInputSchema.parse(args);
        let memories = await store.list(AGENT_ID, input.limit ?? 20);

        if (input.category) {
          memories = memories.filter(m => m.category === input.category);
        }
        if (input.tier) {
          memories = memories.filter(m => m.tier === input.tier);
        }
        if (input.tags && input.tags.length > 0) {
          memories = memories.filter(m =>
            input.tags!.some(tag => m.tags.includes(tag))
          );
        }

        if (memories.length === 0) {
          return {
            content: [{ type: 'text', text: 'No memories found with the specified filters.' }],
          };
        }

        const formatted = memories
          .map(
            (m, i) =>
              `${i + 1}. [${m.category}] (${m.tier})\n ID: ${m.id}\n ${m.content}\n Score: ${m.reinforcementScore.toFixed(2)} | Access: ${m.accessCount}`
          )
          .join('\n\n');

        return {
          content: [
            {
              type: 'text',
              text: `Recalled ${memories.length} memories:\n\n${formatted}`,
            },
          ],
        };
      }

      case 'memory_get': {
        const input = GetMemoryInputSchema.parse(args);
        const memory = await store.get(input.id);

        if (!memory) {
          return {
            content: [{ type: 'text', text: `Memory not found: ${input.id}` }],
            isError: true,
          };
        }

        return {
          content: [
            {
              type: 'text',
              text: `Memory Details:\n\nID: ${memory.id}\nCategory: ${memory.category}\nTier: ${memory.tier}\nContent: ${memory.content}\nTags: ${memory.tags.join(', ') || 'none'}\nReinforcement: ${memory.reinforcementScore.toFixed(2)}\nAccess Count: ${memory.accessCount}\nCreated: ${memory.createdAt}\nLast Accessed: ${memory.lastAccessedAt}`,
            },
          ],
        };
      }

      case 'memory_forget': {
        const input = ForgetMemoryInputSchema.parse(args);
        const existing = await store.get(input.id);

        if (!existing) {
          return {
            content: [{ type: 'text', text: `Memory not found: ${input.id}` }],
            isError: true,
          };
        }

        await store.delete(input.id);

        return {
          content: [{ type: 'text', text: `Memory deleted: ${input.id}` }],
        };
      }

      case 'memory_link': {
        const input = LinkMemoriesInputSchema.parse(args);
        const link = await store.link(
          input.sourceId,
          input.targetId,
          input.relationshipType,
          input.strength
        );

        return {
          content: [
            {
              type: 'text',
              text: `Link created:\n\nID: ${link.id}\n${input.sourceId} --[${input.relationshipType}]--> ${input.targetId}\nStrength: ${link.strength}`,
            },
          ],
        };
      }

      case 'memory_reinforce': {
        const input = ReinforceMemoryInputSchema.parse(args);
        const existing = await store.get(input.id);
        if (!existing) {
          return {
            content: [{ type: 'text', text: `Memory not found: ${input.id}` }],
            isError: true,
          };
        }
        const memory = await store.reinforce(input.id, input.amount ?? 0.1);
        return {
          content: [
            {
              type: 'text',
              text: `Memory reinforced:\n\nID: ${memory.id}\nNew Score: ${memory.reinforcementScore.toFixed(2)}\nTier: ${memory.tier}`,
            },
          ],
        };
      }

      case 'memory_stats': {
        const stats = await store.getStats(AGENT_ID);

        const categoryBreakdown = Object.entries(stats.byCategory)
          .map(([cat, count]) => ` ${cat}: ${count}`)
          .join('\n');

        const tierBreakdown = Object.entries(stats.byTier)
          .map(([tier, count]) => ` ${tier}: ${count}`)
          .join('\n');

        return {
          content: [
            {
              type: 'text',
              text: `Memory Statistics:\n\nTotal Memories: ${stats.totalMemories}\nAverage Reinforcement: ${stats.avgReinforcementScore.toFixed(2)}\n\nBy Category:\n${categoryBreakdown}\n\nBy Tier:\n${tierBreakdown}\n\nOldest: ${stats.oldestMemory || 'N/A'}\nNewest: ${stats.newestMemory || 'N/A'}`,
            },
          ],
        };
      }

      default:
        return {
          content: [{ type: 'text', text: `Unknown tool: ${name}` }],
          isError: true,
        };
    }
  } catch (error) {
    const message = error instanceof Error ? error.message : 'Unknown error';
    return {
      content: [{ type: 'text', text: `Error: ${message}` }],
      isError: true,
    };
  }
});
Handle store initialization failures inside the tool handler.
getMemoryStore() (Line 233) runs before the try, so init errors bypass the handler’s error response and can bubble unhandled. Wrap store creation in the try block so tool calls return a proper MCP error payload.
🔧 Suggested fix
 server.setRequestHandler(CallToolRequestSchema, async (request) => {
   const { name, arguments: args } = request.params;
-  const store = await getMemoryStore();
-
   try {
+    const store = await getMemoryStore();
     switch (name) {
       case 'memory_store': {
         const input = StoreMemoryInputSchema.parse(args);
         const memory = await store.create({
🤖 Prompt for AI Agents
In `@packages/mcp-memory/src/index.ts` around lines 231 - 441, The handler calls
getMemoryStore() before entering the try block, so initialization failures
escape the handler; move the await getMemoryStore() call into the try inside
server.setRequestHandler(CallToolRequestSchema, ...) so any errors are caught
and return the MCP error payload, i.e., wrap or relocate the store
initialization into the try that surrounds the switch (refer to getMemoryStore()
and the existing catch block) and ensure subsequent code uses the local store
variable initialized inside that try.
  async signObject(obj: object): Promise<string> {
    const message = JSON.stringify(obj);
    return await this.signString(message);
  }
JSON.stringify order is non-deterministic for object signing.
JSON.stringify does not guarantee property order across JavaScript engines. If signObject is used to sign data that will be verified on different systems, the canonical representation may differ, causing signature verification failures.
Consider using a deterministic serialization approach (similar to the canonicalize function in signatures.ts).
🔧 Proposed fix
async signObject(obj: object): Promise<string> {
- const message = JSON.stringify(obj);
+ // Use deterministic serialization for cross-platform compatibility
+ const message = this.canonicalize(obj);
return await this.signString(message);
}
+
+private canonicalize(obj: unknown): string {
+ if (obj === null || obj === undefined) return 'null';
+ if (typeof obj !== 'object') return JSON.stringify(obj);
+ if (Array.isArray(obj)) return '[' + obj.map(v => this.canonicalize(v)).join(',') + ']';
+ const keys = Object.keys(obj as Record<string, unknown>).sort();
+ const pairs = keys.map(k => JSON.stringify(k) + ':' + this.canonicalize((obj as Record<string, unknown>)[k]));
+ return '{' + pairs.join(',') + '}';
+}
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/identity.ts` around lines 112 - 115, The signObject
method uses JSON.stringify which can produce non-deterministic key order;
replace that with a deterministic canonical serializer (e.g., call the
canonicalize function from signatures.ts) so the same canonical string is
produced across engines, then pass the canonical string into signString; update
imports to bring in canonicalize and ensure signObject uses canonicalize(obj)
instead of JSON.stringify(obj).
  deriveSharedSecret(peerPublicKey: Uint8Array): Uint8Array {
    const x25519PrivateKey = this.keypair.privateKey;
    return x25519.scalarMult(x25519PrivateKey, peerPublicKey);
  }
🧩 Analysis chain
🏁 Script executed:
cat -n packages/mesh/src/crypto/identity.ts | head -130
Repository: rohitg00/skillkit
Length of output: 4587
🏁 Script executed:
rg "import.*x25519" packages/mesh/src/crypto/identity.ts -A 2 -B 2Repository: rohitg00/skillkit
Length of output: 260
🏁 Script executed:
rg "edwardsToMontgomery" packages/mesh/src/crypto/ -r .Repository: rohitg00/skillkit
Length of output: 43
🌐 Web query:
@noble/curves ed25519 edwardsToMontgomeryPriv X25519 conversion documentation
💡 Result:
@noble/curves exposes built-in Ed25519 ⇄ X25519 conversion helpers in the ed25519 module: edwardsToMontgomeryPriv (private key) and edwardsToMontgomeryPub (public key). [1][2]
import {
ed25519,
x25519,
edwardsToMontgomeryPriv,
edwardsToMontgomeryPub,
} from '@noble/curves/ed25519.js'; // v2 uses .js extension [3]
// Ed25519 keys
const edPriv = ed25519.utils.randomPrivateKey();
const edPub = ed25519.getPublicKey(edPriv);
// Convert to X25519
const xPriv = edwardsToMontgomeryPriv(edPriv);
const xPub1 = x25519.getPublicKey(xPriv);
// (Optional) convert Ed25519 public key directly to X25519 public key
const xPub2 = edwardsToMontgomeryPub(edPub);
Notes:
- In @noble/curves v2, imports require the .js extension and many APIs expect Uint8Array inputs (not hex strings). [3]
- The package README/docs include this conversion snippet under the X25519 section. [1][2]
Sources: [1] npm package docs for @noble/curves (X25519 + conversion snippet), [2] generated docs page listing edwardsToMontgomeryPriv/Pub signatures and examples, [3] noble-curves release notes mentioning the .js-extension + input-type changes.
Ed25519 private keys must be converted before use with X25519 operations.
The keypair's private key is generated using Ed25519 (line 28: ed25519.getPublicKeyAsync), but deriveSharedSecret uses it directly with x25519.scalarMult without conversion. Ed25519 and X25519 use different scalar formats—directly passing an Ed25519 private key to X25519 operations will produce incorrect shared secrets and fail interoperability.
Use edwardsToMontgomeryPriv from @noble/curves/ed25519 to convert the Ed25519 private key to X25519 format before the scalar multiplication.
🔧 Proposed fix
-import { x25519 } from '@noble/curves/ed25519';
+import { x25519, edwardsToMontgomeryPriv } from '@noble/curves/ed25519';
deriveSharedSecret(peerPublicKey: Uint8Array): Uint8Array {
- const x25519PrivateKey = this.keypair.privateKey;
+ const x25519PrivateKey = edwardsToMontgomeryPriv(this.keypair.privateKey);
return x25519.scalarMult(x25519PrivateKey, peerPublicKey);
}
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/identity.ts` around lines 117 - 120,
deriveSharedSecret is using an Ed25519 private key directly with
x25519.scalarMult; convert the Ed25519 private key to X25519 format before
scalar multiplication by calling edwardsToMontgomeryPriv (from
`@noble/curves/ed25519`) on this.keypair.privateKey and pass the converted scalar
to x25519.scalarMult(peerPublicKey) instead of the raw Ed25519 private key to
produce a correct shared secret.
  constructor(config: KeystoreConfig = {}) {
    this.path = config.path || DEFAULT_KEYSTORE_PATH;

    if (config.encryptionKey) {
      this.passphrase = config.encryptionKey;
    } else if (config.useMachineKey !== false) {
      this.passphrase = generateMachineKey();
    } else {
      throw new Error('Encryption key or machine key required');
    }
Avoid defaulting to a predictable machine‑derived passphrase.
The machine key is derived from host/user data and is guessable on a compromised host, which weakens at‑rest encryption by default. Consider requiring an explicit encryption key unless callers explicitly opt in to a machine‑derived key.
🔐 Proposed change (require explicit opt‑in)
- } else if (config.useMachineKey !== false) {
- this.passphrase = generateMachineKey();
- } else {
- throw new Error('Encryption key or machine key required');
- }
+ } else if (config.useMachineKey === true) {
+ this.passphrase = generateMachineKey();
+ } else {
+ throw new Error(
+ 'Encryption key required (set useMachineKey=true to opt into a machine-derived key)'
+ );
+ }
🤖 Prompt for AI Agents
In `@packages/mesh/src/crypto/keystore.ts` around lines 40 - 49, The Keystore
constructor currently defaults to a machine-derived passphrase; change it to
require an explicit encryptionKey unless the caller explicitly opts in by
setting config.useMachineKey === true: in the Keystore constructor (class
Keystore, method constructor, and using KeystoreConfig and generateMachineKey)
only call generateMachineKey() when config.useMachineKey is true (not when it is
omitted), otherwise throw an error if no config.encryptionKey is provided;
update the thrown Error message to clearly state that an explicit encryptionKey
is required unless useMachineKey is explicitly true.
    return {
      cert,
      key,
      fingerprint,
      notBefore: new Date(),
      notAfter: new Date(Date.now() + 365 * 24 * 60 * 60 * 1000),
      subject: hostId,
    };
loadCertificate returns fabricated validity dates instead of parsing the actual certificate.
The notBefore and notAfter fields are set to hardcoded values (new Date() and one year from now) rather than being extracted from the loaded certificate. This means certificate expiration checks will be incorrect.
🔧 Suggestion
If valid X.509 certificates are generated (see previous comment), parse the actual validity dates from the certificate using a proper ASN.1/X.509 parser. Until then, consider storing metadata alongside the certificate or documenting this limitation.
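For reference, once real PEMs are generated, Node's built-in parser can supply the true validity window (a sketch; the helper name is illustrative):

// Sketch: read actual validity dates from a valid X.509 PEM via node:crypto.
import { X509Certificate } from 'node:crypto';

function parseValidity(certPem: string): { notBefore: Date; notAfter: Date } {
  const parsed = new X509Certificate(certPem);
  return {
    notBefore: new Date(parsed.validFrom), // e.g. 'Jan  1 00:00:00 2025 GMT'
    notAfter: new Date(parsed.validTo),
  };
}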
🤖 Prompt for AI Agents
In `@packages/mesh/src/security/tls.ts` around lines 180 - 187, The
loadCertificate function is returning fabricated validity dates (notBefore/new
Date() and notAfter=one year ahead) instead of extracting them from the X.509
certificate; update loadCertificate to parse the certificate's actual validity
period and set the notBefore and notAfter fields from the parsed X.509 validity
values (e.g., use a proper X.509/ASN.1 parser or Node's X509Certificate
utilities) so fingerprint, cert, key, notBefore, and notAfter reflect the real
certificate metadata; if parsing isn't possible yet, persist or accept explicit
validity metadata alongside the certificate and document this limitation in
loadCertificate's contract.
export function getTLSManager(certPath?: string): TLSManager {
  if (!globalTLSManager) {
    globalTLSManager = new TLSManager(certPath);
  }
  return globalTLSManager;
}
Singleton ignores certPath parameter on subsequent calls.
Once globalTLSManager is instantiated, the certPath parameter is ignored on subsequent calls. This can lead to unexpected behavior if different callers expect different certificate paths.
🔧 Proposed fix
export function getTLSManager(certPath?: string): TLSManager {
- if (!globalTLSManager) {
+ if (!globalTLSManager || (certPath && globalTLSManager['certPath'] !== certPath)) {
globalTLSManager = new TLSManager(certPath);
}
return globalTLSManager;
}
Alternatively, document that certPath is only used on first initialization, or remove the parameter and always use the default path.
🤖 Prompt for AI Agents
In `@packages/mesh/src/security/tls.ts` around lines 267 - 272, The getTLSManager
singleton currently ignores the certPath parameter after the first call; update
getTLSManager to handle certPath consistently: when globalTLSManager is
undefined, construct new TLSManager(certPath) as before, but when
globalTLSManager already exists check its configured cert path (expose or read
the instance property on TLSManager) and if the incoming certPath is different
either (a) throw a clear error explaining that a different certPath cannot be
used for the existing singleton, or (b) recreate the singleton with the new
certPath (choose one policy), and document the behavior. Reference TLSManager,
globalTLSManager and getTLSManager when making the change so callers either get
an error or see the singleton replaced/validated rather than silently ignoring
certPath.
      https: {
        rejectUnauthorized: false,
      },
Disabling TLS certificate validation allows MITM attacks.
Setting rejectUnauthorized: false disables certificate validation, making the connection vulnerable to man-in-the-middle attacks. This defeats the purpose of using HTTPS/TLS.
Consider making this configurable based on the security preset, only allowing it in development mode:
🛡️ Suggested approach
https: {
- rejectUnauthorized: false,
+ rejectUnauthorized: this.security.transport.tls === 'strict',
},
Or provide an explicit option for self-signed certificates while still validating known certificates.
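If the transport keeps using got (the https option shape suggests it does), a sketch of the CA-pinning alternative (caCertPem is a hypothetical option supplied by the caller):

// Sketch: trust a specific self-signed CA instead of disabling validation outright.
import got from 'got';

const response = await got(url, {
  https: {
    certificateAuthority: caCertPem, // PEM of the mesh CA / pinned self-signed cert
    rejectUnauthorized: true,
  },
});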
🤖 Prompt for AI Agents
In `@packages/mesh/src/transport/secure-http.ts` around lines 62 - 64, The code
currently sets https: { rejectUnauthorized: false } which disables TLS
certificate validation; change this to be configurable instead of hard-coded:
read a flag (e.g., an env var or config option like allowInsecureTLS or NODE_ENV
=== 'development') and only set rejectUnauthorized: false when that flag
explicitly permits it (development or explicit self-signed option), otherwise
leave rejectUnauthorized undefined/true to enforce validation; alternatively
provide an option to supply custom CA certificates (trustedRoots) rather than
disabling validation so the transport in secure-http.ts can support self-signed
certs safely.
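Where self-signed peer certificates are unavoidable, one option is to keep validation on and instead widen the trust store or pin the peer's certificate fingerprint. A minimal sketch with Node's built-in https/tls modules; the `caPath` and `pinnedFingerprint` parameters are illustrative, not the transport's actual options:

```ts
import { readFileSync } from 'node:fs';
import https from 'node:https';
import tls from 'node:tls';

// Illustrative: validation stays on; we only widen the trust store
// and optionally pin a fingerprint exchanged out of band.
export function buildPinnedAgent(caPath?: string, pinnedFingerprint?: string): https.Agent {
  return new https.Agent({
    rejectUnauthorized: true, // keep chain validation enabled
    ca: caPath ? readFileSync(caPath) : undefined, // trust a specific CA bundle for self-signed peers
    checkServerIdentity: (host, cert) => {
      // Keep the default hostname check...
      const err = tls.checkServerIdentity(host, cert);
      if (err) return err;
      // ...then optionally compare the peer cert's SHA-256 fingerprint
      // against a value recorded during mesh pairing.
      if (pinnedFingerprint && cert.fingerprint256 !== pinnedFingerprint) {
        return new Error(`certificate fingerprint mismatch for ${host}`);
      }
      return undefined;
    },
  });
}
```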
```ts
export function verifySecureMessage(
  message: SecureTransportMessage
): { valid: boolean; error?: string } {
  if (!message.signature || !message.senderPublicKey || !message.senderFingerprint) {
    return { valid: false, error: 'Missing signature fields' };
  }

  const computedFingerprint = PeerIdentity.computeFingerprint(
    Buffer.from(message.senderPublicKey, 'hex')
  );

  if (computedFingerprint !== message.senderFingerprint) {
    return { valid: false, error: 'Fingerprint mismatch' };
  }

  return { valid: true };
}
```
verifySecureMessage does not actually verify the signature.
The function only checks that signature fields are present and that the fingerprint matches the public key, but it never calls PeerIdentity.verify() to cryptographically verify the signature. This provides no actual security guarantee.
🐛 Proposed fix to add signature verification

```diff
-export function verifySecureMessage(
+export async function verifySecureMessage(
   message: SecureTransportMessage
-): { valid: boolean; error?: string } {
+): Promise<{ valid: boolean; error?: string }> {
   if (!message.signature || !message.senderPublicKey || !message.senderFingerprint) {
     return { valid: false, error: 'Missing signature fields' };
   }

   const computedFingerprint = PeerIdentity.computeFingerprint(
     Buffer.from(message.senderPublicKey, 'hex')
   );

   if (computedFingerprint !== message.senderFingerprint) {
     return { valid: false, error: 'Fingerprint mismatch' };
   }

+  // Verify the actual signature
+  const { signature, senderPublicKey, senderFingerprint, nonce, ...baseMessage } = message;
+  const messageBytes = new TextEncoder().encode(JSON.stringify(baseMessage));
+  const signatureBytes = Buffer.from(signature, 'hex');
+  const publicKeyBytes = Buffer.from(senderPublicKey, 'hex');
+
+  const isValid = await PeerIdentity.verify(signatureBytes, messageBytes, publicKeyBytes);
+  if (!isValid) {
+    return { valid: false, error: 'Invalid signature' };
+  }
+
   return { valid: true };
 }
```

🤖 Prompt for AI Agents
In `@packages/mesh/src/transport/secure-http.ts` around lines 221 - 237,
verifySecureMessage currently only checks presence of signature fields and
fingerprint but never cryptographically verifies the signature; update
verifySecureMessage to call PeerIdentity.verify to validate message.signature
against the sender's public key and the original signed content (the field that
was actually signed, e.g., message.payload or message.body). Use
PeerIdentity.computeFingerprint(Buffer.from(message.senderPublicKey, 'hex')) as
you already do to check fingerprint first, then call PeerIdentity.verify with
the sender public key (hex -> Buffer), the signature (properly decoded), and the
signed payload; if PeerIdentity.verify returns false, return { valid: false,
error: 'Invalid signature' }, otherwise return { valid: true }.
```ts
const wsOptions: WebSocket.ClientOptions = {
  handshakeTimeout: this.options.timeout,
  rejectUnauthorized: false,
};
```
Same TLS certificate validation concern as in secure-http.ts.
rejectUnauthorized: false disables certificate validation, allowing MITM attacks. Consider making this configurable based on the security preset.
🤖 Prompt for AI Agents
In `@packages/mesh/src/transport/secure-websocket.ts` around lines 91 - 94, the
wsOptions object in secure-websocket.ts sets rejectUnauthorized: false which
disables TLS certificate validation; change this to read from a configurable
setting (e.g., this.options.rejectUnauthorized or this.options.securityPreset)
and default it to true so certificate validation is enabled unless explicitly
turned off by a security-aware configuration; update the code that constructs
wsOptions (the wsOptions variable in the secure WebSocket transport) and the
transport options interface/constructor to accept and document the new flag or
preset.
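A minimal sketch of such a flag, assuming a hypothetical `allowSelfSigned` option on the transport options:

```ts
import WebSocket from 'ws';

interface SecureWsOptions {
  timeout: number;
  /** Hypothetical flag: certificate validation is only disabled when set explicitly. */
  allowSelfSigned?: boolean;
}

function buildClientOptions(options: SecureWsOptions): WebSocket.ClientOptions {
  return {
    handshakeTimeout: options.timeout,
    // Defaults to strict validation; self-signed peers must opt in.
    rejectUnauthorized: options.allowSelfSigned !== true,
  };
}
```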
Memory package:
- encoder: reset initPromise on failure to allow retry
- encoder: call pipeline.dispose() in disposeEncoder to free GPU resources
- schema: fix probe query to use named-column matching for correct arity

Mesh package:
- hosts-config: add file locking to prevent race conditions
- local discovery: reduce multicast TTL to 1 for LAN-only traffic
- local discovery: handle announce errors in setInterval
- local discovery: close socket on membership setup failure
- local discovery: validate inbound packets (type, hostId, port range)
- local discovery: use rinfo.address to prevent spoofing
- health: guard against overlapping health checks with checking flag
- websocket: prevent auto-reconnect on explicit disconnect()
…ter coverage

Increase from 2x to 5x results to reduce the likelihood of incomplete results when agentId/category/tier filters exclude many candidates. Filters are applied in application code after the HNSW vector search.
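A sketch of that over-fetch-then-filter pattern; `vectorSearch`, `MemoryRow`, and the filter shape are illustrative stand-ins, not the package's actual API:

```ts
interface MemoryRow {
  id: string;
  agentId: string;
  category: string;
  tier: 'warm' | 'long';
  score: number;
}

// Illustrative over-fetch: the HNSW index cannot filter by metadata here,
// so we pull 5x candidates and filter afterwards in application code.
async function searchWithFilters(
  vectorSearch: (queryVec: number[], k: number) => Promise<MemoryRow[]>,
  queryVec: number[],
  limit: number,
  filters: { agentId?: string; category?: string; tier?: string }
): Promise<MemoryRow[]> {
  const candidates = await vectorSearch(queryVec, limit * 5);
  return candidates
    .filter((m) => !filters.agentId || m.agentId === filters.agentId)
    .filter((m) => !filters.category || m.category === filters.category)
    .filter((m) => !filters.tier || m.tier === filters.tier)
    .slice(0, limit);
}
```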
packages/mesh/src/security/auth.ts (Outdated)
```ts
constructor(identity: PeerIdentity, jwtSecret?: Uint8Array) {
  this.identity = identity;
  this.jwtSecret = jwtSecret || randomBytes(32);
}
```
🔴 SecureHttpTransport authentication tokens can’t be validated because each AuthManager instance uses a random secret
AuthManager defaults jwtSecret to randomBytes(32) per instance. SecureHttpTransport.initialize() creates a token with new AuthManager(this.identity) and sends it as a Bearer token. Any server-side verifier that creates its own AuthManager instance will have a different secret, so verifyToken() will always fail.
Actual vs expected:

- Actual: "requireAuth" mode is not interoperable across processes/hosts; tokens are effectively unverifiable outside the instance that minted them.
- Expected: tokens should be verifiable by the receiving side (shared secret, asymmetric JWT, or a handshake-derived session key).

Where it happens:

- Random secret per AuthManager: packages/mesh/src/security/auth.ts:44-47
- Token minting in secure HTTP transport: packages/mesh/src/transport/secure-http.ts:73-85

Recommendation: Persist/share the JWT verification key (e.g., store it in the keystore), or switch to asymmetric signing (Ed25519/JWK) where each host can verify tokens using the sender's public key, or derive a shared secret via a handshake and use that as the JWT key.
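For the asymmetric option, a minimal sketch with the jose library and Node's Ed25519 keys; the function names are illustrative, and real code would load the host's persisted identity keys rather than generating fresh ones:

```ts
import { generateKeyPairSync } from 'node:crypto';
import { SignJWT, jwtVerify } from 'jose';

// Illustrative: each host signs tokens with its Ed25519 private key;
// any peer can verify with the sender's public key, so no shared
// symmetric secret has to survive across AuthManager instances.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

async function mintToken(hostId: string): Promise<string> {
  return new SignJWT({ sub: hostId })
    .setProtectedHeader({ alg: 'EdDSA' })
    .setIssuedAt()
    .setExpirationTime('1h')
    .sign(privateKey);
}

async function verifyToken(token: string): Promise<string | undefined> {
  try {
    const { payload } = await jwtVerify(token, publicKey, { algorithms: ['EdDSA'] });
    return payload.sub;
  } catch {
    return undefined; // invalid signature, expiry, or malformed token
  }
}
```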
Tokens are now signed with the sender's Ed25519 private key and verified using the sender's public key embedded in the token. This fixes the issue where tokens couldn't be verified across different AuthManager instances/hosts because each had a random symmetric secret.

Changes:
- Switch from HS256 (symmetric) to EdDSA (asymmetric) algorithm
- Include sender's publicKey in JWT payload
- Add privateKey/privateKeyHex getters to PeerIdentity
- Verify fingerprint matches publicKey before accepting token
```ts
export function verifySecureMessage(
  message: SecureTransportMessage
): { valid: boolean; error?: string } {
  if (!message.signature || !message.senderPublicKey || !message.senderFingerprint) {
    return { valid: false, error: 'Missing signature fields' };
  }

  const computedFingerprint = PeerIdentity.computeFingerprint(
    Buffer.from(message.senderPublicKey, 'hex')
  );

  if (computedFingerprint !== message.senderFingerprint) {
    return { valid: false, error: 'Fingerprint mismatch' };
  }

  return { valid: true };
}
```
🔴 Secure HTTP message verification never verifies the signature (only checks fingerprint)
verifySecureMessage claims to verify a SecureTransportMessage, but it only checks that senderPublicKey hashes to senderFingerprint and never verifies message.signature against the message contents.
Actual vs expected:

- Actual: any attacker can forge payload/to/from/etc. and set senderPublicKey/senderFingerprint consistently; verifySecureMessage returns { valid: true }.
- Expected: the signature should be verified over a canonical serialization of the message (or the exact bytes originally signed), and ideally nonce/timestamp should be validated for replay.
Impact: downstream code that relies on verifySecureMessage for integrity/authentication gets no integrity protection.
Code

```ts
export function verifySecureMessage(message: SecureTransportMessage) {
  // only checks presence + fingerprint
  const computedFingerprint = PeerIdentity.computeFingerprint(
    Buffer.from(message.senderPublicKey, 'hex')
  );
  if (computedFingerprint !== message.senderFingerprint) return { valid: false };
  return { valid: true };
}
```

Recommendation: Verify message.signature using the sender public key and the same canonical bytes used when signing (e.g., `PeerIdentity.verify(signatureBytes, new TextEncoder().encode(JSON.stringify(messageWithoutSigFields)))`). Also consider replay protection using nonce + timestamp.
}Recommendation: Verify message.signature using the sender public key and the same canonical bytes used when signing (e.g., PeerIdentity.verify(signatureBytes, TextEncoder().encode(JSON.stringify(messageWithoutSigFields)))). Also consider replay protection using nonce + timestamp.
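A sketch of the suggested nonce + timestamp replay guard; the in-memory cache and 60-second window are arbitrary choices for illustration:

```ts
const SEEN_NONCES = new Map<string, number>(); // nonce -> expiry (ms since epoch)
const REPLAY_WINDOW_MS = 60_000; // illustrative freshness window

function checkReplay(nonce: string, timestamp: number): { valid: boolean; error?: string } {
  const now = Date.now();
  // Reject messages outside the freshness window (stale or future-dated).
  if (Math.abs(now - timestamp) > REPLAY_WINDOW_MS) {
    return { valid: false, error: 'Stale or future-dated message' };
  }
  // Reject nonces already accepted within the window.
  if (SEEN_NONCES.has(nonce)) {
    return { valid: false, error: 'Replayed nonce' };
  }
  SEEN_NONCES.set(nonce, now + REPLAY_WINDOW_MS);
  // Opportunistically evict expired entries to bound memory.
  for (const [n, expiry] of SEEN_NONCES) {
    if (expiry < now) SEEN_NONCES.delete(n);
  }
  return { valid: true };
}
```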
```ts
function createSimpleCert(params: CertParams): string {
  const base64Encode = (str: string): string =>
    Buffer.from(str).toString('base64');

  const formatDate = (date: Date): string => {
    const y = date.getUTCFullYear().toString().slice(-2);
    const m = (date.getUTCMonth() + 1).toString().padStart(2, '0');
    const d = date.getUTCDate().toString().padStart(2, '0');
    const h = date.getUTCHours().toString().padStart(2, '0');
    const min = date.getUTCMinutes().toString().padStart(2, '0');
    const s = date.getUTCSeconds().toString().padStart(2, '0');
    return `${y}${m}${d}${h}${min}${s}Z`;
  };

  const certInfo = {
    version: 3,
    serialNumber: params.serialNumber,
    subject: params.subject,
    issuer: params.issuer,
    notBefore: formatDate(params.notBefore),
    notAfter: formatDate(params.notAfter),
    publicKey: params.publicKey,
    altNames: params.altNames,
  };

  const certData = JSON.stringify(certInfo);
  const certBase64 = base64Encode(certData);

  const lines: string[] = ['-----BEGIN CERTIFICATE-----'];
  for (let i = 0; i < certBase64.length; i += 64) {
    lines.push(certBase64.slice(i, i + 64));
  }
  lines.push('-----END CERTIFICATE-----');

  return lines.join('\n');
}

export class TLSManager {
  private certPath: string;

  constructor(certPath?: string) {
    this.certPath = certPath || DEFAULT_CERT_PATH;
  }

  async ensureDirectory(): Promise<void> {
    if (!existsSync(this.certPath)) {
      await mkdir(this.certPath, { recursive: true, mode: 0o700 });
    }
  }

  async generateCertificate(
    hostId: string,
    hostName: string = 'localhost',
    validDays: number = 365
  ): Promise<CertificateInfo> {
    await this.ensureDirectory();

    const { cert, key } = generateSelfSignedCertificate(
      hostId,
      hostName,
      validDays
    );

    const certFile = join(this.certPath, `${hostId}.crt`);
    const keyFile = join(this.certPath, `${hostId}.key`);

    await writeFile(certFile, cert, { mode: 0o644 });
    await writeFile(keyFile, key, { mode: 0o600 });

    const fingerprint = createHash('sha256').update(cert).digest('hex');
    const notBefore = new Date();
    const notAfter = new Date();
    notAfter.setDate(notAfter.getDate() + validDays);

    return {
      cert,
      key,
      fingerprint,
      notBefore,
      notAfter,
      subject: `CN=${hostName},O=SkillKit Mesh,OU=${hostId}`,
    };
  }
```
🔴 TLSManager generates a non-X.509 “certificate” that Node TLS cannot use
TLSManager.generateCertificate() uses createSimpleCert() to create a PEM block that is just base64-encoded JSON, not a valid X.509 certificate.
Actual vs expected:

- Actual: when SecureWebSocketServer enables TLS and passes certInfo.cert/certInfo.key into https.createServer, Node's TLS stack will reject the invalid certificate, causing server startup failures.
- Expected: generate a real self-signed X.509 certificate (or use a library / OpenSSL) compatible with Node TLS.

Impact: any "secure" mode that tries to use TLS will fail at runtime, breaking secure transport.

Code:

- createSimpleCert() returns -----BEGIN CERTIFICATE----- with JSON inside (packages/mesh/src/security/tls.ts:84-119).
- generateCertificate() writes that to ${hostId}.crt and later uses it as the TLS cert (packages/mesh/src/security/tls.ts:134-166).
Recommendation: Replace createSimpleCert with real X.509 generation (e.g., use selfsigned/node-forge, or shell out to openssl during init, or store user-provided certs). Also parse/load actual notBefore/notAfter from the cert rather than using new Date() placeholders.
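For comparison, a sketch of real X.509 generation using the selfsigned npm package; the attribute and option values are illustrative:

```ts
import selfsigned from 'selfsigned';

// Illustrative replacement for createSimpleCert: emits a genuine
// self-signed X.509 certificate that Node's TLS stack will accept.
function generateRealCertificate(hostId: string, hostName: string, validDays: number) {
  const attrs = [
    { name: 'commonName', value: hostName },
    { name: 'organizationName', value: 'SkillKit Mesh' },
    { name: 'organizationalUnitName', value: hostId },
  ];
  const pems = selfsigned.generate(attrs, {
    days: validDays,
    keySize: 2048,
    extensions: [
      // type 2 = DNS name in the subjectAltName extension
      { name: 'subjectAltName', altNames: [{ type: 2, value: hostName }] },
    ],
  });
  return { cert: pems.cert, key: pems.private };
}
```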
```ts
export function getInstalledAgentIds(searchDirs?: string[]): string[] {
  const dirs = searchDirs || [
    join(process.cwd(), '.claude', 'agents'),
    join(homedir(), '.claude', 'agents'),
  ];

  const installed = new Set<string>();

  for (const dir of dirs) {
    if (existsSync(dir)) {
      try {
        const files = require('node:fs').readdirSync(dir);
        for (const file of files) {
          if (file.endsWith('.md')) {
            installed.add(file.replace('.md', ''));
          }
        }
      } catch {
        continue;
      }
    }
  }

  return Array.from(installed);
```
🔴 ESM module uses require('node:fs'), which will crash at runtime
packages/resources/src/agents/index.ts is an ESM module (type: "module" in package.json), but it uses require('node:fs') inside getInstalledAgentIds.
Actual vs expected:

- Actual: calling getInstalledAgentIds() in ESM will throw ReferenceError: require is not defined.
- Expected: use import { readdirSync } from 'node:fs' (already ESM-compatible) or a dynamic await import('node:fs').

Impact: CLI commands that call getAvailableAgents() / isAgentInstalled() can crash.

Code:

- const files = require('node:fs').readdirSync(dir); (packages/resources/src/agents/index.ts:135)
Recommendation: Replace the require usage with ESM imports (e.g., import readdirSync at top) or await import('node:fs') and use fs.readdirSync.
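A minimal sketch of the ESM-safe change, showing only the import and the one affected line:

```ts
// Top of packages/resources/src/agents/index.ts: pull readdirSync in
// via a static ESM import alongside the existing existsSync import.
import { existsSync, readdirSync } from 'node:fs';

// ...inside getInstalledAgentIds:
const files = readdirSync(dir); // replaces require('node:fs').readdirSync(dir)
```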
```ts
async reinforce(id: string, amount = 0.1): Promise<PersistentMemory> {
  const existing = await this.adapter.getMemory(id);
  if (!existing) {
    throw new Error(`Memory ${id} not found`);
  }

  const newScore = Math.min(1, existing.reinforcementScore + amount);

  await this.adapter.updateMemory(id, {
    reinforcementScore: newScore,
    updatedAt: new Date().toISOString(),
  });
```
🟡 MemoryStore.reinforce can persist negative reinforcementScore values
MemoryStore.reinforce clamps the upper bound of reinforcementScore to 1, but does not clamp the lower bound. If callers pass a negative amount (the MCP tool allows negative values to weaken), this can write negative scores to the DB.
Actual vs expected:

- Actual: newScore = Math.min(1, existing.reinforcementScore + amount) can be < 0 when amount is negative enough.
- Expected: the reinforcement score should remain within a defined range (typically [0, 1]).

Impact: stats/tier logic may behave incorrectly (e.g., demotion logic and UI display), and downstream code that assumes non-negative scores can break.

Code:

```ts
const newScore = Math.min(1, existing.reinforcementScore + amount);
await this.adapter.updateMemory(id, { reinforcementScore: newScore, ... });
```

(packages/memory/src/stores/memory-store.ts:185-190)
Recommendation: Clamp both bounds: const newScore = Math.max(0, Math.min(1, existing.reinforcementScore + amount)); (and similarly in weaken/other score updates).
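A small sketch of a shared clamp helper that reinforce, weaken, and any other score update could reuse; the helper name is illustrative:

```ts
// Illustrative helper: keep reinforcement scores inside [0, 1].
function clampScore(value: number, min = 0, max = 1): number {
  return Math.max(min, Math.min(max, value));
}

const newScore = clampScore(existing.reinforcementScore + amount);
```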
Summary
Major enhancement adding persistent memory, secure mesh networking, inter-agent messaging, MCP server integration, and advanced learning/automation capabilities.
All packages at version 1.7.11
Core Packages
@skillkit/memory - Semantic Memory System
@skillkit/mesh - Secure Peer Mesh Network
@skillkit/messaging - Inter-Agent Messaging
@skillkit/mcp-memory - MCP Server
Mesh Security CLI Commands
Security Stack
Security Levels
Summary by CodeRabbit
Release Notes
New Features
Chores