✨ tweak: update redis image and config #12
base: main
Conversation
Pull Request Overview
This PR updates Redis configuration and adds enhanced logging for webhook event processing. It modernizes the Docker setup by upgrading Redis to version 8 and simplifies the compose configuration.
- Upgraded Redis image from version 7 to 8 with improved persistence configuration
- Added detailed debug logging for platform detection and event processing
- Simplified Docker Compose configuration by removing external networks and version declarations
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
File | Description |
---|---|
src/utils/fileAttachmentCorrelation.ts | Updated comment to clarify buffering behavior |
src/services/webhookService.ts | Added debug logging and tertiary platform detection via isExternal flag |
docker-compose.yml | Upgraded Redis to v8, simplified networking, and improved container configuration |
📝 Walkthrough
The PR updates docker-compose to rebuild the webhook service locally and adjust the Redis image, healthcheck, and volumes; augments webhookService logging and platform detection (adds tertiary detection and file-attachment flags); tweaks a comment in fileAttachmentCorrelation; and introduces retry/timeout/socket options for the Redis client and retry-wrapped Redis operations.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Jarvis: Your changes are compiled and presented, sir/ma'am. Would you like an impact-risk summary next?

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- docker-compose.yml (2 hunks)
- src/services/webhookService.ts (3 hunks)
- src/utils/fileAttachmentCorrelation.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/services/webhookService.ts (1)
- src/config/env.ts (1): config (33-40)
🔇 Additional comments (9)
src/utils/fileAttachmentCorrelation.ts (1)

153-153: Comment refinement noted, sir. The updated comment "buffer the event and wait" provides slightly more clarity about the deferred processing behavior. Documentation enhancement approved.
src/services/webhookService.ts (4)

61-65: Observability enhancement detected, sir. The addition of debug logging immediately after platform detection is excellent for diagnostic purposes. The inclusion of `hasFiles` alongside event metadata will prove valuable when troubleshooting correlation scenarios.
228-236: Tertiary detection layer implemented, sir. The addition of `isExternal` flag detection provides an additional signal for platform source determination, particularly useful for file attachment scenarios as noted in the AI summary. The logic correctly returns 'dashboard' when external and falls back to `targetPlatform` when internal with `sourceType` present. The fallback to 'unknown' when conditions aren't met maintains system resilience.
283-286: Logging refinement applied, sir. The updated log message "Processing webhook event" with `hasFiles` context provides better observability. The generic terminology is more appropriate since this method handles both immediate and buffered event processing paths.
297-297: Error messaging aligned, sir. The error log now uses consistent "webhook event" terminology, maintaining uniformity with the success path logging.
docker-compose.yml (4)

46-49: Local build configuration engaged, sir. The switch to building from the local Dockerfile (`build: .`) with a custom tag is well-suited for development and testing purposes. The commented Docker Hub image alternative provides a clear path for production deployment. This configuration aligns with the PR's objective of testing infrastructure tweaks.
69-70: Persistent storage configured properly, sir. The `redis_webhook_data` volume is correctly configured to persist Redis data at `/data`, working in concert with the append-only mode to ensure webhook queue durability across container restarts.
58-63: Healthcheck and `/health` route confirmed, sir. The `app.get('/health')` handler in `src/app.ts:38` satisfies the container healthcheck, and its timing parameters align with Redis startup requirements.
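For reference, a minimal sketch of the kind of handler this is pointing at, assuming an Express app; the actual route lives in `src/app.ts` and may report more detail (for example Redis connectivity), and the port below is illustrative.

```ts
import express from 'express';

const app = express();

// Minimal liveness endpoint that a docker-compose healthcheck can curl.
app.get('/health', (_req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000); // illustrative port
```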
26-39: Verified: the Node Redis client v4.7.0 is compatible with Redis 8.x. Proceed with the Redis 8-alpine upgrade and append-only configuration as planned.
```yaml
    env_file:
      - .env
    environment:
      - REDIS_URL=redis://redis-webhook:6379
    depends_on:
      redis-webhook:
        condition: service_healthy
```
🧩 Analysis chain

Network configuration simplified, sir.
The removal of the explicit network configuration, together with the explicit `REDIS_URL` environment variable, simplifies the setup; Docker Compose's default networking should allow service discovery.
However, I must point out a critical detail: the Redis URL must reference the service name `redis-webhook` rather than `localhost` when running in Docker.
🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify .env.example contains correct Redis URL for Docker deployment

echo "=== Checking .env.example for Redis configuration ==="
if [ -f ".env.example" ]; then
  echo "Contents of .env.example:"
  cat .env.example
  echo ""
  echo "=== Checking for redis-webhook service name ==="
  grep -i "redis" .env.example || echo "No Redis URL found in .env.example"
else
  echo "ERROR: .env.example not found"
fi

echo ""
echo "=== Expected format for Docker deployment ==="
echo "REDIS_URL=redis://redis-webhook:6379"
```
Length of output: 919
Update `.env.example` REDIS_URL for Docker Compose

Sir, the `.env.example` comment and default `REDIS_URL` still point to `redis://redis:6379`; the Compose service is named `redis-webhook`, so update both to `redis://redis-webhook:6379` for correct resolution.
🤖 Prompt for AI Agents
In docker-compose.yml around lines 52 to 56, the REDIS service in Compose is
named redis-webhook but the project still references redis://redis:6379; update
any .env.example comment and the default REDIS_URL value to
redis://redis-webhook:6379 so the hostname matches the Compose service name and
resolves correctly when using docker-compose.
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/config/redis.ts (1)

39-46: Harden socket config: add pingInterval and reconnectStrategy; evaluate offline queue policy

- Add socket.pingInterval to prevent idle disconnects and stuck requests; add socket.reconnectStrategy for controlled exponential backoff. Redis docs also advise managing the offline queue to avoid duplicate effects on non-idempotent commands. (redis.io)

Apply:
```diff
-const client = createClient({
-  url: redisConfig.url,
-  socket: {
-    connectTimeout: 10000, // 10 seconds for initial connection - Railway optimized
-    keepAlive: 30000, // Keep connection alive (30 seconds)
-    noDelay: true, // Disable Nagle's algorithm for better latency
-  }
-});
+const client = createClient({
+  url: redisConfig.url,
+  // Consider a dedicated publisher client with disableOfflineQueue: true if you must avoid replays on reconnect.
+  // See service-level retry notes below.
+  // disableOfflineQueue: true,
+  socket: {
+    connectTimeout: 10000, // 10s initial connection
+    keepAlive: 30000, // TCP keep-alive (30s)
+    noDelay: true, // Lower latency
+    // Send PING periodically to keep the Redis connection warm through LB/NATs.
+    pingInterval: 30000, // 30s
+    reconnectStrategy: (retries) => {
+      const jitter = Math.floor(Math.random() * 100);
+      return Math.min(3000, 50 * 2 ** retries) + jitter; // capped exp backoff + jitter
+    },
+  },
+});
```

If you prefer to keep the default reconnection behavior, at least add pingInterval. The option is supported in node-redis v4+. (github.com)
src/services/redisService.ts (2)

139-144: Collapse existence + marking into one atomic call (SET NX EX)

Replace the exists+setEx flow with a single SET key 'processed' NX EX ttl, which is atomic and removes the race window. Use it where you currently invoke eventExists + markEventProcessed. (redis.io)

Example addition:
```ts
async markEventIfFirst(eventId: string, ttlSeconds = redisEventConfig.eventTtl): Promise<boolean> {
  const key = `${redisEventConfig.keyPrefix}${eventId}`;
  const res = await this.executeWithRetry(
    (c) => c.set(key, 'processed', { NX: true, EX: ttlSeconds }),
    `SET NX EX for ${eventId}`,
    2
  );
  return res === 'OK';
}
```
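At the callsite, the existing two-step check could then collapse to something like the sketch below. `eventExists` and `markEventProcessed` are the existing calls named above; `redisService`, `processEvent`, and the event shape are illustrative, not the actual handler code.

```ts
// Sketch: a single atomic SET NX EX decides whether this instance handles the event,
// replacing the separate eventExists + markEventProcessed calls.
async function handleWebhookEvent(event: { id: string; payload: unknown }): Promise<void> {
  const isFirst = await redisService.markEventIfFirst(event.id);
  if (!isFirst) {
    LogEngine.warn(`Duplicate webhook event ${event.id} skipped`);
    return; // another attempt already claimed this event
  }
  await processEvent(event); // hypothetical downstream handler
}
```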
155-159: Adjust callsite for the new abortable executeWithRetry signature

After the refactor above, update the call to pass the client parameter through.

Apply:
```diff
-await this.executeWithRetry(
-  () => this.client.setEx(key, ttl, 'processed'),
+await this.executeWithRetry(
+  (c) => c.setEx(key, ttl, 'processed'),
   `setEx for ${eventId}`,
   2
 );
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/config/redis.ts (1 hunks)
- src/services/redisService.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/services/redisService.ts (1)
- src/config/redis.ts (1): redisEventConfig (33-37)
🔇 Additional comments (1)

src/services/redisService.ts (1)

54-59: Eliminate duplicate enqueues by using abortable commands or idempotent LPUSH

Promise.race timeouts don't cancel the in-flight LPUSH; if it resolves after the timeout, a retry issues a second push. Likewise, the default offline queue in node-redis can replay buffered commands on reconnect. Use an AbortSignal (or disable the offline queue) on your publisher connection, or guard LPUSH with a SET NX EX dedupe key to enforce idempotence (see the sketch below). Verify that your consumer pairs LPUSH with RPOP/BRPOP to maintain FIFO ordering.
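One way to get that idempotence, sketched under the assumption of node-redis v4. The key layout, queue name, and function name are illustrative. Note that the SET NX guard runs before the LPUSH rather than inside a MULTI, since MULTI cannot branch on the SET result; a Lua script would make the pair fully atomic.

```ts
import { createClient } from 'redis';

type RedisClient = ReturnType<typeof createClient>;

// Idempotent enqueue: the dedupe key is claimed first, so a retry after a timed-out
// attempt cannot push the same event twice.
async function enqueueOnce(client: RedisClient, eventId: string, payload: string): Promise<boolean> {
  const dedupeKey = `webhook:enqueued:${eventId}`; // illustrative key layout
  const claimed = await client.set(dedupeKey, '1', { NX: true, EX: 3600 });
  if (claimed !== 'OK') {
    return false; // an earlier attempt already enqueued this event
  }
  await client.lPush('webhook:queue', payload); // consumer drains with RPOP/BRPOP for FIFO order
  return true;
}
```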
```ts
/**
 * Railway-optimized retry logic for Redis operations with timeout handling
 */
private async executeWithRetry<T>(
  operation: () => Promise<T>,
  operationName: string,
  maxRetries: number = 3,
  baseDelay: number = 1000,
  operationTimeout: number = 8000 // 8 seconds for individual operations
): Promise<T> {
  let lastError: Error = new Error('Unknown error');

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      // Wrap operation with timeout for Railway optimization
      const timeoutPromise = new Promise<never>((_, reject) => {
        setTimeout(() => {
          reject(new Error(`Redis operation ${operationName} timed out after ${operationTimeout}ms`));
        }, operationTimeout);
      });

      return await Promise.race([operation(), timeoutPromise]);
    } catch (error) {
      lastError = error as Error;

      if (attempt === maxRetries) {
        LogEngine.error(`Redis operation ${operationName} failed after ${maxRetries} attempts: ${lastError.message}`);
        break;
      }

      // Check if it's a timeout or connection error
      const isRetryableError = (
        lastError.message.includes('ETIMEDOUT') ||
        lastError.message.includes('ECONNRESET') ||
        lastError.message.includes('ENOTFOUND') ||
        lastError.message.includes('Connection is closed') ||
        lastError.message.includes('timed out')
      );

      if (!isRetryableError) {
        LogEngine.error(`Redis operation ${operationName} failed with non-retryable error: ${lastError.message}`);
        break;
      }

      const delay = baseDelay * Math.pow(2, attempt - 1); // Exponential backoff
      LogEngine.warn(`Redis operation ${operationName} failed (attempt ${attempt}/${maxRetries}), retrying in ${delay}ms: ${lastError.message}`);

      await new Promise(resolve => setTimeout(resolve, delay));

      // Try to reconnect if connection is closed
      if (!this.isConnected()) {
        try {
          await this.connect();
        } catch (reconnectError) {
          LogEngine.warn(`Failed to reconnect during retry: ${reconnectError}`);
        }
      }
    }
  }

  throw lastError;
}
```
Fix unhandled rejection from timeoutPromise and make ops abortable
Current Promise.race setup leaves the timeout timer uncleared; if operation wins, the timer still fires and rejects with no handler → unhandled rejection. Use AbortSignal per command and clear the timer in finally. This also truly aborts pending commands client‑side. (github.com)
Apply:
```diff
- private async executeWithRetry<T>(
-   operation: () => Promise<T>,
+ private async executeWithRetry<T>(
+   operation: (c: any) => Promise<T>,
    operationName: string,
    maxRetries: number = 3,
    baseDelay: number = 1000,
    operationTimeout: number = 8000 // 8 seconds for individual operations
  ): Promise<T> {
-   let lastError: Error = new Error('Unknown error');
+   let lastError: Error = new Error('Unknown error');
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
-       // Wrap operation with timeout for Railway optimization
-       const timeoutPromise = new Promise<never>((_, reject) => {
-         setTimeout(() => {
-           reject(new Error(`Redis operation ${operationName} timed out after ${operationTimeout}ms`));
-         }, operationTimeout);
-       });
-
-       return await Promise.race([operation(), timeoutPromise]);
+       // Abortable per-command timeout
+       const ac = new AbortController();
+       const abortingClient = (this.client as any).withAbortSignal?.(ac.signal) ?? this.client;
+       let timer: NodeJS.Timeout | undefined;
+       try {
+         timer = setTimeout(() => ac.abort(), operationTimeout);
+         return await operation(abortingClient);
+       } finally {
+         if (timer) clearTimeout(timer);
+       }
      } catch (error) {
        lastError = error as Error;
        if (attempt === maxRetries) {
          LogEngine.error(`Redis operation ${operationName} failed after ${maxRetries} attempts: ${lastError.message}`);
          break;
        }
-       // Check if it's a timeout or connection error
-       const isRetryableError = (
-         lastError.message.includes('ETIMEDOUT') ||
-         lastError.message.includes('ECONNRESET') ||
-         lastError.message.includes('ENOTFOUND') ||
-         lastError.message.includes('Connection is closed') ||
-         lastError.message.includes('timed out')
-       );
+       // Retry only on transient errors
+       const msg = (lastError?.message || '').toString();
+       const name = (lastError as any)?.name || '';
+       const isRetryableError =
+         name === 'AbortError' || // aborted by timeout
+         /ETIMEDOUT|ECONNRESET|ECONNREFUSED|EAI_AGAIN|ENOTFOUND|Connection is closed|Socket closed unexpectedly|timed out/i.test(msg);
        if (!isRetryableError) {
          LogEngine.error(`Redis operation ${operationName} failed with non-retryable error: ${lastError.message}`);
          break;
        }
        const delay = baseDelay * Math.pow(2, attempt - 1); // Exponential backoff
        LogEngine.warn(`Redis operation ${operationName} failed (attempt ${attempt}/${maxRetries}), retrying in ${delay}ms: ${lastError.message}`);
        await new Promise(resolve => setTimeout(resolve, delay));
        // Try to reconnect if connection is closed
-       if (!this.isConnected()) {
+       if (!(this.client as any).isReady) { // prefer isReady over isOpen
          try {
            await this.connect();
          } catch (reconnectError) {
            LogEngine.warn(`Failed to reconnect during retry: ${reconnectError}`);
          }
        }
      }
    }
    throw lastError;
  }
```
Note: node‑redis exposes isReady/connection events; prefer isReady checks. (redis.io)
Also consider a dedicated publisher client with disableOfflineQueue: true if you must avoid replays. (redis.io)
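A minimal sketch of such a dedicated publisher, assuming node-redis v4; the variable names and the error handling are illustrative.

```ts
import { createClient } from 'redis';

// Publisher-only connection: with the offline queue disabled, commands issued while the
// socket is down fail fast instead of being buffered and replayed on reconnect, leaving
// the retry wrapper as the only place that decides whether an enqueue is reissued.
const publisherClient = createClient({
  url: process.env.REDIS_URL, // assumed to match the compose REDIS_URL
  disableOfflineQueue: true,
  socket: { connectTimeout: 10000 },
});

publisherClient.on('error', (err) => console.error('Redis publisher error', err));

export async function connectPublisher(): Promise<void> {
  await publisherClient.connect();
}
```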
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
/**
 * Railway-optimized retry logic for Redis operations with timeout handling
 */
private async executeWithRetry<T>(
  operation: (c: any) => Promise<T>,
  operationName: string,
  maxRetries: number = 3,
  baseDelay: number = 1000,
  operationTimeout: number = 8000 // 8 seconds for individual operations
): Promise<T> {
  let lastError: Error = new Error('Unknown error');
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      // Abortable per-command timeout
      const ac = new AbortController();
      const abortingClient = (this.client as any).withAbortSignal?.(ac.signal) ?? this.client;
      let timer: NodeJS.Timeout | undefined;
      try {
        timer = setTimeout(() => ac.abort(), operationTimeout);
        return await operation(abortingClient);
      } finally {
        if (timer) clearTimeout(timer);
      }
    } catch (error) {
      lastError = error as Error;
      if (attempt === maxRetries) {
        LogEngine.error(`Redis operation ${operationName} failed after ${maxRetries} attempts: ${lastError.message}`);
        break;
      }
      // Retry only on transient errors
      const msg = (lastError?.message || '').toString();
      const name = (lastError as any)?.name || '';
      const isRetryableError =
        name === 'AbortError' || // aborted by timeout
        /ETIMEDOUT|ECONNRESET|ECONNREFUSED|EAI_AGAIN|ENOTFOUND|Connection is closed|Socket closed unexpectedly|timed out/i.test(msg);
      if (!isRetryableError) {
        LogEngine.error(`Redis operation ${operationName} failed with non-retryable error: ${lastError.message}`);
        break;
      }
      const delay = baseDelay * Math.pow(2, attempt - 1); // Exponential backoff
      LogEngine.warn(`Redis operation ${operationName} failed (attempt ${attempt}/${maxRetries}), retrying in ${delay}ms: ${lastError.message}`);
      await new Promise(resolve => setTimeout(resolve, delay));
      // Try to reconnect if connection is closed
      if (!(this.client as any).isReady) { // prefer isReady over isOpen
        try {
          await this.connect();
        } catch (reconnectError) {
          LogEngine.warn(`Failed to reconnect during retry: ${reconnectError}`);
        }
      }
    }
  }
  throw lastError;
}
```
🤖 Prompt for AI Agents
In src/services/redisService.ts around lines 69–130, the current Promise.race
timeout leaves the timer active and can cause unhandled rejections and doesn't
abort the underlying Redis command; replace the timeout Promise with an
AbortController-based approach: create an AbortController for each attempt, pass
its signal into the Redis command options so the client can abort the pending
command, set a timer that calls controller.abort() and logs/rejects with a
timeout error, and clear that timer in a finally block after the race to avoid
stray rejections; also prefer checking client.isReady() (node-redis) instead of
this.isConnected() and keep the existing reconnect attempt logic, and consider
using a dedicated publisher client with disableOfflineQueue: true for
publish-only usage to avoid replaying commands.
Summary by CodeRabbit
- New Features
- Chores
- Documentation