
feat(tts): rate limiting, caching, error handling, input sanitization#590

Merged
hman38705 merged 1 commit into solutions-plug:main from El-Chapo-Npm:feat/tts-rate-limit-cache-sanitize-errors
Apr 25, 2026

Conversation

@El-Chapo-Npm
Contributor

This PR hardens the TTS service across four dimensions — protecting provider quota, reducing latency on repeated requests, preventing crashes on provider failure, and blocking malicious input before it reaches the API.


What changed

services/tts/src/TTSService.ts
services/tts/src/__tests__/rate-limit-cache-sanitize-errors.test.ts


Rate limiting (#531)

The service had no guard against request floods, making it trivial to exhaust provider quota. A RateLimiter class now enforces a sliding window counter per key (IP address or user ID). Limits are fully configurable via TTSConfig.rateLimit (maxRequests, windowMs). Exceeding the limit throws a RateLimitError which maps to HTTP 429. Expired windows are evicted on a 60s interval to keep memory bounded. Metrics (totalChecks, totalExceeded, currentCounts) are exposed via getRateLimitMetrics().
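A minimal sketch of the sliding-window limiter described above. The class and config names (`maxRequests`, `windowMs`, `RateLimitError`) follow the PR text, but the internals here are assumptions, not the actual `TTSService.ts` implementation:

```typescript
// Hypothetical sliding-window rate limiter, keyed by IP or user ID.
class RateLimitError extends Error {
  readonly statusCode = 429;
  constructor(key: string) {
    super(`Rate limit exceeded for ${key}`);
  }
}

class RateLimiter {
  // Per-key timestamps of requests inside the current window.
  private timestamps = new Map<string, number[]>();

  constructor(private maxRequests: number, private windowMs: number) {}

  // Throws RateLimitError when the key exceeds maxRequests within windowMs.
  check(key: string, now: number = Date.now()): void {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.timestamps.get(key) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.maxRequests) {
      throw new RateLimitError(key);
    }
    recent.push(now);
    this.timestamps.set(key, recent);
  }
}
```

A real implementation would also evict idle keys on an interval (the PR uses 60s) so the map stays bounded.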

Audio caching (#532)

Every request was hitting the provider even for identical text. An AudioCache class now caches generated audio buffers keyed by a SHA-256 hash of provider:voiceId:text. Cache TTL and max entry count are configurable via TTSConfig.cache. When the entry limit is reached, the oldest entry is evicted. Cache hit rate and eviction counts are tracked and exposed via getCacheMetrics().
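The keying and eviction scheme can be sketched as follows. Only the SHA-256 key format (`provider:voiceId:text`), TTL, bounded entry count, and oldest-first eviction come from the PR text; the method names and internals are assumptions:

```typescript
import { createHash } from "crypto";

// Hypothetical audio cache: entries keyed by SHA-256 of provider:voiceId:text,
// with per-entry TTL and oldest-first eviction when maxEntries is reached.
class AudioCache {
  private entries = new Map<string, { buf: Buffer; expires: number }>();

  constructor(private ttlMs: number, private maxEntries: number) {}

  static key(provider: string, voiceId: string, text: string): string {
    return createHash("sha256")
      .update(`${provider}:${voiceId}:${text}`)
      .digest("hex");
  }

  get(key: string, now: number = Date.now()): Buffer | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expires <= now) {
      // TTL expired: drop the entry and report a miss.
      this.entries.delete(key);
      return undefined;
    }
    return entry.buf;
  }

  set(key: string, buf: Buffer, now: number = Date.now()): void {
    if (this.entries.size >= this.maxEntries) {
      // Map iterates in insertion order, so the first key is the oldest.
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { buf, expires: now + this.ttlMs });
  }
}
```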

Error handling & fallback (#533)

Provider errors were propagating as unhandled exceptions, crashing the service. All provider calls are now wrapped in structured try/catch blocks. Failures are surfaced as TTSProviderError with the provider name, a descriptive message, and an appropriate status code. When the primary provider fails, the service automatically retries with the secondary provider (ElevenLabs ↔ Google TTS) if one is configured. Every failure is logged with console.error, and callers receive a meaningful error response rather than an uncaught exception.
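The wrap-and-fallback flow can be sketched like this. `TTSProviderError` and the fallback order come from the PR text; the `Provider` interface, function name, and default 502 status are illustrative assumptions:

```typescript
// Hypothetical error wrapper and primary→secondary fallback.
class TTSProviderError extends Error {
  constructor(
    readonly provider: string,
    message: string,
    readonly statusCode: number = 502,
  ) {
    super(`[${provider}] ${message}`);
  }
}

interface Provider {
  name: string;
  synthesize(text: string): Promise<Buffer>;
}

async function synthesizeWithFallback(
  text: string,
  primary: Provider,
  secondary?: Provider,
): Promise<Buffer> {
  try {
    return await primary.synthesize(text);
  } catch (err) {
    // Structured logging on every failure, as the PR describes.
    console.error(`TTS provider ${primary.name} failed:`, err);
    if (!secondary) {
      throw new TTSProviderError(primary.name, String(err));
    }
    try {
      return await secondary.synthesize(text);
    } catch (err2) {
      console.error(`TTS fallback ${secondary.name} failed:`, err2);
      throw new TTSProviderError(secondary.name, String(err2));
    }
  }
}
```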

Input sanitization (#534)

Input text was passed to providers raw, leaving the door open for SSML injection. A sanitizeInput() function now runs on every enqueue() call before any provider interaction. It strips all SSML/XML tags, enforces a 5000-character maximum (MAX_INPUT_LENGTH), and normalizes whitespace. Invalid input throws InputValidationError (HTTP 400) immediately, before any network call is made.
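A minimal sketch of the sanitization step, assuming the three rules listed above (tag stripping, 5000-char cap, whitespace normalization); the exact regexes and rule order in `TTSService.ts` may differ:

```typescript
// Hypothetical input sanitizer matching the PR's description.
const MAX_INPUT_LENGTH = 5000;

class InputValidationError extends Error {
  readonly statusCode = 400;
}

function sanitizeInput(text: string): string {
  const stripped = text
    .replace(/<[^>]*>/g, " ") // drop SSML/XML tags like <speak> or <break/>
    .replace(/\s+/g, " ")     // collapse runs of whitespace
    .trim();
  if (stripped.length === 0) {
    throw new InputValidationError("Input is empty after sanitization");
  }
  if (stripped.length > MAX_INPUT_LENGTH) {
    throw new InputValidationError(
      `Input exceeds ${MAX_INPUT_LENGTH} characters`,
    );
  }
  return stripped;
}
```

Because this runs at the top of `enqueue()`, invalid input fails fast with a 400 before any provider call is attempted.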


Testing

All four features are covered in rate-limit-cache-sanitize-errors.test.ts — window resets, per-key isolation, TTL expiry, LRU eviction, SSML stripping, length enforcement, provider error wrapping, and fallback behaviour.


Closes #531
Closes #532
Closes #533
Closes #534

- Issue solutions-plug#531: Per-IP/user rate limiting with configurable window and 429 responses; metrics tracked via RateLimiter
- Issue solutions-plug#532: Audio caching by SHA-256 content hash with configurable TTL and bounded maxEntries; cache hit rate tracked
- Issue solutions-plug#533: Provider errors caught and wrapped as TTSProviderError; automatic fallback to secondary provider with structured logging
- Issue solutions-plug#534: Input sanitization strips SSML/XML tags, enforces MAX_INPUT_LENGTH (5000 chars), normalizes whitespace

@drips-wave

drips-wave Bot commented Apr 25, 2026

@El-Chapo-Npm Great news! 🎉 Based on an automated assessment of this PR, the linked Wave issue(s) no longer count against your application limits.

You can now already apply to more issues while waiting for a review of this PR. Keep up the great work! 🚀

Learn more about application limits

@hman38705 hman38705 merged commit 825390c into solutions-plug:main Apr 25, 2026
