feat: store credentials in redis and better ui and optimization #4

TimilsinaBimal merged 13 commits into main from …
Conversation
Commits:

- …s automatically (#2)
- Switch credential storage from URLs to redis+tokens
- Add /health endpoint, and refuse to save credentials if TOKEN_SALT hasn't been changed from the default
- Lock down model_config in response to PR review
- Use X-Forwarded headers to generate HTTPS URL when accessed over an SSL-terminating proxy
- Add background task to update catalogs (now that we store credentials), validate credentials on setup
- Validate tokens before storing them, mask sensitive inputs by default
- Don't abandon a catalog when email/pass are not present if authKey IS present
- Make TMDB addon URL customizable
- Fix bad indentation
- Remove unused route (went a different way for the announcement)
- …nagement:
  - Migrate settings import paths to a unified structure.
  - Replace startup/shutdown event handlers with a lifespan context manager for better resource management.
  - Enhance static file serving logic to ensure compatibility with project structure.
  - Update dependencies in to include and .
  - Allow extra fields in settings configuration for flexibility.
Summary of Changes

Hello @TimilsinaBimal, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces significant enhancements to the Watchly Stremio addon, focusing on improved security, user experience, and performance. By storing credentials in Redis and tokenizing URLs, it prevents credential exposure in install links. The UI has been modernized, and automatic background catalog refreshes keep content up to date. Together these changes provide a more secure, user-friendly, and efficient experience for Watchly users.
Code Review
This pull request is a significant and well-executed enhancement, migrating from an insecure credential-in-URL system to a secure, token-based authentication flow using Redis. The introduction of background catalog updates, a health check endpoint, and a completely revamped UI are excellent additions. The code quality is high, with good structure and use of modern Python features. My review includes a few suggestions to further improve security, resource management, and code clarity, primarily focusing on ensuring the new TOKEN_SALT is handled with maximum security, proper Redis connection lifecycle management, and refining some redundant code and exception handling.
```python
ADDON_ID: str = "com.bimal.watchly"
ADDON_NAME: str = "Watchly"
REDIS_URL: str = "redis://localhost:6379/0"
TOKEN_SALT: str = "change-me"
```
The default value "change-me" for TOKEN_SALT is insecure. While the application correctly includes a runtime check to prevent using this default, it's crucial to add a comment here to strongly advise developers to set a unique, secret value in their production environment. An insecure or leaked salt would allow an attacker to forge access tokens.
Suggested change:

```diff
-TOKEN_SALT: str = "change-me"
+TOKEN_SALT: str = "change-me"  # IMPORTANT: MUST be replaced with a long, random, secret string in production.
```
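For generating such a value, Python's standard `secrets` module is one option (a sketch; how the result is stored and injected depends on the deployment):

```python
import secrets

# Generate a long, URL-safe random string to use as TOKEN_SALT.
# Run once, then set the result as an environment variable or secret;
# never commit it to the repository.
token_salt = secrets.token_urlsafe(48)  # 48 random bytes -> 64 characters
print(token_salt)
```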
```python
class TokenStore:
    """Redis-backed store for user credentials and auth tokens."""

    KEY_PREFIX = "watchly:token:"

    def __init__(self) -> None:
        self._client: redis.Redis | None = None
        self._cipher: Fernet | None = None

        if not settings.REDIS_URL:
            logger.warning("REDIS_URL is not set. Token storage will fail until a Redis instance is configured.")
        if not settings.TOKEN_SALT or settings.TOKEN_SALT == "change-me":
            logger.warning(
                "TOKEN_SALT is missing or using the default placeholder. Set a strong value to secure tokens."
            )

    def _ensure_secure_salt(self) -> None:
        if not settings.TOKEN_SALT or settings.TOKEN_SALT == "change-me":
            logger.error("Refusing to store credentials because TOKEN_SALT is unset or using the insecure default.")
            raise RuntimeError(
                "Server misconfiguration: TOKEN_SALT must be set to a non-default value before storing credentials."
            )

    def _get_cipher(self) -> Fernet:
        """Get or create Fernet cipher instance based on TOKEN_SALT."""
        if self._cipher is None:
            # Derive a 32-byte key from TOKEN_SALT using SHA256, then URL-safe base64 encode it.
            # This ensures we always have a valid Fernet key regardless of the salt's format.
            key_bytes = hashlib.sha256(settings.TOKEN_SALT.encode()).digest()
            fernet_key = base64.urlsafe_b64encode(key_bytes)
            self._cipher = Fernet(fernet_key)
        return self._cipher

    async def _get_client(self) -> redis.Redis:
        if self._client is None:
            self._client = redis.from_url(settings.REDIS_URL, decode_responses=True, encoding="utf-8")
        return self._client

    def _hash_token(self, token: str) -> str:
        secret = settings.TOKEN_SALT.encode("utf-8")
        return hmac.new(secret, msg=token.encode("utf-8"), digestmod=hashlib.sha256).hexdigest()

    def _format_key(self, hashed_token: str) -> str:
        return f"{self.KEY_PREFIX}{hashed_token}"

    def _normalize_payload(self, payload: dict[str, Any]) -> dict[str, Any]:
        return {
            "username": (payload.get("username") or "").strip() or None,
            "password": payload.get("password") or None,
            "authKey": (payload.get("authKey") or "").strip() or None,
            "includeWatched": bool(payload.get("includeWatched", False)),
        }

    def _derive_token_value(self, payload: dict[str, Any]) -> str:
        canonical = {
            "username": payload.get("username") or "",
            "password": payload.get("password") or "",
            "authKey": payload.get("authKey") or "",
            "includeWatched": bool(payload.get("includeWatched", False)),
        }
        serialized = json.dumps(canonical, sort_keys=True, separators=(",", ":"))
        secret = settings.TOKEN_SALT.encode("utf-8")
        return hmac.new(secret, serialized.encode("utf-8"), hashlib.sha256).hexdigest()

    async def store_payload(self, payload: dict[str, Any]) -> tuple[str, bool]:
        self._ensure_secure_salt()
        normalized = self._normalize_payload(payload)
        token = self._derive_token_value(normalized)
        hashed = self._hash_token(token)
        key = self._format_key(hashed)

        # JSON encode -> encrypt -> store
        json_str = json.dumps(normalized)
        encrypted_value = self._get_cipher().encrypt(json_str.encode()).decode("utf-8")

        client = await self._get_client()
        existing = await client.exists(key)

        if settings.TOKEN_TTL_SECONDS and settings.TOKEN_TTL_SECONDS > 0:
            await client.setex(key, settings.TOKEN_TTL_SECONDS, encrypted_value)
            logger.info(
                "Stored encrypted credential payload with TTL %s seconds",
                settings.TOKEN_TTL_SECONDS,
            )
        else:
            await client.set(key, encrypted_value)
            logger.info("Stored encrypted credential payload without expiration")
        return token, not bool(existing)

    async def get_payload(self, token: str) -> dict[str, Any] | None:
        hashed = self._hash_token(token)
        key = self._format_key(hashed)
        client = await self._get_client()
        encrypted_raw = await client.get(key)

        if encrypted_raw is None:
            return None

        try:
            # Decrypt -> JSON decode
            decrypted_json = self._get_cipher().decrypt(encrypted_raw.encode()).decode("utf-8")
            return json.loads(decrypted_json)
        except (InvalidToken, json.JSONDecodeError, UnicodeDecodeError):
            logger.warning("Failed to decrypt or decode cached payload for token. Key might have changed.")
            return None

    async def delete_token(self, token: str) -> None:
        hashed = self._hash_token(token)
        key = self._format_key(hashed)
        client = await self._get_client()
        await client.delete(key)

    async def iter_payloads(self) -> AsyncIterator[tuple[str, dict[str, Any]]]:
        """Iterate over all stored payloads, yielding key and payload."""
        try:
            client = await self._get_client()
        except (redis.RedisError, OSError) as exc:
            logger.warning("Skipping credential iteration; Redis unavailable: %s", exc)
            return

        pattern = f"{self.KEY_PREFIX}*"
        cipher = self._get_cipher()

        try:
            async for key in client.scan_iter(match=pattern):
                try:
                    encrypted_raw = await client.get(key)
                except (redis.RedisError, OSError) as exc:
                    logger.warning("Failed to fetch payload for %s: %s", key, exc)
                    continue

                if encrypted_raw is None:
                    continue

                try:
                    decrypted_json = cipher.decrypt(encrypted_raw.encode()).decode("utf-8")
                    payload = json.loads(decrypted_json)
                except (InvalidToken, json.JSONDecodeError, UnicodeDecodeError):
                    logger.warning("Failed to decrypt payload for key %s. Skipping.", key)
                    continue

                yield key, payload
        except (redis.RedisError, OSError) as exc:
            logger.warning("Failed to scan credential tokens: %s", exc)


token_store = TokenStore()
```
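One property of this design worth noting: `_derive_token_value` is an HMAC over a canonical JSON serialization, so the same normalized credentials always derive the same token, which is why `store_payload` can report whether the key already existed. The derive-then-hash scheme can be sketched standalone (hypothetical salt value, standard library only; the real code reads `settings.TOKEN_SALT`):

```python
import hashlib
import hmac
import json

SALT = b"example-salt"  # hypothetical; never use a fixed salt in production

def derive_token(payload: dict) -> str:
    # Canonicalize: fixed field set, sorted keys, compact separators.
    canonical = {
        "username": payload.get("username") or "",
        "password": payload.get("password") or "",
        "authKey": payload.get("authKey") or "",
        "includeWatched": bool(payload.get("includeWatched", False)),
    }
    serialized = json.dumps(canonical, sort_keys=True, separators=(",", ":"))
    return hmac.new(SALT, serialized.encode(), hashlib.sha256).hexdigest()

def hash_token(token: str) -> str:
    # Only this hash is used as the Redis key, so a Redis dump
    # does not reveal the tokens embedded in users' install URLs.
    return hmac.new(SALT, token.encode(), hashlib.sha256).hexdigest()

t1 = derive_token({"username": "alice", "password": "pw"})
t2 = derive_token({"username": "alice", "password": "pw"})
redis_key = "watchly:token:" + hash_token(t1)
```
Because derivation is deterministic, re-submitting the same credentials reproduces the original install URL instead of minting a new token.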
The TokenStore class creates a Redis client connection pool in _get_client but lacks a corresponding method to close it. Since token_store is a global singleton, this can lead to resource leaks (dangling connections) when the application shuts down.
To fix this, you should add a close method to the TokenStore class and call it during the application's shutdown phase in the FastAPI lifespan manager.
1. Add a close method to TokenStore:
```python
async def close(self):
    if self._client:
        await self._client.aclose()  # Use aclose() for the async client
        self._client = None
        logger.info("Redis client for token store closed.")
```

2. Call this method in app/core/app.py's lifespan context manager:

```python
@asynccontextmanager
async def lifespan(app: FastAPI):
    # ... startup logic ...
    yield
    # ... shutdown logic ...
    from app.services.token_store import token_store
    await token_store.close()
```

The code under review:

```python
dynamic_announcement = os.getenv("ANNOUNCEMENT_HTML")
if dynamic_announcement is None:
    dynamic_announcement = settings.ANNOUNCEMENT_HTML
announcement_html = (dynamic_announcement or "").strip()
snippet = ""
if announcement_html:
    snippet = '\n    <div class="announcement">' f"{announcement_html}" "</div>"
html_content = html_content.replace("<!-- ANNOUNCEMENT_HTML -->", snippet, 1)
```
The logic to get the announcement HTML is a bit redundant. settings.ANNOUNCEMENT_HTML is already populated from environment variables by pydantic-settings, with environment variables taking precedence over .env files. You can simplify this by just using settings.ANNOUNCEMENT_HTML.
Additionally, the f-string concatenation for creating the HTML snippet can be made more readable.
Suggested change:

```diff
-dynamic_announcement = os.getenv("ANNOUNCEMENT_HTML")
-if dynamic_announcement is None:
-    dynamic_announcement = settings.ANNOUNCEMENT_HTML
-announcement_html = (dynamic_announcement or "").strip()
+announcement_html = (settings.ANNOUNCEMENT_HTML or "").strip()
 snippet = ""
 if announcement_html:
-    snippet = '\n    <div class="announcement">' f"{announcement_html}" "</div>"
+    snippet = f'\n    <div class="announcement">{announcement_html}</div>'
 html_content = html_content.replace("<!-- ANNOUNCEMENT_HTML -->", snippet, 1)
```
```python
except Exception as exc:  # pragma: no cover - defensive
    logger.error("Unexpected error while validating credentials: {}", exc, exc_info=True)
    raise HTTPException(
        status_code=502,
        detail="Unable to reach Stremio right now. Please try again later.",
    ) from exc
```
Catching a broad Exception can hide the underlying cause of errors and make debugging difficult. It's better to catch more specific exceptions that you expect, like httpx.RequestError for network issues, and let unexpected exceptions propagate to be handled by a global error handler. This provides better observability into what's failing.
Suggested change:

```diff
-except Exception as exc:  # pragma: no cover - defensive
-    logger.error("Unexpected error while validating credentials: {}", exc, exc_info=True)
+except httpx.RequestError as exc:  # pragma: no cover - defensive
+    logger.error("Unable to reach Stremio for credential validation: {}", exc, exc_info=True)
     raise HTTPException(
         status_code=502,
         detail="Unable to reach Stremio right now. Please try again later.",
     ) from exc
```
```python
except Exception as e:
    logger.error(f"Error authenticating with Stremio: {e}", exc_info=True)
    raise
```
Catching a broad Exception here can obscure the root cause of an authentication failure. It's better to catch specific exceptions like httpx.RequestError for network problems and let other unexpected errors be handled by the caller or a global handler. This will improve logging and make troubleshooting easier.
Suggested change:

```diff
-except Exception as e:
-    logger.error(f"Error authenticating with Stremio: {e}", exc_info=True)
+except httpx.RequestError as e:
+    logger.error(f"Network error while authenticating with Stremio: {e}", exc_info=True)
     raise
```