Harden package bootstrap and remove empty module files #3

aminaallali wants to merge 1 commit into main from
Conversation
Review Summary by Qodo

Harden package bootstrap, remove empty modules, and implement complete JIBAS backend

Walkthrough / Description

• Implement guarded imports in app/__init__.py with SQLAlchemy shim for constrained environments
• Add module docstrings to previously empty package __init__.py files across API, services, tasks, and utils
• Create complete JIBAS backend with authentication, image-wheel challenge system, and database models
• Add Docker, docker-compose, and database migration configuration for deployment
• Include pytest tests validating shuffle engine determinism and response verification

Diagram

flowchart LR
A["Guarded Imports<br/>Flask/SQLAlchemy"] -->|"Graceful Degradation"| B["SQLAlchemy Shim<br/>for Constrained Envs"]
B -->|"Enables"| C["Pure-Python Module<br/>Imports"]
D["Package __init__.py<br/>Files"] -->|"Add Docstrings"| E["Remove Empty<br/>Modules"]
F["Auth Service<br/>Challenge-Response"] -->|"Uses"| G["Shuffle Engine<br/>Image Wheels"]
G -->|"Generates"| H["Canvas Data<br/>with Tiles"]
I["User Registration"] -->|"Stores"| J["Composite Hash<br/>& Image Set"]
K["Rate Limiter<br/>Middleware"] -->|"Protects"| L["API Routes<br/>/api/v1"]
M["Database Models<br/>User/Challenge"] -->|"Persists"| N["PostgreSQL<br/>+ Redis Cache"]
File Changes

1. app/__init__.py
📝 Walkthrough

Introduces a complete Flask backend application (JIBAS) implementing a challenge-response image-wheel authentication system. Includes database models, services for authentication/registration/encryption, API routes with rate limiting, Docker deployment configuration, middleware for JWT and rate-limiting, and utility functions for cryptographic and mathematical operations.

Sequence Diagram(s)

sequenceDiagram
participant Client
participant API as POST /api/v1/register/*
participant RegService as RegistrationService
participant ShuffleEngine
participant ImageService
participant CryptoService
participant Database
participant Redis
rect rgba(100, 150, 200, 0.5)
Note over Client,Redis: Registration Initiation
Client->>API: POST /register/init {username, email}
API->>RegService: initiate_registration(username, email)
RegService->>Database: Check username/email uniqueness
RegService->>ShuffleEngine: generate_seed()
RegService->>ImageService: assign_images_to_user(wheel_size)
RegService->>ShuffleEngine: derive_permutation() x2
RegService->>ImageService: generate_canvas_data()
RegService->>CryptoService: generate_token()
RegService->>Redis: store_temp_data(token, {seed, images...}, ttl)
RegService->>Database: Log registration_initiated event
API-->>Client: {registration_token, canvas_data, wheel_size}
end
rect rgba(150, 200, 100, 0.5)
Note over Client,Redis: Registration Completion
Client->>API: POST /register/complete {registration_token, outer_rotation, inner_rotation}
API->>RegService: complete_registration(token, rotations)
RegService->>Redis: retrieve_temp_data(token)
RegService->>ShuffleEngine: Compute offsets from rotations
RegService->>CryptoService: create_composite_hash(seed, offsets)
RegService->>Database: Create User record with hash
RegService->>Database: Log registration_completed event
RegService->>Redis: delete_temp_data(token)
API-->>Client: {user_id, confirmation_message}
end
sequenceDiagram
participant Client
participant API as POST /api/v1/auth/*
participant AuthService
participant ShuffleEngine
participant ImageService
participant CryptoService
participant Database
participant Redis
rect rgba(200, 150, 100, 0.5)
Note over Client,Redis: Challenge Creation
Client->>API: POST /auth/challenge {username}
API->>AuthService: create_login_challenge(username, ip, user_agent)
AuthService->>Database: Fetch User by username
AuthService->>Database: Check account lockout status
AuthService->>Database: Invalidate old pending challenges
AuthService->>ShuffleEngine: generate_challenge(base_seed, assigned_images, offsets)
AuthService->>ImageService: generate_canvas_data(outer_order, inner_order, session_seed)
AuthService->>Database: Create AuthChallenge record
AuthService->>Database: Log login_attempt event
API-->>Client: {challenge_id, canvas_data, wheel_size, expiry}
end
rect rgba(100, 200, 150, 0.5)
Note over Client,Redis: Challenge Verification
Client->>API: POST /auth/verify {challenge_id, inner_rotation, interaction_data}
API->>AuthService: verify_login_response(challenge_id, response, ip, user_agent)
AuthService->>Database: Fetch and validate AuthChallenge
AuthService->>ShuffleEngine: Verify anti-automation via interaction_data
AuthService->>ShuffleEngine: verify_response(user_response, expected_hash)
alt Response Valid
AuthService->>Database: Update challenge status=answered, was_successful=true
AuthService->>CryptoService: generate_jwt(user_id, username)
AuthService->>Database: Reset user failed_attempts
AuthService->>Database: Log login_success event
API-->>Client: {status: success, token, user_info}
else Response Invalid
AuthService->>Database: Increment user failed_attempts
alt Lockout Threshold Reached
AuthService->>Database: Set locked_until timestamp
end
AuthService->>Database: Update challenge status=answered, was_successful=false
AuthService->>Database: Log login_failure event
API-->>Client: {status: failure, reason, locked_until?}
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

This PR introduces heterogeneous changes across multiple domains (authentication, image handling, cryptography, data models, API design) with substantial logic density in core services (shuffle_engine.py, auth_service.py, registration_service.py). While individual files follow consistent patterns, each service requires separate reasoning due to distinct responsibilities and integration points with the database, Redis, and external libraries (PIL, JWT, PBKDF2).
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Code Review by Qodo
def create_login_challenge(self, username: str, ip_address: str = None, user_agent: str = None) -> dict:
    user = User.query.filter_by(username=username, is_active=True).first()
    if not user:
        return self._generate_dummy_challenge()
1. Unknown logins not audited 📘 Rule violation ✧ Quality
create_login_challenge() returns a dummy challenge for unknown users without writing an audit log entry. This leaves critical login-attempt activity missing from the audit trail, reducing incident reconstruction capability.
Agent Prompt
## Issue description
Login attempts for unknown users are not audited because the service returns a dummy challenge early without persisting an `AuditLog` record.
## Issue Context
Compliance requires critical actions like login attempts to be recorded with enough context to reconstruct events.
## Fix Focus Areas
- app/services/auth_service.py[20-24]
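As a concrete illustration, the fix amounts to writing an audit entry before the early dummy-challenge return. The sketch below is pure Python with stand-ins: `audit_log` replaces the real AuditLog model, and `known_users` replaces the `User` query, since neither is fully shown in the diff.

```python
audit_log = []  # stand-in for the AuditLog table

def _generate_dummy_challenge():
    # Decoy challenge so unknown usernames are indistinguishable to clients.
    return {"challenge_id": "dummy", "canvas_data": {}}

def create_login_challenge(username, known_users, ip_address=None, user_agent=None):
    if username not in known_users:
        # Record the attempt even though we answer with a decoy challenge,
        # so the audit trail still covers unknown-user probes.
        audit_log.append({
            "event": "login_attempt_unknown_user",
            "username": username,
            "ip": ip_address,
            "user_agent": user_agent,
        })
        return _generate_dummy_challenge()
    return {"challenge_id": "real", "canvas_data": {}}
```

The key property: the client-visible response is unchanged, but the server-side trail now records every attempt.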
try:
    result = reg_service.initiate_registration(username=data["username"].strip(), email=data["email"].strip().lower())
    return jsonify(result), 200
except ValueError as exc:
    return jsonify({"error": str(exc)}), 409
2. register_init() leaks str(exc) 📘 Rule violation ⛨ Security
The registration endpoint returns str(exc) directly to clients, including messages that confirm whether a username/email already exists. This violates secure error handling by enabling account enumeration and exposing internal decision details to end users.
Agent Prompt
## Issue description
`register_init()` returns `str(exc)` to the client, which can reveal whether a username/email is already registered.
## Issue Context
Secure error handling requires generic user-facing errors and must not leak account existence.
## Fix Focus Areas
- app/api/auth_routes.py[25-29]
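A minimal sketch of the suggested remediation: log the specific `ValueError` server-side and return one generic body either way. The `register_init_safe` wrapper and `GENERIC_CONFLICT` constant are illustrative names, not part of the PR.

```python
import logging

logger = logging.getLogger("registration")

# Generic message shown to clients; the specific reason stays server-side.
GENERIC_CONFLICT = {"error": "Registration could not be completed with the provided details."}

def register_init_safe(initiate_registration, username, email):
    """Wrap the service call so ValueError details are logged, not echoed."""
    try:
        result = initiate_registration(username, email)
        return result, 200
    except ValueError as exc:
        # Revealing "username exists" vs "email exists" enables enumeration,
        # so log the detail and return the same generic body in both cases.
        logger.info("registration rejected: %s", exc)
        return GENERIC_CONFLICT, 409
```

Returning an identical body and status for both conflict causes removes the enumeration oracle while keeping the detail available in server logs.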
def initiate_registration(self, username: str, email: str) -> dict:
    if User.query.filter_by(username=username).first():
        raise ValueError("اسم المستخدم موجود مسبقاً")  # "Username already exists"
    if User.query.filter_by(email=email).first():
        raise ValueError("البريد الإلكتروني مسجل مسبقاً")  # "Email already registered"
3. Missing username/email validation 📘 Rule violation ⛨ Security
The registration flow only checks presence/uniqueness but does not validate format/constraints for external inputs like username and email. This increases the risk of bad or malicious input being persisted or causing downstream errors.
Agent Prompt
## Issue description
External inputs `username` and `email` are not validated for format/constraints before use.
## Issue Context
Security-first data handling requires validation/sanitization of external inputs to prevent invalid data storage and abuse.
## Fix Focus Areas
- app/services/registration_service.py[16-21]
- app/utils/validators.py[4-12]
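One possible shape for the missing validation layer in app/utils/validators.py. The exact constraints (lengths, allowed characters) are assumptions for illustration; the project should pick its own rules.

```python
import re

# Illustrative constraints; the project's actual policy may differ.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_registration_input(username: str, email: str) -> list:
    """Return a list of validation errors; an empty list means the input is acceptable."""
    errors = []
    if not USERNAME_RE.fullmatch(username or ""):
        errors.append("username must be 3-32 characters: letters, digits, underscore")
    # RFC 5321 caps addresses at 254 octets; the regex is a cheap shape check,
    # not a full RFC 5322 parser.
    if len(email or "") > 254 or not EMAIL_RE.fullmatch(email or ""):
        errors.append("email address is not well-formed")
    return errors
```

Calling this before the uniqueness queries rejects malformed input early and keeps bad data out of the database.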
key = f"ratelimit:{request.remote_addr}:{f.__name__}"
current = r.get(key)

if current is None:
    r.setex(key, window, 1)
elif int(current) >= requests:
    # "Rate limit exceeded. Try again later."
    return jsonify({"error": "تم تجاوز الحد المسموح. حاول لاحقاً", "retry_after": r.ttl(key)}), 429
else:
    r.incr(key)

return f(*args, **kwargs)
4. Redis errors unhandled in limiter 📘 Rule violation ⛯ Reliability
The rate limiter performs Redis operations without handling connection/timeouts, so Redis outages can crash request handling instead of degrading gracefully. This creates an availability and edge-case management gap in a critical middleware path.
Agent Prompt
## Issue description
Rate limiting depends on Redis but does not handle Redis connectivity/timeouts, causing requests to fail unexpectedly during Redis outages.
## Issue Context
Middleware should degrade gracefully on dependency failures rather than crashing all protected endpoints.
## Fix Focus Areas
- app/middleware/rate_limiter.py[20-30]
- DATABASE_URL=postgresql://jibas_user:secure_pass@db:5432/jibas_db
- SECRET_KEY=${SECRET_KEY:-dev-secret}
- JWT_SECRET=${JWT_SECRET:-dev-jwt-secret}
- REDIS_URL=redis://cache:6379/0
5. Hardcoded db credentials committed 📘 Rule violation ⛨ Security
Database credentials are hardcoded in docker-compose.yml and migrations/create_tables.sql, exposing secrets in source control. This violates secure data handling expectations for sensitive configuration.
Agent Prompt
## Issue description
Hardcoded DB credentials and default secrets are committed in deployment/config artifacts.
## Issue Context
Sensitive data must not be exposed in source control; credentials should come from secure environment configuration.
## Fix Focus Areas
- docker-compose.yml[9-12]
- docker-compose.yml[20-23]
- migrations/create_tables.sql[1-3]
SECRET_KEY = os.getenv("SECRET_KEY", os.urandom(64).hex())
JWT_SECRET = os.getenv("JWT_SECRET", os.urandom(64).hex())
JWT_EXPIRATION = timedelta(minutes=30)
6. JWT secret varies per worker 🐞 Bug ✓ Correctness
Config.JWT_SECRET/SECRET_KEY default to os.urandom() at import-time; with Gunicorn -w 4, different workers will sign/verify with different secrets, causing intermittent auth failures unless env vars are set.
Agent Prompt
### Issue description
`JWT_SECRET` / `SECRET_KEY` default to random values at import-time. In multi-worker deployments, each worker process uses different secrets so JWTs won’t verify reliably.
### Issue Context
Gunicorn is started with 4 workers.
### Fix Focus Areas
- app/config.py[6-11]
- Dockerfile[16-16]
- app/__init__.py[30-36]
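A minimal sketch of the fail-fast approach: load each secret from the environment once, and refuse to start rather than silently generating a per-process value. The `load_secret` helper name and the dev-fallback string are illustrative, not part of the PR.

```python
import os

def load_secret(name: str, allow_dev_fallback: bool = False) -> str:
    """Load a signing secret from the environment; fail fast if it is missing."""
    value = os.getenv(name)
    if value:
        return value
    if allow_dev_fallback:
        # Single-process development only: in a multi-worker deployment each
        # worker would otherwise mint its own key, and a JWT signed by one
        # worker would fail verification on another.
        return "dev-only-insecure-secret"
    raise RuntimeError(f"{name} must be set in the environment")
```

Raising at import/startup time surfaces the misconfiguration immediately instead of as intermittent 401s under Gunicorn `-w 4`.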
worker:
  build: .
  command: celery -A app.tasks.cleanup worker --loglevel=info
7. Celery worker cannot start 🐞 Bug ⛯ Reliability
docker-compose runs celery -A app.tasks.cleanup worker, but app.tasks.cleanup defines only functions and no Celery application instance or task registrations; the worker will fail to load.
Agent Prompt
### Issue description
Celery worker command points at a module with no Celery app instance, so the worker cannot boot.
### Issue Context
`app/tasks/cleanup.py` contains only plain functions.
### Fix Focus Areas
- docker-compose.yml[34-36]
- app/tasks/cleanup.py[1-30]
- app/tasks/__init__.py[1-1]
def generate_challenge(self, base_seed: str, assigned_images: dict, secret_offset_outer: int, secret_offset_inner: int) -> dict:
    session_seed = self.generate_seed()

    outer_perm = self.derive_permutation(f"{session_seed}:outer", len(assigned_images["outer"]), self.wheel_size)
    inner_perm = self.derive_permutation(f"{session_seed}:inner", len(assigned_images["inner"]), self.wheel_size)

    outer_wheel_shuffled = [assigned_images["outer"][i] for i in outer_perm]
    inner_wheel_shuffled = [assigned_images["inner"][i] for i in inner_perm]

    expected = self._calculate_expected_response(
        outer_wheel_shuffled,
        inner_wheel_shuffled,
        assigned_images,
        secret_offset_outer,
        secret_offset_inner,
        base_seed,
    )
    expected_hash = self._hash_response(expected, session_seed)

    return {
        "session_seed": session_seed,
        "outer_wheel_order": outer_wheel_shuffled,
        "inner_wheel_order": inner_wheel_shuffled,
        "expected_response_hash": expected_hash,
        "expected_response_raw": expected,
    }

def _calculate_expected_response(self, outer_shuffled, inner_shuffled, assigned_images, secret_offset_outer, secret_offset_inner, base_seed):
    secret_images = self._derive_secret_images(base_seed, assigned_images)
    outer_pos = self.find_image_position(outer_shuffled, secret_images["outer"])
    inner_pos = self.find_image_position(inner_shuffled, secret_images["inner"])

    if outer_pos is None or inner_pos is None:
        return {"inner_rotation": 0, "alignment_check": 0}

    required_inner_rotation = (outer_pos - inner_pos) % max(1, len(inner_shuffled))
    return {"inner_rotation": required_inner_rotation, "alignment_check": (outer_pos + inner_pos) % self.wheel_size}
8. Registration secret unused 🐞 Bug ⛨ Security
Registration stores secret offsets and a composite_secret_hash, but login challenge/verification never uses composite_secret_hash and shuffle logic ignores secret_offset_outer/inner; this makes registration inputs ineffective and can break the intended auth scheme.
Agent Prompt
### Issue description
Stored registration secrets (offsets/composite hash) are not used during login verification, likely breaking the intended auth design.
### Issue Context
`secret_offset_outer/inner` and `composite_secret_hash` are written at registration time but not validated during authentication.
### Fix Focus Areas
- app/services/registration_service.py[58-86]
- app/services/auth_service.py[20-113]
- app/services/shuffle_engine.py[44-90]
- app/models/user.py[14-19]
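One way the stored offsets could be folded back into verification, reduced to the arithmetic core. This is a sketch of the apparent intent, not the project's confirmed design: the positions, offsets, and the `expected_inner_rotation` helper are illustrative, and the real engine would derive the positions from the shuffled wheels.

```python
def expected_inner_rotation(outer_pos, inner_pos, secret_offset_outer, secret_offset_inner, wheel_size):
    """Expected rotation once per-user secret offsets are applied (illustrative).

    Shift each secret image's apparent position by the user's registration-time
    offset before computing the rotation needed to align the two wheels; this
    makes the stored secrets actually participate in verification.
    """
    effective_outer = (outer_pos + secret_offset_outer) % wheel_size
    effective_inner = (inner_pos + secret_offset_inner) % wheel_size
    return (effective_outer - effective_inner) % wheel_size
```

With zero offsets this reduces to the current `(outer_pos - inner_pos) % wheel_size` behaviour, so the change is backwards-compatible for users registered without offsets.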
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (18)
requirements.txt-8-8 (1)
8-8: ⚠️ Potential issue | 🟠 Major

Upgrade gunicorn to ≥ 22.0.0 — two HTTP request-smuggling CVEs affecting 21.2.0

Two vulnerabilities affect gunicorn==21.2.0:

- GHSA-w3h3-4rj7-4ph4 (CVE-2024-1135): HTTP Request Smuggling via improper Transfer-Encoding header validation. Attackers can bypass Gunicorn endpoint access restrictions using conflicting Transfer-Encoding headers. Fixed in 22.0.0.
- GHSA-hc5x-x2vx-497g (CVE-2024-6827): HTTP Request Smuggling (TE.CL) via improper Transfer-Encoding validation. Affects all versions before 22.0.0. CVSS 7.5 (High). Fixed in 22.0.0.

Proposed fix

-gunicorn==21.2.0
+gunicorn==22.0.0

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@requirements.txt` at line 8, the requirements entry pins a vulnerable gunicorn version (gunicorn==21.2.0); update the dependency to a non-vulnerable release (e.g., change the gunicorn line to pin or require >=22.0.0) so the project uses Gunicorn 22.0.0 or later to address CVE-2024-1135 and CVE-2024-6827; after updating the gunicorn requirement, run your dependency install/lock step (pip install / pip-compile / poetry lock) and run tests to ensure compatibility with the new Gunicorn release.

requirements.txt-7-7 (1)
7-7: ⚠️ Potential issue | 🟠 Major

Upgrade Pillow to ≥ 10.3.0 — buffer overflow in color management (CVE-2024-28219 / GHSA-44wm-f244-xhp3)

Pillow versions before 10.3.0 are vulnerable to a buffer overflow in _imagingcms.c caused by unsafe strcpy usage, which can lead to memory corruption when processing specially crafted ICC color profiles. Upgrade to 10.3.0 or above to fix.

Proposed fix

-Pillow==10.2.0
+Pillow==10.3.0

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@requirements.txt` at line 7, update the Pillow dependency in requirements.txt to a fixed version >=10.3.0 to address the CVE-2024-28219 buffer overflow; locate the line containing "Pillow==10.2.0" and change it to "Pillow>=10.3.0" (or a specific pinned release like "Pillow==10.3.0" or newer) and then run dependency install and tests to verify nothing else breaks.

docker-compose.yml-9-9 (1)
9-9: ⚠️ Potential issue | 🟠 Major

Hardcoded secure_pass in DATABASE_URL and POSTGRES_PASSWORD.

The password appears in plaintext at Line 9 (DATABASE_URL) and Line 23 (POSTGRES_PASSWORD), which is flagged by Checkov (CKV_SECRET_4). Any developer with repo read access has these credentials. Apply the same env-var substitution pattern already used for SECRET_KEY/JWT_SECRET.

🔒 Proposed fix

-      - DATABASE_URL=postgresql://jibas_user:secure_pass@db:5432/jibas_db
+      - DATABASE_URL=postgresql://jibas_user:${DB_PASSWORD:-}@db:5432/jibas_db
-      - POSTGRES_PASSWORD=secure_pass
+      - POSTGRES_PASSWORD=${DB_PASSWORD:?DB_PASSWORD must be set}

Using :? on the authoritative source (the db service) causes Compose to fail fast with a clear error if the variable is unset, preventing accidental startup with an empty password.

Also note that the fallback defaults for SECRET_KEY (dev-secret) and JWT_SECRET (dev-jwt-secret) on Lines 10-11 are too weak to be safe if this file is ever used in a non-local environment. Consider removing the defaults and requiring them to be set explicitly.

Also applies to: 23-23

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` at line 9, replace the hardcoded plaintext DB password by switching both DATABASE_URL and the db service POSTGRES_PASSWORD to environment-variable substitution using the required-fallback syntax (e.g. use ${POSTGRES_PASSWORD:?} in the db service and update DATABASE_URL to use ${POSTGRES_PASSWORD:?} so Compose fails if unset) and remove weak defaults for SECRET_KEY and JWT_SECRET (require those env vars instead of keeping "dev-secret"/"dev-jwt-secret"); update occurrences of DATABASE_URL, POSTGRES_PASSWORD, SECRET_KEY and JWT_SECRET in the docker-compose.yml accordingly.

Dockerfile-1-16 (1)
1-16: ⚠️ Potential issue | 🟠 Major

Add a non-root USER instruction — Gunicorn runs as root.

No USER directive is present, so the container process runs as root. If the application is compromised, the attacker has unrestricted access inside the container.

🔒 Proposed fix

 COPY . .
+RUN useradd -m appuser && chown -R appuser:appuser /app
+USER appuser
 EXPOSE 5000
 CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "run:app"]

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 1 - 16, the Dockerfile currently runs Gunicorn as root; create a non-root user and switch to it before CMD to reduce risk: add steps to create a user/group (e.g., appuser), chown the application directory (/app) to that user, and set USER appuser so gunicorn (invoked by CMD ["gunicorn", ... "run:app"]) runs unprivileged; ensure these instructions appear after COPY . . and before CMD so files have correct ownership and the container process is not root.

migrations/create_tables.sql-2-2 (1)
2-2: ⚠️ Potential issue | 🟠 Major

Hardcoded password in migration script is a security risk.

'secure_pass' is committed to source control and matches the credential exposed in docker-compose.yml. Any developer or CI pipeline with repo access has this credential.

Use a psql variable passed at invocation time instead:

🔒 Proposed fix

-CREATE USER jibas_user WITH PASSWORD 'secure_pass';
+CREATE USER jibas_user WITH PASSWORD :'db_password';

Then invoke with:

psql -U postgres -v db_password="$DB_PASSWORD" -f bootstrap.sql

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@migrations/create_tables.sql` at line 2, the migration currently hardcodes the password in the CREATE USER statement for jibas_user which leaks credentials; update the CREATE USER jibas_user WITH PASSWORD '...' statement to use a psql variable (e.g., :'db_password' supplied via -v db_password) instead of a literal string and document invoking psql with -v db_password="$DB_PASSWORD" when running the migration so the password is supplied at runtime; ensure any CI/deployment scripts or docker-compose references are updated to pass the secret from environment/secret manager rather than committing it in the SQL file.

docker-compose.yml-34-39 (1)
34-39: ⚠️ Potential issue | 🟠 Major

worker service has no environment block — it will fail to start.

The Celery worker builds from the same image as app and runs celery -A app.tasks.cleanup worker, which requires Flask/app configuration. Without DATABASE_URL, REDIS_URL, SECRET_KEY, and JWT_SECRET, the application context cannot be initialized and the broker connection cannot be established.

🛠️ Proposed fix

   worker:
     build: .
     command: celery -A app.tasks.cleanup worker --loglevel=info
+    environment:
+      - DATABASE_URL=postgresql://jibas_user:${DB_PASSWORD:-}@db:5432/jibas_db
+      - SECRET_KEY=${SECRET_KEY:-dev-secret}
+      - JWT_SECRET=${JWT_SECRET:-dev-jwt-secret}
+      - REDIS_URL=redis://cache:6379/0
     depends_on:
       - db
       - cache
+    restart: unless-stopped

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 34 - 39, the worker service is missing required environment variables so the Celery process started by the command "celery -A app.tasks.cleanup worker" cannot initialize the Flask app or broker; add an environment block to the worker service mirroring the app service and set DATABASE_URL, REDIS_URL, SECRET_KEY and JWT_SECRET (or reference the same values/secrets used by the app service) so the Celery worker can load app configuration and connect to the broker/backing store.

app/config.py-35-35 (1)
35-35: ⚠️ Potential issue | 🟠 Major

CORS wildcard default is unsafe for an authentication service.

"*" permits any origin to issue cross-origin requests to all /api/* routes, including login and registration. For a security-sensitive service, the default should be an empty/restrictive value, forcing operators to explicitly opt in to permissive origins.

🛡️ Proposed fix

-    CORS_ORIGINS = os.getenv("CORS_ORIGINS", "*")
+    CORS_ORIGINS = os.getenv("CORS_ORIGINS", "")  # explicit allowlist required in production

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/config.py` at line 35, the CORS_ORIGINS constant currently defaults to "*" which is unsafe for an auth service; change the default to a restrictive value (e.g., empty string or None) and update the parsing logic that reads CORS_ORIGINS to return an explicit allowlist (e.g., list of origins) only when the environment variable is set, ensuring that CORS_ORIGINS no longer implicitly allows all origins and that callers validate/handle an empty/None value as "no cross-origin allowed" unless operators explicitly configure origins.

app/middleware/rate_limiter.py-12-12 (1)
12-12: ⚠️ Potential issue | 🟠 Major

db_num + 1 silently uses a different Redis database and can overflow the valid range.

When REDIS_URL is redis://localhost:6379/0, the rate limiter connects to db 1 instead of db 0. If the URL specifies db 15 (the default Redis maximum), 15 + 1 = 16, which exceeds the default "databases 16" setting and will raise a ResponseError at runtime. The ratelimit: key prefix already provides adequate namespace isolation within the same database.

🛠️ Proposed fix

-db_num = int(parsed.path.strip("/") or 0) + 1
+db_num = int(parsed.path.strip("/") or 0)
 r = redis.Redis(host=host, port=port, db=db_num)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/middleware/rate_limiter.py` at line 12, the code increments the Redis DB index (db_num = int(parsed.path.strip("/") or 0) + 1) which causes off-by-one connections and can overflow the max DB index; change the calculation to use the path value directly (db_num = int(parsed.path.strip("/") or 0)) so the limiter connects to the intended DB from the REDIS_URL and relies on the existing "ratelimit:" key prefix for namespacing rather than shifting databases.

app/config.py-9-10 (1)
9-10: ⚠️ Potential issue | 🟠 Major

Volatile secrets break JWT verification in multi-worker deployments.

os.urandom(64).hex() is evaluated at class-definition time (module import). In any multi-process setup (Gunicorn with --workers N, multiple container replicas), each worker process generates a different default SECRET_KEY and JWT_SECRET. A JWT signed by worker A cannot be verified by worker B, causing random 401 errors for all users who hit a different worker than the one that issued their token.

These defaults must be stable across all workers. The standard fix is to require the env vars and fail loudly on startup if they're missing:

🔒 Proposed fix

-    SECRET_KEY = os.getenv("SECRET_KEY", os.urandom(64).hex())
-    JWT_SECRET = os.getenv("JWT_SECRET", os.urandom(64).hex())
+    SECRET_KEY = os.environ["SECRET_KEY"]  # hard-fail at startup if missing
+    JWT_SECRET = os.environ["JWT_SECRET"]  # hard-fail at startup if missing

Alternatively, provide a single-process dev fallback with a clear warning:

import warnings
_dev_fallback = os.urandom(64).hex()
SECRET_KEY = os.getenv("SECRET_KEY") or (warnings.warn("SECRET_KEY not set; using ephemeral key — NOT suitable for multi-worker use", stacklevel=2) or _dev_fallback)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/config.py` around lines 9 - 10, SECRET_KEY and JWT_SECRET are being set to os.urandom(...) at import time which yields different values per worker and breaks JWT verification; update the initialization of SECRET_KEY and JWT_SECRET in config.py to not use os.urandom as the default — instead require the environment variables and raise a clear startup error (e.g., RuntimeError) if either is missing, or if you want a dev-only fallback implement a single-process fallback that emits a runtime warning (use warnings.warn) and uses a single module-scoped _dev_fallback value; change references to SECRET_KEY and JWT_SECRET accordingly so they come only from os.getenv(...) or the controlled fallback.

app/middleware/session_guard.py-20-24 (1)
20-24: ⚠️ Potential issue | 🟠 Major

Unhandled KeyError on missing JWT claims returns HTTP 500 instead of 401.

payload["sub"] and payload["username"] are outside the except ValueError block. A validly signed JWT that lacks either claim (e.g., issued by another service or an older code version) raises KeyError, which propagates as an unhandled 500.

🛡️ Proposed fix

 try:
     payload = crypto.verify_jwt(token)
     g.current_user_id = payload["sub"]
     g.current_username = payload["username"]
-except ValueError as exc:
+except (ValueError, KeyError) as exc:
     return jsonify({"error": str(exc)}), 401

Or, to avoid leaking key names in the error response:

 try:
     payload = crypto.verify_jwt(token)
-    g.current_user_id = payload["sub"]
-    g.current_username = payload["username"]
 except ValueError as exc:
     return jsonify({"error": str(exc)}), 401
+try:
+    g.current_user_id = payload["sub"]
+    g.current_username = payload["username"]
+except KeyError:
+    return jsonify({"error": "رمز جلسة غير صالح"}), 401

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/middleware/session_guard.py` around lines 20 - 24, the handler currently only catches ValueError from crypto.verify_jwt, but accessing payload["sub"] and payload["username"] can raise KeyError and bubble up as a 500; update the try/except around payload = crypto.verify_jwt(token) plus the subsequent assignments (g.current_user_id and g.current_username) to also catch KeyError (or a broader exception for missing claims) and return a 401 JSON response; use a generic error message (e.g., "missing or invalid token claims") instead of leaking specific claim names.

app/api/auth_routes.py-68-88 (1)
68-88:⚠️ Potential issue | 🟠 Major
verify_responsehas no exception handling — service errors will surface as 500s.Unlike the other endpoints, this handler doesn't wrap the service call in a try/except. If
verify_login_responseraises any exception (e.g., a DB error, or if the challenge isn't found and the service raises), the client gets an opaque 500. Also,int(data["inner_rotation"])on Line 79 can raiseValueError/TypeErrorfor non-numeric input.Suggested hardening
def verify_response(): data = request.get_json() or {} required = ["challenge_id", "inner_rotation"] if not all(k in data for k in required): return jsonify({"error": "بيانات ناقصة"}), 400 - result = auth_service.verify_login_response( - challenge_id=data["challenge_id"], - user_response={ - "inner_rotation": int(data["inner_rotation"]), - "alignment_check": data.get("alignment_check", 0), - "interaction_data": data.get("interaction_data", {}), - }, - ip_address=request.remote_addr, - user_agent=request.headers.get("User-Agent", ""), - ) - - status_code = 200 if result["success"] else 401 - return jsonify(result), status_code + try: + inner = int(data["inner_rotation"]) + except (ValueError, TypeError): + return jsonify({"error": "بيانات ناقصة"}), 400 + + try: + result = auth_service.verify_login_response( + challenge_id=data["challenge_id"], + user_response={ + "inner_rotation": inner, + "alignment_check": data.get("alignment_check", 0), + "interaction_data": data.get("interaction_data", {}), + }, + ip_address=request.remote_addr, + user_agent=request.headers.get("User-Agent", ""), + ) + status_code = 200 if result.get("success") else 401 + return jsonify(result), status_code + except Exception: + return jsonify({"error": "خطأ في التحقق"}), 400🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/auth_routes.py` around lines 68 - 88, The verify_response handler lacks exception handling and can raise unhandled exceptions (including ValueError/TypeError from int(data["inner_rotation"])) or propagate errors from auth_service.verify_login_response; wrap the service call and the parsing of inner_rotation in a try/except block inside verify_response, validate/convert inner_rotation safely (e.g., catch ValueError/TypeError and return a 400 with an error JSON), call auth_service.verify_login_response inside the try and catch any service/DB exceptions (return a 500 or appropriate status with a sanitized error JSON), and ensure the final return still uses result and status_code when no exception occurs to preserve existing behavior.

app/services/image_service.py (1)
20-49: ⚠️ Potential issue | 🟠 Major

`category_index` counts non-image files — indices will have gaps.

`enumerate(sorted(os.listdir(source_directory)))` increments `idx` for every directory entry, but only image files are actually imported. Non-image entries (e.g., `.DS_Store`, README, subdirectories) inflate `idx`, causing gaps in `category_index`. Since the `ImageAsset` model has a unique constraint on `(category, category_index)`, this won't break, but the indices become unpredictable.

Fix: count only imported images
```diff
-        for idx, filename in enumerate(sorted(os.listdir(source_directory))):
+        for filename in sorted(os.listdir(source_directory)):
             if not filename.lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
                 continue
             source_path = os.path.join(source_directory, filename)
             public_hash = secrets.token_hex(16)
             processed_data = self._process_image(source_path)
             new_filename = f"{public_hash}.{self.config.IMAGE_SERVE_FORMAT}"
             dest_path = os.path.join("images", f"pool_{category.lower()}", new_filename)
             with open(dest_path, "wb") as f:
                 f.write(processed_data)
             db.session.add(
                 ImageAsset(
                     public_hash=public_hash,
                     category=category,
-                    category_index=idx,
+                    category_index=imported,
                     file_path=dest_path,
                     encrypted_data=processed_data,
                 )
             )
             imported += 1
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/image_service.py` around lines 20 - 49, In initialize_image_pool, idx is derived from enumerate(sorted(os.listdir(source_directory))) so it increments for non-image entries causing gaps in ImageAsset.category_index; change to only count imported images by either (a) iterating over a filtered list of image filenames (e.g., filter by suffix before enumerate) so idx maps 0..N-1 for saved images, or (b) replace idx with a dedicated counter (e.g., category_index) that you increment only when you actually add an ImageAsset; update all references to use that category_index when constructing ImageAsset to ensure sequential, gap-free indices.

app/services/image_service.py (1)
74-98: ⚠️ Potential issue | 🟠 Major

N+1 query: `ImageAsset.query.get(image_id)` is called per tile inside the loop.

Every tile triggers a separate DB query. For a canvas with 40 tiles, that's 40 queries. Batch-load all assets upfront.

Also, Lines 91–92 silently skip missing or data-less assets, which would produce a canvas with blank holes and no error signal to the caller — this could cause subtle auth failures.
Suggested batch-load approach
```diff
+        assets_by_id = {
+            a.id: a
+            for a in ImageAsset.query.filter(ImageAsset.id.in_(all_ids)).all()
+        }
+
         tile_positions = {}
         for visual_idx, actual_idx in enumerate(scatter_order):
             image_id = all_ids[actual_idx]
-            asset = ImageAsset.query.get(image_id)
+            asset = assets_by_id.get(image_id)
             if not asset or not asset.encrypted_data:
                 continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/image_service.py` around lines 74 - 98, In generate_canvas_data: avoid the N+1 by batch-loading all ImageAsset rows for all_ids (e.g., query.filter(ImageAsset.id.in_(all_ids)).all()) and build a dict mapping id -> asset, then use that map inside the loop instead of calling ImageAsset.query.get(image_id) for each tile; also stop silently skipping missing assets or assets without encrypted_data — detect missing ids or empty encrypted_data (referencing all_ids, tile_positions, and _encrypt_position) and raise or return a clear error so the caller knows the canvas is incomplete. Ensure the rest of the loop uses the cached asset map and preserves the same scatter_order/session_seed logic.

app/api/auth_routes.py (1)
40-47: ⚠️ Potential issue | 🟠 Major

Unhandled `ValueError` from `int()` casts will produce a 500 instead of a 400.

If `outer_rotation` or `inner_rotation` aren't valid integers, `int(...)` raises `ValueError`. The existing `except ValueError` on Line 46 catches errors from `reg_service.complete_registration`, but it also inadvertently catches the `int()` conversion error — however the error message will be Python's default (e.g., `invalid literal for int()...`) rather than a meaningful API error. The same issue exists in `verify_response` (Line 79), where there is no exception handling at all.

Suggested fix — validate and convert early
```diff
 def register_complete():
     data = request.get_json() or {}
     required = ["registration_token", "outer_rotation", "inner_rotation"]
     if not all(k in data for k in required):
         return jsonify({"error": "بيانات ناقصة"}), 400
+    try:
+        outer = int(data["outer_rotation"])
+        inner = int(data["inner_rotation"])
+    except (ValueError, TypeError):
+        return jsonify({"error": "بيانات ناقصة"}), 400
+
     try:
         result = reg_service.complete_registration(
             registration_token=data["registration_token"],
-            user_response={"outer_rotation": int(data["outer_rotation"]), "inner_rotation": int(data["inner_rotation"])},
+            user_response={"outer_rotation": outer, "inner_rotation": inner},
         )
         return jsonify(result), 201
     except ValueError as exc:
         return jsonify({"error": str(exc)}), 400
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/auth_routes.py` around lines 40 - 47, Validate and convert outer_rotation and inner_rotation before calling reg_service.complete_registration and inside verify_response: parse the inputs using explicit try/except around int(data["outer_rotation"]) and int(data["inner_rotation"]) (or use safe helpers) and return jsonify({"error": "outer_rotation must be an integer"})/({"error": "inner_rotation must be an integer"}) with a 400 status on failure; then pass the validated ints into reg_service.complete_registration (and into verify_response) so ValueError from int() is handled with a clear API error while allowing reg_service.complete_registration to still raise and be caught separately.

app/services/auth_service.py (1)
191-205: ⚠️ Potential issue | 🟠 Major

Dummy challenge is trivially distinguishable from a real one — user enumeration still possible.

The dummy payload has an empty `composite_image`, empty `tile_map`, and zero-sized canvas dimensions. Any client can compare these fields to detect that the username doesn't exist, defeating the timing-attack mitigation purpose of the dummy.

Generate a realistic-looking dummy: render a random composite image of the same dimensions/tile count, populate `tile_map` with random hashes, etc. — or at minimum return non-zero dimensions and a base64-encoded placeholder image.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth_service.py` around lines 191 - 205, The _generate_dummy_challenge method currently returns trivially empty fields; change it to produce a realistic-looking dummy challenge by populating composite_image with a base64-encoded placeholder (or generated image bytes) sized to match canvas_dimensions, set canvas_dimensions and tile_size to non-zero values derived from self.config.WHEEL_SIZE, fill tile_map with randomized tile IDs/hashes consistent with the tile count, and keep other fields (challenge_id, wheel_size, expires_in, timestamp) as-is; ensure the function still returns the same dict shape so callers of _generate_dummy_challenge and any consumers of composite_image/tile_map cannot trivially distinguish real vs dummy.

app/services/registration_service.py (1)
78-82: ⚠️ Potential issue | 🟠 Major

No rollback on commit failure — session left dirty.

If `db.session.commit()` raises (e.g. from a DB constraint violation per the TOCTOU issue above), the SQLAlchemy session remains in an invalid state for the rest of the request. A `try/except` with `db.session.rollback()` is needed to keep the session usable.

This ties into the IntegrityError handling suggested above.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/registration_service.py` around lines 78 - 82, Wrap the commit sequence that adds user and AuditLog in a try/except: call db.session.commit() inside try, and on any exception (e.g., IntegrityError) call db.session.rollback() to clear the session state, then re-raise or translate the error to the existing IntegrityError handling flow; update the block around db.session.add(user), db.session.add(AuditLog(...)) and db.session.commit() in registration_service.py to ensure db.session.rollback() is invoked when commit fails so the session is not left dirty.

app/services/registration_service.py (1)
16-20: ⚠️ Potential issue | 🟠 Major

TOCTOU race: uniqueness check can be bypassed by concurrent requests.

The `filter_by` checks on Lines 17–19 don't prevent a concurrent `complete_registration` (or a second `initiate_registration` + `complete_registration`) from inserting the same `username`/`email`. The `User` model has `unique=True` constraints, so the DB will reject duplicates — but the resulting `IntegrityError` is unhandled in `complete_registration` and will surface as a 500.

Wrap the `db.session.commit()` in `complete_registration` with an `IntegrityError` catch and return a user-friendly error.

Proposed fix in complete_registration (around line 82)
```diff
+from sqlalchemy.exc import IntegrityError
+
         db.session.add(user)
         db.session.flush()
         db.session.add(AuditLog(user_id=user.id, event_type="registration", event_metadata={"method": "JIBAS"}))
-        db.session.commit()
+        try:
+            db.session.commit()
+        except IntegrityError:
+            db.session.rollback()
+            raise ValueError("اسم المستخدم أو البريد الإلكتروني مسجل مسبقاً")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/registration_service.py` around lines 16 - 20, The pre-checks in initiate_registration are vulnerable to TOCTOU races; handle duplicate-key failures in complete_registration by wrapping the db.session.commit() call in a try/except that catches sqlalchemy.exc.IntegrityError, performs db.session.rollback(), and returns or raises a user-friendly error (e.g., "اسم المستخدم أو البريد الإلكتروني مستخدم بالفعل") instead of letting the raw IntegrityError propagate; reference complete_registration and the db.session.commit() call and ensure you import IntegrityError from sqlalchemy.exc and rollback the session before returning the friendly message.

app/services/registration_service.py (1)
31-31: ⚠️ Potential issue | 🟠 Major

Security: `base_seed` (long-term user secret) used as the canvas session seed.

`base_seed` is the user's permanent `base_shuffle_seed` stored in the DB. Passing it directly to `generate_canvas_data` means it seeds the tile-scatter RNG and is fed into `_encrypt_position` hashes — both of which are sent to the client. While the seed itself isn't transmitted in plaintext, an attacker who collects multiple registration canvas payloads could correlate or brute-force the seed offline.

Generate an ephemeral session seed (e.g. via `self.shuffle_engine.generate_seed()`) for canvas generation, the same way `generate_challenge` does in the login flow.

Proposed fix
```diff
+        canvas_seed = self.shuffle_engine.generate_seed()
-        canvas_data = self.image_service.generate_canvas_data(outer_display, inner_display, base_seed)
+        canvas_data = self.image_service.generate_canvas_data(outer_display, inner_display, canvas_seed)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/registration_service.py` at line 31, Replace use of the long-term base_seed when creating the registration canvas with an ephemeral session seed: instead of passing base_seed into generate_canvas_data, call the shuffle engine to create a fresh session seed (e.g., self.shuffle_engine.generate_seed()) and pass that session seed into generate_canvas_data so the tile-scatter RNG and _encrypt_position hashing use a one-time seed; mirror the approach used by generate_challenge in the login flow to ensure base_shuffle_seed remains secret and only ephemeral seeds are sent to the client.
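The ephemeral-seed pattern recommended above can be sketched in isolation. The `generate_seed` and `encrypt_position` helpers below are hypothetical stand-ins for the PR's `ShuffleEngine.generate_seed` and `ImageService._encrypt_position`, not the actual implementation; they only illustrate why a fresh per-session seed decorrelates captured canvas payloads:

```python
import hashlib
import secrets

def generate_seed() -> str:
    # Ephemeral, per-session seed; never the user's long-term base_shuffle_seed.
    return secrets.token_hex(32)

def encrypt_position(seed: str, index: int) -> str:
    # Position hash of the kind sent to the client. With a fresh seed per
    # canvas, payloads captured across sessions share no common input.
    return hashlib.sha256(f"{seed}:{index}".encode()).hexdigest()

# Two registrations hash the same tile index to unrelated values:
h1 = encrypt_position(generate_seed(), 0)
h2 = encrypt_position(generate_seed(), 0)
assert h1 != h2
```

With the long-term `base_seed`, by contrast, every captured canvas would be derived from the same secret input, giving an offline attacker material to correlate.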
🟡 Minor comments (8)
app/middleware/anti_automation.py (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Misleading docstring — clarify that the logic has not yet been migrated here.
The current wording "Anti-automation logic currently implemented inside AuthService._verify_anti_automation" reads as a description of this module's own contents, not as a TODO. A reader landing here looking for the anti-automation code will find nothing. Rephrase to make the intent unambiguous, e.g.:
```diff
-"""Anti-automation logic currently implemented inside AuthService._verify_anti_automation."""
+"""Anti-automation middleware — logic pending migration from AuthService._verify_anti_automation."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/middleware/anti_automation.py` at line 1, Docstring is misleading: change the module docstring in anti_automation.py to state that anti-automation logic has not yet been moved here and still lives in AuthService._verify_anti_automation; update the wording to a clear TODO (e.g., "Placeholder module — anti-automation logic remains in AuthService._verify_anti_automation and should be migrated here") so readers know to look at AuthService._verify_anti_automation for current implementation and that migration is pending.

run.py (1)
5-6: ⚠️ Potential issue | 🟡 Minor

`host="0.0.0.0"` is expected for container deployments but exposes the dev server outside the container if run directly on a host machine.

`debug=False` is correctly set, which prevents the Werkzeug debugger from being exposed. In production, Gunicorn is used (via `run:app`) and this block never executes, so the risk is limited to developers running `python run.py` directly outside Docker. Consider guarding with an environment check or leaving a comment to clarify the intent:

```python
if __name__ == "__main__":
    # Development only — production uses Gunicorn (run:app)
    app.run(host="0.0.0.0", port=5000, debug=False)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@run.py` around lines 5 - 6, The current block with if __name__ == "__main__": calls app.run(host="0.0.0.0", port=5000, debug=False) which exposes the dev server when run directly; update this block to guard the host selection with an environment check and add a clarifying comment: detect a container/dev flag (e.g., os.getenv("RUN_IN_CONTAINER") or FLASK_ENV) and only use host="0.0.0.0" when that flag indicates containerized execution, otherwise bind to "127.0.0.1" for local runs; keep debug=False, reference the __name__ == "__main__" entry point and the app.run(...) call while adding the comment that production uses Gunicorn (run:app).

app/tasks/cleanup.py (1)
9-30: ⚠️ Potential issue | 🟡 Minor

No error handling around `db.session.commit()` — a failure leaves the session dirty.

All three functions call `db.session.commit()` without a try/except. If the commit raises (e.g., DB connection lost), the session is left in a broken state and subsequent operations on the same session will fail or produce unexpected results. Wrap each in a try/except with `db.session.rollback()` in the except branch, or use a context manager pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/tasks/cleanup.py` around lines 9 - 30, The three functions cleanup_expired_challenges, unlock_expired_lockouts, and purge_old_audit_logs call db.session.commit() without error handling; wrap each commit in a try/except that calls db.session.rollback() on exception and then re-raises (or returns an error indicator) to avoid leaving the session dirty; alternatively use SQLAlchemy's session.begin() context manager around the mutation and commit so rollback happens automatically on error; ensure you reference the same return behavior (number of affected rows) after successful commit and handle/propagate the exception consistently.

app/services/image_service.py (1)
51-60: ⚠️ Potential issue | 🟡 Minor

Image handle is not closed — potential resource leak.

`Image.open(source_path)` returns a file-backed image object. Without a context manager or explicit `.close()`, the file descriptor may leak, especially when processing many images in `initialize_image_pool`.

Suggested fix

```diff
 def _process_image(self, source_path: str) -> bytes:
-    img = Image.open(source_path)
-    data = list(img.getdata())
-    clean_img = Image.new(img.mode, img.size)
-    clean_img.putdata(data)
+    with Image.open(source_path) as img:
+        data = list(img.getdata())
+        clean_img = Image.new(img.mode, img.size)
+        clean_img.putdata(data)
     clean_img = clean_img.resize(self.tile_size, Image.Resampling.LANCZOS)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/image_service.py` around lines 51 - 60, The Image returned by Image.open in _process_image is file-backed and isn't closed; wrap the open in a context manager (use "with Image.open(source_path) as img:" in _process_image) so the underlying file descriptor is released, and similarly use a context manager for the io.BytesIO buffer or explicitly close it before returning; update initialize_image_pool usage if it relies on non-closed images to ensure no resource leak.

app/services/auth_service.py (1)
74-74: ⚠️ Potential issue | 🟡 Minor

`user_agent` accepted but never recorded in verify/failure audit logs.

`user_agent` is captured in `create_login_challenge` audit logs (Line 61) but is not forwarded from `verify_login_response` into `_auth_failure` or `_auth_success`. This creates an observability gap for the verification step of the auth flow.

Proposed fix (sketch)

```diff
         if verification["is_valid"]:
-            return self._auth_success(user, challenge, ip_address)
-        return self._auth_failure("محاذاة غير صحيحة", challenge=challenge, user=user, ip_address=ip_address)
+            return self._auth_success(user, challenge, ip_address, user_agent)
+        return self._auth_failure("محاذاة غير صحيحة", challenge=challenge, user=user, ip_address=ip_address, user_agent=user_agent)
```

Then update `_auth_success` and `_auth_failure` signatures and `AuditLog` construction to include `user_agent`.

Also applies to: 85-92, 107-108
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth_service.py` at line 74, The verify_login_response function accepts user_agent but never passes it into the audit helpers, causing missing user-agent in verification audit logs; update verify_login_response to forward the user_agent argument into both _auth_success and _auth_failure calls (and any intermediary calls), then update the signatures of _auth_success and _auth_failure to accept user_agent and include user_agent when constructing AuditLog entries (consistent with how create_login_challenge logs user_agent). Ensure all call sites (including the success/failure paths around lines 85-92 and 107-108) are updated to provide the user_agent parameter.

app/services/registration_service.py (1)
63-64: ⚠️ Potential issue | 🟡 Minor

Missing key validation on `user_response` — bare `KeyError` on bad input.

If a client omits `"outer_rotation"` or `"inner_rotation"`, this raises an unhandled `KeyError`, which typically maps to a 500 in Flask rather than a 400. Use `.get()` with a guard or catch the `KeyError` and raise a `ValueError`.

Proposed fix

```diff
-        outer_offset = user_response["outer_rotation"] % self.config.WHEEL_SIZE
-        inner_offset = user_response["inner_rotation"] % self.config.WHEEL_SIZE
+        try:
+            outer_offset = int(user_response["outer_rotation"]) % self.config.WHEEL_SIZE
+            inner_offset = int(user_response["inner_rotation"]) % self.config.WHEEL_SIZE
+        except (KeyError, TypeError, ValueError) as exc:
+            raise ValueError("بيانات الاستجابة غير مكتملة") from exc
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/registration_service.py` around lines 63 - 64, The code uses user_response["outer_rotation"] and ["inner_rotation"] directly (around the outer_offset/inner_offset computation) which will raise a bare KeyError on missing keys; change to validate presence first (e.g., if "outer_rotation" not in user_response or "inner_rotation" not in user_response) or use user_response.get(...) and check for None, then raise a ValueError (or a custom InvalidRequestError) with a clear message before computing outer_offset/inner_offset so callers can return a 400; update the input-handling in the method that computes outer_offset/inner_offset (the function/method containing user_response) to perform this guard.

app/services/auth_service.py (1)
25-27: ⚠️ Potential issue | 🟡 Minor

`timedelta.seconds` can be misleading for durations ≥ 24 h; also leaks lock-remaining info to callers.

`.seconds` drops the day component of a `timedelta`. Use `.total_seconds()` (and `int()` it) for correctness. Additionally, revealing the precise remaining lock time lets an attacker time their retries — consider returning a fixed message instead.

Proposed fix

```diff
-        remaining = (user.locked_until - datetime.utcnow()).seconds
-        raise PermissionError(f"الحساب مقفل. حاول بعد {remaining} ثانية")
+        remaining = int((user.locked_until - datetime.utcnow()).total_seconds())
+        raise PermissionError(f"الحساب مقفل. حاول بعد {remaining} ثانية")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth_service.py` around lines 25 - 27, Replace the use of timedelta.seconds and the precise remaining-time leak in the user lock handling: in the block checking user.is_locked() (refer to user.is_locked() and user.locked_until), compute the duration using int((user.locked_until - datetime.utcnow()).total_seconds()) if you need it internally, but do not include that exact value in the raised PermissionError; instead raise a generic fixed message (e.g., "الحساب مقفل. حاول لاحقاً") to avoid leaking timing information to callers.

app/services/auth_service.py (1)
94-102: ⚠️ Potential issue | 🟡 Minor

Missing key validation — `user_response["inner_rotation"]` raises bare `KeyError`.

Same issue as in `registration_service.py`: if the client payload is malformed, an unhandled `KeyError` surfaces as a 500 instead of a clear validation error.

Proposed fix

```diff
+        try:
+            inner_rotation = user_response["inner_rotation"]
+        except KeyError:
+            return self._auth_failure("بيانات الاستجابة غير مكتملة", challenge=challenge, user=user, ip_address=ip_address)
+
         verification = self.shuffle.verify_response(
             user_response={
-                "inner_rotation": user_response["inner_rotation"],
+                "inner_rotation": inner_rotation,
                 "alignment_check": user_response.get("alignment_check", 0),
             },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth_service.py` around lines 94 - 102, The call to self.shuffle.verify_response assumes user_response["inner_rotation"] exists and will raise a bare KeyError for malformed payloads; add an explicit validation for the inner_rotation key (same pattern used in registration_service.py) before calling self.shuffle.verify_response, and if it's missing return/raise the service's validation error (e.g., BadRequest/ValidationError or ValueError) with a clear message so the caller receives a 4xx validation response instead of a 500; then pass the validated value (or use user_response.get after validation) into verify_response along with challenge.expected_response and challenge.session_shuffle_seed.
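Several of the minor findings above share one root cause: required fields are read straight out of the request payload, so a missing or non-numeric key becomes a 500 instead of a 400. The convert-or-reject pattern can be centralized in a small helper; the function name and error message below are illustrative, not part of the PR:

```python
def require_int(payload: dict, key: str) -> int:
    """Extract a required integer field, raising ValueError on any bad input.

    KeyError (missing key), TypeError (e.g. None), and ValueError (e.g. "abc")
    all collapse into a single ValueError that route handlers can map to a 400.
    """
    try:
        return int(payload[key])
    except (KeyError, TypeError, ValueError) as exc:
        raise ValueError(f"missing or invalid field: {key}") from exc

# Valid input converts cleanly:
assert require_int({"inner_rotation": "7"}, "inner_rotation") == 7

# Missing and malformed input both surface as ValueError, never as a 500:
for bad in ({}, {"inner_rotation": None}, {"inner_rotation": "abc"}):
    try:
        require_int(bad, "inner_rotation")
    except ValueError as e:
        print(e)  # missing or invalid field: inner_rotation
```

A handler would then wrap the call in `try/except ValueError` and return `jsonify({"error": str(exc)}), 400`, keeping the per-route code to one line per field.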
```python
        if current is None:
            r.setex(key, window, 1)
        elif int(current) >= requests:
            return jsonify({"error": "تم تجاوز الحد المسموح. حاول لاحقاً", "retry_after": r.ttl(key)}), 429
        else:
            r.incr(key)
```
TOCTOU race condition renders the rate limiter ineffective under concurrency.
The `r.get()` → `r.setex()` / `r.incr()` sequence is non-atomic. Under concurrent requests:
- Two requests can simultaneously read `None`, both execute `r.setex(key, window, 1)`, and the counter starts at 1 instead of 2.
- Two requests near the limit can both read `requests - 1`, both pass the `elif` check, and both call `r.incr()`, pushing the count beyond `requests`.
Use an atomic increment-then-expire pattern instead:
🔒 Proposed fix
```diff
-    current = r.get(key)
-
-    if current is None:
-        r.setex(key, window, 1)
-    elif int(current) >= requests:
-        return jsonify({"error": "تم تجاوز الحد المسموح. حاول لاحقاً", "retry_after": r.ttl(key)}), 429
-    else:
-        r.incr(key)
+    pipe = r.pipeline()
+    pipe.incr(key)
+    pipe.ttl(key)
+    current, ttl = pipe.execute()
+
+    if current == 1:
+        r.expire(key, window)
+
+    if current > requests:
+        return jsonify({"error": "تم تجاوز الحد المسموح. حاول لاحقاً", "retry_after": max(ttl, 0)}), 429
```
Verify each finding against the current code and only fix it if needed.
In `@app/middleware/rate_limiter.py` around lines 23 - 28, The current r.get →
r.setex / r.incr sequence has a TOCTOU race; replace it with an atomic
increment-then-expire pattern using Redis INCR (and set expire only when the
counter becomes 1) or a Lua script. Concretely, in the rate limiter code block
that uses r, key, window, requests (replace the if current is None / elif ... /
else ...): call new = r.incr(key); if new == 1 then r.expire(key, window); if
new > requests then return the 429 response with retry_after = r.ttl(key).
Alternatively, implement the same logic in a single r.eval Lua script to ensure
full atomicity.
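As a standalone illustration of why increment-then-check closes the race, here is an in-memory analogue of the fixed-window pattern. This is a sketch only, not a replacement for the Redis fix: the lock below plays the role that Redis's single-threaded command execution plays for `INCR`:

```python
import threading
import time

class FixedWindowLimiter:
    """In-memory analogue of the INCR-then-expire pattern.

    The count is incremented atomically first and only then compared to the
    limit, so there is no check-then-act gap between reading and writing.
    """

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self._lock = threading.Lock()
        self._counts = {}  # key -> (count, window_start)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        with self._lock:  # stands in for Redis's atomic INCR
            count, start = self._counts.get(key, (0, now))
            if now - start >= self.window:
                count, start = 0, now  # window expired: start a fresh one
            count += 1
            self._counts[key] = (count, start)
            return count <= self.limit

limiter = FixedWindowLimiter(limit=3, window=60.0)
results = [limiter.allow("1.2.3.4") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The fourth and fifth calls are rejected because the increment has already happened before the comparison; two racing callers can never both observe a pre-increment count.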
```python
    def _calculate_expected_response(self, outer_shuffled, inner_shuffled, assigned_images, secret_offset_outer, secret_offset_inner, base_seed):
        secret_images = self._derive_secret_images(base_seed, assigned_images)
        outer_pos = self.find_image_position(outer_shuffled, secret_images["outer"])
        inner_pos = self.find_image_position(inner_shuffled, secret_images["inner"])

        if outer_pos is None or inner_pos is None:
            return {"inner_rotation": 0, "alignment_check": 0}

        required_inner_rotation = (outer_pos - inner_pos) % max(1, len(inner_shuffled))
        return {"inner_rotation": required_inner_rotation, "alignment_check": (outer_pos + inner_pos) % self.wheel_size}
```
`secret_offset_outer` and `secret_offset_inner` are accepted but never used — per-user security offsets are silently ignored.

Both parameters are declared in the signature and passed from `generate_challenge` (lines 57-58), but `_calculate_expected_response` never incorporates them into the expected-response computation. The challenge hash is derived solely from `base_seed` and `assigned_images`, meaning any two users sharing the same `base_seed` and image assignment will produce identical expected responses regardless of their individual secret offsets.

If these offsets are intended to be part of the per-user authentication secret (as the `User` model columns `secret_offset_outer` / `secret_offset_inner` suggest), their omission is a security defect — the offsets provide no protection.
🔒 Example of incorporating offsets into the expected response
```diff
 def _calculate_expected_response(self, outer_shuffled, inner_shuffled,
                                  assigned_images, secret_offset_outer,
                                  secret_offset_inner, base_seed):
     secret_images = self._derive_secret_images(base_seed, assigned_images)
     outer_pos = self.find_image_position(outer_shuffled, secret_images["outer"])
     inner_pos = self.find_image_position(inner_shuffled, secret_images["inner"])
     if outer_pos is None or inner_pos is None:
         return {"inner_rotation": 0, "alignment_check": 0}
-    required_inner_rotation = (outer_pos - inner_pos) % max(1, len(inner_shuffled))
-    return {"inner_rotation": required_inner_rotation,
-            "alignment_check": (outer_pos + inner_pos) % self.wheel_size}
+    required_inner_rotation = (
+        outer_pos - inner_pos + secret_offset_outer
+    ) % max(1, len(inner_shuffled))
+    return {
+        "inner_rotation": required_inner_rotation,
+        "alignment_check": (outer_pos + inner_pos + secret_offset_inner) % self.wheel_size,
+    }
```
🪛 Ruff (0.15.1)
[warning] 71-71: Unused method argument: secret_offset_outer
(ARG002)
[warning] 71-71: Unused method argument: secret_offset_inner
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/shuffle_engine.py` around lines 71 - 80, The function
_calculate_expected_response currently ignores secret_offset_outer and
secret_offset_inner; update it to incorporate those per-user offsets when
deriving the expected response: after obtaining outer_pos and inner_pos from
find_image_position (or by passing offsets into _derive_secret_images if you
prefer), add the offsets modulo the appropriate wheel/list sizes (e.g.,
outer_pos = (outer_pos + secret_offset_outer) % self.wheel_size and inner_pos =
(inner_pos + secret_offset_inner) % max(1, len(inner_shuffled))) before
computing required_inner_rotation and alignment_check so each user’s secret
offsets affect the returned expected response; ensure generate_challenge still
supplies the offsets to this method.
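The offset arithmetic from the suggestion above can be checked in isolation. The function below is a hypothetical standalone version of `_calculate_expected_response` with the offsets folded in; `WHEEL_SIZE` and the argument values are illustrative:

```python
WHEEL_SIZE = 12  # illustrative wheel size, not the PR's configured value

def expected_response(outer_pos: int, inner_pos: int,
                      offset_outer: int, offset_inner: int,
                      inner_len: int) -> dict:
    # Fold the per-user secret offsets into the rotation and alignment
    # values so two users with identical seeds and image assignments no
    # longer share an expected response.
    inner_rotation = (outer_pos - inner_pos + offset_outer) % max(1, inner_len)
    alignment = (outer_pos + inner_pos + offset_inner) % WHEEL_SIZE
    return {"inner_rotation": inner_rotation, "alignment_check": alignment}

# Same wheel positions, different per-user offsets -> different responses:
a = expected_response(5, 2, offset_outer=3, offset_inner=1, inner_len=12)
b = expected_response(5, 2, offset_outer=7, offset_inner=4, inner_len=12)
print(a)  # {'inner_rotation': 6, 'alignment_check': 8}
print(b)  # {'inner_rotation': 10, 'alignment_check': 11}
```

With the current code, both users would receive `{'inner_rotation': 3, 'alignment_check': 7}`, which is exactly the collision the finding describes.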
```sql
CREATE DATABASE jibas_db;
CREATE USER jibas_user WITH PASSWORD 'secure_pass';
GRANT ALL PRIVILEGES ON DATABASE jibas_db TO jibas_user;

CREATE INDEX idx_challenges_user_status ON auth_challenges(user_id, status);
CREATE INDEX idx_challenges_expires ON auth_challenges(expires_at);
CREATE INDEX idx_audit_user_time ON audit_logs(user_id, created_at DESC);
```
Script is broken as written — `CREATE INDEX` will run in the wrong database.

`\c`/`\connect` metacommands that switch databases "cannot be included in SQL script files." After `CREATE DATABASE jibas_db` the session stays connected to the initial database (typically `postgres`). All subsequent DDL statements execute in that database, not `jibas_db`, so tables and indexes are created in the wrong place.

The `CREATE INDEX` statements on lines 5-7 will therefore fail with `relation "auth_challenges" does not exist` because:

- The target database was never switched to `jibas_db`.
- Even if it had been, these tables are created by the ORM, not by this script — so the script must be run after ORM-driven table creation, and that dependency is undocumented.

Additionally, the script is non-idempotent (`CREATE DATABASE` / `CREATE USER` will error on a second run).
🛠️ Suggested restructure
Split into two steps. First script (run against `postgres`):

```sql
CREATE DATABASE jibas_db;
CREATE USER jibas_user WITH PASSWORD :'db_password';
GRANT ALL PRIVILEGES ON DATABASE jibas_db TO jibas_user;
```

Second script (run against `jibas_db`, after the ORM creates the tables):
```sql
CREATE INDEX IF NOT EXISTS idx_challenges_user_status ON auth_challenges(user_id, status);
CREATE INDEX IF NOT EXISTS idx_challenges_expires ON auth_challenges(expires_at);
CREATE INDEX IF NOT EXISTS idx_audit_user_time ON audit_logs(user_id, created_at DESC);
```

Or, add `IF NOT EXISTS` guards and invoke them via separate `psql -d` invocations:
```sh
psql -U postgres -f bootstrap.sql
# (run ORM migrations here)
psql -U postgres -d jibas_db -f indexes.sql
```
Verify each finding against the current code and only fix it if needed.
In `@migrations/create_tables.sql` around lines 1 - 7, The SQL script mixes
database creation and index creation so the CREATE INDEX statements run in the
wrong database and are non-idempotent; split this into two steps: keep CREATE
DATABASE and CREATE USER (use parameterized password) in a bootstrap script run
against the postgres DB, then create indexes (use CREATE INDEX IF NOT EXISTS) in
a separate script executed against jibas_db after the ORM has created tables; do
not use psql metacommands (\c) in the file and document that the index script
must run after the ORM migrations so targets like auth_challenges and audit_logs
exist.
Motivation
Description
- Refactored `app/__init__.py` to use guarded imports, provide a minimal `SQLAlchemy` shim for import-time compatibility, and only raise a clear `RuntimeError` when `create_app()` is invoked without Flask installed.
- Filled the previously empty `__init__.py` files in `app/api`, `app/services`, `app/tasks`, and `app/utils` with short module docstrings so no package module is empty.

Testing
- Ran `pytest -q`, which passed (2 passed).
- Ran `python -m compileall app run.py tests`, which succeeded and confirmed modules compile.

Codex Task
Summary by CodeRabbit
Release Notes
New Features
Documentation
Chores