13 changes: 13 additions & 0 deletions backend/app/celery/celery_app.py
@@ -9,6 +9,19 @@
```diff
 logger = logging.getLogger(__name__)


+@worker_process_init.connect
+def warm_llm_modules(**_) -> None:
```
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Repository: ProjectTech4DevAI/kaapi-backend

🏁 Script executed:

```shell
cat -n backend/app/celery/celery_app.py | head -30
```

Length of output: 1110

🏁 Script executed:

```shell
rg "def.*\*\*" backend/app/celery/celery_app.py -A 2 -B 2
```

Length of output: 215

🏁 Script executed:

```shell
rg "def.*\*\*" --type py -A 1 | head -50
```

Length of output: 3467

🏁 Script executed:

```shell
head -20 backend/app/celery/celery_app.py
```

Length of output: 673

🏁 Script executed:

```shell
rg "from typing import|import typing" backend/app/celery/ --type py
```

Length of output: 131

🏁 Script executed:

```shell
wc -l backend/app/celery/celery_app.py
```

Length of output: 111

🏁 Script executed:

```shell
cat backend/app/celery/celery_app.py | grep -E "def |import"
```

Length of output: 386

🏁 Script executed:

```shell
rg "worker_process_init" --type py -B 2 -A 5
```

Length of output: 1067

🏁 Script executed:

```shell
python3 << 'EOF'
# Check if object is the right type or if Any is better for **kwargs
# Standard Python typing conventions
import inspect
from typing import get_type_hints

# According to PEP 484 and Python typing best practices:
# When accepting **kwargs with a type hint:
# - **kwargs: Any is the standard approach (requires: from typing import Any)
# - **kwargs: object is valid but less idiomatic
# - No type hint is common but violates strict typing requirements

print("Python typing best practice for **kwargs:")
print("1. **kwargs: Any (most common and idiomatic)")
print("2. **kwargs: object (valid but less common)")
print("\nFor Celery signal handlers specifically:")
print("Signal handlers receive kwargs but often don't use them")
print("Using **_: Any or **_: object both indicate the handler ignores kwargs")
EOF
```

Length of output: 374


Add a type hint for the variadic signal kwargs.

Line 13 leaves `**_` untyped, which violates the repo typing rule for Python functions.

✅ Suggested fix

```diff
+from typing import Any
+
 import logging

 from celery import Celery
 from celery.signals import worker_process_init
 from kombu import Exchange, Queue

 from app.core.config import settings

 logger = logging.getLogger(__name__)


 @worker_process_init.connect
-def warm_llm_modules(**_) -> None:
+def warm_llm_modules(**_: Any) -> None:
```

As per coding guidelines (`**/*.py`): always add type hints to all function parameters and return values in Python code. Use `**_: Any` to match the codebase pattern for signal handlers that intentionally ignore kwargs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `backend/app/celery/celery_app.py` at line 13, the function `warm_llm_modules` currently uses an untyped variadic kwargs parameter (`**_`), which breaks the repository typing rule. Update the signature to use an explicit type hint for the ignored signal kwargs (change `**_` to `**_: Any`), and ensure `Any` is imported from `typing` if not already present. Keep the existing `-> None` return annotation and the function name `warm_llm_modules` to locate the change.
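A minimal, Celery-free sketch of the suggested signature (the handler body here is illustrative, not the project's real code) shows that the `**_: Any` annotation satisfies a strict "annotate everything" rule and remains visible to runtime introspection:

```python
from typing import Any, get_type_hints


# Illustrative stand-in for the suggested fix: the variadic signal
# kwargs are intentionally ignored, but still carry an explicit type.
def warm_llm_modules(**_: Any) -> None:
    """Pretend warm-up hook; signal kwargs are ignored."""


hints = get_type_hints(warm_llm_modules)
print(hints["_"])       # typing.Any
print(hints["return"])  # <class 'NoneType'>
```

Type checkers such as mypy in strict mode accept this signature, whereas the bare `**_` would be flagged as an untyped parameter.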

```diff
+    """Import LLM service modules in each worker process right after fork.
+
+    This runs once per worker before any task arrives, so LLM calls
+    (the most latency-sensitive path) never pay a cold-import penalty.
+    The main process is unaffected, keeping overall memory low.
+    """
+    import app.services.llm.jobs  # noqa: F401
+
+    logger.info("[warm_llm_modules] LLM modules pre-loaded in worker process")
```


```diff
 # Create Celery instance
 celery_app = Celery(
     "ai_platform",
```
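The warm-up hook above relies on Python's module cache: the first import of a module executes it (potentially slowly), while every later import is a cheap `sys.modules` lookup. A small stdlib-only sketch, using `json` as a stand-in for the real `app.services.llm.jobs`, illustrates the effect the hook exploits:

```python
import importlib
import sys
import time

MOD = "json"  # stand-in for the heavier app.services.llm.jobs

# Force a cold import by dropping the cached module, then time it.
sys.modules.pop(MOD, None)
t0 = time.perf_counter()
importlib.import_module(MOD)
cold = time.perf_counter() - t0

# The second import is served straight from sys.modules.
t0 = time.perf_counter()
importlib.import_module(MOD)
warm = time.perf_counter() - t0

print(f"cold import: {cold:.6f}s, cached import: {warm:.6f}s")
```

By paying the cold-import cost once per worker at fork time, the first real task in each child avoids it entirely.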
4 changes: 2 additions & 2 deletions backend/app/core/config.py
@@ -109,8 +109,8 @@ def AWS_S3_BUCKET(self) -> str:

```diff
 # Celery Configuration
 CELERY_WORKER_CONCURRENCY: int | None = None
-CELERY_WORKER_MAX_TASKS_PER_CHILD: int = 1
-CELERY_WORKER_MAX_MEMORY_PER_CHILD: int = 200000
+CELERY_WORKER_MAX_TASKS_PER_CHILD: int = 150
+CELERY_WORKER_MAX_MEMORY_PER_CHILD: int = 300000
 CELERY_TASK_SOFT_TIME_LIMIT: int = 300
 CELERY_TASK_TIME_LIMIT: int = 600
 CELERY_TASK_MAX_RETRIES: int = 3
```
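For context, these two settings correspond to Celery's `worker_max_tasks_per_child` and `worker_max_memory_per_child` options (the latter is measured in kilobytes per the Celery docs, so 300000 is roughly 300 MB). The old value of `1` recycled a worker child after every task, forcing each task to pay the import cost that the warm-up hook now absorbs once per child. A hypothetical sketch of how the values map onto Celery's config keys:

```python
# Hypothetical mapping of the tuned settings onto Celery's standard
# worker options; values are copied from the diff above.
CELERY_WORKER_MAX_TASKS_PER_CHILD = 150      # recycle a child after 150 tasks
CELERY_WORKER_MAX_MEMORY_PER_CHILD = 300000  # kilobytes (~300 MB) before recycle

worker_conf = {
    "worker_max_tasks_per_child": CELERY_WORKER_MAX_TASKS_PER_CHILD,
    "worker_max_memory_per_child": CELERY_WORKER_MAX_MEMORY_PER_CHILD,
}
print(worker_conf)
```

Raising `max_tasks_per_child` from 1 to 150 only makes sense alongside the memory cap: the memory limit now does the recycling work that per-task recycling used to do.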