Feature/docker deploy #15
Conversation
- Check the CI workflow files and adjust them.
- Update the code to pass the CI checks.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a manual CD GitHub Actions workflow for SSH deploy + health check; renames project to …

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant GHA as GitHub Actions (CD)
    participant SSH as Remote Host (SSH)
    participant App as App Dir (docker compose)
    participant Health as Local Health Endpoint
    GHA->>SSH: ssh (host/user/key/port from secrets)
    SSH->>App: git pull origin dev
    SSH->>App: cd docker && docker compose up -d --build
    App-->>SSH: build/start output
    GHA->>GHA: wait 300s
    GHA->>SSH: ssh curl -sf http://localhost:8000/healthz/
    alt health ok
        SSH-->>GHA: HTTP 200
        GHA-->>GHA: job succeeds
    else health fail
        SSH-->>GHA: error/timeout
        GHA-->>GHA: job fails
    end
```
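The deploy flow in the diagram could be realized as a manual workflow along these lines. This is a hedged sketch, not the PR's actual cd.yml: the secret names, the app directory path, and the use of appleboy/ssh-action are assumptions.

```yaml
# Hypothetical sketch of the CD workflow described by the diagram.
# Secret names, paths, and the action used are assumptions.
name: CD
on:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          port: ${{ secrets.SSH_PORT }}
          script: |
            cd /srv/app            # assumed app directory
            git pull origin dev
            cd docker
            docker compose up -d --build

      - name: Health check
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          port: ${{ secrets.SSH_PORT }}
          script: |
            sleep 300
            curl -sf http://localhost:8000/healthz/
```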
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
weblate/trans/autobatchtranslate.py (1)
367-371: ⚠️ Potential issue | 🟡 Minor

Typo in bot user name: "Autobatch_translatior" should be "Autobatch_translator".
The typo appears in both the `name` and `verbose` parameters.

✏️ Proposed fix

```diff
 # Get or create a bot user for autobatch translation
 bot_user = User.objects.get_or_create_bot(
     scope="weblate",
-    name="Autobatch_translatior",
-    verbose="autobatch_translatior",
+    name="Autobatch_translator",
+    verbose="autobatch_translator",
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@weblate/trans/autobatchtranslate.py` around lines 367 - 371, Fix the typo in the bot creation call: update the name and verbose arguments passed to User.objects.get_or_create_bot (where bot_user is assigned) from "Autobatch_translatior" to "Autobatch_translator" so both parameters use the correct spelling.

weblate/trans/models/component.py (2)
4672-4694: ⚠️ Potential issue | 🟠 Major

Commit each author-group before mutating the file for the next one.

This helper only processes one `Translation`, but it queues every author-group in `translations_to_commit` and only calls `git_commit()` later in a second loop. If `commit_groups` has more than one entry, the first commit sees the file after later groups have already updated it, so author/timestamp attribution collapses into the first commit and the remaining commits are likely empty.

Also applies to: 4708-4729
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@weblate/trans/models/component.py` around lines 4672 - 4694, The loop over commit_groups must commit each author-group immediately after update_units instead of deferring all commits; after calling translation.update_units(changes, store, author_name) and confirming changes (any(changes_status.values())), invoke git_commit(translation, author_name, timestamp) (using the same timestamp conversion to timezone.make_aware when needed) right there so the file state and attribution are recorded before the next group mutates the file; remove or skip the later deferred commit loop (the code that iterates translations_to_commit) to avoid duplicate/empty commits and apply the same immediate-commit fix to the other block referenced around lines 4708-4729.
1214-1226: ⚠️ Potential issue | 🟠 Major

Guard `translations_count` before the `//` operation to fix the mypy type error.

Line 1226 has a type safety violation: `self.translations_count` is typed as `int | None`, but the `//` operator requires an `int`. The code only checks for `-1` at line 1218 and does not guard against `None` or `0`, which would also cause a divide-by-zero error. Lift the attribute into a local variable, guard against both `None` and `0`, and return early before computing progress.

Suggested fix

```diff
 # Calculate progress for translations
 if progress is None:
+    translations_count = self.translations_count
+    if translations_count in (None, 0):
+        return
     self.translations_progress += 1
-    progress = 100 * self.translations_progress // self.translations_count
+    progress = 100 * self.translations_progress // translations_count
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@weblate/trans/models/component.py` around lines 1214 - 1226, In progress_step, guard self.translations_count (typed int | None) before using the floor-division: assign it to a local var (e.g. count = self.translations_count), check for None or 0 (and keep the existing -1 check for linked_component handling) and return early if invalid to avoid mypy type errors and divide-by-zero; then proceed to increment self.translations_progress and compute progress = 100 * self.translations_progress // count. Ensure you still call linked_component.progress_step(progress) when translations_count == -1 and preserve current_task checks.
🧹 Nitpick comments (13)
pyproject.toml (1)
490-496: Note: Empty `[tool.pytest]` section may be redundant.

Line 496 adds an empty `[tool.pytest]` section while `[tool.pytest.ini_options]` already exists at line 498. This appears harmless but may be unnecessary. If this was intentionally added for future configuration, consider adding a comment to clarify.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 490 - 496, The pyproject.toml contains an empty [tool.pytest] section that is redundant because [tool.pytest.ini_options] already exists; either remove the empty [tool.pytest] header or replace it with a clarifying comment indicating it's intentionally reserved for future pytest settings so readers won't be confused—look for the [tool.pytest] and [tool.pytest.ini_options] headers and perform the removal or add a short inline comment explaining intent.

docker/requirements.txt (1)
1-66: Consider consolidating dependency management to avoid version drift.

This `requirements.txt` pins exact versions that duplicate dependencies already specified in `pyproject.toml`. Per coding guidelines, dependencies should be managed in `pyproject.toml` with dependency groups. Since the Dockerfile installs both this file and `pyproject.toml` extras, there's a risk of version inconsistencies over time.

Some minor version format inconsistencies noted:

- Line 1: `aeidon==1.15` (short form) vs others using full semver
- Line 32: `iniparse==0.5` vs Line 34: `mercurial==7.2`

Consider either:

- Generating this file from `pyproject.toml` during CI/build
- Using a tool like `uv pip compile` to lock versions from `pyproject.toml`

As per coding guidelines: "Manage dependencies in `pyproject.toml` with dependency groups for different environments"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/requirements.txt` around lines 1 - 66, The requirements.txt duplicates and pins many deps already declared in pyproject.toml, risking version drift; remove the hand-maintained pins from docker/requirements.txt and instead: update the Dockerfile to install from pyproject.toml extras (or from a generated lock file), and add a build step in CI/packaging that generates a frozen requirements.txt (e.g., pip-compile / "uv pip compile" or equivalent) from pyproject.toml so the Docker build consumes that generated lockfile; ensure the repo documents the chosen flow and remove or mark docker/requirements.txt as generated to avoid manual edits.

docker/etc/supervisor/conf.d/celery-backup.conf (1)
1-7: Consider adding a `priority` setting for consistency.

Other Celery worker configurations in this PR include a `priority` setting (e.g., celery-celery has 400, celery-notify has 500, celery-memory has 600). Adding a priority here would ensure deterministic startup order.

♻️ Proposed addition

```diff
 autorestart = true
 stdout_logfile=/dev/fd/1
 stdout_logfile_maxbytes=0
 redirect_stderr=true
+priority = 700
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/etc/supervisor/conf.d/celery-backup.conf` around lines 1 - 7, Add a supervisor "priority" setting to the celery-backup program block to ensure deterministic startup order alongside the other Celery workers; update the [program:celery-backup] configuration (the block defining environment, command, autorestart, stdout_logfile, redirect_stderr) to include a priority value consistent with the existing sequence (e.g., set priority=700 to follow 400/500/600 pattern).

weblate/trans/autobatchtranslate.py (2)
108-122: Consider adding a return type hint.

♻️ Proposed signature

```diff
-def _prepare_batch_request(translation: Translation):
+def _prepare_batch_request(
+    translation: Translation,
+) -> tuple[None, None, None, None] | tuple[QuerySet, dict[str, str], dict[int, Unit], set[str]]:
```

Or use a named tuple / dataclass for a clearer return type.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@weblate/trans/autobatchtranslate.py` around lines 108 - 122, Add an explicit return type for _prepare_batch_request to improve clarity and typing (e.g., Optional[Tuple[QuerySet, Dict[str,str], Dict[int,UnitType], Set[str]]]) or replace the tuple return with a small dataclass/namedtuple (e.g., BatchRequest = dataclass(...)) and return that instead; update the function signature for _prepare_batch_request and adjust callers that unpack units_qs, units_data, unit_map, expected_keys to use the new typed return or fields to keep typesafe usage.
62-105: Consider adding return type hints for type safety.

Per coding guidelines, type hints are required. This function returns a tuple of `(str | None, str | None, str | None)`.

♻️ Proposed signature

```diff
-def _resolve_openrouter_config(translation: Translation):
+def _resolve_openrouter_config(
+    translation: Translation,
+) -> tuple[str | None, str | None, str | None]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@weblate/trans/autobatchtranslate.py` around lines 62 - 105, Update the _resolve_openrouter_config function signature to include a return type hint of a three-item optional string tuple (e.g. -> tuple[Optional[str], Optional[str], Optional[str]] or -> (str | None, str | None, str | None)) and import typing.Optional if using tuple[Optional[...]]; keep the existing logic and return values unchanged, only add the type annotation to the function definition to satisfy the project type-hinting guideline.

docker/etc/supervisor/conf.d/celery-single.conf (1)
1-8: LGTM! Note command syntax inconsistency with other configs.

This config correctly uses the `celery --app=weblate.utils worker` syntax. However, `celery-translate.conf` uses the older `celery worker` syntax relying on the `CELERY_APP` environment variable. Consider standardizing the command syntax across all Celery configs for consistency and clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/etc/supervisor/conf.d/celery-single.conf` around lines 1 - 8, The Celery command syntax is inconsistent across configs; standardize them by using the explicit "--app=weblate.utils" form (as used in the [program:celery] config) instead of relying on the CELERY_APP env var in other files (e.g., celery-translate.conf). Update the command line in celery-translate.conf (and any other Celery supervisor confs) to use "celery --app=weblate.utils worker ..." preserving existing options/queues and any "%(ENV_... )s" placeholders so all configs use the same explicit app invocation.

docker/etc/supervisor/conf.d/celery-beat.conf (1)
1-7: Consider adding a `priority` setting for consistency.

Other Celery Supervisor configs (e.g., `celery-translate.conf` with priority 700, `celery-single.conf` with priority 400) include a `priority` setting to control startup order. This config is missing it. Consider adding a priority to ensure the beat scheduler starts at the appropriate time relative to workers.

🔧 Suggested addition

```diff
 autorestart = true
 stdout_logfile=/dev/fd/1
 stdout_logfile_maxbytes=0
 redirect_stderr=true
+priority = 500
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/etc/supervisor/conf.d/celery-beat.conf` around lines 1 - 7, Add a priority setting to the [program:celery-beat] supervisor block so its startup order is explicit and consistent with the other Celery configs; update the celery-beat section (program:celery-beat, the block containing the command "/app/venv/bin/celery beat ...") to include a numeric priority (for example 700 to match celery-translate.conf or another value chosen to place beat at the correct order relative to celery-single.conf which uses 400).

docker/etc/supervisor/conf.d/celery-translate.conf (1)
1-8: Use the explicit `-A` flag to specify the Celery app instead of relying on the `CELERY_APP` environment variable.

In Celery 5.x, the application should be specified via the `-A` (or `--app`) command-line option rather than the `CELERY_APP` environment variable. Update the command to `celery -A weblate.utils worker --hostname 'translate@%%h' ...` and remove `CELERY_APP=weblate.utils` from the environment. This aligns with Celery's documented best practices for app specification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/etc/supervisor/conf.d/celery-translate.conf` around lines 1 - 8, Remove the CELERY_APP=weblate.utils environment assignment from the [program:celery-translate] environment and update the command string used by the celery worker to explicitly pass the app via -A (e.g., change the command from the current "/app/venv/bin/celery worker ..." to include "-A weblate.utils" so it reads like "celery -A weblate.utils worker ..."); keep the existing hostname, loglevel, queues, pool, prefetch-multiplier and %(ENV_CELERY_TRANSLATE_OPTIONS)s tokens and ensure stdout/stderr and autorestart settings remain unchanged.

docker/etc/nginx/generate-site.py (2)
7-17: Add error handling for argument count mismatch.

The tuple unpacking on `sys.argv[1:]` will raise a `ValueError` if the wrong number of arguments is provided, but with a cryptic error message. Consider adding explicit validation with a helpful error message indicating the expected arguments.

Proposed fix

```diff
 # Parse args
+if len(sys.argv) != 9:
+    print(
+        f"Usage: {sys.argv[0]} TEMPLATE_DIRS WEBLATE_URL_PREFIX WEBLATE_REALIP "
+        "CLIENT_MAX_BODY_SIZE WEBLATE_BUILTIN_SSL WEBLATE_ANUBIS_URL "
+        "SITE_DOMAIN ENABLE_HTTPS",
+        file=sys.stderr,
+    )
+    sys.exit(1)
+
 (
     TEMPLATE_DIRS,
```
Verify each finding against the current code and only fix it if needed. In `@docker/etc/nginx/generate-site.py` around lines 7 - 17, Validate the number of CLI args before unpacking sys.argv[1:] and raise a clear error if it doesn't match the expected count; check len(sys.argv) (or len(sys.argv[1:])) against the number of variables (TEMPLATE_DIRS, WEBLATE_URL_PREFIX, WEBLATE_REALIP, CLIENT_MAX_BODY_SIZE, WEBLATE_BUILTIN_SSL, WEBLATE_ANUBIS_URL, SITE_DOMAIN, ENABLE_HTTPS) and exit with a descriptive message explaining the expected arguments and their order so the subsequent tuple unpacking using sys.argv[1:] will never raise a cryptic ValueError.
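The validation the prompt describes can be factored into a small pure helper. A sketch with a hypothetical `parse_args` function; the expected argument names come from the review comment above:

```python
# Hypothetical helper modeling the suggested argv validation for
# generate-site.py; the EXPECTED names are taken from the review comment.
import sys

EXPECTED = (
    "TEMPLATE_DIRS", "WEBLATE_URL_PREFIX", "WEBLATE_REALIP",
    "CLIENT_MAX_BODY_SIZE", "WEBLATE_BUILTIN_SSL", "WEBLATE_ANUBIS_URL",
    "SITE_DOMAIN", "ENABLE_HTTPS",
)


def parse_args(argv: list[str]) -> tuple[str, ...]:
    """Validate the argument count before unpacking to avoid a cryptic ValueError."""
    args = argv[1:]
    if len(args) != len(EXPECTED):
        print(f"Usage: {argv[0]} {' '.join(EXPECTED)}", file=sys.stderr)
        raise SystemExit(1)
    return tuple(args)


# At module top, the eight settings would then be unpacked from
# parse_args(sys.argv) instead of directly from sys.argv[1:].
```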
1-2: Missing GPL-3.0-or-later license header.

Per coding guidelines, all Python files must include the GPL-3.0-or-later license header. Add the standard copyright and SPDX header at the top of this file.

Proposed fix

```diff
 #!/usr/bin/env python3
+# Copyright © Boost Organization <boost@boost.org>
+#
+# SPDX-License-Identifier: GPL-3.0-or-later
+
 import sys
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/etc/nginx/generate-site.py` around lines 1 - 2, Add the GPL-3.0-or-later license header immediately after the existing shebang in docker/etc/nginx/generate-site.py: insert the SPDX short identifier line ("SPDX-License-Identifier: GPL-3.0-or-later"), a copyright line with the project/organization and year, and the standard short GPL header notice (one- or two-line copyright/permission notice) so the file clearly declares GPL-3.0-or-later licensing; keep the shebang as the first line and place the header directly below it.

docker/environment.example (1)
50-55: Clarify or remove duplicate database name variables.

Both `POSTGRES_DB` and `POSTGRES_DATABASE` are defined. Typically only one is needed; Weblate's Docker settings use `POSTGRES_DB`. Consider removing `POSTGRES_DATABASE` or adding a comment explaining when each is used.

Proposed fix

```diff
 POSTGRES_PASSWORD=
 POSTGRES_USER=
 POSTGRES_DB=
-POSTGRES_DATABASE=
 POSTGRES_HOST=host.docker.internal
 POSTGRES_PORT=5432
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/environment.example` around lines 50 - 55, Remove the duplicate database-name variable by keeping only the conventional POSTGRES_DB and deleting POSTGRES_DATABASE (or, if you need both for compatibility, add a clarifying comment above the variables explaining the difference and when to use POSTGRES_DB vs POSTGRES_DATABASE); update any references in the codebase or docs to use POSTGRES_DB (search for POSTGRES_DATABASE and replace or document usage) so the environment example is unambiguous.

.github/workflows/cd.yml (1)
35-37: Consider reducing or making the health check delay configurable.

The 300-second (5-minute) sleep before the health check may unnecessarily extend the workflow duration. Docker Compose builds and container startup typically complete faster. Consider reducing this delay or implementing a retry loop with exponential backoff.

Example: Retry loop instead of fixed sleep

```diff
     script: |
-      sleep 300
-      curl -sf http://localhost:8000/healthz/
+      # Wait up to 5 minutes, checking every 15 seconds
+      for i in $(seq 1 20); do
+        if curl -sf http://localhost:8000/healthz/; then
+          echo "Health check passed"
+          exit 0
+        fi
+        echo "Attempt $i failed, waiting 15s..."
+        sleep 15
+      done
+      echo "Health check failed after 20 attempts"
+      exit 1
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cd.yml around lines 35 - 37, Replace the fixed "sleep 300" in the workflow script block with a configurable or retry-based health check: either reduce the hardcoded delay to a shorter default or read a timeout/env var, or implement a retry loop around the existing "curl -sf http://localhost:8000/healthz/" (with incremental backoff and a max attempts) so the workflow probes until ready instead of always waiting 300s; update the script block that currently contains "sleep 300" and "curl -sf http://localhost:8000/healthz/" accordingly and ensure failures still cause the job to fail if the max retries are exhausted.

docker/Dockerfile (1)
9-10: Keep build-only sources out of the final image.

`COPY . /app/boost-weblate/` and later `COPY --from=build /app /app` leave the full checkout and `/app/src` inputs in production even though the package is already installed in `/app/venv`. That bloats the runtime image and ships repo-only artifacts unnecessarily. Copy only the runtime outputs, or remove `/app/boost-weblate` and `/app/src` before the final stage ends.

Also applies to: 50-51
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/Dockerfile` around lines 9 - 10, Remove build-only sources from the final image by not copying the full checkout into runtime or by deleting it before the final stage completes: stop using COPY . /app/boost-weblate/ into the image that becomes runtime, or after COPY --from=build /app /app remove /app/boost-weblate and /app/src (or only copy /app/venv and the installed package artifacts instead of the entire /app). Update the Dockerfile to ensure only runtime outputs (e.g., /app/venv and installed site-packages) are present in the final stage and remove repo-only artifacts (referencing COPY . /app/boost-weblate/, COPY --from=build /app /app, and paths /app/boost-weblate and /app/src); apply the same change for the other instances noted (lines 50-51).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/cd.yml:
- Around line 22-26: The deployment script under the "script" block currently
runs "git pull origin dev", but the repo only has "main"; change the branch
reference to "main" (replace the git pull origin dev command) so the deploy uses
the correct branch before running "cd docker" and "docker compose up -d
--build".
In @.gitignore:
- Line 46: The .gitignore currently contains a broad rule '/test/' which will
untrack legitimate source tests; replace that single-line rule with explicit
ignores for test artifacts instead (e.g., test-results directories, pytest/cache
folders, coverage outputs, IDE/test runner temporary files) so real test source
files remain in version control; locate the literal '/test/' entry and remove or
change it to specific patterns such as '/test-results/', '.pytest_cache/',
'coverage/', and any generated test/*.tmp patterns used by your tooling.
In `@docker/environment.example`:
- Around line 108-116: The environment example is missing Supervisor variables
referenced by supervisor conf; add WEB_WORKERS, WEB_BLOCKING_THREADS, and
GRANIAN_ARGS with sensible defaults (for example WEB_WORKERS=4,
WEB_BLOCKING_THREADS=20, GRANIAN_ARGS="") to the docker environment example so
Supervisor can read them; place them near the web/boost-related settings (around
CLIENT_MAX_BODY_SIZE and AUTO_BATCH_TRANSLATE_VIA_OPENROUTER) and ensure values
are exported as plain KEY=VALUE entries.
In `@docker/etc/nginx/default.tpl`:
- Around line 70-73: The proxy_set_header for X-Forwarded-For must be
unconditional: move the proxy_set_header X-Forwarded-For
$proxy_add_x_forwarded_for; line out of the WEBLATE_BUILTIN_SSL conditional so
it always runs, leaving proxy_set_header X-Forwarded-Proto $scheme; inside the
{% if WEBLATE_BUILTIN_SSL %} ... {% endif %} block; update the template around
the WEBLATE_BUILTIN_SSL conditional to ensure X-Forwarded-For is set regardless
of TLS while X-Forwarded-Proto remains conditional.
In `@docker/etc/supervisor/supervisord.conf`:
- Line 5: Fix the typo in the supervisord config comment for the chmod setting:
change the word "sockef" to "socket" in the comment on the line containing the
chmod directive so it reads "... ; socket file mode (default 0700)" to
accurately describe the chmod parameter.
In `@docker/health_check`:
- Around line 21-25: The grep pattern in the failing detection block uses basic
regex so `\|` is treated literally; update the command that builds `failing` to
use an alternation-aware regex (e.g., switch to grep -E or use `\(...\)\|`
escaping) so it correctly matches lines containing either EXITED or RUNNING;
modify the line that sets the failing variable (the `failing="$(echo "$services"
| grep -v '^check *EXITED\|RUNNING' || true)"` assignment) to use extended regex
(grep -E) or properly escaped alternation so the negative grep filters out both
EXITED and RUNNING as intended.
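The regex fix suggested for `docker/health_check` can be reproduced in isolation. A minimal sketch of the ERE form, with sample supervisor-style lines as stand-in data (not real `health_check` output):

```shell
#!/bin/sh
# With grep -E, grouping keeps the '^check *' anchor applied to both
# states, so only genuinely failing services survive the negative filter.
services="check EXITED
check RUNNING
broken STOPPED"

failing="$(printf '%s\n' "$services" | grep -Ev '^check *(EXITED|RUNNING)' || true)"
echo "$failing"   # -> broken STOPPED
```

Without grouping, the alternation binds loosely: the anchor applies only to the first branch, and in strictly POSIX BRE the behavior of `\|` is implementation-dependent.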
In `@docker/start`:
- Around line 311-316: The script currently sets `set_real_ip_from 0.0.0.0/0;`
when `WEBLATE_IP_PROXY_HEADER=HTTP_X_FORWARDED_FOR`, allowing header spoofing;
change this to require a user-configurable trusted proxy CIDR list (e.g., new
env var `WEBLATE_TRUSTED_PROXIES`) instead of the global 0.0.0.0/0. Modify the
`case` branch that sets `WEBLATE_REALIP` to: if `WEBLATE_TRUSTED_PROXIES` is
empty, do not emit any `set_real_ip_from` lines (or exit with a clear error),
otherwise split `WEBLATE_TRUSTED_PROXIES` on spaces/commas and append a
`set_real_ip_from <CIDR>;` line for each CIDR plus `real_ip_header
X-Forwarded-For;`; remove the unconditional `set_real_ip_from 0.0.0.0/0;` and
ensure any logic that auto-enables `WEBLATE_IP_PROXY_HEADER` (related code)
respects that trusted-proxies variable.
- Around line 152-160: The timezone block currently skips on first boot because
it checks [ -w /tmp/localtime ] which fails when /tmp/localtime doesn't exist;
change the guard to check writability of /tmp (e.g. [ -w /tmp ]) and if
/tmp/localtime is missing create it (touch /tmp/localtime) before writing; then
proceed to compute zonefile="/usr/share/zoneinfo/$WEBLATE_TIME_ZONE" and cat the
selected zonefile (or Etc/UTC fallback) into /tmp/localtime as in the original
block so the symlink /etc/localtime receives the correct contents.
- Around line 208-210: Replace the use of User.objects.raw("SELECT 1") inside
the readiness loop with a direct DB connection probe: change the command passed
to run_weblate shell -c so it imports django.db.connection, opens a cursor,
executes a simple SELECT 1 and fetches a row (instead of using
User.objects.raw), so the probe doesn't require a model PK and will succeed when
PostgreSQL is reachable; update the until line accordingly to use that
connection.cursor() probe and keep the same success/failure behavior.
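The trusted-proxies change suggested for `docker/start` above can be sketched as a small POSIX-sh fragment; `WEBLATE_TRUSTED_PROXIES` is the new variable the comment proposes, and the CIDR values here are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch: build set_real_ip_from lines from a trusted-proxy
# list (comma- or space-separated) instead of the spoofable 0.0.0.0/0.
WEBLATE_TRUSTED_PROXIES="10.0.0.0/8,192.168.0.0/16"

WEBLATE_REALIP=""
for cidr in $(printf '%s' "$WEBLATE_TRUSTED_PROXIES" | tr ',' ' '); do
    WEBLATE_REALIP="${WEBLATE_REALIP}set_real_ip_from ${cidr};
"
done
WEBLATE_REALIP="${WEBLATE_REALIP}real_ip_header X-Forwarded-For;"
printf '%s\n' "$WEBLATE_REALIP"
```

A real version would refuse to emit anything (or exit with an error) when the list is empty, per the comment above.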
---
Outside diff comments:
In `@weblate/trans/autobatchtranslate.py`:
- Around line 367-371: Fix the typo in the bot creation call: update the name
and verbose arguments passed to User.objects.get_or_create_bot (where bot_user
is assigned) from "Autobatch_translatior" to "Autobatch_translator" so both
parameters use the correct spelling.
In `@weblate/trans/models/component.py`:
- Around line 4672-4694: The loop over commit_groups must commit each
author-group immediately after update_units instead of deferring all commits;
after calling translation.update_units(changes, store, author_name) and
confirming changes (any(changes_status.values())), invoke
git_commit(translation, author_name, timestamp) (using the same timestamp
conversion to timezone.make_aware when needed) right there so the file state and
attribution are recorded before the next group mutates the file; remove or skip
the later deferred commit loop (the code that iterates translations_to_commit)
to avoid duplicate/empty commits and apply the same immediate-commit fix to the
other block referenced around lines 4708-4729.
- Around line 1214-1226: In progress_step, guard self.translations_count (typed
int | None) before using the floor-division: assign it to a local var (e.g.
count = self.translations_count), check for None or 0 (and keep the existing -1
check for linked_component handling) and return early if invalid to avoid mypy
type errors and divide-by-zero; then proceed to increment
self.translations_progress and compute progress = 100 *
self.translations_progress // count. Ensure you still call
linked_component.progress_step(progress) when translations_count == -1 and
preserve current_task checks.
---
Nitpick comments:
In @.github/workflows/cd.yml:
- Around line 35-37: Replace the fixed "sleep 300" in the workflow script block
with a configurable or retry-based health check: either reduce the hardcoded
delay to a shorter default or read a timeout/env var, or implement a retry loop
around the existing "curl -sf http://localhost:8000/healthz/" (with incremental
backoff and a max attempts) so the workflow probes until ready instead of always
waiting 300s; update the script block that currently contains "sleep 300" and
"curl -sf http://localhost:8000/healthz/" accordingly and ensure failures still
cause the job to fail if the max retries are exhausted.
In `@docker/Dockerfile`:
- Around line 9-10: Remove build-only sources from the final image by not
copying the full checkout into runtime or by deleting it before the final stage
completes: stop using COPY . /app/boost-weblate/ into the image that becomes
runtime, or after COPY --from=build /app /app remove /app/boost-weblate and
/app/src (or only copy /app/venv and the installed package artifacts instead of
the entire /app). Update the Dockerfile to ensure only runtime outputs (e.g.,
/app/venv and installed site-packages) are present in the final stage and remove
repo-only artifacts (referencing COPY . /app/boost-weblate/, COPY --from=build
/app /app, and paths /app/boost-weblate and /app/src); apply the same change for
the other instances noted (lines 50-51).
In `@docker/environment.example`:
- Around line 50-55: Remove the duplicate database-name variable by keeping only
the conventional POSTGRES_DB and deleting POSTGRES_DATABASE (or, if you need
both for compatibility, add a clarifying comment above the variables explaining
the difference and when to use POSTGRES_DB vs POSTGRES_DATABASE); update any
references in the codebase or docs to use POSTGRES_DB (search for
POSTGRES_DATABASE and replace or document usage) so the environment example is
unambiguous.
In `@docker/etc/nginx/generate-site.py`:
- Around line 7-17: Validate the number of CLI args before unpacking
sys.argv[1:] and raise a clear error if it doesn't match the expected count;
check len(sys.argv) (or len(sys.argv[1:])) against the number of variables
(TEMPLATE_DIRS, WEBLATE_URL_PREFIX, WEBLATE_REALIP, CLIENT_MAX_BODY_SIZE,
WEBLATE_BUILTIN_SSL, WEBLATE_ANUBIS_URL, SITE_DOMAIN, ENABLE_HTTPS) and exit
with a descriptive message explaining the expected arguments and their order so
the subsequent tuple unpacking using sys.argv[1:] will never raise a cryptic
ValueError.
- Around line 1-2: Add the GPL-3.0-or-later license header immediately after the
existing shebang in docker/etc/nginx/generate-site.py: insert the SPDX short
identifier line ("SPDX-License-Identifier: GPL-3.0-or-later"), a copyright line
with the project/organization and year, and the standard short GPL header notice
(one- or two-line copyright/permission notice) so the file clearly declares
GPL-3.0-or-later licensing; keep the shebang as the first line and place the
header directly below it.
In `@docker/etc/supervisor/conf.d/celery-backup.conf`:
- Around line 1-7: Add a supervisor "priority" setting to the celery-backup
program block to ensure deterministic startup order alongside the other Celery
workers; update the [program:celery-backup] configuration (the block defining
environment, command, autorestart, stdout_logfile, redirect_stderr) to include a
priority value consistent with the existing sequence (e.g., set priority=700 to
follow 400/500/600 pattern).
In `@docker/etc/supervisor/conf.d/celery-beat.conf`:
- Around line 1-7: Add a priority setting to the [program:celery-beat]
supervisor block so its startup order is explicit and consistent with the other
Celery configs; update the celery-beat section (program:celery-beat, the block
containing the command "/app/venv/bin/celery beat ...") to include a numeric
priority (for example 700 to match celery-translate.conf or another value chosen
to place beat at the correct order relative to celery-single.conf which uses
400).
In `@docker/etc/supervisor/conf.d/celery-single.conf`:
- Around line 1-8: The Celery command syntax is inconsistent across configs;
standardize them by using the explicit "--app=weblate.utils" form (as used in
the [program:celery] config) instead of relying on the CELERY_APP env var in
other files (e.g., celery-translate.conf). Update the command line in
celery-translate.conf (and any other Celery supervisor confs) to use "celery
--app=weblate.utils worker ..." preserving existing options/queues and any
"%(ENV_... )s" placeholders so all configs use the same explicit app invocation.
In `@docker/etc/supervisor/conf.d/celery-translate.conf`:
- Around line 1-8: Remove the CELERY_APP=weblate.utils environment assignment
from the [program:celery-translate] environment and update the command string
used by the celery worker to explicitly pass the app via -A (e.g., change the
command from the current "/app/venv/bin/celery worker ..." to include "-A
weblate.utils" so it reads like "celery -A weblate.utils worker ..."); keep the
existing hostname, loglevel, queues, pool, prefetch-multiplier and
%(ENV_CELERY_TRANSLATE_OPTIONS)s tokens and ensure stdout/stderr and autorestart
settings remain unchanged.
In `@docker/requirements.txt`:
- Around line 1-66: The requirements.txt duplicates and pins many deps already
declared in pyproject.toml, risking version drift; remove the hand-maintained
pins from docker/requirements.txt and instead: update the Dockerfile to install
from pyproject.toml extras (or from a generated lock file), and add a build step
in CI/packaging that generates a frozen requirements.txt (e.g., pip-compile /
"uv pip compile" or equivalent) from pyproject.toml so the Docker build consumes
that generated lockfile; ensure the repo documents the chosen flow and remove or
mark docker/requirements.txt as generated to avoid manual edits.
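One way this flow could be wired into CI (a sketch; the step name and output path are assumptions, and `uv pip compile` is uv's pip-tools-compatible lock command):

```yaml
# CI step sketch: regenerate the Docker lockfile from pyproject.toml
# and fail the build if the committed copy has drifted
- name: Compile Docker requirements
  run: |
    uv pip compile pyproject.toml -o docker/requirements.txt
    git diff --exit-code docker/requirements.txt
```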
In `@pyproject.toml`:
- Around line 490-496: The pyproject.toml contains an empty [tool.pytest]
section that is redundant because [tool.pytest.ini_options] already exists;
either remove the empty [tool.pytest] header or replace it with a clarifying
comment indicating it's intentionally reserved for future pytest settings so
readers won't be confused—look for the [tool.pytest] and
[tool.pytest.ini_options] headers and perform the removal or add a short inline
comment explaining intent.
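For instance, keeping only the populated section (the value shown is illustrative and assumes pytest-django; the repo's real settings stay as they are):

```toml
# [tool.pytest] header removed; everything pytest needs lives here
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "weblate.settings_test"
```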
In `@weblate/trans/autobatchtranslate.py`:
- Around line 108-122: Add an explicit return type for _prepare_batch_request to
improve clarity and typing (e.g., Optional[Tuple[QuerySet, Dict[str,str],
Dict[int,UnitType], Set[str]]]) or replace the tuple return with a small
dataclass/namedtuple (e.g., BatchRequest = dataclass(...)) and return that
instead; update the function signature for _prepare_batch_request and adjust
callers that unpack units_qs, units_data, unit_map, expected_keys to use the new
typed return or fields to keep typesafe usage.
- Around line 62-105: Update the _resolve_openrouter_config function signature
to include a return type hint of a three-item optional string tuple (e.g. ->
tuple[Optional[str], Optional[str], Optional[str]] or -> (str | None, str |
None, str | None)) and import typing.Optional if using tuple[Optional[...]];
keep the existing logic and return values unchanged, only add the type
annotation to the function definition to satisfy the project type-hinting
guideline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fa20452e-7956-480e-8687-f7b7104c51d0
⛔ Files ignored due to path filters (2)
- `docker/etc/nginx/ffdhe2048.pem` is excluded by `!**/*.pem`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (67)
`.dockerignore`, `.github/workflows/api.yml`, `.github/workflows/cd.yml`, `.github/workflows/dependency-review.yml`, `.github/workflows/docs.yml`, `.github/workflows/fossa.yml`, `.github/workflows/issue-closed.yml`, `.github/workflows/issue-commented.yml`, `.github/workflows/issue-labeled.yml`, `.github/workflows/issue-milestoned.yml`, `.github/workflows/issue-opened.yml`, `.github/workflows/issue-reopened.yaml`, `.github/workflows/issue-stale.yml`, `.github/workflows/labels.yml`, `.github/workflows/licenses-update.yml`, `.github/workflows/linkcheck.yml`, `.github/workflows/macos.yml`, `.github/workflows/migrations.yml`, `.github/workflows/milestone-closed.yml`, `.github/workflows/pr-stale.yml`, `.github/workflows/pull_requests.yaml`, `.github/workflows/schema-update.yml`, `.github/workflows/setup.yml`, `.github/workflows/test.yml`, `.github/workflows/unicodechars.yml`, `.github/workflows/uv-upgrade.yml`, `.github/workflows/yarn-update.yml`, `.gitignore`, `.pre-commit-config.yaml`, `REUSE.toml`, `docker/Dockerfile`, `docker/docker-compose.yml`, `docker/environment.example`, `docker/etc/nginx/default.tpl`, `docker/etc/nginx/generate-site.py`, `docker/etc/nginx/nginx.conf`, `docker/etc/supervisor/conf.d/celery-backup.conf`, `docker/etc/supervisor/conf.d/celery-beat.conf`, `docker/etc/supervisor/conf.d/celery-celery.conf`, `docker/etc/supervisor/conf.d/celery-memory.conf`, `docker/etc/supervisor/conf.d/celery-notify.conf`, `docker/etc/supervisor/conf.d/celery-single.conf`, `docker/etc/supervisor/conf.d/celery-translate.conf`, `docker/etc/supervisor/conf.d/check.conf`, `docker/etc/supervisor/conf.d/web.conf`, `docker/etc/supervisor/supervisord.conf`, `docker/health_check`, `docker/requirements.txt`, `docker/start`, `pyproject.toml`, `scripts/backup/backup_from_server.sh`, `scripts/backup/dump_database.sh`, `scripts/backup/recalculate_stats.py`, `scripts/backup/sync_database_to_files.py`, `scripts/backup/update_push_urls.py`, `start-weblate.sh`, `stop-weblate.sh`, `weblate/boost_endpoint/services.py`, `weblate/boost_endpoint/views.py`, `weblate/formats/asciidoc.py`, `weblate/formats/quickbook.py`, `weblate/settings_docker.py`, `weblate/trans/autobatchtranslate.py`, `weblate/trans/models/component.py`, `weblate/trans/tasks.py`, `weblate/utils/openrouter_translator.py`, `weblate/utils/quickbook.py`
💤 Files with no reviewable changes (24)
- .github/workflows/dependency-review.yml
- .github/workflows/issue-opened.yml
- .github/workflows/milestone-closed.yml
- .github/workflows/fossa.yml
- .github/workflows/issue-milestoned.yml
- .github/workflows/issue-labeled.yml
- .github/workflows/pull_requests.yaml
- .github/workflows/labels.yml
- .github/workflows/issue-closed.yml
- .github/workflows/licenses-update.yml
- .github/workflows/pr-stale.yml
- .github/workflows/schema-update.yml
- .github/workflows/issue-commented.yml
- .github/workflows/linkcheck.yml
- .github/workflows/issue-reopened.yaml
- .github/workflows/api.yml
- .github/workflows/macos.yml
- .github/workflows/yarn-update.yml
- .github/workflows/docs.yml
- .github/workflows/issue-stale.yml
- .github/workflows/uv-upgrade.yml
- .github/workflows/setup.yml
- .github/workflows/migrations.yml
- .github/workflows/unicodechars.yml
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/specs/openapi.yaml`:
- Around line 72402-72403: The OpenAPI server entry currently publishes an
insecure URL (the server block with url: http://localhost:8000 and description:
Weblate); replace or augment it with a TLS endpoint (e.g., url:
https://weblate.example.com) and mark the http localhost URL explicitly as
dev-only (update description to "Weblate (dev only, HTTP)" or similar) so that
token-auth endpoints list a secure HTTPS server for production while retaining
localhost HTTP for local development only. Ensure the servers array contains the
HTTPS entry used for prod and keep the http://localhost:8000 entry only with an
explicit dev-only description.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5be13ed3-a57c-4017-b6f8-141b0b03b297
📒 Files selected for processing (1)
docs/specs/openapi.yaml
Remove repo-root .dockerignore, keep start/stop-weblate.sh local-only via .gitignore, and drop stale REUSE annotation for the removed file.
Regenerated with DJANGO_SETTINGS_MODULE=weblate.settings_test, CI_* database env, and migrations applied so the spec matches api.yml / spectacular on CI.
♻️ Duplicate comments (1)
docs/specs/openapi.yaml (1)
72402-72403: ⚠️ Potential issue | 🟠 Major: Add a TLS production server entry; the current server list still defaults to cleartext HTTP.

Line 72402 documents only `http://localhost:8000` with a non-dev-specific description, which is unsafe guidance for token-auth API usage. Please add an HTTPS production URL and mark localhost as dev-only.

🔧 Proposed fix

```diff
 servers:
-- url: http://localhost:8000
-  description: Weblate
+- url: https://weblate.example.com
+  description: Weblate (production, TLS)
+- url: http://localhost:8000
+  description: Weblate (dev only, HTTP)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/specs/openapi.yaml` around lines 72402-72403: the OpenAPI servers list currently contains only the entry with url: http://localhost:8000 and description "Weblate"; add a production TLS server entry (e.g., url: https://weblate.example.com) above the localhost entry and update descriptions to distinguish environments (mark the existing http://localhost:8000 entry as development-only). Modify the servers array so the first/primary server is the HTTPS production URL, include a clear description like "Weblate (production, TLS)" and change the localhost description to "Weblate (development only, HTTP)" to avoid suggesting cleartext usage for token-auth APIs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@docs/specs/openapi.yaml`:
- Around line 72402-72403: The OpenAPI servers list currently contains only the
entry with url: http://localhost:8000 and description "Weblate"; add a
production TLS server entry (e.g., url: https://weblate.example.com) above the
localhost entry and update descriptions to distinguish environments (mark the
existing http://localhost:8000 entry as development-only). Modify the servers
array so the first/primary server is the HTTPS production URL, include a clear
description like "Weblate (production, TLS)" and change the localhost
description to "Weblate (development only, HTTP)" to avoid suggesting cleartext
usage for token-auth APIs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f6bd1fa4-2c61-470a-bb74-2c525d6eeb64
📒 Files selected for processing (1)
docs/specs/openapi.yaml
Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

ℹ️ You can also turn on project coverage checks and project coverage reporting on Pull Request comments.

Thanks for integrating Codecov - We've got you covered ☂️
The API Lint job inherits DJANGO_SETTINGS_MODULE=weblate.settings_test, but docs/specs/openapi.yaml is produced with the default weblate.settings (localhost). Override the Generate OpenAPI step so the regenerated spec matches the committed file.
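A sketch of such an override in the workflow (the step name, spectacular invocation, and file path are assumptions; only the `env` override is the point):

```yaml
- name: Generate OpenAPI
  env:
    DJANGO_SETTINGS_MODULE: weblate.settings_test
  run: uv run ./manage.py spectacular --file docs/specs/openapi.yaml
- name: Check spec is up to date
  run: git diff --exit-code docs/specs/openapi.yaml
```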
- settings_example: DATABASES honor CI_* like the CI runners; SITE_DOMAIN and DATA_DIR come from DJANGO_SITE_DOMAIN / WEBLATE_DATA_DIR for a reproducible spec.
- API workflow and docs/Makefile use weblate.settings_example with the same env as local (no gitignored settings.py); fixes exit 2 when weblate.settings pointed at Postgres :5432 while CI maps :60000.
- Refresh openapi.yaml (VcsEnum without the optional github backend when absent).
- Post-process webhooks: set operationId and remove DRF error examples from webhook request bodies (they referenced the messaging schema incorrectly).
- Align ProjectMachinerySettings OpenApiExample with ProjectMachinerySettings schema.
- Regenerate docs/specs/openapi.yaml.
actions/checkout defaults to the merge ref for pull_request. Regenerating openapi on that tree differs from docs/specs/openapi.yaml committed on the branch; git diff --exit-code then fails. Use the PR head SHA so generation and commit are the same snapshot.
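The fix typically looks like pinning the checkout to the PR head commit (standard GitHub Actions syntax):

```yaml
- uses: actions/checkout@v4
  with:
    # pull_request events check out a synthetic merge ref by default;
    # pin to the PR head so regeneration matches the committed spec
    ref: ${{ github.event.pull_request.head.sha }}
```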
Add gerrit, mercurial, and subversion to VcsEnum and vcs field descriptions. These backends were missing because git-svn, mercurial, and git-review were not installed locally when the spec was last generated.
…eam v5.16.1

Remove custom Redocly post-hooks (ensure_operation_summaries, fix_webhook_operations_for_redocly) and the serializer example change that diverged from upstream. Regenerate openapi.yaml accordingly. Redocly lint still passes (warnings only, no errors).
- Address SC2012/SC2162/SC2181/SC1091 in scripts/backup/restore_to_local.sh
- Regenerate docs/specs/openapi.yaml: add github VCS and align servers URL with settings_test output
…ation CI spectacular run does not emit github in VcsEnum; keeping it caused git diff to fail after make update-openapi.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores