fix(longRunningMigrations): make 0020 batch size small to avoid OOM #6875

noliveleger merged 1 commit into release/2.026.07
Conversation
| Filename | Overview |
|---|---|
| kobo/apps/long_running_migrations/jobs/0020_backfill_asset_version_hash.py | Comment updated to explain memory-vs-throughput trade-off; CHUNK_SIZE hardcoded to 5 and unused settings import removed. Logic is otherwise unchanged. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A([run]) --> B{get_queryset\nreturns records?}
    B -- "No records left" --> Z([Done])
    B -- "Up to CHUNK_SIZE=5\nrecords" --> C[Iterate with\niterator chunk_size=5]
    C --> D[Compute content_hash\nfor each AssetVersion]
    D --> E[bulk_update\n_content_hash field]
    E --> F[sleep 2s\navoid DB flood]
    F --> B
```
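The loop in the flowchart can be sketched in plain Python. This is a simplified stand-in, not the actual migration code: records are simulated as dicts, `compute_hash` stands in for the real content-hash computation, and the `bulk_update` plus 2-second sleep are reduced to a comment and a configurable delay.

```python
import hashlib
import time

CHUNK_SIZE = 5  # kept small on purpose: each version_content blob can be large


def compute_hash(content: str) -> str:
    # Stand-in for the real AssetVersion content-hash computation
    return hashlib.sha256(content.encode()).hexdigest()


def backfill(records: list[dict], sleep_seconds: float = 0.0) -> int:
    """Backfill `_content_hash` in small batches until no records remain."""
    updated = 0
    while True:
        # Stand-in for get_queryset(): records still missing a hash
        batch = [r for r in records if r["_content_hash"] is None][:CHUNK_SIZE]
        if not batch:
            return updated  # no records left -> done
        for record in batch:  # mirrors .iterator(chunk_size=CHUNK_SIZE)
            record["_content_hash"] = compute_hash(record["version_content"])
            updated += 1
        # The real job issues a bulk_update() here, then sleeps 2 s
        # between batches to avoid flooding the database
        time.sleep(sleep_seconds)
```

Because the outer loop re-queries until the batch comes back empty, the job is resumable: a re-run skips already-hashed records and only processes what is left.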
4f79600 to 26a0cd4
💭 Notes
`CHUNK_SIZE` in `0020_backfill_asset_version_hash.py` is kept small: each `version_content` blob can be large, so a small batch size trades throughput for a lower memory footprint.
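The memory bound comes from how chunked iteration works: at most `chunk_size` rows are materialized at a time. A minimal sketch of that behavior, mimicking Django's `QuerySet.iterator(chunk_size=...)` with a plain generator (the `stats` parameter is a hypothetical instrumentation hook, added here only to make the buffering visible):

```python
from itertools import islice


def iter_chunks(iterable, chunk_size, stats=None):
    """Yield items one at a time while buffering at most chunk_size of them,
    mimicking Django's QuerySet.iterator(chunk_size=...)."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, chunk_size))  # fetch at most chunk_size rows
        if not chunk:
            return
        if stats is not None:
            stats.append(len(chunk))  # record how many rows were buffered
        yield from chunk
```

With 12 items and `chunk_size=5`, the buffer sizes are 5, 5, then 2: memory stays bounded by the chunk size regardless of how many rows the backfill has to touch.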