Evaluation: Migration Cleanups #436
Conversation
Walkthrough

Converts several JSON columns to JSONB, adjusts foreign-key on-delete semantics (notably evaluation_run↔batch_job and the OpenAI conversation FKs) via a new Alembic revision, tightens nullability/indexing for batch_job and evaluation_run fields, and removes some inline Alembic comments/formatting.

Changes

Sequence Diagram(s): Skipped — changes are schema/migration and do not introduce new runtime control flow.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Areas needing extra attention:
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Codecov Report
✅ All modified and coverable lines are covered by tests.
📢 Thoughts on this report? Let us know!
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py (1 hunks)
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (1 hunks)
- backend/app/models/batch_job.py (2 hunks)
- backend/app/models/evaluation.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
- backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py
- backend/app/models/evaluation.py
- backend/app/models/batch_job.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
- backend/app/models/evaluation.py
- backend/app/models/batch_job.py
🔇 Additional comments (3)
backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (1)
173-197: Embedding FK stays aligned with later migrations. Thanks for keeping `embedding_batch_job_id` nullable and consistent with the follow-up constraint tweaks — this keeps the base revision compatible with 633e69806207.
backend/app/models/evaluation.py (1)
94-195: Model changes match the new schema. Nice job moving these payload fields onto JSONB and wiring up the status indexes — this lines up with the migrations cleanly.
backend/app/models/batch_job.py (1)
18-119: Schema metadata and indexes look solid. The richer column metadata plus the new provider_status composite indexes will make querying and migrations saner — looks good.
```python
op.drop_constraint(
    "openai_conversation_organization_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.drop_constraint(
    "openai_conversation_project_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.create_foreign_key(
    "evaluation_run_batch_job_id_fkey",
    "evaluation_run",
    "batch_job",
    ["batch_job_id"],
    ["id"],
)
op.create_foreign_key(
    "fk_evaluation_run_embedding_batch_job_id",
    "evaluation_run",
    "batch_job",
    ["embedding_batch_job_id"],
    ["id"],
)
```
Do not leave openai_conversation without foreign keys.
The upgrade drops the openai_conversation→organization/project FKs but never recreates them, so the table permanently loses referential integrity, even though the downgrade adds them back. This will allow orphaned rows as soon as the migration runs. Please recreate the FKs in the upgrade after re-adding the evaluation_run constraints.
```diff
 op.create_foreign_key(
     "fk_evaluation_run_embedding_batch_job_id",
     "evaluation_run",
     "batch_job",
     ["embedding_batch_job_id"],
     ["id"],
 )
+op.create_foreign_key(
+    "openai_conversation_organization_id_fkey1",
+    "openai_conversation",
+    "organization",
+    ["organization_id"],
+    ["id"],
+)
+op.create_foreign_key(
+    "openai_conversation_project_id_fkey1",
+    "openai_conversation",
+    "project",
+    ["project_id"],
+    ["id"],
+)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
op.drop_constraint(
    "openai_conversation_organization_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.drop_constraint(
    "openai_conversation_project_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.create_foreign_key(
    "evaluation_run_batch_job_id_fkey",
    "evaluation_run",
    "batch_job",
    ["batch_job_id"],
    ["id"],
)
op.create_foreign_key(
    "fk_evaluation_run_embedding_batch_job_id",
    "evaluation_run",
    "batch_job",
    ["embedding_batch_job_id"],
    ["id"],
)
op.create_foreign_key(
    "openai_conversation_organization_id_fkey1",
    "openai_conversation",
    "organization",
    ["organization_id"],
    ["id"],
)
op.create_foreign_key(
    "openai_conversation_project_id_fkey1",
    "openai_conversation",
    "project",
    ["project_id"],
    ["id"],
)
```
if this has been fixed, can you resolve this comment
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check the full contents of the migration file to see if the issue has been fixed
cat backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py
```
Length of output: 3374
@nishika26, I've checked the migration file and the issue has NOT been fixed.
The upgrade() function still drops the openai_conversation foreign keys (lines 42-51) but never recreates them. Only the evaluation_run foreign keys are being recreated (lines 52-64).
This means after the migration runs, the openai_conversation table will permanently lose referential integrity to organization and project tables, allowing orphaned records.
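To see concretely why the missing constraints matter: without a foreign key, the database accepts rows that point at parents that don't exist. A minimal sketch (SQLite in place of PostgreSQL for portability; table names and IDs are illustrative only, not the project's schema):

```python
# Without a foreign key, orphaned child rows are silently accepted;
# with one, the same insert is rejected. Illustrative tables only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE organization (id INTEGER PRIMARY KEY)")

# With the FK in place, an insert referencing a missing parent fails.
conn.execute("""
    CREATE TABLE conversation_with_fk (
        id INTEGER PRIMARY KEY,
        organization_id INTEGER REFERENCES organization(id)
    )
""")
try:
    conn.execute("INSERT INTO conversation_with_fk VALUES (1, 999)")
    fk_rejected = False
except sqlite3.IntegrityError:
    fk_rejected = True

# Without the FK (as after the broken upgrade), the same insert succeeds.
conn.execute("""
    CREATE TABLE conversation_no_fk (
        id INTEGER PRIMARY KEY,
        organization_id INTEGER
    )
""")
conn.execute("INSERT INTO conversation_no_fk VALUES (1, 999)")
orphans = conn.execute("SELECT COUNT(*) FROM conversation_no_fk").fetchone()[0]

print(fk_rejected, orphans)  # True 1
```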
What needs to be fixed:
Add these lines at the end of the upgrade() function (after line 64):
```python
op.create_foreign_key(
    "openai_conversation_organization_id_fkey1",
    "openai_conversation",
    "organization",
    ["organization_id"],
    ["id"],
)
op.create_foreign_key(
    "openai_conversation_project_id_fkey1",
    "openai_conversation",
    "project",
    ["project_id"],
    ["id"],
)
```

Without this fix, I cannot resolve this comment as it represents a critical data integrity issue.
oh sorry coderabbit, my comment was towards @AkhileshNegi
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
```python
op.drop_constraint(
    "openai_conversation_organization_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.drop_constraint(
    "openai_conversation_project_id_fkey1",
    "openai_conversation",
    type_="foreignkey",
)
op.create_foreign_key(
    "evaluation_run_batch_job_id_fkey",
    "evaluation_run",
    "batch_job",
    ["batch_job_id"],
    ["id"],
)
op.create_foreign_key(
    "fk_evaluation_run_embedding_batch_job_id",
    "evaluation_run",
    "batch_job",
    ["embedding_batch_job_id"],
    ["id"],
)
```
if this has been fixed, can you resolve this comment
```python
    ),
    description="S3 URL of raw batch output file",
)
total_items: int = Field(
```
For all the Fields: you don't need to use sa_column here. SQLModel already infers the SQLAlchemy column from the type annotation + Field() metadata.

```python
total_items: int = Field(default=0, description="Total number of items in the batch")
```
Actionable comments posted: 1
♻️ Duplicate comments (2)
backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py (2)
21-34: Verify JSON to JSONB conversion with existing data. PostgreSQL can automatically convert JSON to JSONB data, but you should verify this migration will succeed if the `evaluation_run` table contains existing records. The conversion is generally safe, but JSONB has stricter validation and a different storage format.

```shell
#!/bin/bash
# Check if there are any data migration scripts or tests for this conversion
fd -e py -e sql | xargs rg -l "JSON.*JSONB|jsonb.*migration"
# Search for evaluation_run data seeding or fixtures
rg -n "evaluation_run" --type=py -g "*seed*" -g "*fixture*" -g "*test*" | head -20
```
43-66: Critical: `openai_conversation` foreign keys are permanently lost. The `upgrade()` function drops the `openai_conversation` foreign key constraints (lines 43-52) but never recreates them. Only the `evaluation_run` foreign keys are recreated (lines 53-66). This permanently removes referential integrity for the `openai_conversation` table, allowing orphaned records to accumulate when organizations or projects are deleted.
The `downgrade()` function correctly recreates these constraints (lines 107-120), confirming they should exist.
Apply this diff to recreate the `openai_conversation` foreign keys after the `evaluation_run` foreign keys:

```diff
 op.create_foreign_key(
     "fk_evaluation_run_embedding_batch_job_id",
     "evaluation_run",
     "batch_job",
     ["embedding_batch_job_id"],
     ["id"],
 )
+op.create_foreign_key(
+    "openai_conversation_organization_id_fkey1",
+    "openai_conversation",
+    "organization",
+    ["organization_id"],
+    ["id"],
+)
+op.create_foreign_key(
+    "openai_conversation_project_id_fkey1",
+    "openai_conversation",
+    "project",
+    ["project_id"],
+    ["id"],
+)
```
🧹 Nitpick comments (1)
backend/app/models/batch_job.py (1)
27-124: Consider whether explicit `sa_column` definitions are necessary.
A past review comment (line 109) noted that SQLModel already infers SQLAlchemy columns from type annotations and `Field()` metadata, questioning the necessity of explicit `sa_column` usage. While these definitions provide explicit control over the database schema (comments, nullability, indexing), they add significant verbosity.
If this explicit approach is intentional for this PR's goal of ensuring model-schema consistency, it's acceptable. However, for future work, consider whether the same result can be achieved more concisely using SQLModel's inference capabilities.
Example of a more concise approach (using the `provider` field as an example):

```python
provider: str = Field(
    description="LLM provider name (e.g., 'openai', 'anthropic')",
    # SQLModel infers: Column(String, nullable=False)
)
```

versus the current:

```python
provider: str = Field(
    sa_column=Column(
        String,
        nullable=False,
        comment="LLM provider name (e.g., 'openai', 'anthropic')",
    ),
    description="LLM provider name (e.g., 'openai', 'anthropic')",
)
```

Based on learnings
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py (1 hunks)
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (2 hunks)
- backend/app/models/batch_job.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py
- backend/app/models/batch_job.py
- backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
backend/app/models/batch_job.py
🧠 Learnings (1)
📚 Learning: 2025-10-08T12:05:01.317Z
Learnt from: CR
Repo: ProjectTech4DevAI/ai-platform PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T12:05:01.317Z
Learning: Applies to backend/app/models/**/*.py : Define SQLModel entities (database tables and domain objects) in backend/app/models/
Applied to files:
backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py
🧬 Code graph analysis (1)
backend/app/alembic/versions/633e69806207_evaluation_update_constraints.py (1)
backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (2)
upgrade (21-224), downgrade (227-244)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (4)
backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (2)
8-12: LGTM: Import reordering improves consistency.
The explicit import of `sqlalchemy as sa` earlier in the file and the reordering of other imports are cosmetic improvements that don't affect functionality.
173-173: The review comment conflates columns from different tables and misidentifies an inconsistency. The batch_job table columns (lines 26-87) are in a separate table and do have descriptive comments. However, within the evaluation_run table itself, both `batch_job_id` (line 172) and `embedding_batch_job_id` (line 173) are consistently formatted as simple one-liners without comments. There is no inconsistency to address — the two foreign key columns in the same table are already treated uniformly.
Likely an incorrect or invalid review comment.
backend/app/models/batch_job.py (2)
19-22: LGTM: Composite indices improve query performance.These composite indices on
(provider_status, organization_id)and(provider_status, project_id)will optimize common query patterns for filtering batch jobs by status within specific organizations or projects.
127-132: LGTM: CASCADE deletion behavior maintains referential integrity.
The addition of `ondelete="CASCADE"` ensures that when an organization or project is deleted, related batch jobs are automatically removed, preventing orphaned records.
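The CASCADE behavior described above can be sketched with a toy schema (SQLite for portability; the column set is a simplified, hypothetical version of the real tables):

```python
# ON DELETE CASCADE: deleting the parent row removes its children too.
# Illustrative schema only, not the project's actual tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for FK enforcement in SQLite
conn.execute("CREATE TABLE organization (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE batch_job (
        id INTEGER PRIMARY KEY,
        organization_id INTEGER NOT NULL
            REFERENCES organization(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO organization VALUES (1)")
conn.execute("INSERT INTO batch_job VALUES (10, 1)")

conn.execute("DELETE FROM organization WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM batch_job").fetchone()[0]
print(remaining)  # 0 -- the child row was removed with its parent
```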
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/app/models/batch_job.py (1)
19-22: Revise composite index strategy to align with actual query patterns.
The query pattern in `get_batches_by_type()` filters by `job_type` first, then optionally by `organization_id`, `project_id`, and finally `provider_status` (lines 172-181, backend/app/crud/batch_job.py). However, the composite indexes are defined with `provider_status` as the leading column — the opposite order. Composite indexes should order columns by their usage in WHERE clauses, with the most frequently filtered columns first, so the optimizer can use them efficiently.
Consider restructuring indexes to match query patterns:
- `(job_type, organization_id)` and `(job_type, project_id)` would better serve the actual filtering order
- Alternatively, evaluate whether `provider_status` queries occur in isolation elsewhere to justify the current index design
The existing `job_type` field index and the individual indexes on `organization_id`/`project_id` are appropriate.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/app/models/batch_job.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
backend/app/models/batch_job.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
backend/app/models/batch_job.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (5)
backend/app/models/batch_job.py (5)
27-35: LGTM!
The addition of `index=True` on the `job_type` field is appropriate for query performance when filtering batch jobs by type. The descriptions enhance code documentation.

38-45: LGTM!
Using `sa_column=Column(JSONB, nullable=False)` is the correct approach here. While a past review comment mentioned that `sa_column` is not always necessary, JSONB is a PostgreSQL-specific type that SQLModel cannot automatically infer from `dict[str, Any]`. JSONB provides better query performance and indexing capabilities compared to the plain JSON type.

48-78: LGTM!
The explicit `default=None` on Optional fields is technically redundant (SQLModel infers None for Optional types), but it improves code clarity and is a harmless practice. The enhanced descriptions significantly improve documentation.

81-85: LGTM!
Using `sa_column=Column(Text, nullable=True)` is appropriate for error messages that might be lengthy. The `Text` type has no length limit in PostgreSQL, unlike `VARCHAR`, making it suitable for storing potentially long error messages or stack traces.
88-93: Based on my research, I have gathered sufficient information to address the review comment. The web searches confirm that:
- CASCADE with ondelete="CASCADE" is a legitimate and recommended SQLModel/SQLAlchemy pattern, and it's a good idea to have cascade_delete configured alongside nullable=False on foreign keys
- When adding ondelete="CASCADE" to an existing foreign key, alembic will drop and recreate the constraint
- CASCADE delete is a standard pattern in real-world batch systems for cleanup operations
The review comment raised a valid concern about verifying CASCADE is intentional, but:
- CASCADE with nullable=False is a standard, best-practice pattern
- The code snippet shows both fields correctly configured
- If this is a "cleanup PR aligning model with existing schema," CASCADE aligns with typical batch job handling
- No evidence of audit/history concerns specific to batch_job model was found in context
The original review comment's request for verification is somewhat addressed through standard SQLModel/SQLAlchemy practices, but without access to the actual database state or migration files, I cannot definitively confirm whether this represents:
- Alignment with existing database constraints, or
- A breaking change introducing new cascade behavior
Given the constraints and the inability to examine the actual codebase files directly in this environment, I should provide guidance on verification.
Verify CASCADE delete behavior aligns with application requirements and confirm it matches existing database schema through migration review or database inspection.
The ondelete="CASCADE" pattern is appropriate for automatically deleting related records when a parent is deleted, and setting nullable=False alongside CASCADE is a standard configuration. However, confirm:
- Existing database constraints already implement CASCADE (if aligning existing schema) or expect application-level cascade handling
- Batch job deletion aligns with your data retention and audit policies
- If this PR introduces new CASCADE behavior (not in current schema), migration files properly drop and recreate affected foreign keys with the new constraint
If batch jobs serve audit/history purposes, consider whether CASCADE deletion is appropriate or if RESTRICT should be used instead to prevent accidental data loss.
Actionable comments posted: 0
🧹 Nitpick comments (3)
backend/app/models/evaluation.py (3)
94-97: JSONB metadata column aligns with Postgres usage; consider a DB-level default.
Switching `dataset_metadata` to `JSONB` with `nullable=False` and a Python `default_factory=dict` correctly models a required JSONB column on the ORM side. If you expect inserts outside SQLModel/ORM (raw SQL, data loads), you may also want a database-level default of `{}` in the Alembic migration to fully match this non-nullable contract; otherwise this is fine as-is.
148-151: Consistent JSONB usage for `config` and `score`.
Migrating both `config` (non-nullable with `default_factory=dict`) and `score` (nullable) to JSONB keeps the model consistent with the evaluation dataset metadata and the Postgres schema, and avoids future JSON↔JSONB drift in Alembic. As with `dataset_metadata`, consider a DB-level default for `config` if you rely on non-ORM inserts, but the current model definition is otherwise solid.
Also applies to: 193-196
163-177: `SET NULL` semantics and the dedicated embedding batch job FK look correct.
Changing `batch_job_id` to `ondelete="SET NULL"` and introducing `embedding_batch_job_id` with the same `SET NULL` behavior matches the idea that evaluation runs should survive even if their backing batch jobs are cleaned up. The corresponding relationship definitions using explicit `foreign_keys` keep SQLAlchemy's relationship resolution unambiguous. If you commonly query by `batch_job_id` or `embedding_batch_job_id`, you might later consider adding indexes on these columns via `index=True` or explicit `Index` entries, but that's a tuning/usage decision rather than a blocker.
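A toy illustration of the SET NULL semantics discussed here (SQLite for portability; simplified, hypothetical columns rather than the project's real schema):

```python
# ON DELETE SET NULL: deleting the parent nulls out the child's FK,
# so the evaluation run survives its batch job. Illustrative tables only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for FK enforcement in SQLite
conn.execute("CREATE TABLE batch_job (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE evaluation_run (
        id INTEGER PRIMARY KEY,
        batch_job_id INTEGER
            REFERENCES batch_job(id) ON DELETE SET NULL
    )
""")
conn.execute("INSERT INTO batch_job VALUES (1)")
conn.execute("INSERT INTO evaluation_run VALUES (100, 1)")

conn.execute("DELETE FROM batch_job WHERE id = 1")
row = conn.execute("SELECT id, batch_job_id FROM evaluation_run").fetchone()
print(row)  # (100, None) -- the run survives; its FK is nulled out
```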
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py (4 hunks)
- backend/app/models/evaluation.py (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/alembic/versions/6fe772038a5a_create_evaluation_run_table.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
backend/app/models/evaluation.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
backend/app/models/evaluation.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (2)
backend/app/models/evaluation.py (2)
5-6: JSONB and Index imports are appropriate for the model changes.
Bringing in `Index` and PostgreSQL `JSONB` matches their later usage in `__table_args__` and the JSONB-backed columns, and keeps the imports lean and explicit. No changes needed here.
136-139: Composite indexes on (status, organization_id/project_id) look well-targeted.
Defining `idx_eval_run_status_org` and `idx_eval_run_status_project` directly in `__table_args__` is a good fit if you frequently filter evaluations by `status` within an organization or project. This should help avoid future migration noise while improving query performance on common listing endpoints.
Summary
When creating a new migration, Alembic populates it with a lot of changes due to inconsistencies between the models and the DB schema. This PR addresses those issues.
Checklist
Before submitting a pull request, please ensure that you mark these task.
`fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.
Summary by CodeRabbit
Chores
Refactor