Support syncing data stores in the DB with those defined in process_group.json files #1052
Conversation
Walkthrough

The recent updates introduce enhancements to data management and file handling in the project. New data store models and their synchronization logic have been added, improving how data is managed internally. Additionally, a new file system utility method has been introduced to identify specific file types, streamlining file processing tasks.
Review Status
Actionable comments generated: 2
Configuration used: .coderabbit.yaml
Files selected for processing (2)
- spiffworkflow-backend/src/spiffworkflow_backend/services/data_setup_service.py (3 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/services/file_system_service.py (1 hunks)
Additional comments: 4

spiffworkflow-backend/src/spiffworkflow_backend/services/data_setup_service.py (3)
- 6-10: Ensure the newly added imports are utilized within the file. Unused imports should be removed to maintain code cleanliness.
- 35-35: Initialization of the `all_data_store_specifications` dictionary is correct and follows the specified objectives.
- 99-99: The call to `_sync_data_store_models_with_specifications` correctly uses the aggregated specifications. Ensure that any exceptions raised within this method are appropriately handled or logged.

spiffworkflow-backend/src/spiffworkflow_backend/services/file_system_service.py (1)
- 65-67: The addition of the `is_process_group_json_file` method is correctly implemented and aligns with the PR objectives to identify `process_group.json` files based on their extension.
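For illustration, a minimal sketch of what a helper like `is_process_group_json_file` could look like; this is a hypothetical reconstruction based on the review comment above, not the actual code in `file_system_service.py`, which may match on the file differently.

```python
import os


def is_process_group_json_file(file_path: str) -> bool:
    """Return True if the path points at a process_group.json file.

    Hypothetical sketch: checks only the final path component, so it
    works regardless of which process group directory the file sits in.
    """
    return os.path.basename(file_path) == "process_group.json"
```
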
Review Status
Actionable comments generated: 0
Configuration used: .coderabbit.yaml
Files selected for processing (1)
- spiffworkflow-backend/src/spiffworkflow_backend/services/data_setup_service.py (3 hunks)
Files skipped from review as they are similar to previous changes (1)
- spiffworkflow-backend/src/spiffworkflow_backend/services/data_setup_service.py
Work on #1037 - this adds support for syncing the data store database tables with the data store specifications found in various process_group.json files on disk. During a run of `save_all_process_models`, the data store specifications are aggregated by a compound key of `(type, location, identifier)`, then existing models are aggregated by the same compound key. From there, set operations are used to identify what needs to be inserted, updated, or deleted. As noted before, data does not migrate, just the specifications.
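The diff step described above can be sketched with plain set operations. This is an illustrative sketch assuming both sides have already been aggregated into dicts keyed by the `(type, location, identifier)` compound key; the function name and shapes are hypothetical, not taken from `data_setup_service.py`.

```python
# Hypothetical sketch: given specifications found on disk and models
# currently in the DB, both keyed by (type, location, identifier),
# decide which keys to insert, update, or delete.
Key = tuple[str, str, str]


def diff_data_stores(
    specifications: dict[Key, dict],
    existing_models: dict[Key, dict],
) -> tuple[set[Key], set[Key], set[Key]]:
    spec_keys = set(specifications)
    model_keys = set(existing_models)
    to_insert = spec_keys - model_keys   # on disk but not yet in the DB
    to_delete = model_keys - spec_keys   # in the DB but no longer on disk
    to_update = spec_keys & model_keys   # present in both; refresh fields
    return to_insert, to_update, to_delete
```

A caller would then create a model row for each key in `to_insert`, refresh fields for each key in `to_update`, and drop rows for each key in `to_delete` - touching only the specifications, never the stored data itself.
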