Modify Deployment for multi data source searchengine #448
Conversation
sbesson
left a comment
A few comments. Also two general process questions:
- what happens if the searchengine_backup volume is created but not cloned from a previous volume? Will the playbook still initialise an empty cluster, or will it fail during the restore_elasticsearch_data command? This will be immediately relevant for the creation of prod128, as the volume will not be cloned from a previous version.
- what is the process for creating a new backup of the search cluster? Does that happen automatically after the indexer is executed, or is that a separate command? (a hedged sketch of one possible approach follows below)
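For illustration only, a minimal Ansible sketch of one way a new backup snapshot could be triggered via the Elasticsearch snapshot API; the repository name `searchengine_backup` and the snapshot naming scheme are assumptions, not part of this PR:

```yaml
# Hedged sketch: create a dated snapshot of the search engine indices.
# Assumes a snapshot repository named "searchengine_backup" is already registered.
- name: Create a new snapshot of the search engine indices
  uri:
    url: "http://localhost:9200/_snapshot/searchengine_backup/snapshot_{{ ansible_date_time.date }}?wait_for_completion=true"
    method: PUT
    status_code: 200
```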
We will probably need two companion PRs:
- one against submission workflow with the indexing/backup commands if they are different from the current one
- one against the deployment workflow to update the creation of new volumes and include the new volume to clone
I have added two new playbooks to backup and restore the search engine data.
This one should run after the deployment playbooks have completed successfully. It will check for the existence of the snapshot before running
This playbook should run just before releasing the production server.
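For illustration, a minimal Ansible sketch of the kind of snapshot-existence check described above before running a restore; the repository name `searchengine_backup` and the `restore_elasticsearch_data_command` variable are assumptions, not the actual playbook contents:

```yaml
# Hedged sketch: only restore if at least one snapshot exists in the repository.
- name: Check for an existing Elasticsearch snapshot
  uri:
    url: "http://localhost:9200/_snapshot/searchengine_backup/_all"
    method: GET
    status_code: [200, 404]
    return_content: true
  register: snapshot_check

- name: Restore the search engine data
  command: "{{ restore_elasticsearch_data_command }}"
  when:
    - snapshot_check.status == 200
    - snapshot_check.json.snapshots | length > 0
```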
Shouldn't it run automatically as part of the search engine deployment then? Is the task asynchronous? If not, how long will it typically take?
So the current indexing process for updating new studies is unchanged. Should this be executed by the person running the indexer, using the same approach?
The search engine data restore process is asynchronous, and it runs automatically after the last commit.
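For context, a minimal sketch of how such a restore can be launched asynchronously in Ansible so the playbook does not block on it; the task name and the command variable are illustrative assumptions:

```yaml
# Hedged sketch: fire-and-forget restore; poll the job separately if needed.
- name: Restore the search engine data in the background
  command: "{{ restore_elasticsearch_data_command }}"
  async: 3600   # allow up to one hour for the restore
  poll: 0       # do not block the playbook run
  register: restore_job
```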
@jburel @francesw @will-moore @dominikl what are your thoughts about the best place to integrate this into the IDR lifecycle?
sbesson
left a comment
Let's discuss the state of this PR tomorrow at the IDR weekly meeting. I think we need to clarify two questions before spinning up prod128 with this included:
- the process for creating a dump of the search engine as raised in #448 (comment). My opinion is that this task is outside the scope of these playbooks
- re-reading the diff, what would happen if the playbooks, and notably idr-02-services.yml, were re-run against an environment previously deployed? Would the restore command be re-executed, and what would be the expected state of the indexer? (one possible guard pattern is sketched below)
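For illustration, one possible guard pattern that would keep the restore from being re-executed on subsequent runs; the marker file path is an assumption, and this is not necessarily what the PR implements:

```yaml
# Hedged sketch: skip the restore if a marker from a previous run exists.
- name: Check whether a restore has already completed
  stat:
    path: /var/lib/searchengine/.restore_done
  register: restore_marker

- name: Restore the search engine data only on first deployment
  command: "{{ restore_elasticsearch_data_command }}"
  when: not restore_marker.stat.exists

- name: Record that the restore completed
  file:
    path: /var/lib/searchengine/.restore_done
    state: touch
  when: not restore_marker.stat.exists
```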
From the earlier discussion, the agreement was to:
I will open two issues to capture the two concerns raised in #448 (review).
This PR contains the changes required to deploy the multi data source searchengine (ome/omero_search_engine#102).