Allow data sync to safely append new changes without recreating entire db every time #322
Conversation
// to update all datasets. Dates must be a YYYY-MM-DD string.
// It's recommended to pass this as an env var instead.
// Env var: LAST_METADATA_UPDATE_DATE
lastMetadataUpdateDate: '2022-01-01',
would this need to be manually updated later?
This would be overridden via an environment variable when we run the data ingestion command (`yarn seed-database-full`). The cron job we set up will set this environment variable correctly whenever we run this on a weekly basis.
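To make the precedence concrete, here is a minimal sketch (illustrative only, not the actual portal-sync code; the function and dataset field names are assumptions) of how the env var can override the hard-coded config date, and how YYYY-MM-DD strings let us pick out recently updated datasets:

```javascript
// Hard-coded fallback, as in the config snippet above.
const config = { lastMetadataUpdateDate: '2022-01-01' };

// The env var, when set (e.g. by the weekly cron job), wins over the config.
function resolveLastUpdateDate(env = process.env) {
  return env.LAST_METADATA_UPDATE_DATE || config.lastMetadataUpdateDate;
}

// YYYY-MM-DD strings sort correctly as plain strings, so a lexicographic
// comparison is enough to find datasets changed on or after the cutoff.
function needsUpdate(dataset, cutoff) {
  return dataset.metadataUpdatedAt >= cutoff;
}
```

Because the date format is lexicographically ordered, no date parsing is needed for the comparison.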
lgtm
- i don't understand why the id is now a String rather than an int
- not sure i followed all the db ingestion code but appreciate the comments
* Updated docs and ci workflow (#282)
* Closes #274 and closes #275 - updated UPDATE_ON_BOOT process to be easier to run and quit the server process so it feels more like a script
* Add matching categories in search results (#285)
* Trim whitespace from searches (#286)
* Feature/fake auth (#287)
* Added fake auth and updated readmes
* Updated README and necessary env vars
* Improved environment variable defaults
* Updated PR template
* Update bug_report.md
* Deprioritize test datasets (#288)
* new copy for toggle (#293)
* URL encode collection name (#294)
* button-border-update (#302)
* Focus failures (#307)
* changed border color
* added outline to dataset links
* show Add To Collection button if loading collection from URL (#297)
* always show collection button on datasets
* Make entire Collection card clickable (#321)
* Wrap the Collection card in a Link tag
* Added updated roadmap
* about/welcome copy updates (#320)
* Allow data sync to safely append new changes without recreating entire db every time (#322)
* Change dataset_column id type and make it safe to re-run the seed db without re-creating columns all the time
* Updated elasticsearch indexing to use upsert functionality and to also delete necessary datasets
* Add new command to package.json

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sunwoo Yim <sunny8751@gmail.com>
Co-authored-by: Indraneel Purohit <i.m.indraneel@gmail.com>
Co-authored-by: Indraneel Purohit <indraneel.purohit@twosigma.com>
Co-authored-by: Detroit-the-Dev <70827740+Detroit-the-Dev@users.noreply.github.com>
Co-authored-by: Tiana Wofford <detroit@Tianas-MacBook-Pro.local>
Co-authored-by: Tiana Wofford <detroit@tianas-mbp.mynetworksettings.com>
Summary
Finally fixed the data ingestion path so that it can reliably append new rows, update records in Elasticsearch, and delete datasets that have been removed from Socrata. Previously, our only option was to destroy the entire db and recreate everything from scratch, every time, which would take hours in production. This should finally allow us to set up a cron job to actually keep data in Scout up to date.
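The core idea of the incremental sync can be sketched as a simple diff between what is already indexed and what Socrata returns. This is an illustrative sketch only (function and field names are assumptions, not the actual portal-sync code):

```javascript
// Given the dataset ids currently in our index and the datasets just fetched
// from Socrata, decide what to upsert and what to delete, instead of
// destroying and rebuilding the whole index.
function planSync(existingIds, fetchedDatasets) {
  const fetchedIds = new Set(fetchedDatasets.map((d) => d.id));
  return {
    // Every fetched dataset is upserted: new rows get appended,
    // existing rows get updated in place.
    toUpsert: fetchedDatasets,
    // Anything indexed that Socrata no longer returns gets deleted.
    toDelete: [...existingIds].filter((id) => !fetchedIds.has(id)),
  };
}
```

With a plan like this, a weekly cron run only touches the datasets that actually changed or disappeared.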
Also added lots of comments, because the database ingestion path (in the portal-sync module and in search.service) is difficult to follow and is overdue for a major refactor one day.
These changes require a postgres db migration unsupported by TypeORM, since we had to change the dataset_column id to be a string and add cascading deletes when a dataset is removed. These are the postgres commands to safely migrate the tables without loss of data:
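The actual commands did not survive the copy of this page. As a hedged sketch only, a migration matching that description might look like the following, assuming the tables are named `dataset` and `dataset_column` and the foreign key column is `dataset_id` (all names here are guesses from the PR description, not the commands that were actually run):

```sql
-- Assumed table/column/constraint names; adjust to the real schema.
BEGIN;

-- Change the dataset_column id from integer to string without losing rows.
ALTER TABLE dataset_column
  ALTER COLUMN id TYPE varchar USING id::varchar;

-- Recreate the foreign key so columns are removed when their dataset is deleted.
ALTER TABLE dataset_column
  DROP CONSTRAINT IF EXISTS dataset_column_dataset_id_fkey;
ALTER TABLE dataset_column
  ADD CONSTRAINT dataset_column_dataset_id_fkey
    FOREIGN KEY (dataset_id) REFERENCES dataset (id) ON DELETE CASCADE;

COMMIT;
```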
Screenshots or Videos (if applicable)
Entirely backend, no screenshots.
Related Issues
Closes #276
Test Plan
Please describe the exact steps you followed to test your change. Be as clear as possible so a reviewer can follow the same steps. List what UI interactions are needed, or if there are automated tests then list what should be run. For example:
Run `yarn sync-database-dev` and open `localhost:9200/dataset_index` while it runs. You should notice that the elasticsearch index has not been deleted. You can also change the `LAST_METADATA_UPDATE_DATE` env var in the `seed-database-dev` command to see how we only update datasets that have had recent metadata changes (this works thanks to Socrata actually tracking when the last metadata update of each dataset was, and we keep track of that in our db as well).
Checklist Before Requesting a Review