Separate docker-compose files for test run and persistent db #37
For some reason 'timescaledb' will not be written to shared_preload_libraries of postgresql.conf if 001_timescaledb_tune.sh is copied to /docker-entrypoint-initdb.d/ by Dockerfile, but it works if the sh script is brought there by mapping it from db_test/.
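To make the symptom above concrete, this is a minimal sketch (not the project's actual 001_timescaledb_tune.sh) of what an init script that writes 'timescaledb' into shared_preload_libraries could look like; the paths and the initial conf content are stand-ins for demonstration:

```shell
#!/bin/sh
# Illustrative only: ensure 'timescaledb' ends up in shared_preload_libraries.
# In the official postgres image, PGDATA is /var/lib/postgresql/data; here we
# default to a temp dir so the sketch can run anywhere.
PGDATA="${PGDATA:-/tmp/demo_pgdata}"
mkdir -p "$PGDATA"
conf="$PGDATA/postgresql.conf"
# Stand-in for the conf file generated by initdb:
[ -f "$conf" ] || printf "shared_preload_libraries = ''\n" > "$conf"
if grep -q "timescaledb" "$conf"; then
    echo "timescaledb already in shared_preload_libraries"
else
    sed -i "s/^shared_preload_libraries = .*/shared_preload_libraries = 'timescaledb'/" "$conf"
fi
grep shared_preload_libraries "$conf"
```

Comparing the conf file after a run started via the Dockerfile COPY against one started via the bind mount could show whether the script ran at all or ran and failed.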
depends_on does not work the way we would like: the v3 Compose file format does not support the condition: ... mapping under depends_on.
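Since condition-based depends_on is unavailable in v3, a common workaround is to have the dependent service poll the database before doing its work. A minimal sketch, where pg_isready, the host name "db" and the test command are assumptions rather than anything from this repository:

```shell
#!/bin/sh
# Sketch of a wait-until-ready wrapper for a test-runner service.
wait_for() {
    tries=0
    until "$@" 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1   # give up after ~30 seconds
        sleep 1
    done
}
# In a real compose service the call might look like:
#   wait_for pg_isready -h db -p 5432 && exec ./import_and_test.sh
# Demo with a stand-in readiness check so the sketch runs anywhere:
touch /tmp/ready_marker
wait_for test -f /tmp/ready_marker && echo "database ready"
```

The loop retries any command passed to it, so the same helper works for pg_isready, curl, or a plain TCP check.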
This instance will be accessible through POSTGRES_LOCAL_PORT and can be used for exploration, e.g. with QGIS.
Document test run and persistent db separately.
Works well with docker-compose version 1.29.0 now.
Added a commit that implements it.
Great stuff!
"It would be nice to run timescaledb-tune straight away when the image is built --" This would work out only as long as the image is not pushed into a Docker image repository and is not used by others.

On the other hand, maybe the tuning is not necessary at all. We're not measuring performance on our development laptops. And the tests are probably not that much slower untuned. On the third hand, we might gain some time when we run the tuning for the persistent development database. We could tune one and not the other Compose setup.

Yet we wish to catch bugs early. We can aid catching bugs early by keeping environments similar to each other. The current approach facilitates that. I would keep it as it is but it's your call.
"Could we come up with a neat solution that would allow us to run the imports automatically if the database is created from scratch, and omit the imports if the database already exists on db_volume?" Some alternatives off the top of my head:
I would go for alternative 1. Alternative 2 wastes our time by running the migrations every time. Alternative 3 sounds messy and error-prone.
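Whichever alternative is chosen, the underlying check is small. The official postgres image itself follows this pattern: scripts in /docker-entrypoint-initdb.d/ run only when the data directory is empty. A minimal sketch of the idea, with illustrative paths:

```shell
#!/bin/sh
# Sketch: run DDL and imports only when the database volume is fresh.
# PGDATA default is a stand-in so the sketch runs outside a container.
PGDATA="${PGDATA:-/tmp/demo_pgdata2}"
if [ -z "$(ls -A "$PGDATA" 2>/dev/null)" ]; then
    echo "fresh database: running DDL and imports"
else
    echo "existing database: skipping imports"
fi
```

Leaning on the image's built-in behaviour means the test setup needs no extra logic: removing the volume is what triggers a re-import.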
Individual commits

I prefer to see individual commits that each make a coherent contribution towards the goal of the PR branch. Once the review starts, we should aim to no longer have WIP commits in the PR. In this case it made sense, as we built the WIP commits together. GitHub has a feature called draft PR that allows the developer to toggle when a PR is ready for review and for merging from their point of view. Setting the draft boolean allows one to run CI checks without signalling that the work is finished. Ideally, every commit compiles and passes tests, so one can rely on each commit individually.

Squashing

Some prefer to squash a PR into just one commit. I think that is rare. What is more divisive is how to clean up the commits before merging.

I think that is also dependent on the review tools available. I prefer option 1.

Merge commits vs linear history

As GitHub makes it pretty hard to review individual commits instead of the full PR, they also default to merging with a merge commit. GitHub has an option to "Rebase and merge" which rebases the PR commits onto the base branch without a merge commit.

Branching strategies

There are several popular git workflows. Git flow is one, GitHub flow another. I often find GitLab flow useful. It's actually a set of alternative flows, but they make sense to me.
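As an illustration of folding WIP commits into their logical parents before review (the commands and repository here are made up for the demo, not taken from the thread), git's fixup commits plus autosquash do this mechanically:

```shell
#!/bin/sh
# Demo: a WIP fix is recorded as a fixup commit, then squashed into the
# commit it amends, leaving a clean history for review.
set -e
rm -rf /tmp/demo_rebase && mkdir /tmp/demo_rebase && cd /tmp/demo_rebase
git -c init.defaultBranch=main init -q
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
echo base > feature.txt;  g add feature.txt; g commit -q -m "base"
echo v1 > feature.txt;    g add feature.txt; g commit -q -m "Add feature"
feature_sha=$(git rev-parse HEAD)
echo other > other.txt;   g add other.txt;   g commit -q -m "Unrelated work"
# Record the fix as "fixup! Add feature" rather than "WIP":
echo v2 > feature.txt;    g add feature.txt; g commit -q --fixup="$feature_sha"
# Accept the autosquash-arranged todo list non-interactively:
GIT_SEQUENCE_EDITOR=true git -c user.email=demo@example.com \
    -c user.name=demo rebase -q -i --autosquash HEAD~3
git log --format=%s   # newest first: Unrelated work, Add feature, base
```

The fixup commit disappears into "Add feature", so the reviewer sees only coherent commits.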
I like your approach. It's clean and documented. A general suggestion: For this PR, I might create roughly these individual commits:
I had a bunch of problems getting this PR to run. I use an unusual local permissions setup. To be able to run the Compose file, I had to run:

chmod -R go+r . && find ./* -type d -print0 | xargs -0 chmod -R go+rx

I did not try switching the volume type.
Stellar work! You were really quick to create this PR.
Oh yeah, I could not find an issue on the board that would correspond to this PR. If there were one, adding "Fixes #123" into the description of this PR would connect this PR with that issue. Then if the kanban board automation is set up as it already is, merging this PR would automatically close the issue and move the issue on the board to "Done". |
Many people find pair work more productive than code reviews. It takes time to hone a great review process in a team. One source regarding code reviews that I enjoy is https://mtlynch.io/tags/code-review/. |
Good point. Probably we don't need to worry about timescaledb-tune at this point; at least it is not needed for quick tests with a small amount of example data. The main reason for using Timescale anyway is to handle massive amounts of HFP data in a reasonable way (+ we might use some nice time series specific functions from the extension), but we are not quite there yet.
Thank you, alternative 1 sounds good! I just have to document the commands needed in the readme. Wrote #38 about this. |
Also thank you for the helpful comments about commits and branching, I will get back to them as I learn more about PR workflows and try them in practice! |
Co-authored-by: haphut <haphut@mistmap.com>
This PR creates docker-compose files for two different cases:

- docker-compose.test.yml: One-off test run that launches a database instance, runs the DDL commands and imports example data from scratch -> the developer can see if all this runs without errors. No database data on volumes is left hanging around.
- docker-compose.db.yml: Launches a database instance for more continuous use. This version uses a separate db_volume Docker volume to store the database data even if the services are removed and restarted later. The database is also made accessible via localhost. This can be used for exploring with psql or QGIS, for example.

Later on, a separate docker-compose file should probably be created for production use too, but for now we will do well with these two.
There are still some problems and todos with this PR, I will comment them in more detail.