Improve docker-compose.yml / setup.md #208

Closed
ShatteredBunny opened this issue Aug 9, 2022 · 2 comments · Fixed by #209
Comments

@ShatteredBunny
Contributor

Is your feature request related to a problem? Please describe.
Despite the single docker-compose file, it is still not easy to get started; there were quite a few pitfalls, at least for me:

  • `docker-compose build` copies the whole context and executes setup commands there, so they need to be run again locally (because the local files get mounted into the containers). This is mentioned in setup.md, but it is not really convenient. Maybe this could be executed when a container is started (rather than during the build), or a script could be run using the backend image before `docker-compose up`, so no Python setup would be needed on the host.
  • `docker exec -it django-dev python3 manage.py makemigrations` only generated a migration for `api` (which failed to migrate because of missing tables from `control`); I needed to run `docker exec -it django-dev python3 manage.py makemigrations [control|chat]` separately to create all migrations. I'm not familiar with Django, so I don't know what went wrong. It also wasn't clear to me that the user created with `createsuperuser` must be set in the `ESCROW_USERNAME` .env variable.
  • setup.md says the RoboSats development site should be accessible on localhost:8000, but by default localhost is not a whitelisted host; one could use 127.0.0.1 instead so no configuration is necessary.
  • The path `/mnt/development` is quite specific and may not be usable depending on the system configuration. In my opinion a generic location would be better (either a Docker volume, or at least a directory within the repo; one could create a symlink to wherever large data should be stored).

These are all rather small problems, but they take time to solve and make getting started more complicated.
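The second suggestion in the first bullet could look roughly like this (the image name `robosats-backend`, the mount path, and the `scripts/generate_grpc.sh` helper are my assumptions for illustration, not the actual repo layout):

```shell
# Build the backend image once, then run the setup step with the local
# repository mounted, so no Python tooling is needed on the host.
docker-compose build backend
docker run --rm -v "$(pwd):/usr/src/robosats" robosats-backend \
    ./scripts/generate_grpc.sh   # hypothetical helper script
docker-compose up
```

Because the repository is bind-mounted into the container, the generated files land in the local working tree and survive the container's removal.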

@Reckless-Satoshi
Collaborator

Hey @ShatteredBunny , thanks for going through this painful process and pinning down so precisely the fixes that are needed.

> `docker-compose build` copies the whole context and executes setup commands there, so they need to be run again locally (because the local files get mounted into the containers). This is mentioned in setup.md, but is not really convenient. Maybe this could be executed when a container is started (and not during the build). Or run a script using the backend image before running `docker-compose up`, no python setup would be needed on the host.

I wonder whether this is an issue after the first build? Usually no more builds will be needed in the dev workflow.

There is quite a bit of malpractice in that docker-compose and also in the /docker/ Dockerfiles. Many of these are not needed, and we could use the base images directly from the repository (at least, that is true if we can mount the config files à la K8s configMaps, but I have not researched whether that is possible in docker-compose...).
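A compose-level sketch of that idea (the service name, image tag, and config paths are assumptions, not the actual RoboSats layout): bind-mount the config file into an upstream image, configMap-style, instead of baking it in through a custom Dockerfile.

```yaml
services:
  bitcoind:
    image: ruimarinho/bitcoin-core:23   # upstream image, no custom Dockerfile needed
    volumes:
      # bind-mount the config read-only, similar to mounting a K8s configMap
      - ./docker/bitcoind/bitcoin.conf:/home/bitcoin/.bitcoin/bitcoin.conf:ro
```

Docker Compose supports single-file bind mounts natively, so no configMap-like abstraction is required; the trade-off is that paths are relative to the compose file rather than managed objects.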

> `docker exec -it django-dev python3 manage.py makemigrations` only generated a migration for `api` (which failed to migrate because of missing tables from `control`), I needed to run `docker exec -it django-dev python3 manage.py makemigrations [control|chat]` separately to create all migrations. I'm not familiar with django so I don't know what went wrong. Also for me it wasn't clear, that the user created with `createsuperuser` must be in the `ESCROW_USERNAME` .env variable

Indeed; in addition, `... makemigrations api` will fail with a cryptic error. setup.md has not been updated since `chat` and `control` were created. I believe `docker exec -it django-dev python3 manage.py makemigrations api control chat` should do the trick (will give it a try).

`ESCROW_USERNAME` must be an existing user in the DB; usually it makes sense for it to be the same as the superuser. Indeed, it might have been a nightmare to figure this out given the poor/outdated setup.md.
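Put together, the sequence could look like this (the username `admin` is only an example): create the superuser with `docker exec -it django-dev python3 manage.py createsuperuser`, then reference the same name in `.env`:

```
# .env — must name a user that already exists in the DB,
# typically the one created with `manage.py createsuperuser`
ESCROW_USERNAME=admin
```

Documenting this pairing explicitly in setup.md would remove the guesswork.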

> setup.md says _RoboSats development site should be accessible on localhost:8000_, but by default localhost is not a whitelisted host; one could use 127.0.0.1 instead so no configuration is necessary
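For context, this is how the host whitelist typically looks in a Django `settings.py` (a sketch of an assumed RoboSats configuration, not a confirmed excerpt): with only `127.0.0.1` listed, requests to `http://localhost:8000` are rejected with a `DisallowedHost` error even though both names reach the same interface.

```python
# Django dev-settings sketch: whitelisting both spellings makes
# http://127.0.0.1:8000 and http://localhost:8000 work without
# any extra configuration on the developer's side.
ALLOWED_HOSTS = ["127.0.0.1", "localhost"]
```

Alternatively, setup.md can simply point readers at 127.0.0.1:8000 so no settings change is needed.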

> The path `/mnt/development` is quite specific and it may not be possible to use depending on the system configuration. Imo a generic location would be better (either a docker volume, or at least a directory within the repo; one could create a symlink to wherever large data should be stored)

Agree
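One way the last point could look in docker-compose (the service name and mount path are illustrative): a named volume managed by Docker instead of the hard-coded `/mnt/development`.

```yaml
services:
  bitcoind:
    volumes:
      - bitcoin-data:/home/bitcoin/.bitcoin   # named volume instead of /mnt/development

volumes:
  bitcoin-data:   # stored under Docker's data directory; no host-path assumptions
```

A named volume works on any system configuration; a developer who wants the data elsewhere can still declare the volume with a `driver_opts` bind to a custom path.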

> These are all rather small problems, but they take time to solve and make getting started more complicated

Again, much appreciated for this issue. There have not been many devs who have gone through the process of spinning up the full dev stack, and these pain points have remained largely ignored. Hoping to make it a more seamless experience going forward.

@ShatteredBunny
Contributor Author

> I wonder whether this is an issue after the first build? Usually no more builds will be needed in the dev workflow.

The GRPC files are only generated within the image, so the image would work after the build. But the local files are mounted over the files in the image, so at some point GRPC generation must be run for the local repository, either on the host or while the repository is mounted into the container. I chose the second option in the PR.

Alternatively, the GRPC files could be placed somewhere they are not shadowed by the mount ("globally" in the image, e.g. `/usr/src/proto`); that would not be as nice on the host, but it would save the extra step of generating them locally.
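A Dockerfile sketch of that alternative (the base image, paths, and the `generate_grpc.sh` script with its `--out` flag are all assumptions for illustration):

```dockerfile
FROM python:3.11-slim
WORKDIR /usr/src/robosats          # this path is shadowed by the bind mount at runtime
COPY . .
# Generate GRPC stubs into a directory the mount does not cover,
# so they remain visible even after the local repo is mounted on top.
RUN ./scripts/generate_grpc.sh --out /usr/src/proto   # hypothetical script and flag
ENV PYTHONPATH=/usr/src/proto
```

The stubs then resolve through `PYTHONPATH` regardless of what the bind mount shadows, at the cost of the host working tree never containing them.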
