
photon: change base image to debian:bullseye-slim #10

Merged · 1 commit · May 30, 2022

Conversation

@3nprob (Contributor) commented May 30, 2022

This switches the base image in order to use a common base image. Note that tzdata is already installed in the new base image. Other changes (roughly sketched together below):

  • Upgrade openjdk from 8 to 11
  • Add build args PHOTON_VERSION and PHOTON_HASH
  • Strip the version suffix from the jar filename
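
A minimal sketch of how those pieces could fit together, assuming the jar is fetched from the komoot/photon GitHub releases and that PHOTON_HASH is a sha256 checksum; the actual Dockerfile in this PR may differ:

```dockerfile
# Sketch only; the real Dockerfile in this PR may differ in details.
FROM debian:bullseye-slim

# Supplied at build time via --build-arg.
ARG PHOTON_VERSION
ARG PHOTON_HASH

# tzdata already ships with debian:bullseye-slim, so only the JRE (now 11
# instead of 8) and download tooling need to be installed.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        openjdk-11-jre-headless wget ca-certificates \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /photon

# Download the release jar, verify it (assuming a sha256 hash), and save it
# with the version suffix stripped so the entrypoint can use a stable name.
RUN wget -O photon.jar \
        "https://github.com/komoot/photon/releases/download/${PHOTON_VERSION}/photon-${PHOTON_VERSION}.jar" \
    && echo "${PHOTON_HASH}  photon.jar" | sha256sum -c -

CMD ["java", "-jar", "photon.jar"]
```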

@3nprob (Contributor, Author) commented May 30, 2022

Ideally the image would be reworked/split into two, to decouple the data processing from the build step and not include the data in the actual image, as brought up in #4, but I figured this can be done first regardless.

@ellenhp (Member) commented May 30, 2022

> Ideally the image would be reworked/split into two, to decouple the data processing from the build step and not include the data in the actual image, as brought up in https://github.com/ellenhp/headway/discussions/4, but I figured this can be done first regardless.

A few friends have convinced me that this is the way to go, so 100% agreed. What do you think of the graphhopper build config I wrote this morning? It's more complex for sure, but it does keep the data out of the image. Also, I just want to make sure you see #9 so you can avoid putting too much effort into prettying it up, but I'll probably use a similar workflow for OTP. I'm considering cutting out one of the intermediate copies, though.

It just seems overly complicated right now:
osm.pbf -> build volume -> preprocess step -> copy data out of build volume onto disk -> copy data into the runtime volume

But I like having the on-disk artifact because it makes it really easy to have the Makefile work incrementally.
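
Spelled out, that flow looks roughly like the commands below; the volume, image, and file names are placeholders rather than the actual Makefile targets:

```sh
# Rough shape of the current flow; all names here are placeholders.
docker volume create headway_graphhopper_build

# 1. Seed the build volume with the osm.pbf.
docker run --rm -v headway_graphhopper_build:/build -v "$(pwd)/data:/data" \
  busybox cp /data/area.osm.pbf /build/

# 2. Run the preprocess step against the build volume.
docker run --rm -v headway_graphhopper_build:/build graphhopper-preprocess-image

# 3. Copy the result out of the build volume onto disk (the on-disk artifact
#    that lets make skip this work when the inputs haven't changed).
docker run --rm -v headway_graphhopper_build:/build -v "$(pwd)/data:/data" \
  busybox cp -r /build/graphs /data/

# 4. Copy the artifact into the runtime volume.
docker run --rm -v headway_graphhopper_runtime:/target -v "$(pwd)/data:/data" \
  busybox cp -r /data/graphs /target/
```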

@3nprob (Contributor, Author) commented May 30, 2022

> What do you think of the graphhopper build config I wrote this morning?

Overall it's a step in the right direction I think!

> But I like having the on-disk artifact because it makes it really easy to have the Makefile work incrementally.

You mean on the host filesystem outside of volumes? Agreed it makes things easier to work with.

What do you think about mounting paths on the local filesystem directly into the containers, rather than using native docker volumes?

Compared to the current setup, I imagine something like this:

  • Add a new env var HEADWAY_BASE_DIR (picked up in both docker-compose.yaml and the Makefile) with some sane default
  • Instead of e.g. docker run --rm -v headway_graphhopper_build:/headway_graphhopper_build, use docker run --rm -v ${HEADWAY_BASE_DIR}/graphs:/headway_graphhopper_build (see the sketch below)
  • There should be no need for cp/mv of data artifacts during the make stages; containers should be able to mount the same volumes and pick up directly from each other (so if any preprocessing is needed, it should be done inside containers via commands or arguments)
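
A minimal sketch of the bind-mount idea, assuming a default of ~/.headway for HEADWAY_BASE_DIR and placeholder image names (neither is taken from the actual repo):

```sh
# Sketch only: the HEADWAY_BASE_DIR default, directory layout, and image names
# are assumptions for illustration.
export HEADWAY_BASE_DIR="${HEADWAY_BASE_DIR:-$HOME/.headway}"
mkdir -p "${HEADWAY_BASE_DIR}/graphs"

# The preprocess container writes its output straight into the host directory...
docker run --rm \
  -v "${HEADWAY_BASE_DIR}/graphs:/headway_graphhopper_build" \
  graphhopper-preprocess-image

# ...and the runtime container mounts the same directory read-only, so no
# cp/mv step is needed in between.
docker run --rm \
  -v "${HEADWAY_BASE_DIR}/graphs:/graphs:ro" \
  graphhopper-runtime-image
```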

@ellenhp (Member) commented May 30, 2022

> What do you think about mounting paths on the local filesystem directly into the containers, rather than using native docker volumes?

I love this idea in theory (it's actually what I started with!) but I think it would require us to parameterize the docker images with the user's UID and GID to prevent files being created in your home directory that you don't own. Is there a clean way to do that?

@ellenhp (Member) commented May 30, 2022

Also this PR looks good to me! Just tested it. We can continue this discussion here or in the discussions tab?

@ellenhp merged commit 2e8212b into headwaymaps:main on May 30, 2022
@3nprob (Contributor, Author) commented May 30, 2022

Yes, and that's something that is desirable either way (operators may want to use separate per-container UIDs with minimal permissions for security reasons, for example).

It may be that the only thing necessary is to tell docker to run as the appropriate user: docker run --user 1234:1234, or user: in compose (there's also group_add, but that seems a bit overkill here).

In some cases one may need to add the user inside the container and maybe set some permissions in an init script, but from what I've seen so far (I actually have no prior experience with any of the containerized software here) I don't think that should be necessary in this case.
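
A minimal sketch of that --user approach, assuming the bind-mount layout from earlier (image name and mount path are placeholders); the compose equivalent is the user: key mentioned above:

```sh
# Run the build container as the invoking host user so that files written into
# the bind mount keep that user's ownership (image name and path are placeholders).
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "${HEADWAY_BASE_DIR}/graphs:/headway_graphhopper_build" \
  graphhopper-preprocess-image
```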
