Uses Dockerfile for building Debian packages #87
Conversation
Intended to build the "sunder-build" image, which will be used for creating the Sunder Linux Debian packages. Will be set as a dependency of the "build" target in the Makefile.
The "sunder-build" image contains the necessary tooling for kicking off a build of the Sunder Electron app, creating artifacts in the form of Debian packages. The apt package installation logic, pulling in the dependencies, was ported from the existing Ansible config. The `npm` build commands will be placed in an adjacent script, since running those requires the application code from the git repo to be mounted inside the container, which will happen as a container "run" action, not "build" action.
Not folding this logic into the Dockerfile build info, since it requires the application code to be mounted inside the container. The volume requirement mandates that the `npm` commands in the build script be run as a container "run" action, not as part of the container "build" action.
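The split between image build and container run might look like this (a sketch: the `sunder-build` tag and `/sunder` mount path appear later in this PR, while the script name `build.sh` is a hypothetical stand-in for the adjacent npm build script):

```sh
# Build the tooling image once (no application code needed)...
docker build . -t sunder-build
# ...then mount the repo and run the npm build steps inside the container.
docker run -v "$(PWD):/sunder" sunder-build /sunder/build.sh
```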
Building using containers now, which should make it far easier for new contributors to get started using the repo.
@msheiny More eyes here would be appreciated. Typical grsecurity-related pain points for mmap'd executables. See what you can do to drag this over the finish line.
@conorsch I have a WIP local branch (sorry, should've said something I guess) for this issue. I'll see if I can merge them.
k - keep me posted @garrettr if I can help. I've done a lot of fire-fighting with this issue in other Docker repos. Something having to do with the extended file ACL attributes not sticking around after the image is created. To get around it, sometimes I'll chown the binary in question to the
@@ -0,0 +1,31 @@
ARG NODE_VERSION=8.9.4
FROM node:$NODE_VERSION
Most of what this Dockerfile is doing is implementing the Linux instructions from the electron-builder documentation. What do you think about basing this docker image on one of the electronuserland/builder base images, e.g. `FROM electronuserland/builder:8` for Node 8, instead?
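A sketch of what that swap might look like; assuming the `icnsutils` Debian package is what still needs adding on top for `icns2png` (an assumption, since the builder image covers the generic electron-builder dependencies):

```dockerfile
# Hypothetical replacement for the node:$NODE_VERSION base; tag "8" tracks Node 8.
FROM electronuserland/builder:8

# Only Sunder-specific extras would still need installing on top,
# e.g. icnsutils for the icns2png icon-extraction step used below.
RUN apt-get update \
    && apt-get install -y icnsutils \
    && rm -rf /var/lib/apt/lists/*
```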
Ha, should have checked for an electron-specific base image! Yes, I don't mind switching to that if it keeps us lean. @msheiny?
❤️
hey @garrettr, what are your thoughts on yarn? Is it a "drop-in" replacement for npm? I am reading into that electronuserland builder tooling around their docker images... they are 💩'ing on npm every chance they get and have yarn in all the examples.
icns2png --extract --output "${SUNDER_CODE}/build/icons/" "${SUNDER_CODE}/build/icon.icns"
npm install
npm rebuild
Pretty sure you don't want to call `npm rebuild` anymore; it will actually break things, because it will rebuild native modules for the system's version of Node rather than Electron's version of Node. The order of events is:

1. `npm install` builds native modules for system Node by default.
2. There is a `postinstall` hook in `package.json` that runs `electron-builder install-app-deps`, which takes care of rebuilding native modules against Electron's version of Node.

I'm pretty sure calling `npm rebuild` at this point will do the wrong thing, e.g. similar to how it was causing breakage on CircleCI that I fixed in 960dc2b.
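For reference, the `postinstall` hook described in step 2 would look roughly like this in `package.json` (a sketch, not the file's actual full contents):

```json
{
  "scripts": {
    "postinstall": "electron-builder install-app-deps"
  }
}
```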
The `npm rebuild` step was lifted verbatim from the vagrant config logic, so glad we're refreshing all of this. Seems like using an electron-specific builder image would simplify this logic as well.
Honestly the hardest thing about returning to work on this was grokking all of the interactions between native modules and the rest of our code/dependencies/NPM/etc. 😓
fi
icns2png --extract --output "${SUNDER_CODE}/build/icons/" "${SUNDER_CODE}/build/icon.icns"
npm install
I suggest we `npm install` in the Dockerfile. It is a time-consuming step, and in general I would expect the Sunder code to change more frequently than its underlying dependencies, so it will probably be worthwhile for Docker to cache the results of `npm install` in a layer.
References:
- http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/ (explains the basic concept)
- https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ (example code includes `package-lock.json` for NPM v5+)
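The caching pattern from those guides, sketched against this image (the `/sunder` working directory is taken from this PR; whether the repo has a `package-lock.json` to copy is an assumption):

```dockerfile
# Copy only the dependency manifests first, so the npm install layer is
# reused until package.json / package-lock.json actually change.
COPY package.json package-lock.json /sunder/
WORKDIR /sunder
RUN npm install

# Copying the rest of the source afterwards only invalidates layers from
# this point down when application code changes.
COPY . /sunder
```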
Yes! `npm install` downloads a mammoth amount of dependencies, so very happy to cache that.
Makefile (outdated)
docker build . -t sunder-build

build: docker-build ## Builds Sunder Debian packages for Linux.
	docker run -v "$(PWD):/sunder" \
I don't recommend bind mounting the entire code directory in the container. There is at least one serious problem with it, which I encountered while working on my own branch to implement Docker builds: `node_modules` cannot be shared between the host and container if the host and container are not running the same OS/arch (e.g. my macOS development host vs. a Linux build container), due to our project's use of native modules.
Blacklisting dirs via `.dockerignore` may be a sane approach here, to ensure that e.g. `node_modules` isn't included.
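For illustration, a minimal `.dockerignore` along those lines (the exact entries are assumptions about what this repo would want excluded from the build context):

```
node_modules
dist
.git
```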
The ignore file only works with context during build time, not with volume mounting, unfortunately. Realistically we should have a docker container for node instead of asking devs to run raw commands, but that's another scope. So I say we do `COPY` in docker build with the ignore logic to get around this... I tried a couple other solutions but haven't found a work-around yet.
Someone suggested just running `-v /sunder/node_modules`, which mounts a blank directory there, but it's root-owned :|
@msheiny I'd recommend `COPY`ing the source into the container, in which case `.dockerignore` will work for you. I think the only directory you should need to bind mount is the directory where the build output ends up.
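A sketch of that split, with only the artifact directory bind mounted (the `dist` output path is an assumption about where electron-builder writes its packages):

```sh
# Source is baked into the image via COPY at build time; only build
# artifacts cross the container boundary at run time.
docker build . -t sunder-build
docker run -v "$(PWD)/dist:/sunder/dist" sunder-build
```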
@conorsch @msheiny Check out the comments I just wrote, describing my concerns with this approach so far. Some of them may be controversial, so I'm interested in your feedback. I will attempt to branch from this PR and fix the described issues myself, unless one of you beats me to it (I need to finish the remaining open documentation issues first).
Previous logic did not take into account a condition where a user wants to run the container with a volume mapped to a uid which doesn't already exist in the container. Now we are explicitly setting the `node` user to match the builder's user id. Also introduce some logic to correct volume-mounting permissions for a named volume mounted on top of node_modules.
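A minimal sketch of the uid-matching approach this commit describes (the `NODE_UID` build-argument name is an assumption, not the actual name used in the PR):

```dockerfile
ARG NODE_UID=1000
# Re-map the image's existing "node" user to the invoking user's uid, so
# files written to bind-mounted volumes aren't owned by a foreign uid.
RUN usermod -u "${NODE_UID}" node

# Invoked with something like:
#   docker build --build-arg NODE_UID="$(id -u)" -t sunder-build .
```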
A few changes here:
- Trimmed down docker apt dependencies, purge apt archive packages.
- Utilize a named volume in the Makefile to ensure a user's node_modules folder gets ignored. This could be problematic, for instance, if a user built locally with npm on Mac and then tries to use that same folder to build under the docker env.
- Fix a weird scenario with a ruby binary getting grsec mprotect denials during the build process. The only way I could figure out to fix it was to run once, re-flag the file, run again. I'm sure there's a better way, but I timeboxed myself.
Cleaning up by labels is much easier maintenance-wise. So in this commit, we add the label metadata to the images and provide a Makefile target to ease clean-up. Note: this doesn't clean containers. That's what `docker container prune` is for.
Hey @garrettr / @conorsch -- I made a number of changes here that I hope keep everyone happy; a few notes:
Since I made a slew of changes I can no longer stamp this as good to go, so I'll back off to address any comments either of you have. PS - I miss you @garrettr ❤️
@conorsch corrected me -->
Whoops, clarification: @conorsch pointed out I could have edited my original comment instead of making a new comment. Now I've just made 2 unnecessary comments :|
@msheiny These changes look great! Taking them for a spin now. Noticed a few dangling references to Ansible (e.g.
Didn't originally realize I can purge containers by filters as well! Cool. So I expanded the clean-up logic a bit to also clean containers and delete the named volume.
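Label-based cleanup of containers, images, and the named volume might look like this (the label key/value and volume name are assumptions for illustration, not the actual values in this PR):

```sh
# Remove stopped containers and unused images carrying the project label,
# then drop the named node_modules volume.
docker container prune --force --filter "label=sunder"
docker image prune --all --force --filter "label=sunder"
docker volume rm sunder_node_modules
```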
Oh by the way... I also notice changes with
Good call, @msheiny. I see changes as well, and worse, I'm seeing two
Culls "ansible" from the list of Python requirements, and excises associated Makefile target that ran Ansible commands.
Ugggh, that's weird -- I got it built and working on my box :(
Will keep trying throughout the day... I made sure to run
At @msheiny's encouragement, I reran
Approving my own changes, based off Conor's earlier approval 👍
Status
Work in progress.
Description of Changes
Partial progress toward #75.
Ran into a problem with PaX flags on some of the binaries. Node itself is handled, but inside the homedir, other binaries are created at app-build time that don't exist at container build time.
Should work as-is for non-grsecurity-patched kernels, and presumably on Mac hosts, as well.
Did not excise the Ansible and Vagrant logic yet, but that should happen once we finalize the container config.
Testing
Run `make build` and confirm the Debian packages are created cleanly.