chore: docker build updates, automate npm publish #1394
Conversation
KEGustafsson commented Feb 11, 2022 (edited)
- base image as ubuntu 20.04
- needed apt-get packages
- nodejs script installer (node 16.x)
- mDNS enabled for node
- startup.sh script to start mDNS and SK
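Put together, the bullet points above could look roughly like this as a Dockerfile. This is only a sketch: the exact package list, the NodeSource installer URL and the startup.sh contents are assumptions, not the PR's actual files.

```dockerfile
FROM ubuntu:20.04

# apt-get packages: build tools plus avahi/dbus for mDNS support (assumed list)
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
      curl gnupg build-essential libavahi-compat-libdnssd-dev avahi-daemon dbus \
    && rm -rf /var/lib/apt/lists/*

# Node.js 16.x via the NodeSource script installer
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - \
    && apt-get install -y nodejs

# startup.sh starts dbus/avahi (mDNS) first and then Signal K server
COPY startup.sh /startup.sh
CMD ["/startup.sh"]
```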
When SK is restarted, e.g. from the admin webpage, the container will also stop or restart depending on settings. To keep the container running while signalk-server is restarting, I could add PM2 here. This would be a bit more sophisticated method to run a Node.js app. There would be 2 modifications needed. With PM2 included, the image sizes:
- pm2 (process manager) to run signalk-server
In the original:
The COPY empty file is from cross-building arm in Travis; I think that can be removed. In my own use I have managed restarts outside the container, restarting the whole container if it exits. I am not sure what the best course of action here is. @jncarter123 do you have any advice?
For reference, the Dockerfile for node:16:
One obvious difference is that the official image includes yarn, which we don't need.
Now that there's more than just a single Dockerfile: should we put everything under a single directory?
I think we should move forward with this, but I am just saying that the way the server is built here is kinda dumb and results in images that are not exactly 1-to-1 with what is published to npm. What I mean is that instead of building the server and packages from scratch (and leaving devDependencies in place!) during docker build, we should install the ready-made packages from npm. But then we should have a GitHub action that installs the server's deps from npm (not via npm workspace), builds, tests and publishes to npm first, and only then builds the docker image, either from the files built in the build step or by installing the server from npm. The latter option is more like what end users do and would also work as a test of "can we install the version we just published to npm".
I like the idea of "can we install the version we just published to npm". This is a bit outside of the docker image build, but still very much related to it. Every bit and piece needs to be ready before the docker build is triggered, but that shouldn't be a problem if all necessary packages are available from npmjs or other locations. For docker, there is still OS-related stuff, like the base image, additional packages needed, extra settings and startup stuff. Anyway, Signal K server installation could follow the same path as a normal user's. Would there then be any need for
That would be a good idea.
IS_IN_DOCKER disables installing new server versions in the admin UI, so yes, we want it.
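For completeness, a sketch of how that flag could be baked into the image (assuming the server only checks for the variable's presence):

```dockerfile
# Tells the server it runs inside a container, so the admin UI
# hides the "install new server version" option.
ENV IS_IN_DOCKER true
```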
What do you think about this approach? Edit: Only one of the platforms (amd64) would do build, test and publish, not all. That is missing. |
In principle yes, but I don't think docker build is the way to do the other builds; GitHub Actions are. So everything but creating the docker image should happen in a GitHub Actions workflow: install, compile, test and then publish. Here is an example of the earlier stages; after publish succeeds it should run a cross-arch docker build (like the current workflow), installing from npm what was just released. A small detail: each command in a Dockerfile creates a file system layer. Running rm on its own will not decrease the image size, but actually slightly increase it, because it will create a new file system layer. If used
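The layer detail can be illustrated with two hypothetical RUN sequences:

```dockerfile
# Two separate RUN commands create two layers: the apt lists deleted by rm
# still exist inside the first layer, so the image does not shrink.
RUN apt-get update && apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# Chaining the rm into the same RUN removes the files before the layer
# is committed, so they never end up in the image at all.
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
```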
This was/is a study that I wanted to do, to check whether it is possible/feasible. Seems to be too complicated vs. GitHub Actions... How to proceed? Should the Dockerfile be simplified to build the image from the npmjs package (signalk-server), or continue on the current path (install, build, having src available)? I think the simplified version could be more suitable and usable. Multiplatform docker image build time: 19 min. Amd64 and arm64 versions working fine.
So with this Dockerfile the essential changes would be a smaller image and working mDNS? mDNS requires host network mode, right? Right now the docker build in GitHub is triggered by pushing the tag for a release. This will not work, because in my release workflow I
So the new release may or may not be published in npm, and just running Also, this approach won't be able to build docker images off master, so there are uses for building off the local files. So what if we (you...) back up a little: install as previously, but not from the official node image, and we get the advantages here. Then we can revisit installing from npm separately:
As for the Dockerfile: is there a reason for the back and forth changes between root and node? Why not do everything as root in one go?
For the time being I'd like to keep things the way they are re: pm2 and not add it. https://stackoverflow.com/questions/51191378/what-is-the-point-of-using-pm2-and-docker-together
Jep, use of the same package as in the normal installation process, a much smaller image, and working mDNS. I haven't tested other than host mode. Also, mapping all input and output ports would require configuration changes to the compose file.
`RUN npm i -g signalk-server@TAG`. I need to check how to pass `${{ steps.docker_meta.outputs.tags }}` correctly to the Dockerfile and map it to TAG. This would ensure that the tagged version is also used to build the docker image.
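One way to do the mapping, sketched with a Docker build argument (note: docker_meta's `tags` output is a list of full image references, so the bare version from the metadata action's `version` output is probably what should be passed):

```dockerfile
# TAG is supplied at build time, e.g.
#   docker build --build-arg TAG=1.40.0 .
# or via the workflow's build-args.
ARG TAG=latest
RUN npm install -g signalk-server@${TAG}
```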
If master has a tag, then it should follow the same rules as any other tag.
No problem to go back. So, back to COPYing the repo into the docker image.
Would this be for e.g. development purposes, with the full repo included, and this for the official release, slim and polished? Would there then actually be two Dockerfiles: e.g. Dockerfile for dev and Dockerfile_Rel for releases?
Would you like me to remove USER node totally, so there would be only USER root? No problem, and that simplifies the build a little bit. No need for sudo as the user is already root.
I removed pm2 from this configuration but I'll keep testing it by myself.
Probably two Dockerfiles eventually. The node user is good, same as before, and non-root containers are docker best practice. Compose could map the default 3000 port so that it is usable as such.
Now all root actions are done first and then the user is changed to node. I would keep compose at host network to keep things as simple as possible. If someone wants to do hardening etc. and control which ports are open to/from the container, that is doable. I could add commented code to compose that adds these options and gives the end user an easy change from host to bridge network. Ok?
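A sketch of what those commented compose options could look like (image name and port are illustrative):

```yaml
services:
  signalk-server:
    image: signalk/signalk-server:latest
    network_mode: host
    # Hardening option: comment out network_mode above and switch to
    # bridge networking with explicit port mappings instead, e.g.:
    # ports:
    #   - "3000:3000"
```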
When a release with tag v* is done, this script will be run, generating docker images tagged v* and latest.
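A minimal sketch of such a tag-triggered workflow; the job layout, action versions and secret names are assumptions, not the actual workflow file:

```yaml
name: release
on:
  push:
    tags:
      - 'v*'
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
      - run: npm install
      - run: npm test
      # Publish to npm first, then build the docker image from the
      # freshly published package.
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          push: true
          tags: signalk/signalk-server:latest
```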
Why is this publishing to npm?
One thing led to another: Docker release images should be built like a local install from npm => the release needs to be published in npm => we can automate that here also.
Ahh. Got it. I see you doing that elsewhere too. Nice.
Oh damn, I just realised there are prereleases to think of: https://github.com/SignalK/signalk-server/blob/master/publishing.md The main difference is in And I don't get why we have both releasing.md and publishing.md. If we get beta publishing integrated so that you just need to push a tag name ending in
Can a beta release be named e.g.
Then npm
A bit more IF-THEN in the scripting...
Yes. Just to be precise: the ending needs to include the version number.
I'll add a test for whether the tag string contains the substring
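The substring check could be as simple as this shell sketch (the tag value and dist-tag names are only examples):

```shell
# Git tag as it would arrive from CI; hardcoded here for illustration.
TAG="v2.0.0-beta.1"

# If the tag name contains "beta", publish under the npm "beta" dist-tag,
# otherwise under "latest".
case "$TAG" in
  *beta*) NPM_TAG="beta" ;;
  *)      NPM_TAG="latest" ;;
esac

echo "npm publish --tag $NPM_TAG"
```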
b4f120f to 32c7edd