This repository has been archived by the owner on Apr 17, 2023. It is now read-only.

Cannot docker build container image #2279

Closed
Josua-SR opened this issue Jan 31, 2020 · 20 comments

@Josua-SR

Description

Building a container image with `docker build docker` fails.

Steps to reproduce

  1. git clone https://github.com/SUSE/Portus.git
  2. cd Portus
  3. docker build docker
  • Expected behavior: No errors, container image built
  • Actual behavior:
...
Step 4/6 : RUN chmod +x /init &&     mkdir -m 0600 /tmp/build &&     (        gpg --homedir /tmp/build --keyserver ha.pool.sks-keyservers.net --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2 ||         gpg --homedir /tmp/build --keyserver pgp.mit.edu --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2 ||         gpg --homedir /tmp/build --keyserver keyserver.pgp.com --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2     ) &&     gpg --homedir /tmp/build --export --armor 55A0B34D49501BB7CA474F5AA193FBB572174FC2 > /tmp/build/repo.key &&     rpm --import /tmp/build/repo.key &&     rm -rf /tmp/build &&     zypper ar -f obs://Virtualization:containers:Portus/openSUSE_Leap_15.0 portus &&     zypper ref &&     zypper -n in --from portus ruby-common portus &&     zypper clean -a &&     rm -rf /etc/pki/trust/anchors &&     ln -sf /certificates /etc/pki/trust/anchors
 ---> Running in 49bb5af567b6
gpg: keybox '/tmp/build/pubring.kbx' created
gpg: keyserver receive failed: Cannot assign requested address
gpg: /tmp/build/trustdb.gpg: trustdb created
gpg: key A193FBB572174FC2: public key "Virtualization OBS Project <Virtualization@build.opensuse.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
Adding repository 'portus' [......done]
Repository 'portus' successfully added

URI         : https://download.opensuse.org/repositories/Virtualization:/containers:/Portus/openSUSE_Leap_15.0
Enabled     : Yes                                                                                             
GPG Check   : Yes                                                                                             
Autorefresh : Yes                                                                                             
Priority    : 99 (default priority)                                                                           

Repository priorities are without effect. All enabled repositories share the same priority.
Retrieving repository 'portus' metadata [..done]
Building repository 'portus' cache [....done]
Retrieving repository 'Non-OSS Repository' metadata [..done]
Building repository 'Non-OSS Repository' cache [....done]
Retrieving repository 'Main Repository' metadata [...done]
Building repository 'Main Repository' cache [....done]
Retrieving repository 'Main Update Repository' metadata [....done]
Building repository 'Main Update Repository' cache [....done]
Retrieving repository 'Update Repository (Non-Oss)' metadata [.done]
Building repository 'Update Repository (Non-Oss)' cache [....done]
All repositories have been refreshed.
Loading repository data...
Reading installed packages...
No provider of 'portus' found.
'ruby-common' not found in package names. Trying capabilities.
'portus' not found in package names. Trying capabilities.
The command '/bin/sh -c chmod +x /init &&     mkdir -m 0600 /tmp/build &&     (        gpg --homedir /tmp/build --keyserver ha.pool.sks-keyservers.net --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2 ||         gpg --homedir /tmp/build --keyserver pgp.mit.edu --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2 ||         gpg --homedir /tmp/build --keyserver keyserver.pgp.com --recv-keys 55a0b34d49501bb7ca474f5aa193fbb572174fc2     ) &&     gpg --homedir /tmp/build --export --armor 55A0B34D49501BB7CA474F5AA193FBB572174FC2 > /tmp/build/repo.key &&     rpm --import /tmp/build/repo.key &&     rm -rf /tmp/build &&     zypper ar -f obs://Virtualization:containers:Portus/openSUSE_Leap_15.0 portus &&     zypper ref &&     zypper -n in --from portus ruby-common portus &&     zypper clean -a &&     rm -rf /etc/pki/trust/anchors &&     ln -sf /certificates /etc/pki/trust/anchors' returned a non-zero code: 104
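For readers skimming the long RUN line: the gpg part is a fallback chain over three keyservers, joined with `||`. A minimal, self-contained sketch of the pattern (`recv_key` is a hypothetical stand-in for the real `gpg --keyserver <server> --recv-keys <key>` call):

```shell
#!/bin/sh
# The || fallback chain from the failing RUN step, isolated so the
# pattern is visible. recv_key is a hypothetical stand-in for
# `gpg --homedir ... --keyserver <server> --recv-keys <key>`.
KEY=55a0b34d49501bb7ca474f5aa193fbb572174fc2

recv_key() {
  # stand-in: pretend only the last keyserver is reachable
  [ "$1" = "keyserver.pgp.com" ]
}

( recv_key ha.pool.sks-keyservers.net "$KEY" ||
  recv_key pgp.mit.edu "$KEY" ||
  recv_key keyserver.pgp.com "$KEY" ) &&
echo "key $KEY fetched"
```

Note that in the log above the key fetch actually succeeded via a fallback server; the build fails later, when zypper finds no provider of `portus` in the Leap 15.0 repository.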

Deployment information

No deployment

Configuration:
No configuration

Portus version: master (b87d37e)

Josua-SR added a commit to Josua-SR/Portus that referenced this issue Jan 31, 2020
Also add devel:languages:ruby repository for required ruby2.6 package.
Fixes SUSE#2279

Signed-off-by: Josua Mayer <josua@solid-run.com>
@vl-bwalocha

Hi, branches master and v2.4 are failing; look at the badges.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 2, 2020

Successfully repaired opensuse/portus:2.5 OCI container image

Hi @Josua-SR @vl-bwalocha This time it's done: I have a solution you can use, all details in #2241 (comment)

:)

I will read any feedback from you with interest

@Jean-Baptiste-Lasselle

Oh, I just noticed that it's your Josua-SR@f297f25 that introduced the updated Ruby installation method, so my fix fixes yours... :) Full details in the release notes of the release I mention at #2241 (comment)

@Josua-SR
Author

Josua-SR commented Mar 6, 2020

@Jean-Baptiste-Lasselle I am really confused!

Besides that, I also have a branch where I can successfully build container images from portus master now... and apparently I approached it quite differently:

I went on the suse obs and rebuilt the rpms from my fork, and then created the container image from that repo.

Your explanations are very extensive and overwhelm me a little. I am especially curious about the ruby versioning issue. I actually did not expect anyone to hardcode the ruby version - not in distro packages, and not in containers. Wouldn't the sane solution be to use system defaults everywhere?

@Josua-SR
Author

Josua-SR commented Mar 6, 2020

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 7, 2020

Besides that, I also have a branch where I can successfully build container images from portus master now... and apparently I approached it quite differently:

I went on the suse obs and rebuilt the rpms from my fork, and then created the container image from that repo.

Your explanations are very extensive and overwhelm me a little. I am especially curious about the ruby versioning issue.
I actually did not expect anyone to hardcode the ruby version - not in distro packages, and not in containers. Wouldn't the sane solution be to use system defaults everywhere?

Hi @Josua-SR Thank you so much for your answer; I have worked to actually understand the whole build-from-source cycle. So thank you so much for asking me, and for giving me more details on how you yourself worked on portus.
Here are the things I did :

  • I git cloned github.com/SUSE/Portus and ran docker-compose build; it failed. I tried to quickly repair that, and did not succeed.
  • So I left building from source aside, and reduced my aim to just running portus, using the docker images on Docker Hub;
  • I used examples/compose as a base, which I had to repair, plus I had to understand how to configure everything properly, all of it automated:
    • I first had a success using portus:2.4.3
    • I then investigated issue Garbage Collector removes all tags #2241: using portus:2.4.3, we don't have some features, like the garbage collector keep_latest feature. The investigation went far, to finally be sure that the image portus:2.4.3 does not contain the commit that makes the keep_latest feature operational. For the investigation that established that, see Garbage Collector removes all tags #2241 (comment)
    • So I asked the issuer of Garbage Collector removes all tags #2241 to try the same test using the published portus:2.5, assuming that this image was published with the purpose of making key features such as keep_latest available to users, even though there is no 2.5.x source code release as of today.
    • Using portus:2.5 we had multiple issues, including this error in the logs:
background_1                        | /usr/bin/bundle:23:in `load': cannot load such file -- /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError)
background_1                        | 	from /usr/bin/bundle:23:in `<main>'
zypper ar -f obs://devel:languages:ruby/openSUSE_Leap_15.1 ruby
  • Wouldn't the sane solution be to use system defaults everywhere?: I released that, and shared it everywhere I could, to actually discuss that; but what my work shows is that we have to either change the ruby opensuse package itself, or change portus so that it respects Ruby development standards. I am no specialist of Ruby at all, so I don't have a point of view yet. But I had to dive into ruby development to understand the dependency hell, and untie it. So if I have to, I will learn a lot more about ruby development on my own, yet I always prefer learning from others.

A few questions on your build from source process

I went on the suse obs and rebuilt the rpms from my fork, and then created the container image from that repo.

zypper ar -p 1 -f obs://Virtualization:containers:Portus:2.5/openSUSE_Leap_15.1 portus

# --
# And in the original : https://github.com/openSUSE/docker-containers/blob/portus-2.5/derived_images/portus/Dockerfile
# they install [portus] package with : 
# zypper ar -p 1 -f obs://Virtualization:containers:Portus:2.5/openSUSE_Leap_15.0 portus
  • I did not change that, but when you say "rebuilt the rpms from my fork, and then created the container image from that repo", does that mean that your build-from-source process is the following (it's a question):
    • you build portus from source and package it into a zypper package (an RPM? is it *.rpm for zypper, like Red Hat? My first time using opensuse was last November, for this project). To do this, do you also use (maybe only) this packaging/suse/make_spec.sh script?
    • you push that portus package to the public OBS at obs://Virtualization:containers:Portus:2.5/openSUSE_Leap_15.1 (or maybe your own account from https://build.opensuse.org/ ; yes, you use https://build.opensuse.org/project/monitor/home:mayerjosua:branches:Portus, don't you?). To do that, do you use this package_and_push_to_obs.sh script?
    • And then in your Dockerfile, you have a zypper ar command that installs the package from the OBS linux repository where you pushed your rpm. So you changed the Dockerfile so that it adds your https://build.opensuse.org/project/monitor/home:mayerjosua:branches:Portus zypper repo instead of the official obs://Virtualization:containers:Portus:2.5/openSUSE_Leap_15.1 zypper repo, didn't you?
    • Is that how you do it? As of today, how the portus team does the portus build from source is a big mystery to me; I just unveiled things / proved facts. But I'm used to that in the open source world, and I thank the opensuse team for giving me their source code. Plus, I today think it is a really important thing that the whole community keeps alternatives to harbor.
  • As for the build from source, I am also working on a debian-based containerized stack, because I know I just have to follow general Ruby on Rails recommendations for developers to be sure I can build portus from source, and a docker image after that. In this work, I have finished a build within which portus actually starts; now when I run it with a simple docker run command, it complains about where the database is. I'll test it deployed in the exact same docker-compose as my opensuzie:portus:2.5 in the next weeks.

All in all, I would really like to get to the bottom of it all, and fix what I believe is a much higher-level issue: how the whole factory processes are designed. I want a factory that completely works, and which people can actually start using in less than one hour of setup, if not 2 minutes, so that my customers trust portus in production, because they know that at any moment they can hire any devops engineer, and he will be able to run pipelines successfully in less than 4 hours - a backup / restore / test session, say, or a secret rotation. What the OpenSUSE team wants to do with it, or not, is their call; I am just grateful they open sourced this project.

Also, I am working on deploying to kubernetes, with scale up/down tests, load balancing portus and the registry. I have seen people in issues stating they did it, but found no resources on that. I have investigated the subject, and if portus has problems with that, I'm almost sure it is technically possible, and worth the opportunity, to change portus source code so that it supports auto-scaling.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 7, 2020

hi again @Josua-SR. After writing my sum-up and questions above, I am almost sure that it is the Portus OpenSUSE package that has a bad design:

BUNDLER_VERSION="1.16.0"

And in my fix https://github.com/pokusio/opensuzie-oci-library/releases/tag/0.0.2 , I had to reset the bundler version to 1.16.4:

https://github.com/pokusio/opensuzie-oci-library/blob/659b287e136fcb708859e74957b80a89a644b23c/library/portus/init#L67

/usr/bin/gem.ruby2.6 uninstall bundle --version '<1.17.3'
/usr/bin/gem.ruby2.6 install bundle --version '1.16.4'

/usr/bin/gem.ruby2.6 uninstall bundler --version '<1.17.3'
/usr/bin/gem.ruby2.6 install bundler --version '1.16.4'
  • At least, we should have :
BUNDLER_VERSION=${BUNDLER_VERSION:-"1.16.0"}
# 
# Instead of 
# BUNDLER_VERSION="1.16.0"

Otherwise it's not worth sharing a file with a pinned version, if we have to edit it to use it properly.
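The suggested `${VAR:-default}` form can be exercised directly in any POSIX shell; a minimal, self-contained check of the default-with-override behavior:

```shell
#!/bin/sh
# ${BUNDLER_VERSION:-1.16.0} keeps an exported value and falls back to
# 1.16.0 when the variable is unset or empty.
BUNDLER_VERSION="${BUNDLER_VERSION:-1.16.0}"
echo "using bundler $BUNDLER_VERSION"
```

Run it plain and it prints the default; run it with `BUNDLER_VERSION=1.16.4` in the environment and the override wins.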

@Josua-SR
Author

Josua-SR commented Mar 8, 2020

\o @Jean-Baptiste-Lasselle

I'll try to answer in a structured way here:

This is my current build-process:

  • push changes to my fork at github.com/Josua-SR/Portus
  • trigger source service on build.opensuse.org for fetching source code from my fork
  • obs builds an rpm for me, and makes it available at http://download.opensuse.org/repositories/home:/mayerjosua:/branches:/Portus/openSUSE_Leap_15.1/
  • build docker image from Dockerfile in Portus repository at docker/Dockerfile!

There are a few important pieces to note here:

  • The spec file for the rpm is not taken from github. However that is easy to change, and on my wishlist.

  • There is a suspicious copy of bundler-1.16.4.gem in the obs repository - being injected into every rpmbuild.

  • There is a kiwi file for building a portus docker image at https://github.com/openSUSE/portus-image - and I did adapt it to use my fork repository - but did not test the resulting image yet. In the SUSE world, this would be the second CI part of portus after building the rpm - and will actually be triggered by the obs every time the rpm changes.

My goals are somewhat related to yours:

My use of portus is for an internal docker registry hosting both private and public images - for use by future CI pipelines.
I am using a bare-metal deployment of kubernetes with a mix of armv7-a, aarch64 and x86 nodes. x86 is a scarce resource here, and I expect to deploy portus mostly on the aarch64 nodes. Therefore my goal is to build Portus images for all 3 architectures.

Clair is not something I have looked at yet, and neither have I deployed more than one instance of portus and background yet.

@Josua-SR
Author

Josua-SR commented Mar 8, 2020

* in the `make_spec.sh`, hard-coded, and impossible to override
  https://github.com/SUSE/Portus/blob/53bd63e299f47f1761f55ba2bc53b9cb804c1616/packaging/suse/make_spec.sh#L6

I actually believe this bit is irrelevant - the remainder of that script makes no use of this particular variable - or does it?

Also, I'd like to point you to my "solution" to bundler, which is inspired by yours, and uses the fact that bundler-1.16.4 is actually part of the portus rpm file:

In docker/Dockerfile:

RUN set -e; \
  mkdir -p /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/; \
  ln -s /srv/Portus/vendor/bundle/ruby/2.6.0/gems/bundler-1.16.4/exe/bundle /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle; \
  ln -s /srv/Portus/vendor/bundle/ruby/2.6.0/gems/bundler-1.16.4/exe/bundler /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundler; \
  :

Josua-SR@fe354d7
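The effect of those two `ln -s` lines can be tried anywhere with throwaway paths; a self-contained sketch (the directories below are made up for illustration, the real ones are in the RUN step above):

```shell
#!/bin/sh
# Recreate the idea of the RUN step above with throwaway paths: the
# versioned location that ruby probes becomes a symlink to the binary
# that actually ships in the vendor tree.
root=$(mktemp -d)
mkdir -p "$root/vendor/bundler-1.16.4/exe" "$root/gems/bundler-1.16.4/exe"
printf '#!/bin/sh\necho "bundler 1.16.4"\n' > "$root/vendor/bundler-1.16.4/exe/bundle"
chmod +x "$root/vendor/bundler-1.16.4/exe/bundle"
ln -s "$root/vendor/bundler-1.16.4/exe/bundle" "$root/gems/bundler-1.16.4/exe/bundle"
"$root/gems/bundler-1.16.4/exe/bundle"   # resolves through the symlink
```

The probe path never needs its own copy of the binary; it only needs to resolve to the one that exists.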

@Josua-SR
Author

Josua-SR commented Mar 8, 2020

* To do that, do you use [this `package_and_push_to_obs.sh` script](https://github.com/SUSE/Portus/blob/b87d37e4e692b4fe5616b6f0970cb606688c344a/packaging/suse/package_and_push_to_obs.sh#L2) ?

Nope, not at all.
I use the osc command-line tool for interacting with build.opensuse.org - basically:

osc co home:mayerjosua:branches:portus
cd home:mayerjosua:branches:Portus/portus

# for fetching new source code
osc service rr
osc update

# for changes to obs files:
osc ci -m "something meaningful"

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 8, 2020

This is my current build-process:

* push changes to my fork at github.com/Josua-SR/Portus

* trigger source service on build.opensuse.org for fetching source code from my fork

* obs builds an rpm for me, and makes it available at http://download.opensuse.org/repositories/home:/mayerjosua:/branches:/Portus/openSUSE_Leap_15.1/

* build docker image from Dockerfile in Portus repository at `docker/Dockerfile`!

Hi @Josua-SR , Thank you so much for this discussion !

  • Question about what I quoted above :
    • So in docker/Dockerfile, you replaced zypper ar -f obs://Virtualization:containers:Portus/openSUSE_Leap_15.0 portus && \ with zypper ar -f obs://mayerjosua:Portus/openSUSE_Leap_15.1 portus && \, didn't you? Oh yes, I saw on your fork zypper ar -f obs://home:mayerjosua:branches:Portus/openSUSE_Leap_15.1 portus && \ Great.
    • Also, I am completely new to OBS, which I am not sure I will use yet, and thank you especially for explaining the osc command-line tool for interacting with build.opensuse.org; I duly note your indications.
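The one-line repo swap discussed above can also be scripted instead of edited by hand; a sketch with sed against a throwaway copy (the OLD/NEW repo URLs are the ones quoted in this thread):

```shell
#!/bin/sh
# Swap the official OBS repo for the fork's repo in a Dockerfile line,
# here applied to a throwaway fragment rather than the real file.
OLD='obs://Virtualization:containers:Portus/openSUSE_Leap_15.0'
NEW='obs://home:mayerjosua:branches:Portus/openSUSE_Leap_15.1'
frag=$(mktemp)
printf 'RUN zypper ar -f %s portus && \\\n' "$OLD" > "$frag"
sed -i "s|$OLD|$NEW|" "$frag"
cat "$frag"
```

Using `|` as the sed delimiter avoids having to escape the slashes in the repo paths.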

About the kiwi file

Omg, I never heard of or saw a reference anywhere in the documentation to https://github.com/openSUSE/portus-image: thank you so much for telling me about that!

About the strange bundler

I actually believe this bit is irrelevant - the remainder of that script makes no use of this particular variable - or does it?

I checked it and you are right; I had not checked how the variable would be used. Having checked now, there is no executable invoked in the script that can make use of the BUNDLER_VERSION environment variable. The script just copies files and generates the spec file, apparently defining the structure of the RPM package to build.

I am not used to building linux packages yet, but I will be soon; I read all this about portus with interest, and it will be an excellent everyday use case to bear in mind when I work more on linux packaging automation.
I worked on automatic provisioning of Red Hat Spacewalk and Foreman Katello in preparation; I actually also needed portus for that, as I provision Spacewalk and Foreman Katello only in containers.

  • There is a suspicious copy of bundler-1.16.4.gem in the obs repository - being injected into every rpmbuild.

So there is a problem somewhere in the CI/CD around dependency management. I'll be very interested in understanding what went wrong, because if that happened to an OpenSUSE team, it is something that will happen to other teams - hence that suspicious copy of bundler-1.16.4.gem.

There is a kiwi file for building a portus docker image at https://github.com/openSUSE/portus-image - and I did adapt it to use my fork repository - but did not test the resulting image yet. In the SUSE world, this would be the second CI part of portus after building the rpm - and will actually be triggered by the obs every time the rpm changes.

Again thank you so much for letting me know about https://github.com/openSUSE/portus-image , and that's very interesting indeed :

  • what's funny there is that we have a typical situation, or it feels like it
  • usually when I lead a dev team, I end up telling them:
    • "And now you understand why the maven or npm teams had no choice but to make the build fail, to prevent the developer from running an executable when some test failed."
    • "So that's why it was so smart to build angular like that: you have to compile it, or you don't have the runnable thing. So you introduce the same thing as maven, but for angular, that is angular-cli. Unless you can build it, you can't run it. And you can't build it if any test fails."

That is very interesting; I will think more about what you wrote, and will continue the discussion.

https://opensuse.github.io/kiwi/

My goals are somewhat related to yours:

My use of portus is for an internal docker registry hosting both private and public images - for use by future CI pipelines.
I am using a bare-metal deployment of kubernetes with a mix of armv7-a, aarch64 and x86 nodes. x86 is a scarce resource here, and I expect to deploy portus mostly on the aarch64 nodes. Therefore my goal is to build Portus images for all 3 architectures.

Claire is not something I looked at yet, neither did I deploy more than 1 of portus and background yet.

How interesting! I'd really be interested in somehow keeping contact to discuss our respective projects' development:

  • I searched a bit, 2 years ago, to find out on what kind of ARM architectures I could containerize and have a K8S cluster node: I will one day spin up a Raspberry Pi / Arduino-based K8S cluster. I came across things like Linaro, etc. Doing that, I was left with the idea that I would not try it on ARM unless it's a proper ARM v8-A 64-bit arch. So you did it with ARMv7-A? Amazing! Which kind of Cortex? (tell me when I am too indiscreet, no problem). About that, you probably already know, but just in case, since it would be so useful in your use case: https://github.com/kubernetes-sigs/node-feature-discovery labels your cluster nodes automatically, with labels telling every dirty little secret of the hardware underneath.
  • Ok, I note that you did not try more than one instance of portus, and I will definitely keep you informed when I have a first test suite on load balancing it
  • About the Clair scanner, here is all I did to add the Clair service to my portus setup:
    • In my docker-compose.yml, I add 2/3 environment variables to both the portus and background services, stating where the clair scanner is. Clair works in a grunt-like way - simple and efficient: you call it, telling it "this is a layer, scan it please", or "this is an image, scan its layers please". Clair answers with a huge JSON containing all the scan results. Worth noting: I think here we could even tell only the background service, and not portus, where the scanner is; I haven't tested that yet. Here are the sample environment variables I used to connect portus to clair:
        # --- CLAIR SCANNER @[clair.pegasusio.io]
        # http://port.us.org/features/6_security_scanning.html#intro
        # http for test MUST HAVE AN SSL TLS CERTIFICATE
        - PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060
        # - PORTUS_SECURITY_CLAIR_SERVER=https://clair.pegasusio.io:6060

        - PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061
        - PORTUS_SECURITY_CLAIR_TIMEOUT=900s
  • I add this to my docker-compose.yml, where I add the Clair Service and its database :
  clair:
    # image: quay.io/coreos/clair:v2.0.1
    # image: quay.io/coreos/clair:v2.1.2
    # image: ${OCI_REGISTRY_SERVICE_FQDN}/pokus/clair:v2.0.1
    image: ${OCI_REGISTRY_SERVICE_FQDN}/pokus/clair:v2.1.2

    build:
      context: oci/clair
    restart: unless-stopped
    # entrypoint: ["/usr/bin/dumb-init", "--", "/clair"]
    entrypoint: ["/usr/bin/dumb-init", "--", "/clair.customized"]

    depends_on:
      - postgres
    links:
      - postgres
    ports:
      - "6060-6061:6060-6061"
    volumes:
      - $PWD/tmpclair:/tmp
      - $PWD/clair/clair.yml:/clair.yml
      # - ./clair/clair.yml:/config/config.yaml
      - $PWD/clair/clair.customized:/clair.customized
      - $PWD/secrets/certificates:/secrets/certificates
    command: [-config, /clair.yml]
    networks:
      pipeline_portus:
        aliases:
         - clair.pegasusio.io
  postgres:
    image: library/postgres:10-alpine
    environment:
      POSTGRES_PASSWORD: portus
    networks:
      pipeline_portus:
        aliases:
         - pgclair.pegasusio.io
  • I build my own Clair service:
FROM quay.io/coreos/clair:v2.1.2
# FROM quay.io/coreos/clair:v2.0.1
# v2.1.2

# see update.sh for why all "apk install"s have to stay as one long line
RUN apk update && apk upgrade && apk add dumb-init curl ca-certificates

RUN chmod +x /clair
COPY clair.customized /
RUN chmod +x /clair.customized
RUN mkdir -p /secrets/certificates

ENTRYPOINT ["/usr/bin/dumb-init", "--", "/clair.customized"]
  • But actually all I do is redefine the ENTRYPOINT with the /clair.customized shell script, which contains the following (note: the certificate under there, whose issuer / Certificate Authority has to be trusted by clair, is the registry's, because clair will hit the registry's API, which is served over HTTPS):
#!/bin/sh

set -x

echo "--------------------------------"
echo "--------------------------------"
echo "------ [clair.customized] ------"
echo "--------------------------------"
echo "--------------------------------"

mkdir -p /secrets/certificates

echo "--------------------------------"
echo "------ [clair.customized] ------"
echo "-------  [ content of ]  -------"
echo "-----[/secrets/certificates]----"
echo "--------------------------------"
ls -allh /secrets/certificates
echo "--------------------------------"
cp /secrets/certificates/portus-oci-registry.crt /usr/local/share/ca-certificates
update-ca-certificates
echo "--------------------------------"
echo "------ [clair    STARTUP] ------"
echo "--------------------------------"

# passing args from Dockerfile
echo ""
echo "--------------------------------"
echo "------ arg1 = [$1] ------"
echo "--------------------------------"
echo "------ arg2 = [$2] ------"
echo "--------------------------------"
echo ""
/clair $1 $2
  • I set up a self-signed SSL/TLS certificate for the clair service, and like for the other services, there is a config param to configure Clair so it knows where the certificate file is. I have only had clair working over http, not https, yet; but that config is in a config file, clair.yml, and here is mine (the section to configure your clair service's PKI is the api section, not notifier):
# Copyright 2015 clair authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The values specified here are the default values that Clair uses if no configuration file is specified or if the keys are not defined.
clair:
  database:
    # Database driver
    type: pgsql
    options:
      # PostgreSQL Connection string
      # https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
      source: host=pgclair.pegasusio.io port=5432 user=postgres password=portus sslmode=disable statement_timeout=60000

      # Number of elements kept in the cache
      # Values unlikely to change (e.g. namespaces) are cached in order to save prevent needless roundtrips to the database.
      cachesize: 16384

      # 32-bit URL-safe base64 key used to encrypt pagination tokens
      # If one is not provided, it will be generated.
      # Multiple clair instances in the same cluster need the same value.
      paginationkey:

  api:
    # v3 grpc/RESTful API server address
    addr: "0.0.0.0:6060"

    # Health server address
    # This is an unencrypted endpoint useful for load balancers to check to healthiness of the clair server.
    healthaddr: "0.0.0.0:6061"

    # Deadline before an API request will respond with a 503
    timeout: 900s

    # Optional PKI configuration
    # If you want to easily generate client certificates and CAs, try the following projects:
    # https://github.com/coreos/etcd-ca
    # https://github.com/cloudflare/cfssl
    servername:
    cafile:
    keyfile:
    certfile:

  worker:
    namespace_detectors:
      - os-release
      - lsb-release
      - apt-sources
      - alpine-release
      - redhat-release

    feature_listers:
      - apk
      - dpkg
      - rpm

  updater:
    # Frequency the database will be updated with vulnerabilities from the default data sources
    # The value 0 disables the updater entirely.
    interval: 2h
    enabledupdaters:
      - debian
      - ubuntu
      - rhel
      - oracle
      - alpine

  notifier:
    # Number of attempts before the notification is marked as failed to be sent
    attempts: 3

    # Duration before a failed notification is retried
    renotifyinterval: 2h

    http:
      # Optional endpoint that will receive notifications via POST requests
      endpoint:
      # https://PORTUS_SERVICE_FQDN_JINJA2_VAR:3000/v2/token

      # Optional PKI configuration
      # If you want to easily generate client certificates and CAs, try the following projects:
      # https://github.com/cloudflare/cfssl
      # https://github.com/coreos/etcd-ca
      servername:
      cafile:
      keyfile:
      certfile:

      # Optional HTTP Proxy: must be a valid URL (including the scheme).
      proxy:
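The "this is a layer, scan it please" call described above can be sketched against Clair v2's POST /v1/layers endpoint. A hedged, offline illustration (the layer digest, registry URL and hostnames below are made up; the real call would go to the clair service from the compose file):

```shell
#!/bin/sh
# Shape of a Clair v2 layer-scan request. Nothing here touches the
# network; the curl line is left commented as the actual call.
CLAIR=http://clair.pegasusio.io:6060
PAYLOAD='{"Layer":{"Name":"sha256:deadbeef","Path":"https://registry.example.com/v2/foo/blobs/sha256:deadbeef","Format":"Docker"}}'
echo "POST $CLAIR/v1/layers"
echo "$PAYLOAD"
# curl -s -X POST "$CLAIR/v1/layers" -d "$PAYLOAD"
# Clair answers with a JSON document listing the layer's features and
# known vulnerabilities.
```

Path must be a URL the Clair container can fetch the layer blob from, which is why the registry's CA has to be trusted inside the clair image, as described above.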

@Josua-SR
Author

Josua-SR commented Mar 8, 2020

About the strange bundler

Turns out I actually have a little more to say about the Bundler:

  1. portusctl expects the bundler to be "bundled" in the vendor folder:
    https://github.com/Josua-SR/portusctl/blob/master/exec.go#L124

  2. system-provided bundler binary in /usr/bin is usable by either invoking portusctl --vendor=false or replacing all occurrences of portusctl exec in docker/init with bundle exec

  3. traces of ruby 2.5 are pulled in by the spec file, which is doing a lot of version mangling. I cleaned that up - for now - to:

    • target ruby-2.6 by build-depending on ruby-2.6 only
    • use bundler provided by the distro package (/usr/bin)
    • remove all features of the make_spec script
  4. traces of ruby2.5 are also pulled in by the order of zypper dependency resolution. A clean docker image with only ruby-2.6 can be forced by explicitly installing ruby2.6 ruby2.6-stdlib ruby2.6-rubygem-gem2rpm together with portus.

  5. GEM_PATH in docker/init does not have to contain the version - setting it to /srv/Portus/vendor is good enough

  6. The GEM_PATH / /srv/Portus/vendor folder could be avoided completely when the required gems are installed system-wide. While targeting debian inside a container, e.g., this might be feasible ...

With steps 2..5 I completely avoid the initial ruby error about searching a particular version of bundler in /srv/Portus/vendor!
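Steps 2 and 5 above can be sketched together: an unversioned GEM_PATH plus the distro bundler in place of portusctl's vendored lookup. The paths are the ones quoted in this thread; the replacement lines are illustrative, not the exact docker/init diff:

```shell
#!/bin/sh
# Unversioned GEM_PATH (step 5): no ruby/2.6.0 component is needed.
export GEM_PATH=/srv/Portus/vendor
echo "GEM_PATH=$GEM_PATH"
# Step 2, conceptually, in docker/init:
#   replace:  portusctl exec <command>
#   with:     bundle exec <command>
```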

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 9, 2020

With steps 2..5 I completely avoid the initial ruby error about searching a particular version of bundler in /srv/Portus/vendor!

Thank you so much for clearing that up on the linux packaging side! This will also be helpful for me in setting up a complete rails pipeline; I will for example take into account that no specific version of bundler should be searched for, and that's logical here: the version in use should have a symbolic link from the top searched path to the bundler binary, a path which should both be in PATH and not mention any version number, for example /usr/bin/ruby/bundle

GEM_PATH in docker/init does not have to contain the version - setting it to /srv/Portus/vendor is good enough

The GEM_PATH / /srv/Portus/vendor folder could be avoided completely when the required gems are installed system-wide. While targeting debian inside a container, e.g., this might be feasible ...

I can confirm it is feasible; I have done exactly that using a ruby / debian stretch image, then stacked on top of it rails, golang, whatever I needed to build and run portus.

system-provided bundler binary in /usr/bin is usable by either invoking portusctl --vendor=false or replacing all occurrences of portusctl exec in docker/init with bundle exec

One funny thing to mention here is that, setting up my Debian image, I had to deal with portusctl:

  • I could not use the OpenSUSE packages to install portusctl in the container.
  • So what did I do? I tried building portusctl from source. The problem is that if you run a standard Go build on the portusctl repo, you get an error; the build is awful, there is no git workflow, no branches but master, even zero tags on the repo. Well, we know the story.
  • So, understanding what portusctl is, what I would get if I persisted in trying to build portusctl from source, and the fact that a Ruby on Rails application can be started without portusctl in the first place...
  • I immediately followed the Ruby standards, and soon figured out how to run puma with bundler.
  • And honestly, we're better off without portusctl: if portusctl is supposed to make Portus REST API calls easy, I prefer playing with curl rather than with an executable with zero git workflow in its development, zero releases, and which is supposed to be a dependency of a project that itself has a hard time mastering dependency and distribution management. Let's generate the swagger docs neatly for a start, and for every Portus release; that would be much more of an improvement than a not-helping helper. (Someone just wanted to show they know how to code in Go? :) )

So I also confirm that using bundler instead of portusctl works, and that is exactly what I did to start Portus inside a Debian image. Below is my current Dockerfile; I set its CMD to /bin/bash, run it interactively (-itd, restart always), and I can start Portus with a bundler exec "pumactl -F /srv/Portus/config/puma.rb start" command.

I ended up with my modified /init, where the most important part is to have a working wait_for_database function:

#!/bin/bash


# --------------------------------------------
# see https://github.com/pokusio/opensuzie-oci-library/issues/1#issuecomment-593322264
# --------------------------------------------
# --------------------------------------------
# -----  POKUS PORTUS FIX
# -----  issue fixed :
# -----  https://github.com/SUSE/Portus/issues/2241
# --------------------------------------------
# --------------------------------------------
# ------  FIX short description
# ---
# --- Goal I had : Do not change the OpenSUSE
# --- package based installation of Portus and
# --- its dependencies, here the Ruby related packages
# --- Because they work hard on packages to be the best suited for the underlying OS.
# ---
# --- I designed the fix based on the closest
# --- issue I found on other ruby stacked
# --- projects on github.com :
# ---
# ---  https://github.com/rubygems/rubygems/issues/2180#issuecomment-365263622
# ---
# --- 1./ The issue is that the
# ---     OpenSUSE installed package expects
# ---     the 'bundle' executable at a path
# ---     that does not exist;
# ---     So we use a symlink to fix that
# ---
# --- 2./ The issue is that the
# ---     OpenSUSE installed package expects
# ---     the 'bundle' executable at a path
# ---     that does not exist; But I noticed
# ---     that this path is using the 'bundle'
# ---     executable version number.
# ---     So I searched the system, to find
# ---     out what version of bundle was installed
# ---     by the openSUSE Ruby-packages.
# ---     Answer was (dockerhub 'opensuse/portus:2.5')
# ---     version '1.16.4'. So I uninstalled the bundle/bundler twins
# ---
# ---     Also, paths referenced by the https://github.com/rubygems/rubygems/issues/2180#issuecomment-365263622
# ---     were debian specific, so I had to modify
# ---     them, to match OpenSUSE Leap file system
# ---     layout :
# ---
# ---     for example '/usr/lib64/ ...'
# ---     instead of  '/usr/lib/ ...'
# ---     (typical layout customisation for virtualization / containerization)
# --------------------------------------------
# --------------------------------------------
# --------------------------------------------


export RUBY_MAJOR_VERSION=${RUBY_MAJOR_VERSION:-'2'}
export RUBY_MINOR_VERSION=${RUBY_MINOR_VERSION:-'6'}
export RUBY_UPDATE_VERSION=${RUBY_UPDATE_VERSION:-'0'}
export RUBY_VERSION="${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION}.${RUBY_UPDATE_VERSION}"
export RAKE_VERSION=${RAKE_VERSION:-'12.3.2'}
# so GEM_PATH should be set to ... ?
export BUNDLER_VERSION=${BUNDLER_VERSION:-'1.16.4'}


# export RAILS_ENV=production

/usr/bin/gem.ruby2.6 uninstall bundle --version '<1.17.3'
/usr/bin/gem.ruby2.6 install bundle --version '1.16.4'

/usr/bin/gem.ruby2.6 uninstall bundler --version '<1.17.3'
/usr/bin/gem.ruby2.6 install bundler --version '1.16.4'


# ----
# Symlink repairing missing path :
#  /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/
#
#
mkdir -p /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/

# because on opensuse Leap_15.1 , the
# ruby bundle executable is at [/usr/bin/bundle.ruby2.6], not at
# [/var/lib/gems/${RUBY_VERSION}/gems ...]
ln -s /usr/bin/bundle.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION} /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/bundle
ln -s /usr/bin/bundler.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION} /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/bundler
# ln -s /usr/bin/ruby.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION} /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/ruby
# ln -s /usr/bin/rake.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION} /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/rake
# ln -s /usr/bin/gem.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION} /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/gem



# echo '' > /pokus.portus.issue.2241.fix
# echo 'yes, I was indeed executed, my dear' > /pokus.portus.issue.2241.fix
# echo '' > /pokus.portus.issue.2241.fix
# echo "content of []"
# ls -allh /usr/lib64/ruby/gems/${RUBY_VERSION}/gems/rake-${RAKE_VERSION}/exe/rake > /pokus.portus.issue.2241.fix
# echo "directory created: [/usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/]" > /pokus.portus.issue.2241.fix
# echo "ruby executable path: [/usr/bin/bundle.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION}]" > /pokus.portus.issue.2241.fix
# echo " [/usr/bin/bundle.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION}] is mapped to: [/usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/bundle]" > /pokus.portus.issue.2241.fix
# echo "ruby executable path: [/usr/bin/bundler.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION}]" > /pokus.portus.issue.2241.fix
# echo " [/usr/bin/bundler.ruby${RUBY_MAJOR_VERSION}.${RUBY_MINOR_VERSION}] is mapped to: [/usr/lib64/ruby/gems/${RUBY_VERSION}/gems/bundler-${BUNDLER_VERSION}/exe/bundler]" > /pokus.portus.issue.2241.fix


# This script will ensure Portus' database is ready to be used. It will keep
# waiting for the db to be usable, but the script will exit with an error
# after a certain amount of failed attempts.
#
# The script will automatically import all the SSL certificates from
# `/certificates` into the final system. This is needed to talk with the
# registry API when this one is protected by TLS.
#
# Finally the script will start apache running Portus via mod_rails.

set -e

wait_for_database() {
  should_setup=${1:-0}

  TIMEOUT=90
  COUNT=0
  RETRY=1

  # relax errexit inside the polling loop (re-enabled at the end of the function)
  set +e
  while [ $RETRY -ne 0 ]; do
    # case $(portusctl exec --vendor rails r /srv/Portus/bin/check_db.rb | grep DB) in
    case $(bundle exec "rails r /srv/Portus/bin/check_db.rb" | grep DB) in
      "DB_DOWN")
        if [ "$COUNT" -ge "$TIMEOUT" ]; then
          printf " [FAIL]\n"
          echo "Timeout reached, exiting with error"
          exit 1
        fi
        echo "Waiting for mariadb to be ready in 5 seconds"
        sleep 5
        COUNT=$((COUNT+5))
        ;;
      "DB_EMPTY"|"DB_MISSING")
        if [ $should_setup -eq 1 ]; then
          # create db, apply schema and seed
          echo "Initializing database"
          # portusctl exec --vendor rake db:setup
          bundle exec "rake db:setup"
          if [ $? -ne 0 ]; then
            echo "Error at setup time"
            exit 1
          fi
        fi
        ;;
      "DB_READY")
        echo "Database ready"
        break
        ;;
    esac
  done
  set -e
}

setup_database() {
  wait_for_database 1
}

# Usage: file_env 'XYZ_DB_PASSWORD' 'example'. This code is taken from:
# https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
file_env() {
    local var="$1"
    local fileVar="${var}_FILE"
    if [ -v "${var}" ] && [ -v "${fileVar}" ]; then
        echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
        exit 1
    fi
    if [ -v "${fileVar}" ]; then
        val="$(< "${!fileVar}")"
        export "$var"="$val"
    fi
    unset "$fileVar"
}

# Setup environment variables from secrets.
secrets=( PORTUS_DB_PASSWORD PORTUS_PASSWORD PORTUS_SECRET_KEY_BASE
          PORTUS_EMAIL_SMTP_PASSWORD PORTUS_LDAP_AUTHENTICATION_PASSWORD )
for s in "${secrets[@]}"; do
    if [[ -z "${!s}" ]]; then
        file_env "$s"
    fi
done

# Ensure additional certificates (e.g. docker registry) are known.
update-ca-certificates

# Further settings
export PORTUS_PUMA_HOST="0.0.0.0:3000"
export RACK_ENV="production"
export RAILS_ENV="production"
export CCONFIG_PREFIX="PORTUS"

# --- Don't you! (mess with GEM_PATH)
# if [ -z "$PORTUS_GEM_GLOBAL" ]; then
#     export GEM_PATH="/srv/Portus/vendor/bundle/ruby/2.6.0"
# fi

# On debug, print the environment in which we'll call Portus.
if [ "$PORTUS_LOG_LEVEL" == "debug" ]; then
    # printenv
    env
fi

# Go to the Portus directory and execute the proper command.
cd /srv/Portus
if [ ! -z "$PORTUS_BACKGROUND" ]; then
    wait_for_database
    # portusctl exec --vendor rails r /srv/Portus/bin/background.rb
    bundler exec "rails r /srv/Portus/bin/background.rb"
elif [ -z "$PORTUS_INIT_COMMAND" ]; then
    setup_database
    # portusctl exec --vendor "pumactl -F /srv/Portus/config/puma.rb start"
    bundler exec "pumactl -F /srv/Portus/config/puma.rb start"
else
    wait_for_database
    # portusctl exec --vendor "$PORTUS_INIT_COMMAND"
    bundler exec "$PORTUS_INIT_COMMAND"
fi

# ------
# To start portus in [portus] mode:
# [bundler exec "pumactl -F /srv/Portus/config/puma.rb start"]
# [bundler exec "puma -C /srv/Portus/config/puma.rb"]
# [rails server]
# ------------------------------------------
# To start portus in [background] mode:
# [bundler exec "rails r /srv/Portus/bin/background.rb"]
#
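The wait_for_database function above boils down to a poll-with-timeout pattern; here is a stripped-down, self-contained sketch of it, where `probe` and the flag file are made-up stand-ins for the real `rails r check_db.rb` call:

```shell
# Stand-in probe: reports DB_READY once a flag file exists, DB_DOWN before.
probe() { [ -f /tmp/db-ready-flag ] && echo "DB_READY" || echo "DB_DOWN"; }

touch /tmp/db-ready-flag   # pretend the database is already up
TIMEOUT=10; COUNT=0
while true; do
  case "$(probe)" in
    "DB_READY") echo "Database ready"; break ;;
    "DB_DOWN")
      if [ "$COUNT" -ge "$TIMEOUT" ]; then echo "Timeout reached"; exit 1; fi
      sleep 1; COUNT=$((COUNT+1)) ;;
  esac
done
```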

  • so here is my Dockerfile, based on a ruby-on-debian image (note that I have stripped out any use of GEM_PATH, and just let the stack take care of itself):
FROM ruby:2.6.0-stretch

# ------------------------------------------
#  Custom Commands that I added, inspired by
#  https://github.com/SUSE/Portus/issues/2244#issuecomment-584889394
# ------------------------------------------
#  Installs rake, and annotate, missing in
#  the SUSE/Portus Dockerfile
# ------------------------------------------




# --- because check_db.rb expects, I don't know why yet, a 2.6.0 ruby env.
# FROM ruby:2.6.5-stretch
# https://hub.docker.com/layers/ruby/library/ruby/2.6.5-stretch/images/sha256-159e7e054244af6aa75696a2a5141ccbcd12683e5b143dcb17077809d7d0c87d?context=explore
# FROM opensuse/ruby:2.6
# MAINTAINER is deprecated ...
# MAINTAINER SUSE Containers Team <containers@suse.com>

# I don't rely on opensuse/ruby anymore, in the stead, I rely on
# the official https://hub.docker.com/layers/ruby/library/ruby/
#

ENV COMPOSE=1

# Install the entrypoint of this image.
COPY init /

WORKDIR /srv/Portus

# ------------------------------------------------------v
# --- Ok, At OpenSUSE , they wanted to just install
# --- dependencies from Gemfiles, but I have to modify
# --- it live, so I don't want my changes involved by my
# ---  'gem install' commands, to be wiped out
# ---
# COPY built-portus/Gemfile* ./
COPY built-portus/ .

RUN apt-get update -y && apt-get install -y curl


# 1./ Install latest go version : picked from suse team dockerfile : [GOLANG_VERSION=1.10]
ARG GOLANG_VERSION=1.10
ARG GOLANG_OS=linux
ARG GOLANG_CPU_ARCH=amd64
# installing golang version
RUN curl https://dl.google.com/go/go${GOLANG_VERSION}.${GOLANG_OS}-${GOLANG_CPU_ARCH}.tar.gz -o go${GOLANG_VERSION}.${GOLANG_OS}-${GOLANG_CPU_ARCH}.tar.gz
RUN tar -C /usr/local -xzf go${GOLANG_VERSION}.${GOLANG_OS}-${GOLANG_CPU_ARCH}.tar.gz

# ARG PATH=$PATH:/usr/local/go/bin
# ENV PATH=$PATH:/usr/local/go/bin
# ARG PATH
# ENV PATH=$PATH
# RUN export PATH=$PATH:/usr/local/go/bin && go version

RUN PATH=$PATH:/usr/local/go/bin go version

RUN echo "[FINISHED] : Golang installation commands transposed from OPENSUSE to Debian"



# 2./ Then we install dev. dependencies
#
# ------------------------------------------
# ------------------------------------------
#  OPENSUSE packages installed using zypper
# ------------------------------------------
#  OPENSUSE           | DEBIAN
# ------------------------------------------
# ruby2.6-devel       | already installed in base image [ruby:2.6.5-stretch]
# libmariadb-devel    | libmariadb-dev
# postgresql-devel    | postgresql-server-dev-all
# nodejs              | [nodejs] Instead of just installing the package ... I install nodejs properly ?
# libxml2-devel       | libxml2-dev
# libxslt1            | libxslt1.1
# git-core            | git-core
# go1.10              | I don't install that package; I installed a proper golang environment instead, no OS package. See the section above this table.
# phantomjs           | phantomjs
# gcc-c++             | g++, but I installed [build-essential], instead of [g++ / pattern / devel_basis OpenSUSE packages ]
# pattern             | ...no match found
# devel_basis         | ...no match found
# ------------------------------------------
# Notes on installed packages
# ---> the [devel_basis] [pattern] packages
#      are used for building stuff like
#      nokogiri.(Source : OpenSUSE Team)
# ------------------------------------------
RUN apt-get install -y libmariadb-dev \
                       postgresql-server-dev-all \
                       nodejs \
                       libxml2-dev \
                       libxslt1.1 \
                       git-core \
                       phantomjs \
                       build-essential
RUN echo "[FINISHED] : packages installation commands transposed from OPENSUSE to Debian"



# ARG GEM_PATH="/usr/local:/usr/local/bin:/srv/Portus/vendor:/srv/Portus/vendor/bundle:/srv/Portus/vendor/bundle/ruby/2.6.0"
# ENV GEM_PATH="/usr/local:/usr/local/bin:/srv/Portus/vendor:/srv/Portus/vendor/bundle:/srv/Portus/vendor/bundle/ruby/2.6.0"
# ENV GEM_PATH="${GEM_PATH}:/srv/Portus/vendor/bundle/ruby/2.6.0"

# ARG GEM_HOME=/srv/Portus/vendor/bundle/ruby/2.6.0
# ENV GEM_HOME=/srv/Portus/vendor/bundle/ruby/2.6.0
# ENV GEM_HOME=/srv/Portus/vendor/bundle


# https://bundler.io/v1.17/bundle_install.html
# ARG BUNDLE_PATH="${BUNDLE_PATH}:/srv/Portus/vendor/bundle"
# ENV BUNDLE_PATH="${BUNDLE_PATH}:/srv/Portus/vendor/bundle"

# ARG BUNDLE_HOME=/srv/Portus/vendor/bundle
# ENV BUNDLE_HOME=/srv/Portus/vendor/bundle


# ARG RAILS_ENV=prod
# ENV RAILS_ENV=prod

# ------------------------------------------
#  Commands that should run identically on
#  Debian, and OPENSUSE
# ------------------------------------------
#  installs dev stack to build portus

ARG RUBYGEMS_VERSION=3.0.3
ENV RUBYGEMS_VERSION=3.0.3


ARG RAILS_VERSION=5.2.3
ENV RAILS_VERSION=5.2.3

ARG BUNDLE_VERSION='1.16.4'
ENV BUNDLE_VERSION='1.16.4'

ARG BUNDLER_VERSION='1.16.4'
ENV BUNDLER_VERSION='1.16.4'


# -- forcing update rubygem to an accurate version
RUN gem update --system ${RUBYGEMS_VERSION}

# RUN rm /usr/local/bin/bundle

RUN gem uninstall bundler
RUN gem uninstall bundle
RUN gem uninstall bundle --install-dir /usr/local/
RUN rm -fr /usr/local/bundle
# --- #
# RUN gem install bundler --no-document -v 1.17.3
RUN gem update
RUN gem install bundler -v "${BUNDLER_VERSION}"
# RUN gem install bundle --no-document -v "${BUNDLER_VERSION}"
RUN gem install bundle
RUN bundle install --retry=3
# RUN bundle update --bundler
# unnecessary in a docker library official ruby image # update-alternatives --install /usr/bin/bundle bundle /usr/bin/bundle.ruby2.6 3 && \
# unnecessary in a docker library official ruby image # update-alternatives --install /usr/bin/bundler bundler /usr/bin/bundler.ruby2.6 3 && \

RUN export PATH=$PATH:/usr/local/go/bin && go get -u github.com/vbatts/git-validation && \
    go get -u github.com/openSUSE/portusctl && \
    mv /root/go/bin/git-validation /usr/local/bin/ && \
    mv /root/go/bin/portusctl /usr/local/bin/

RUN echo "[FINISHED] : Commands that should run identically on Debian and OPENSUSE"



# --- mandatory future change :
# accurate version of annotate, no context dependent dependency resolution
ARG RUBY_ANNOTATE_VERSION=3.1.0
ENV RUBY_ANNOTATE_VERSION=3.1.0
# RUBY_ANNOTATE_VERSION=2.7.4
# RUN gem i annotate -v $RUBY_ANNOTATE_VERSION
RUN gem i annotate
ARG RUBY_RAKE_VERSION=12.3.2
ENV RUBY_RAKE_VERSION=12.3.2

RUN gem i rake -v $RUBY_RAKE_VERSION

# ---
# Or Ruby ([RVM RubyVersionManager] I believe) is
# going to throw an error
# ---
# https://rvm.io/workflow/projects
RUN ruby --version|awk '{print $2}' > .ruby-version

# ---
# Or Bundler is gonna complain about it missing
# RUN gem i minitest -v 5.11.3

#

#RUN export PATH=$PATH:/usr/local/go/bin && go get gopkg.in/urfave/cli.v1
RUN apt-get update -y && apt-get install -y go-md2man
RUN apt-get install graphviz -y
# RUN export PATH=$PATH:/usr/local/go/bin && make install
# --
# RUN bundler install --deployment && bundler package --all

EXPOSE 3000
# ENTRYPOINT ["/init"]


# --------------------------------------------------------------
# --------------------------------------------------------------
# ------ IMPOSSIBLE TO INSTALL PORTUSCTL, AND
# ------ ITS BUILD FROM SOURCE IS BUGGY IN EVERY WAY
# --------------------------------------------------------------
# --------------------------------------------------------------
# ------
ARG PUMA_VERSION=4.3.3
ENV PUMA_VERSION=4.3.3

RUN gem i puma -v "${PUMA_VERSION}"

COPY .portusgitignore .
RUN cp ./.portusgitignore ./.gitignore
RUN git init
RUN git add --all && git commit -m "releasing-2.5.0-rc"
RUN git tag 2.5.0 -m "releasing-2.5.0-rc"

RUN bundle --deployment
# ---
# ---
# By default,its the foreground webapp and Docker auth v2 service, not the background.
# ---

ENV PORTUS_BACKGROUND=${PORTUS_BACKGROUND:-''}
# ------
# To start portus in [portus] mode:
# [bundler exec "pumactl -F /srv/Portus/config/puma.rb start"]
# [bundler exec "puma -C /srv/Portus/config/puma.rb"]
# [rails server]
# ------------------------------------------
# To start portus in [background] mode:
# [bundler exec "rails r /srv/Portus/bin/background.rb"]
#

# Very interesting: the pumactl command was not found, so I am forced to find a way to install pumactl
# root@747b24cf1854:/srv/Portus# git tag 2.5.0 -m "releasing-2.5.0-rc"
# root@747b24cf1854:/srv/Portus# bundler exec "pumactl -F /srv/Portus/config/puma.rb start"
# [25681] Puma starting in cluster mode...
# [25681] * Version 3.12.1 (ruby 2.6.0-p0), codename: Llamas in Pajamas
# [25681] * Min threads: 1, max threads: 4
# [25681] * Environment: development
# [25681] * Process workers: 4
# [25681] * Preloading application
# [schema] Selected the schema for mysql
# [WARN] couldn't connect to database. Skipping PublicActivity::Activity#parameters's serialization
# No such file or directory - connect(2) for /srv/Portus/tmp/sockets/puma.sock
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:371:in `initialize'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:371:in `new'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:371:in `add_unix_listener'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:141:in `block in parse'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:90:in `each'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/binder.rb:90:in `parse'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/runner.rb:153:in `load_and_bind'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/launcher.rb:186:in `run'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/cli.rb:80:in `run'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/cluster.rb:412:in `run'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/control_cli.rb:261:in `start'
# /usr/local/bundle/gems/puma-3.12.1/lib/puma/control_cli.rb:225:in `run'
# /usr/local/bundle/gems/puma-3.12.1/bin/pumactl:8:in `<top (required)>'
# /usr/local/bundle/bin/pumactl:23:in `load'
# /usr/local/bundle/bin/pumactl:23:in `<main>'
# root@747b24cf1854:/srv/Portus# # git tag 2.5.0 -m "releasing-2.5.0-rc"
# root@747b24cf1854:/srv/Portus# # git add --all && git commit -m "releasing-2.5.0-rc"
# root@747b24cf1854:/srv/Portus# gem install portusctl
# ERROR:  Could not find a valid gem 'portusctl' (>= 0) in any repository
# ERROR:  Possible alternatives: portugal
# root@747b24cf1854:/srv/Portus# ./bin/
# bundle               integration/         rake                 spring
# ci/                  rails                setup                test-integration.sh
# root@747b24cf1854:/srv/Portus# rails server
# => Booting Puma
# => Rails 5.2.3 application starting in development
# => Run `rails server -h` for more startup options
# [schema] Selected the schema for mysql
# [WARN] couldn't connect to database. Skipping PublicActivity::Activity#parameters's serialization
# [25693] Puma starting in cluster mode...
# [25693] * Version 3.12.1 (ruby 2.6.0-p0), codename: Llamas in Pajamas
# [25693] * Min threads: 1, max threads: 4
# [25693] * Environment: development
# [25693] * Process workers: 4
# [25693] * Preloading application
# [25693] * Listening on unix:///srv/Portus/tmp/sockets/puma.sock
# [25693] Use Ctrl-C to stop
# [25693] - Worker 0 (pid: 25703) booted, phase: 0
# [25693] - Worker 1 (pid: 25705) booted, phase: 0
# [25693] - Worker 2 (pid: 25709) booted, phase: 0
# [25693] - Worker 3 (pid: 25717) booted, phase: 0
#







####### environment for running portus under puma
ENV PORTUS_MACHINE_FQDN=${PORTUS_MACHINE_FQDN:-'pegasusio.io'}
ENV PORTUS_PASSWORD=${PORTUS_PASSWORD:-'123123123'}
ENV PORTUS_KEY_PATH=${PORTUS_KEY_PATH:-'/secrets/certificates/portus.key'}
# RACK_ENV MUST BE SET TO PRODUCTION FOR THIS TO WORK
ENV RACK_ENV=${RACK_ENV:-'production'}
ENV PORTUS_SECRET_KEY_BASE=${PORTUS_SECRET_KEY_BASE:-'lhkjhgjhgjhgjhgjgf638ygjh685'}
ENV CCONFIG_PREFIX=PORTUS

ENV PORTUS_DB_HOST=${PORTUS_DB_HOST:-'db'}
ENV PORTUS_DB_DATABASE=${PORTUS_DB_DATABASE:-'portus_production'}
ENV PORTUS_DB_PASSWORD=${DATABASE_PASSWORD:-'tintin'}
ENV PORTUS_DB_POOL=5
####### startup command that works
# bundler exec "pumactl -F /srv/Portus/config/puma.rb start"
RUN bundle --deployment
RUN mkdir -p /srv/Portus/tmp/sockets
RUN touch /srv/Portus/tmp/sockets/puma.sock

CMD ["/bin/bash"]
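The ENV lines above lean on the `${VAR:-'default'}` parameter-expansion pattern; as a quick shell-side illustration of how that default kicks in (variable values here are arbitrary):

```shell
# ${VAR:-default} yields the default only when VAR is unset or empty.
unset PORTUS_DB_HOST
echo "${PORTUS_DB_HOST:-db}"       # db       (unset -> default used)
PORTUS_DB_HOST=mariadb
echo "${PORTUS_DB_HOST:-db}"       # mariadb  (set -> value kept)
```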

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 9, 2020

1. portusctl expects the bundler to be "bundled" in the vendor folder:
   https://github.com/Josua-SR/portusctl/blob/master/exec.go#L124

Oh, and most of all: AT LAAAAAST we have the final word, so it was worth writing it all down: "there is no executable that makes use of the BUNDLER_VERSION environment variable at runtime" :)

It keeps happening: every time I go "wait, wait, so if we put it in proper words, what is happening here is ...", by the time I'm finished writing the question, we all have the answer. That's a classic, but it is always good to have everyday proof of it.
Though this time it took checking portusctl's source code, which I did not do! That is also why I always combine my approach with blackbox testing (I don't care what's in there, I'm going to check how it behaves, and then guess): is that how you got the idea of searching portusctl's source code for occurrences of BUNDLER_VERSION? Did you run tests on the image, changing the value of the BUNDLER_VERSION variable, and check at runtime that "something changed"?

Also, I really worked hard on the Docker image of Portus, so I was pretty sure of the 1.16.4 requirement to run the Portus application in puma. So, when I saw BUNDLER_VERSION=1.16.whatever, it was so obviously the kind of thing I have seen developer teams do, so classical ... (Is there a PORTUS_BUNDLER_VERSION env var? No.)

Honestly @Josua-SR I am like really so proud of what we achieved together, so obvious in this page how complementary we were.

Oops, I was getting happy too fast.

Actually, that the bundler executable is expected in the vendor directory is one thing; it does not say why, in this path /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError) expected by the whole openSUSE package, there is the number 1.16.4. It must come from somewhere, and it is not coming from the underlying system, because I checked that this path does not exist, which is why I had to uninstall version 1.17 and reinstall a 1.16.x instead.

So I think we don't have the final word yet; nevertheless, you pointed at a part of the portusctl code that is particularly ugly: I mean, it is really not worth invoking portusctl to run bundle commands.

traces of ruby 2.5 are pulled in by the spec file, which is doing a lot of version mangling. I cleaned that up too - for now

I don't see another possibility: don't you think it was this spec file, defining the portus zypper package, that pulled in the 1.16.x packages... maybe the 15.0 OBS repo referenced in the spec file...? That's what your result means, when you say:

With steps 2..5 I completely avoid the initial ruby error about searching a particular version of bundler in /srv/Portus/vendor

As for my error logs, it was not in /srv/Portus/vendor but at /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle that the executable was searched for and not found... Oh, but I see, that's why you set GEM_PATH=/srv/Portus/vendor; yes, the /init script sets it up for us to /srv/Portus/vendor/bundle/ruby/2.6.0. And I had the error on the /usr/lib64/ruby/gems/2.6.0/gems/[...] path, not in /srv/Portus/vendor, nor /srv/Portus/vendor/bundle/ruby/2.6.0...

So OK about GEM_PATH, yet we still don't know where this 1.16 value comes from... So my best guess is that it was the 15.0 OBS repo referenced in the spec file that caused installation of the old 1.16.x version of bundler within the Portus zypper package. What do you think?
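For what it's worth, one other place such a hard 1.16.4 requirement usually comes from is the BUNDLED WITH stanza at the bottom of Gemfile.lock: RubyGems tries to activate exactly that bundler version when bundle is invoked. This is a hedged guess, but easy to check; a quick way to read the pinned value (sample file with fabricated, illustrative contents):

```shell
# Fabricated miniature Gemfile.lock just to show where the pin lives.
cat > /tmp/Gemfile.lock <<'EOF'
GEM
  specs:

BUNDLED WITH
   1.16.4
EOF

# Print the version recorded under BUNDLED WITH.
awk '/BUNDLED WITH/ { getline; gsub(/ /, ""); print }' /tmp/Gemfile.lock
# 1.16.4
```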

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 9, 2020

Successful portusctl build from source

So just a detail, because portusctl is not that important, but I could successfully build the portusctl golang executable, using the docker container defined above to build and run portus (there is a GOLANG stack installed in there, plus the md2man debian package that may be required). I simply executed the following in that container, which worked (and there is another bizarre thing about a build dependency, that is, an unavoidable md2man error, plus the cp -fR fix... well, now it works):

#!/bin/bash


if [ "x$PORTUSCTL_COMMIT_ID" == "x" ]; then
  echo " The [PORTUSCTL_COMMIT_ID] must be set, to build PORTUSCTL from source"
  exit 1
fi;

if [ "x$PORTUSCTL_VERSION" == "x" ]; then
  echo " The [PORTUSCTL_VERSION] must be set, to build PORTUSCTL from source"
  exit 1
fi;

# export PORTUSCTL_COMMIT_ID=${PORTUSCTL_COMMIT_ID:-'HEAD'}
# export PORTUSCTL_VERSION=${PORTUSCTL_VERSION}

git clone https://github.com/openSUSE/portusctl
cd portusctl/

git checkout $PORTUSCTL_COMMIT_ID

export PATH=$PATH:/usr/local/go/bin
export GOPATH=$(pwd)/vendor
export GOBIN=$GOPATH/bin

echo "GOPATH=[$GOPATH]"
echo "GOBIN=[$GOBIN]"

go get gopkg.in/urfave/cli.v1


echo ''
echo '  ==>> Now installing dependency [cpuguy83/go-md2man], to build [portusctl]'
echo ''

go get github.com/cpuguy83/go-md2man/md2man

echo ''
echo '  ==>> The error about [cpuguy83/go-md2man] is expected, and won'"'"'t prevent building [portusctl]'
echo '       This imperfection in the [portusctl] build from source will soon be corrected by the pokus dev team'
echo ''

# -----
# Correcting a few bizarre bugs about md2man
mkdir -p $GOPATH/src/github.com/cpuguy83/go-md2man/v2/ 
cp -fR $GOPATH/src/github.com/cpuguy83/go-md2man/md2man $GOPATH/src/github.com/cpuguy83/go-md2man/v2/



make install


echo "building portusctl from source worked"

./vendor/bin/portusctl --version
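The "bizarre md2man fix" in the script amounts to mirroring the old go-md2man import path under the new v2/ module path, presumably so the build finds the import it expects. Isolated with throwaway directories (the layout mimics the real $GOPATH tree; file names are stand-ins):

```shell
# Recreate the layout go-md2man is fetched into...
GOPATH=$(mktemp -d)
mkdir -p "$GOPATH/src/github.com/cpuguy83/go-md2man/md2man"
touch "$GOPATH/src/github.com/cpuguy83/go-md2man/md2man/md2man.go"

# ...then mirror the md2man package directory under v2/.
mkdir -p "$GOPATH/src/github.com/cpuguy83/go-md2man/v2/"
cp -fR "$GOPATH/src/github.com/cpuguy83/go-md2man/md2man" \
       "$GOPATH/src/github.com/cpuguy83/go-md2man/v2/"

ls "$GOPATH/src/github.com/cpuguy83/go-md2man/v2/md2man"
# md2man.go
```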
  • worth mentioning about md2man:
root@c9ece586cb3e:/srv/Portus/portusctl# portusctl --version
portusctl version .
Copyright (C) 2017-2019 Miquel Sabaté Solà <msabate@suse.com>
License GPLv3+: GNU GPL version 3 or later "<http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
root@c9ece586cb3e:/srv/Portus/portusctl# git remote -v
origin	https://github.com/openSUSE/portusctl (fetch)
origin	https://github.com/openSUSE/portusctl (push)
root@c9ece586cb3e:/srv/Portus/portusctl# cat ./man/portusctl-version.1.md| head -n 15
PORTUSCTL 1 "portusctl User manuals" "SUSE LLC." "NOVEMBER 2017"
================================================================

# NAME
portusctl version \- Print the client and server version information

# SYNOPSIS

**portusctl version**

# DESCRIPTION
This command differs from the plain **-v, --version** global flag in the fact
that it outputs a full description of the versions being targeted. In
particular, it prints:

root@c9ece586cb3e:/srv/Portus/portusctl# ls -allh ./man/
total 56K
drwxr-xr-x  2 root root 4.0K Mar  9 21:07 .
drwxr-xr-x 10 root root 4.0K Mar  9 21:08 ..
-rw-r--r--  1 root root 1.1K Mar  9 21:07 portusctl-bootstrap.1.md
-rw-r--r--  1 root root 1.2K Mar  9 21:07 portusctl-create.1.md
-rw-r--r--  1 root root  788 Mar  9 21:07 portusctl-delete.1.md
-rw-r--r--  1 root root 1.9K Mar  9 21:07 portusctl-exec.1.md
-rw-r--r--  1 root root 1.8K Mar  9 21:07 portusctl-explain.1.md
-rw-r--r--  1 root root 2.0K Mar  9 21:07 portusctl-get.1.md
-rw-r--r--  1 root root 1.1K Mar  9 21:07 portusctl-health.1.md
-rw-r--r--  1 root root 1.1K Mar  9 21:07 portusctl-update.1.md
-rw-r--r--  1 root root 1.3K Mar  9 21:07 portusctl-validate.1.md
-rw-r--r--  1 root root 1.2K Mar  9 21:07 portusctl-version.1.md
-rw-r--r--  1 root root 3.1K Mar  9 21:07 portusctl.1.md
-rw-r--r--  1 root root 1.6K Mar  9 21:07 sanitize.go
root@c9ece586cb3e:/srv/Portus/portusctl# 

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Mar 10, 2020

Released portusctl and portus build from source, all in debian

https://github.com/pokusio/portus-build-from-source/releases/tag/0.0.1

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Apr 13, 2020

Hi @Josua-SR, I want to confirm that I just today tested including my Debian-based build-from-source of Portus: and it works great.

Josua, I really think that the original OpenSUSE design using their package managers is useless, certainly as badly managed as everything else. I hope you will forgive me if it is you they left alone to manage the zypper packages; in that case, I propose we work together to provide both approaches (apt-get or, if you really want, zypper), but with the goal that everything works for users with just one copy-paste in their shell session, one single command.

I don't think it is you, since you had to set up your own OBS account and run your own package pipelines, but still, they might have accepted merge/pull requests from you.

Plus I want to share with you a great result I understood today, on probably one of the most valuable business cases, and I think a very good idea I had about it, see #2281 (comment)

@stale

stale bot commented Jul 12, 2020

Thanks for all your contributions!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale label Jul 12, 2020
@stampeder

I just found Portus. It looked like a great solution for my private registry needs. Tried installing the official version and it bombed. It didn't like Zypper. I use Ubuntu as my host OS. Can you point me to an install that uses apt-get and creates the GUI container for Portus that I can install and use? It seems to be a great tool if I can get it to work.
Thanks.
Glenn.

@stale stale bot closed this as completed Jul 25, 2020
@SuperSandro2000

@stampeder Use the Docker Image
