This repository has been archived by the owner on Apr 17, 2023. It is now read-only.

Garbage Collector removes all tags #2241

Closed
ghost opened this issue Oct 14, 2019 · 44 comments

@ghost

ghost commented Oct 14, 2019

Description

I activated the Garbage Collector on the Portus Background process with keep_latest: 5 and older_than: 100,
but it deletes all images older than 100 days and ignores the keep_latest flag. As a result, my old repositories were wiped completely.

Steps to reproduce

  1. Using Portus 2.4 with two processes: Portus and Portus Background
  2. Enable the Garbage Collector with the following options:
  garbage_collector:
    enabled: true
    older_than: 100
    keep_latest: 5
    tag: ""
  3. Restart the Portus Background process
  4. All images older than 100 days are deleted; keep_latest is ignored.
  • Expected behavior: an old repository keeps its latest 5 tags
  • Actual behavior: all images older than 100 days are deleted

Here are the initial logs:

[Initialization] Running: 'Registry events', 'Garbage collector'
[catalog] Removed the tag 'master'.
[catalog] Removed the image 'bsdash'.
[catalog] Removed the tag 'master'.
[catalog] Removed the image 'bsdash-gitlabreceiver'.
[catalog] Removed the tag '5.13'.
[catalog] Removed the tag '4.0.28'.
[catalog] Removed the tag '4.0.29'.
[catalog] Removed the tag '4.0.30'.
[catalog] Removed the tag '5.14'.
Handling 'delete' event:
{
  "id": "16748f9c-95cd-496d-ba29-2d46881f657b",
  "timestamp": "2019-10-14T17:04:02.347313389+02:00",
  "action": "delete",
  "target": {
    "digest": "sha256:2aa456083567cc6c63aafca541805fe551088abd347f32d5adca245ee3cc8100",
    "repository": "bs/bsdash"
  },
  "request": {
    "id": "2220cb85-5421-405f-8b0b-ade62cb5c504",
    "addr": "172.17.0.2:46500",
    "host": "registry.bscompany.eu:5000",
    "method": "DELETE",
    "useragent": "Ruby"
  },
  "actor": {
    "name": "portus"
  },
  "source": {
    "addr": "itportus01:5000",
    "instanceID": "5dfbe8ef-e81d-4ce6-84b6-470c720ea725"
  }
}

Deployment information

Deployment method: Portus is deployed as standalone containers (not Compose) which connect to the local MariaDB and Registry.

Configuration:

email:
  from: portus@bscompany.eu
  name: Portus
  reply_to: ''
  smtp:
    enabled: false
    address: smtp.example.com
    port: 587
    domain: example.com
    ssl_tls: ''
    enable_starttls_auto: false
    openssl_verify_mode: none
    ca_path: ''
    ca_file: ''
    user_name: ''
    password: "****"
    authentication: login
gravatar:
  enabled: true
delete:
  enabled: true
  contributors: false
  garbage_collector:
    enabled: false
    older_than: 30
    keep_latest: 5
    tag: ''
ldap:
  enabled: false
  hostname: ldap_hostname
  port: 389
  timeout: 5
  encryption:
    method: ''
    options:
      ca_file: ''
      ssl_version: TLSv1_2
  base: ''
  admin_base: ''
  group_base: ''
  filter: ''
  uid: uid
  authentication:
    enabled: false
    bind_dn: ''
    password: "****"
  group_sync:
    enabled: true
    default_role: viewer
  guess_email:
    enabled: false
    attr: ''
oauth:
  local_login:
    enabled: true
  google_oauth2:
    enabled: false
    id: ''
    secret: ''
    domain: ''
    options:
      hd: ''
  open_id:
    enabled: false
    identifier: ''
    domain: ''
  openid_connect:
    enabled: false
    issuer: ''
    identifier: ''
    secret: ''
  github:
    enabled: false
    client_id: ''
    client_secret: ''
    organization: ''
    team: ''
    domain: ''
  gitlab:
    enabled: false
    application_id: ''
    secret: ''
    group: ''
    domain: ''
    server: ''
  bitbucket:
    enabled: false
    key: ''
    secret: ''
    domain: ''
    options:
      team: ''
first_user_admin:
  enabled: true
signup:
  enabled: false
check_ssl_usage:
  enabled: true
registry:
  jwt_expiration_time:
    value: 15
  catalog_page:
    value: 100
  timeout:
    value: 2
  read_timeout:
    value: 120
machine_fqdn:
  value: portus.bscompany.eu
display_name:
  enabled: false
user_permission:
  change_visibility:
    enabled: true
  create_team:
    enabled: true
  manage_team:
    enabled: true
  create_namespace:
    enabled: true
  manage_namespace:
    enabled: true
  create_webhook:
    enabled: true
  manage_webhook:
    enabled: true
  push_images:
    policy: allow-teams
security:
  clair:
    server: ''
    health_port: 6061
    timeout: 900
  zypper:
    server: ''
  dummy:
    server: ''
anonymous_browsing:
  enabled: true
background:
  registry:
    enabled: true
  sync:
    enabled: true
    strategy: initial
pagination:
  per_page: 10
  before_after: 2

Portus version: 2.4.3@5a616c0ef860567df5700708256f42505cdb9952

env_portus: environment file used for customizing Portus Foreground:

PORTUS_MACHINE_FQDN_VALUE=portus.bscompany.eu
PORTUS_PUMA_HOST=0.0.0.0:3000

PORTUS_SECRET_KEY_BASE=***
PORTUS_KEY_PATH=/certificates/***
PORTUS_PASSWORD=***

CCONFIG_PREFIX=PORTUS

PORTUS_DB_ADAPTER=mysql2
PORTUS_DB_HOST=portus.host.local
PORTUS_DB_USERNAME=***
PORTUS_DB_PASSWORD=***
PORTUS_DB_DATABASE=portus_production
PORTUS_DB_POOL=5

RAILS_SERVE_STATIC_FILES=true

PORTUS_PUMA_TLS_KEY=/certificates/***
PORTUS_PUMA_TLS_CERT=/certificates/***

PORTUS_LOG_LEVEL=info

TZ=Europe/Rome

We are running portus with:

docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config.yml:/srv/Portus/config/config.yml \
       -p 3000:3000 --name portus --env-file=/srv/portus/config/env_portus opensuse/portus:2.4

env_background: environment file used for customizing Portus Background:

PORTUS_MACHINE_FQDN_VALUE=portus.bscompany.eu
PORTUS_PUMA_HOST=0.0.0.0:3000
PORTUS_BACKGROUND=true

PORTUS_SECRET_KEY_BASE=***
PORTUS_KEY_PATH=/certificates/***
PORTUS_PASSWORD=***

CCONFIG_PREFIX=PORTUS

PORTUS_DB_ADAPTER=mysql2
PORTUS_DB_HOST=portus.host.local
PORTUS_DB_USERNAME=***
PORTUS_DB_PASSWORD=***
PORTUS_DB_DATABASE=portus_production
PORTUS_DB_POOL=5

RAILS_SERVE_STATIC_FILES=true

PORTUS_PUMA_TLS_KEY=/certificates/***
PORTUS_PUMA_TLS_CERT=/certificates/***

PORTUS_LOG_LEVEL=info

TZ=Europe/Rome

Then we are running portus background:

docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config.yml:/srv/Portus/config/config.yml \
       --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4

Thanks in advance
Roberto

@shibug

shibug commented Dec 4, 2019

This is affecting us very badly. Any update on this?

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Dec 16, 2019

Description

I activated the Garbage Collector on the Portus Background process with keep_latest: 5 and older_than: 100,
but it deletes all images older than 100 days and ignores the keep_latest flag. As a result, my old repositories were wiped completely.

[...]
Portus version: 2.4.3@5a616c0ef860567df5700708256f42505cdb9952

Thanks in advance
Roberto

Hi @robgiovanardi, I have a hunch but not much time, so I just wanted to give you an important tip:

  • Is the PORTUS_BACKGROUND=true environment variable set?
  • It is unlikely that you can operate Portus in production with a single standalone container: the documentation will confirm that at least a second container is required. They call it the background. That's also why docker-compose at least, if not k8s, is natural for a production deployment: it orchestrates several containers.

Btw, garbage collection sounds so much like a background job, doesn't it? (Imagine a stop-the-world pause, Java-style, in Portus...) But Java has nothing to do with our issue.

Hope this helps

@ghost
Author

ghost commented Dec 16, 2019

Hi @Jean-Baptiste-Lasselle
thanks for your answer. I confirm we have two containers:

CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                    NAMES
96e14d4670ad        opensuse/portus:2.4   "/init"             7 months ago        Up 2 weeks          3000/tcp                 portus_background
e23352be8846        opensuse/portus:2.4   "/init"             7 months ago        Up 4 weeks          0.0.0.0:3000->3000/tcp   portus

Only the portus_background container has PORTUS_BACKGROUND=true set, and it is the only container with the garbage_collector feature enabled:

  garbage_collector:
    enabled: true
    older_than: 100
    keep_latest: 5
    tag: ""

So, yes, we are running garbage collection on the dedicated Portus Background container, and the problem is still there.

@shibug

shibug commented Dec 16, 2019

We also have the same problem and can confirm that the keep_latest setting doesn’t work.

@Jean-Baptiste-Lasselle

Hi @Jean-Baptiste-Lasselle
thanks for your answer. I confirm we have two containers:

CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                    NAMES
96e14d4670ad        opensuse/portus:2.4   "/init"             7 months ago        Up 2 weeks          3000/tcp                 portus_background
e23352be8846        opensuse/portus:2.4   "/init"             7 months ago        Up 4 weeks          0.0.0.0:3000->3000/tcp   portus

Only the portus_background container has PORTUS_BACKGROUND=true set, and it is the only container with the garbage_collector feature enabled:

  garbage_collector:
    enabled: true
    older_than: 100
    keep_latest: 5
    tag: ""

So, yes, we are running garbage collection on the dedicated Portus Background container, and the problem is still there.

Hi @robgiovanardi, thank you so much for your feedback. It's very interesting; I haven't tested the feature yet, but it is an important business case.

But I can try and help you:

  • next step: how is communication established between your registry and the portus_background container? I ask because both your portus and portus_background use the same port number
  • and maybe that's why you went standalone, so that the port conflict does not blow everything up. Plus, when accessed "from the outside", your portus_background is unreachable; it's always your portus container that gets the requests on port 3000. Is that why?

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Dec 16, 2019

@robgiovanardi all in all, since some images (all of them, in fact) are deleted, the communication should be OK, so I think you've got your hands on a real bug inside Portus' code here. Still, your config is odd with those redundant port numbers; I think it's worth checking.

I have another idea, too: for your portus container (not the portus_background), do you have any occurrence of the string garbage_collector in the config file /srv/Portus/config/config.yml?

@ghost
Author

ghost commented Dec 17, 2019

Hi @Jean-Baptiste-Lasselle
thanks for your answer. I confirm we have two containers:

CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                    NAMES
96e14d4670ad        opensuse/portus:2.4   "/init"             7 months ago        Up 2 weeks          3000/tcp                 portus_background
e23352be8846        opensuse/portus:2.4   "/init"             7 months ago        Up 4 weeks          0.0.0.0:3000->3000/tcp   portus

Only the portus_background container has PORTUS_BACKGROUND=true set, and it is the only container with the garbage_collector feature enabled:

  garbage_collector:
    enabled: true
    older_than: 100
    keep_latest: 5
    tag: ""

So, yes, we are running garbage collection on Portus Background dedicated container and the problem is still here..

Hi @robgiovanardi, thank you so much for your feedback. It's very interesting; I haven't tested the feature yet, but it is an important business case.

But I can try and help you:

  • next step: how is communication established between your registry and the portus_background container? I ask because both your portus and portus_background use the same port number
    The Portus foreground is externally facing, so port 3000 is exposed; portus_background has no exposed port because it only needs to communicate with the registry, which is located on the Docker host.
  • and maybe that's why you went standalone, so that the port conflict does not blow everything up. Plus, when accessed "from the outside", your portus_background is unreachable; it's always your portus container that gets the requests on port 3000. Is that why?
    I assumed portus_background doesn't need to be externally accessible. Am I wrong?

I forgot to mention the environment variables used for the background and foreground; I'm adding them to the Description.

@ghost
Author

ghost commented Dec 17, 2019

@robgiovanardi all in all, since some images (all of them, in fact) are deleted, the communication should be OK, so I think you've got your hands on a real bug inside Portus' code here. Still, your config is odd with those redundant port numbers; I think it's worth checking.

I have another idea, too: for your portus container (not the portus_background), do you have any occurrence of the string garbage_collector in the config file /srv/Portus/config/config.yml?

Actually, yes, because we are using the very same config.yml for both foreground and background and running them with different environment variables. I just updated the initial description to include those envs.

Let me try a different config.yml

@Jean-Baptiste-Lasselle


Excellent news @robgiovanardi !!!! Indeed, my idea was that the communication between your registry and the portus_background actually never happens, and here is what that involves:

  • There is one huge (extremely important) difference between the portus container and the portus_background container: the portus_background has the PORTUS_BACKGROUND=true environment variable. The portus container does not (and must not) have that environment variable set to true; I think it defaults to false.
  • So okay: that's a massive difference; it makes what runs in those two containers as different as Windows and Linux.
  • And that's why it is SO important that you make sure the registry communicates with the portus_background: the docker-compose and deployment examples in the Portus project are awfully misleading on that front, for example by using the same (network) identities for completely different services.
  • So here is what I think: your no. 1 priority is to make sure communication happens between your registry and the portus_background
  • I have a question: reading how you speak about it, I think it's possible that the private Docker registry which communicates with your Portus was already there long before you tried Portus, am I right? If yes, can you confirm that this private Docker registry you want to operate is not in the docker-compose.yml (which has Portus inside)?

One final amusing remark: the examples in the Portus distribution are awful, and it's funny, I have a feeling they were kind of torn off a legacy Docker Swarm cluster... I might be wrong. Or not. :)
Anyway, I actually understand OpenSUSE's point of view, and I am here to support the community, because we are going to make Portus work, together, and that is great(ly strategic in the container planet). And I thank the OpenSUSE folks for what they gave us. Plus, I started years ago because of OpenStack.
The OpenSUSE folks will understand the message.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Dec 17, 2019

@robgiovanardi all in all, since some images (all of them, in fact) are deleted, the communication should be OK, so I think you've got your hands on a real bug inside Portus' code here. Still, your config is odd with those redundant port numbers; I think it's worth checking.
I have another idea, too: for your portus container (not the portus_background), do you have any occurrence of the string garbage_collector in the config file /srv/Portus/config/config.yml?

Actually, yes, because we are using the very same config.yml for both foreground and background and running them with different environment variables. I just updated the initial description to include those envs.

Let me try a different config.yml

Yes, do your thing with the config.yml:

  • make sure the garbage_collector config is present for the portus_background,
  • and remove it completely from the portus container's config.yml
  • then I'm almost sure of what's going to happen: you will try and try again to get a garbage collection, but it won't ever happen anymore. (Test with images you don't mind deleting, targeting only them with the tag parameter.)

Having accurate results from that test will help me a lot. What is sure is that we will have to tidy up your network setup, so that it more explicitly tells the operator who's talking to whom and for what purpose. Your security folks will like that too.
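If it helps, here is a minimal sketch of that split-config setup, assuming two separate files on the host: a config_foreground.yml without any garbage_collector section and a config_background.yml with it enabled (the file names and paths are only illustrative; everything else is taken from the docker run commands quoted earlier):

# Foreground: config_foreground.yml contains no garbage_collector section at all.
docker run -d --restart=always -v /opt/ssl:/certificates:ro \
       -v /srv/portus/config/config_foreground.yml:/srv/Portus/config/config.yml \
       -p 3000:3000 --name portus --env-file=/srv/portus/config/env_portus opensuse/portus:2.4

# Background: config_background.yml has delete.garbage_collector enabled with older_than and keep_latest set.
docker run -d --restart=always -v /opt/ssl:/certificates:ro \
       -v /srv/portus/config/config_background.yml:/srv/Portus/config/config.yml \
       --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4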

@ghost
Author

ghost commented Dec 18, 2019


Excellent news @robgiovanardi !!!! Indeed, my idea was that the communication between your registry and the portus_background actually never happens, and here is what that involves:

  • There is one huge (extremely important) difference between the portus container and the portus_background container: the portus_background has the PORTUS_BACKGROUND=true environment variable. The portus container does not (and must not) have that environment variable set to true; I think it defaults to false.
  • So okay: that's a massive difference; it makes what runs in those two containers as different as Windows and Linux.
  • And that's why it is SO important that you make sure the registry communicates with the portus_background: the docker-compose and deployment examples in the Portus project are awfully misleading on that front, for example by using the same (network) identities for completely different services.
  • So here is what I think: your no. 1 priority is to make sure communication happens between your registry and the portus_background

Let me better understand: communication would happen from portus_background to the registry, right? Not from the registry to portus_background?
If this is true, then I can boot up a portus_background without an externally facing port but still on the network: the portus_background can query the MySQL DB, find the registry network settings (attached screenshot, in this case, this one:)
[screenshot: registry settings as shown in the Portus web UI]
and then portus_background can do anything it wants with the registry

  • I have a question: reading how you speak about it, I think it's possible that the private Docker registry which communicates with your Portus was already there long before you tried Portus, am I right? If yes, can you confirm that this private Docker registry you want to operate is not in the docker-compose.yml (which has Portus inside)?

Yes, you're right: the registry is installed as a legacy application, with zypper, not deployed with Docker nor docker-compose.

One final amusing remark: the examples in the Portus distribution are awful, and it's funny, I have a feeling they were kind of torn off a legacy Docker Swarm cluster... I might be wrong. Or not. :)
Anyway, I actually understand OpenSUSE's point of view, and I am here to support the community, because we are going to make Portus work, together, and that is great(ly strategic in the container planet). And I thank the OpenSUSE folks for what they gave us. Plus, I started years ago because of OpenStack.
The OpenSUSE folks will understand the message.

@ghost
Author

ghost commented Dec 18, 2019

Hi @Jean-Baptiste-Lasselle
I can confirm that the problem is still present:

  • completely removed the garbage_collector entries from config.yml for portus:
  #garbage_collector:
  #  enabled: true

  #  # Remove images not pulled and older than a specific value. This value is
  #  # interpreted as the number of days.
  #  #
  #  # e.g.: If an image wasn't pulled in the latest 30 days and the image wasn't
  #  # updated somehow in the latest 30 days, the image will be deleted.
  #  older_than: 30

  #  # Keep the latest X images regardless if it's older than the value set in
  #  # `older_than` configuration.
  #  keep_latest: 15

  #  # Provide a string containing a regular expression. If you provide a
  #  # valid regular expression, garbage collector will only be applied into tags
  #  # matching a given name.
  #  #
  #  # Valid values might be:
  #  #   - "jenkins": if you anticipate that you will always have a tag with a
  #  #     specific name, you can simply use that.
  #  #   - "build-\\d+": your tag follows a format like "build-1234" (note that
  #  #     we need to specify "\\d" and not just "\d").
  #  tag: ""
  • started portus_background with the garbage collector enabled and with an external port:
docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config_background.yml:/srv/Portus/config/config.yml -p3001:3000 --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4

The problem is still there

Things to notice: I enabled debug mode on portus_background; it runs this query to find whether there are any images to delete:

(0.2ms)  SELECT COUNT(*) FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:07:00.047409')

I used older_than: 30 in the garbage collector configuration, which explains the updated_at cutoff of 2019-11-18. But I can't see any other logs indicating that keep_latest is taken into account.

Some debug logs

   (0.4ms)  SELECT COUNT(*) FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:03:08.231707')
  User Load (0.5ms)  SELECT  `users`.* FROM `users` WHERE `users`.`username` = 'portus' LIMIT 1
  Tag Load (0.7ms)  SELECT `tags`.* FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:03:08.231707')
  Repository Load (0.3ms)  SELECT  `repositories`.* FROM `repositories` WHERE `repositories`.`id` = 1 LIMIT 1
  SQL (1.5ms)  UPDATE `tags` SET `tags`.`marked` = 1 WHERE `tags`.`digest` = 'sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a' AND `tags`.`repository_id` = 1          Registry Load (0.3ms)  SELECT  `registries`.* FROM `registries`  ORDER BY `registries`.`id` ASC LIMIT 1
  Namespace Load (0.4ms)  SELECT  `namespaces`.* FROM `namespaces` WHERE `namespaces`.`id` = 3 LIMIT 1
  Tag Load (1.1ms)  SELECT  `tags`.* FROM `tags` WHERE `tags`.`digest` = 'sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a' AND `tags`.`repository_id` = 1  ORDER BY `tags`.`id` ASC LIMIT 1000
[catalog] Removed the tag 'latest'.
  Tag Load (0.2ms)  SELECT  `tags`.* FROM `tags` WHERE `tags`.`id` = 1 LIMIT 1
   (0.1ms)  BEGIN
  ScanResult Load (0.2ms)  SELECT `scan_results`.* FROM `scan_results` WHERE `scan_results`.`tag_id` = 1
  SQL (0.6ms)  DELETE FROM `tags` WHERE `tags`.`id` = 1
   (4.6ms)  COMMIT

With the Tag Load it selects tags with marked = 0 that were updated more than 30 days ago; then at the end it runs a DELETE FROM tags WHERE tags.id = 1. It seems no keep_latest checks are done at all.
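For comparison, a keep_latest-aware collector would first need to work out, per repository, which tags are protected. A rough manual check against the Portus database (a sketch only: the table and column names come from the debug log above, the tag name column is an assumption, and this is not Portus' actual implementation):

# List the 5 newest unmarked tags of repository 1, i.e. the set that keep_latest: 5 should protect.
# Any tag outside this set that is also older than the older_than cutoff would be a deletion candidate.
mysql -u portus -p portus_production -e "
  SELECT id, name, updated_at
  FROM tags
  WHERE repository_id = 1 AND marked = 0
  ORDER BY updated_at DESC
  LIMIT 5;"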

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Dec 19, 2019

Hi @robgiovanardi:

Communication between the registry and portus_background

Let me better understand: communication would happen from portus_background to the registry, right? Not from the registry to portus_background?

Absolutely, exactly that.

About checking whether your background can reach the registry (and not just the database)

Your Portus webpage screenshot:

[screenshot: Portus web UI, registry settings page]

does not make me think that your portus_background can reach the registry. Do you have any reason to think otherwise (genuinely asking; maybe I'm forgetting something)? To me, it could just be the portus container being able to query the Portus database; why else would we have such a configuration for Portus (if not to reach the database):

      - PORTUS_DB_HOST=db
      - PORTUS_DB_DATABASE=portus_production
      - PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
      - PORTUS_DB_POOL=5

About your background container

  • On the machine on which you execute the command :
docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config_background.yml:/srv/Portus/config/config.yml -p3001:3000 --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4

Have you created a directory with the path /srv/Portus/config/env_background? I mean, the path just looks like a path inside a container, not outside one.

  • regardless of this path question, about --env-file, I would need to have a look at the content of /srv/Portus/config/env_background

About portus_background logs

Well, what we see in the logs are SQL queries made by portus_background to the database.
So, about the garbage collection process, and without diving into timestamp details:

  • OK, it's deleting entries in the Portus database, so they no longer appear in the Portus WebUI.
  • but do you have anything else in the portus_background container logs that makes you think anything is actually removed from the registry storage? From what I read in Portus' documentation, images are supposed to be deleted from the registry storage as well, not just the Portus app database. And that's the whole point of garbage collection: making some space.
  • All in all, these portus_background logs make us sure that communication between portus_background and the Portus database is working fine. They tell us nothing about communication between portus_background and the registry you have installed outside of any container, on your images.culturebase.org machine/VM. Here is what can give us more information: run the same test again, without changing any config, and let's see:
    • whether the logs of your registry contain anything that suggests anyone is trying to delete Docker images in your registry, be it the portus_background container or any other (like portus); see the quick log-check sketch after this list;
    • whether the logs of portus_background contain anything that suggests there is communication between portus_background and your registry;
    • another thing that would help confirm my hypothesis: can we see your registry's config.yml file, especially the storage section? That will help me confirm that the MariaDB database is indeed used by Portus, but not as storage for your Docker images. For example, in the config I'm running on my servers, I have this storage configuration section inside the config.yml of my private Docker registry:
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
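A quick way to run that registry-side log check on the host, sketched under the assumption that the zypper-installed registry runs as a systemd unit (the unit name "registry" and the standard filesystem layout under /var/lib/registry are assumptions to adapt):

# Look for DELETE requests reaching the registry (adjust the unit name to your setup).
journalctl -u registry --since "1 hour ago" | grep -i DELETE || echo "no DELETE requests seen"

# Rough per-repository storage usage, to see whether space is actually being freed.
du -sh /var/lib/registry/docker/registry/v2/repositories/* 2>/dev/null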

About the garbage collection process, and without diving into the timestamp details you have pointed out: I did not check it all, but it looks like, indeed, there is a bug here; the SQL queries do not spare the keep_latest tags.

Last thing

I have not yet finished automating the reproduction of your business case, but I will eventually, by the end of December at worst, probably before. Then I'll battle-test your remarks on keep_latest and give you feedback on my results.

Again, I will support your case until it's solved. If you don't want some specific details exposed in this conversation, you can reach me by email at jean.baptiste.lasselleATgmail.com to give me details quickly. I will sign any Non-Disclosure Agreement if necessary, without any financial counterpart or fees.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Feb 15, 2020

Hi @robgiovanardi, just to keep you informed: I am actively working on the matter, still haven't finished the complete automation, and today I found something that sounds very promising in relation to garbage collection: #2275 (comment)

These settings:

# Sync config
- PORTUS_BACKGROUND_REGISTRY_ENABLED=true
- PORTUS_BACKGROUND_SYNC_ENABLED=true
- PORTUS_BACKGROUND_SYNC_STRATEGY=update

made me think... maybe there exist things like:

  • PORTUS_BACKGROUND_GARBAGE_COLLECTION_ENABLED ,
  • PORTUS_BACKGROUND_GC_ENABLED=true # GC for Garbage Collection
  • PORTUS_BACKGROUND_GC_OLDER_THAN=100
  • PORTUS_BACKGROUND_GC_KEEP_LATEST=5

Nevertheless, I wouldn't be surprised if PORTUS_BACKGROUND_REGISTRY_ENABLED=true is required to have garbage collection working fine.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Feb 15, 2020

@robgiovanardi I think I found your solution!!!! And oh my god, the idea I wrote 5 minutes ago, inspired by #2275 (comment), paid off in full!!!!

have a look out there : https://github.com/Ashtonian/server-setup/blob/bc9ac031a18f1c686da5a662d3cf969009a50c38/portus/docker-compose.yml

So yessss! There exist PORTUS_DELETE_GARBAGE_COLLECTOR_XXXX environment variables to activate and configure garbage collection!!! :D :D :D Thank you so much @kylegoetz and Ashtonian

And so, what you need to do is add the following env. variables to both your background and your portus services in the docker-compose.yml

      - PORTUS_DELETE_ENABLED=true
      - PORTUS_DELETE_CONTRIBUTORS=false
      - PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
      - PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
      - PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5

Honestly, I'll try as soon as possible to set that only for the background, just to check whether it works, because there's a potentially unnecessary copy-paste in this example.

I have a lot of other work on Portus, so I can't do that this weekend; I am dying for you to do that and give me feedback even before I run it 😄

The whole docker-compose.yml, so we don't lose it

I found it by searching GitHub for the string PORTUS_BACKGROUND_REGISTRY_ENABLED, and got only 4 code results in the whole of github.com as of 15/02/2020!!

Even funnier 😆: none of those 4 results are in the Portus documentation!! I took the screenshot before more results show up on github.com! 😆

[screenshot: GitHub code search for PORTUS_BACKGROUND_REGISTRY_ENABLED, with no hits in the Portus documentation]

version: "3.7"

services:
  portus:
    image: opensuse/portus:2.4.3
    # env_file:
    #   - ./portus.env
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=portus.ashlab.dev
      - PORTUS_DB_HOST=db
      - PORTUS_DB_DATABASE=portus_production
      - PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
      - PORTUS_DB_POOL=5
      - PORTUS_SECRET_KEY_BASE=${SECRET_KEY_BASE}
      - PORTUS_KEY_PATH=/certificates/portus.ashlab.dev/privatekey.key
      - PORTUS_PASSWORD=${PORTUS_PASSWORD}
      - PORTUS_CHECK_SSL_USAGE_ENABLED=false

      - PORTUS_SIGNUP_ENABLED=false
      - RAILS_SERVE_STATIC_FILES=true

      - PORTUS_GRAVATAR_ENABLED=true
      - PORTUS_DELETE_ENABLED=true
      - PORTUS_DELETE_CONTRIBUTORS=false
      - PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
      - PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
      - PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
      - PORTUS_ANONYMOUS_BROWSING_ENABLED=false

      - PORTUS_OAUTH_GITHUB_ENABLED=true
      - PORTUS_OAUTH_GITHUB_CLIENT_ID=${PORTUS_OAUTH_GITHUB_CLIENT_ID}
      - PORTUS_OAUTH_GITHUB_CLIENT_SECRET=${PORTUS_OAUTH_GITHUB_CLIENT_SECRET}
      - PORTUS_OAUTH_GITHUB_ORGANIZATION=karsto
      # # - PORTUS_OAUTH_GITHUB_TEAM=''
      # # - PORTUS_OAUTH_GITHUB_DOMAIN=''


      #       - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
    # ports:
    #   - 3000:3000
    depends_on:
      - db
    links:
      - db
    volumes:
      - traefik_certs_raw:/certificates:ro
      # - secrets:/certificates:ro
    networks:
      - portus
      - public
    labels:
      - "traefik.enable=true"
      # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=portus.ashlab.dev"
      - "traefik.http.routers.portus.rule=Host(`portus.ashlab.dev`)"
      - "traefik.http.routers.portus.middlewares=https_redirect, sslHeaders"
      - "traefik.http.routers.portus.service=portus"
      - "traefik.http.routers.portus.tls=true"
      - "traefik.http.routers.portus.tls.certresolver=le"
      - "traefik.http.services.portus.loadbalancer.server.port=3000"
      - "traefik.http.services.portus.loadbalancer.server.scheme=http"
      - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
      - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
      # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
      # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
      # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
      # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
      # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
    deploy:
      labels:
        - "traefik.enable=true"
        # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=portus.ashlab.dev"
        - "traefik.http.routers.portus.rule=Host(`portus.ashlab.dev`)"
        - "traefik.http.routers.portus.middlewares=https_redirect, sslHeaders"
        - "traefik.http.routers.portus.service=portus"
        - "traefik.http.routers.portus.tls=true"
        - "traefik.http.routers.portus.tls.certresolver=le"
        - "traefik.http.services.portus.loadbalancer.server.port=3000"
        - "traefik.http.services.portus.loadbalancer.server.scheme=http"
        # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
        # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
        # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
        # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
        # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
        # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
        # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
  background:
    image: opensuse/portus:2.4.3
    depends_on:
      - portus
      - db
    environment:
      # Theoretically not needed, but cconfig's been buggy on this...
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_MACHINE_FQDN_VALUE=portus.ashlab.dev
      - PORTUS_DB_HOST=db
      - PORTUS_DB_DATABASE=portus_production
      - PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
      - PORTUS_DB_POOL=5
      - PORTUS_SECRET_KEY_BASE=${SECRET_KEY_BASE}
      - PORTUS_KEY_PATH=/certificates/portus.ashlab.dev/privatekey.key
      - PORTUS_PASSWORD=${PORTUS_PASSWORD}
      #       - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      # - PORTUS_CHECK_SSL_USAGE_ENABLED=false
      - PORTUS_GRAVATAR_ENABLED=true
      - PORTUS_DELETE_ENABLED=true
      - PORTUS_DELETE_CONTRIBUTORS=false
      - PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
      - PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
      - PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5

      - PORTUS_OAUTH_GITHUB_ENABLED=true
      - PORTUS_OAUTH_GITHUB_CLIENT_ID=${PORTUS_OAUTH_GITHUB_CLIENT_ID}
      - PORTUS_OAUTH_GITHUB_CLIENT_SECRET=${PORTUS_OAUTH_GITHUB_CLIENT_SECRET}
      - PORTUS_OAUTH_GITHUB_ORGANIZATION=karsto
      # - PORTUS_OAUTH_GITHUB_TEAM=''
      # - PORTUS_OAUTH_GITHUB_DOMAIN=''
      - PORTUS_ANONYMOUS_BROWSING_ENABLED=false

      - PORTUS_BACKGROUND=true
      - PORTUS_BACKGROUND_REGISTRY_ENABLED=true
      - PORTUS_BACKGROUND_SYNC_ENABLED=true
      - PORTUS_BACKGROUND_SYNC_STRATEGY=update-delete
    links:
      - db
    # env_file:
    #   - ./portus.env
    volumes:
      - traefik_certs_raw:/certificates:ro
    networks:
      - portus

  db:
    image: library/mariadb:10.0.33
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    # env_file:
    #   - ./portus.env
    environment:
      - MYSQL_DATABASE=portus_production
      - MYSQL_ROOT_PASSWORD=${DATABASE_PASSWORD}
    volumes:
      - mariadb:/var/lib/mysql
    networks:
      - portus

  # clair: TODO:
  #   image: quay.io/coreos/clair
  #   restart: unless-stopped
  #   depends_on:
  #     - postgres
  #   links:
  #     - postgres
  #     - portus
  #   ports:
  #     - "6060-6061:6060-6061"
  #   volumes:
  #     - /tmp:/tmp
  #     - ./clair/clair.yml:/clair.yml
  #   command: [-config, /clair.yml]

  registry:
    image: library/registry:2.7.1
    # env_file:
    #   - ./portus.env
    environment:
      # REGISTRY_HTTP_ADDR: registry.ashlab.dev
      # Authentication
      REGISTRY_AUTH_TOKEN_REALM: https://portus.ashlab.dev/v2/token
      REGISTRY_AUTH_TOKEN_SERVICE: registry.ashlab.dev
      REGISTRY_AUTH_TOKEN_ISSUER: portus.ashlab.dev
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certificates/portus.ashlab.dev/certificate.crt

      # Portus endpoint
      REGISTRY_NOTIFICATIONS_ENDPOINTS: >
        - name: portus
          url: https://portus.ashlab.dev/v2/webhooks/events
          timeout: 2000ms
          threshold: 5
          backoff: 1s
    volumes:
      - traefik_certs_raw:/certificates:ro
      - registry:/var/lib/registry
      - secrets:/secrets:ro
      - ./config.yml:/etc/docker/registry/config.yml:ro
    ports:
      # - 5000:5000
      - 5001:5001 # required to access debug service
    links:
      - portus:portus
    networks:
      - portus
      - public
    labels:
      - "traefik.enable=true"
      # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=registry.ashlab.dev"
      - "traefik.http.routers.registry.rule=Host(`registry.ashlab.dev`)"
      - "traefik.http.routers.registry.middlewares=https_redirect, sslHeaders"
      - "traefik.http.routers.registry.service=registry"
      - "traefik.http.routers.registry.tls=true"
      - "traefik.http.routers.registry.tls.certresolver=le"
      - "traefik.http.services.registry.loadbalancer.server.port=5000"
      - "traefik.http.services.registry.loadbalancer.server.scheme=http"
      # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
      # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
      # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
      # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
      # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
      # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
      # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
    deploy:
      labels:
      - "traefik.enable=true"
      # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=registry.ashlab.dev"
      - "traefik.http.routers.registry.rule=Host(`registry.ashlab.dev`)"
      - "traefik.http.routers.registry.middlewares=https_redirect, sslHeaders"
      - "traefik.http.routers.registry.service=registry"
      - "traefik.http.routers.registry.tls=true"
      - "traefik.http.routers.registry.tls.certresolver=le"
      - "traefik.http.services.registry.loadbalancer.server.port=5000"
      - "traefik.http.services.registry.loadbalancer.server.scheme=http"
      # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
      # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
      # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
      # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
      # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
      # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
      # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"

volumes:
  secrets:
    driver: local
    driver_opts:
      type: "none"
      o: "bind,rw"
      device: "/mnt/workspace/portus/secrets"
  traefik_certs_raw:
    driver: local
    driver_opts:
      type: "none"
      o: "bind,ro"
      device: "/mnt/workspace/traefik_certs_raw/"


  mariadb:
  registry:

networks:
  public:
    external: true
  portus:

@Jean-Baptiste-Lasselle

@robgiovanardi so apply the environment variables I gave you by inserting them into your env_background file.
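Since you use --env-file rather than Compose, that means appending something like the following to /srv/portus/config/env_background (a sketch; the variable names come from the compose file above and the values from your own config), then recreating the portus_background container so it picks them up:

PORTUS_DELETE_ENABLED=true
PORTUS_DELETE_CONTRIBUTORS=false
PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=100
PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5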

@diranged

@Jean-Baptiste-Lasselle Hey - is there a reason why the latest garbage collection code (d847071) isn't in the 2.4.3 release? As far as I can tell the release was made in May, but this code was merged in January? This is a pretty crucial patch. Can we get a 2.4.4 release?

@Jean-Baptiste-Lasselle


Hi Matt @diranged, actually I am not an OpenSUSE engineer, nor am I (yet?) part of the official Portus support or dev team, so I can't make a release. Looks like you're going to have to make a personal release on your infrastructure, building Portus from source (tagged 2.4.4-private-release?).
I think there's a docker image portus:2.5, though there is no 2.5 release yet.

@ghost
Author

ghost commented Feb 17, 2020

Hi @Jean-Baptiste-Lasselle thanks for your help.

About environment variables, you probably read the doc: http://port.us.org/docs/Configuring-Portus.html:

In Portus we follow a naming convention for environment variables: first of all we have the PORTUS_ prefix, and then we add each key in uppercase. So, for example, the previous example can be tweaked by setting: PORTUS_FEATURE_ENABLED and PORTUS_FEATURE_VALUE

So using

- PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5

is equivalent to using:

delete:
  garbage_collector:
    keep_latest: 5

Anyway, I tested the env var, but nothing changed: keep_latest was ignored and all tags were deleted.
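(For what it's worth, a quick sanity check that the variables really reach the background container, assuming the container name portus_background from the docker run commands above:)

# Show the delete/background related settings as seen inside the running container.
docker exec portus_background env | grep -E 'PORTUS_(DELETE|BACKGROUND)' | sort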

@diranged You're right, that fundamental commit isn't in the 2.4.3 release. I'll try to build from the latest sources and come back with feedback.

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Feb 17, 2020


@robgiovanardi @diranged thank you both; I think you are right, keep_latest will only work once we get a release with that commit, meaning that until then we have to build from source to benefit from that feature.

If so :

@robgiovanardi so thank you for:

In Portus we follow a naming convention for environment variables: first of all we have the PORTUS_ prefix, and then we add each key in uppercase. So, for example, the previous example can be tweaked by setting: PORTUS_FEATURE_ENABLED and PORTUS_FEATURE_VALUE

So OK, we can infer which env. variables to use from the config file descriptions:

[screenshot: documentation marking garbage_collector keep_latest as available from 2.4, which is wrong]

My analysis to really understand where we are now (starting from 17/02/2020)

  • the pull request that was merged with the code fix to enable keep_latest: config: improved garbage collector options #2095
  • another related SUSE team delivery (a docs markdown update for the keep_latest option): 925bfc4
  • the pull request from @vitoravelino was a branch created from master and merged back to master on Jan 16, 2019, as shown in the Portus repo's GitHub graph:

[screenshot: GitHub network graph around PR #2095]

  • Now, as I look at the Portus source code repo's git graph, I see:
    • there exists a 2.3.7 release, yet no 2.3.7 branch exists:
      [screenshot: the 2.3.7 release]
      there are branches named v2.4, v2.5, and so on
    • there is no branch for release 2.3.7, but there is a v2.3 branch, and actually there is a branch for every major release, at least since version 2.0:

[screenshots: branch list showing v2.3 but no 2.3.7 branch, one branch per major release, and the v2.4 branch creation date]

  • I also checked here https://github.com/SUSE/Portus/network that nothing from the master branch was merged back to branch v2.4 since its creation (as I, and you, expected)
  • so now I am wondering: where (on which branch) is the release tag 2.4.3?
jbl@poste-devops-jbl-16gbram:~$ mkdir crusty
jbl@poste-devops-jbl-16gbram:~$ cd crusty
jbl@poste-devops-jbl-16gbram:~/crusty$ date
Mon Feb 17 18:06:53 CET 2020
jbl@poste-devops-jbl-16gbram:~/crusty$ git clone "https://github.com/SUSE/Portus" .
Cloning into '.'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 38886 (delta 0), reused 0 (delta 0), pack-reused 38883
Receiving objects: 100% (38886/38886), 36.51 MiB | 22.24 MiB/s, done.
Resolving deltas: 100% (18381/18381), done.
jbl@poste-devops-jbl-16gbram:~/crusty$ git branch --contains tags/2.4.3
jbl@poste-devops-jbl-16gbram:~/crusty$ git tag -ln
1.0.0           Merge pull request #159 from jordimassaguerpla/fix_css_landing_page
1.0.1           teams: fixed regression where namespaces could not be created from team page
2.0.0           Version 2.0.0
2.0.1           First patch level release since 2.0.0
2.0.2           Fixed an issue regarding distribution 2.3 support
2.0.3           More fixes on the docker 1.10 & distribution 2.3 versions
2.0.4           Small fixes
2.0.5           Small fixes
2.1.0           2.1.0 release. Read the changelog in the CHANGELOG.md file
2.1.1           Important fixes and some small improvements
2.2.0           Final release of 2.2.0
2.2.0-rc1       First release candidate of the 2.2.0 release
2.2.0rc2        Added somes fixes to activities
2.3.0           2.3.0
2.3.1           2.3.1 security update
2.3.2           Security fixes
2.3.3           Bug fixes since 2.3.2
2.3.4           Added some fixes
2.3.5           Update on sprocket
2.3.6           Release with a couple of important fixes
2.3.7           Minor upgrades on vulnerable gems
2.4.0           Release 2.4.0
2.4.1           Bug fixes and gem upgrades
2.4.2           Minor fixes and support for registries 2.7.x
2.4.3           Minor patch-level release
jbl@poste-devops-jbl-16gbram:~/crusty$ git branch -a --contains tags/2.4.3
  remotes/origin/v2.4
jbl@poste-devops-jbl-16gbram:~/crusty$ 
  • Alright, now we all agree that tag 2.4.3, which is the latest available release of Portus, cannot contain @viovanov's garbage collector fix, as long as the SUSE team sticks to its current git workflow on the project.
  • Since the v2.5 branch was created after @viovanov's contribution, any future v2.5.x release will include it.
  • But we don't have any v2.5.x release yet, so no release includes that fix yet.

So I think for now we have proof that we need to build from source to get the keep_latest garbage collector feature in Portus.
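(Anyone can double-check this for the commit @diranged mentioned, with two standard git commands run in the clone above; a small sketch:)

# List which tags, if any, contain the garbage collection commit.
git tag --contains d847071
# Succeeds only if the commit is an ancestor of the 2.4.3 tag.
git merge-base --is-ancestor d847071 2.4.3 && echo "in 2.4.3" || echo "not in 2.4.3"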

And I also see here an improvement opportunity with Portus CI/CD, as of Mon Feb 17 18:35:58 CET 2020:

  • ok, no 2.5.x release is available yet
  • so why is the docs update about keep_latest, 925bfc4, not on branch v2.5? That way you would be sure releases and their related docs stay in sync...
  • At least, currently, the published docs propose an option which is not available in any release, and worse, it is explicitly marked as available in 2.4.x, while I just proved that it definitely is not:

[screenshot: documentation marking garbage_collector keep_latest as available from 2.4, which is wrong]

Also, I can't help asking: what about adopting git-flow in the Portus project? (If it really is important to pull and merge into master, whatever the reason, git-flow has hotfix branches for that...)

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Feb 22, 2020

Hi @robgiovanardi, did you try the portus:2.5 docker image, instead of building Portus from source, to see whether you get the keep_latest feature?

I mean :

@ghost
Author

ghost commented Feb 22, 2020

Hi @Jean-Baptiste-Lasselle, I still haven't had time to do my tests, but thanks for pointing me to that docker image. It will speed up my tests.

@Jean-Baptiste-Lasselle

Hi @Jean-Baptiste-Lasselle, I still haven't had time to do my tests, but thanks for pointing me to that docker image. It will speed up my tests.

A very interesting test automation case, though:

Test Suite Bundle 1: short-term GC

  • Suite 1: without tag regexp
  • Suite 2: with tag regexp dronie-*

Suite 1

  • test ENV :
    • PORTUS_DELETE_ENABLED=false: the test spans 5 days; during the first 3 days any deletion is forbidden, and we push OCI container images every day for 5 days. On day 4, at midnight plus one minute, we set PORTUS_DELETE_ENABLED=true and restart the portus and background services. So garbage collection should start at day 6, midnight plus one minute, and always keep the last 3 tags for any repository (OCI container image).
    • PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=2
    • PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=3
    • PORTUS_DELETE_GARBAGE_COLLECTOR_TAG=''
  • test-setup :
    • day 0 :
      • registry + portus provisioning.

      • also, create 2 users: the first a super admin, and another named beeio. Create the team beebee, add beeio to beebee, and create the hive namespace for the beebee team. Finally, create a token named buzz for the beeio user. All this using the Portus API

      • push 3 images every day, for 5 days, from day 0, using the beeio username and its buzz token

      • day 0 :

export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE

### NODE ON ALPINE

docker pull node:8-alpine
docker tag node:8-alpine $OCI_SERVICE/hive/node:8-alpine
# docker logged-in
docker push $OCI_SERVICE/hive/node:8-alpine


### HELM ON ALPINE

docker pull alpine/helm:3.1.1
docker tag alpine/helm:3.1.1 $OCI_SERVICE/hive/helm:3.1.1-alpine
docker push $OCI_SERVICE/hive/helm:3.1.1-alpine

### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.4.1
docker tag exositebot/atlantis-terragrunt:version-1.4.1 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
  • day 1 :
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in

### NODE ON ALPINE

docker pull node:9-alpine
docker tag node:9-alpine $OCI_SERVICE/hive/node:9-alpine
docker push $OCI_SERVICE/hive/node:9-alpine

### HELM ON ALPINE

docker pull alpine/helm:3.1.0
docker tag alpine/helm:3.1.0 $OCI_SERVICE/hive/helm:3.1.0-alpine
docker push $OCI_SERVICE/hive/helm:3.1.0-alpine

### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.4.0
docker tag exositebot/atlantis-terragrunt:version-1.4.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
  • day 2 :
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in

### NODE ON ALPINE

docker pull node:10-alpine
docker tag node:10-alpine $OCI_SERVICE/hive/node:10-alpine
docker push $OCI_SERVICE/hive/node:10-alpine

### HELM ON ALPINE

docker pull alpine/helm:3.0.3
docker tag alpine/helm:3.0.3 $OCI_SERVICE/hive/helm:3.0.3-alpine
docker push $OCI_SERVICE/hive/helm:3.0.3-alpine

### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.3.0
docker tag exositebot/atlantis-terragrunt:version-1.3.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
  • day 3 :
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in

### NODE ON ALPINE

docker pull node:11-alpine
docker tag node:11-alpine $OCI_SERVICE/hive/node:11-alpine
docker push $OCI_SERVICE/hive/node:11-alpine

### HELM ON ALPINE

docker pull alpine/helm:2.15.2
docker tag alpine/helm:2.15.2 $OCI_SERVICE/hive/helm:2.15.2-alpine
docker push $OCI_SERVICE/hive/helm:2.15.2-alpine

### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.1.0
docker tag exositebot/atlantis-terragrunt:version-1.1.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
  • day 4 :
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in

### NODE ON ALPINE

docker pull node:12-alpine
docker tag node:12-alpine $OCI_SERVICE/hive/node:12-alpine
docker push $OCI_SERVICE/hive/node:12-alpine

### HELM ON ALPINE

docker pull alpine/helm:2.15.1
docker tag alpine/helm:2.15.1 $OCI_SERVICE/hive/helm:2.15.1-alpine
docker push $OCI_SERVICE/hive/helm:2.15.1-alpine

### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.2.0
docker tag exositebot/atlantis-terragrunt:version-1.2.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
  • All test images :
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE


### NODE ON ALPINE

docker pull node:8-alpine
docker pull node:9-alpine
docker pull node:10-alpine
docker pull node:11-alpine
docker pull node:12-alpine

docker tag node:8-alpine $OCI_SERVICE/hive/node:8-alpine
docker tag node:9-alpine $OCI_SERVICE/hive/node:9-alpine
docker tag node:10-alpine $OCI_SERVICE/hive/node:10-alpine
docker tag node:11-alpine $OCI_SERVICE/hive/node:11-alpine
docker tag node:12-alpine $OCI_SERVICE/hive/node:12-alpine

### HELM ON ALPINE

docker pull alpine/helm:3.1.1
docker pull alpine/helm:3.1.0
docker pull alpine/helm:3.0.3
docker pull alpine/helm:2.15.2
docker pull alpine/helm:2.15.1

docker tag alpine/helm:3.1.1 $OCI_SERVICE/hive/helm:3.1.1-alpine
docker tag alpine/helm:3.1.0 $OCI_SERVICE/hive/helm:3.1.0-alpine
docker tag alpine/helm:3.0.3 $OCI_SERVICE/hive/helm:3.0.3-alpine
docker tag alpine/helm:2.15.2 $OCI_SERVICE/hive/helm:2.15.2-alpine
docker tag alpine/helm:2.15.1 $OCI_SERVICE/hive/helm:2.15.1-alpine


### ATLANTIS TERRAGRUNT

docker pull exositebot/atlantis-terragrunt:version-1.4.1
docker pull exositebot/atlantis-terragrunt:version-1.4.0
docker pull exositebot/atlantis-terragrunt:version-1.3.0
docker pull exositebot/atlantis-terragrunt:version-1.1.0
docker pull exositebot/atlantis-terragrunt:version-1.2.0

docker tag exositebot/atlantis-terragrunt:version-1.4.1 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
docker tag exositebot/atlantis-terragrunt:version-1.4.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
docker tag exositebot/atlantis-terragrunt:version-1.3.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
docker tag exositebot/atlantis-terragrunt:version-1.1.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
docker tag exositebot/atlantis-terragrunt:version-1.2.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
  • On day 6, we run a first test suite, starting at 9:00 am:
    • we take an inventory of all images and save it as an oci-images.json file, using the registry's Catalog API endpoint (not the Portus API)
    • we call the Portus API to list all container images according to Portus, and save that as JSON into portus.oci-images.json
    • we have an expected-oci-inventory.day6.json containing the list of all images expected to be found in the registry on day 6, that is to say, all images except those pushed on day 0.
    • we compare expected-oci-inventory.day6.json and oci-images.json, and generate a nice diff report
    • we compare expected-oci-inventory.day6.json and portus.oci-images.json, and generate a nice diff report
  • On days 0, 1, 2, 3, 4, and 5, we run a similar test, using the same JSON diff technique (a sketch of the day-6 check follows below)
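A minimal sketch of that day-6 comparison, assuming $OCI_SERVICE and $PORTUS_SERVICE as exported above, jq installed, the buzz token value available in $BUZZ_TOKEN, and that the catalog and Portus API endpoints are reachable with the beeio user (the Portus API path and the authentication details are assumptions to adapt to the actual setup):

# Registry-side inventory via the standard Registry v2 Catalog API.
curl -s -u "beeio:${BUZZ_TOKEN}" "https://${OCI_SERVICE}/v2/_catalog?n=1000" | jq -S . > oci-images.json

# Portus' own view of the repositories (API path assumed).
curl -s -H "Portus-Auth: beeio:${BUZZ_TOKEN}" "https://${PORTUS_SERVICE}/api/v1/repositories" | jq -S . > portus.oci-images.json

# Diff both against the expected inventory for day 6.
diff <(jq -S . expected-oci-inventory.day6.json) oci-images.json > registry-vs-expected.day6.diff || true
diff <(jq -S . expected-oci-inventory.day6.json) portus.oci-images.json > portus-vs-expected.day6.diff || true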

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Feb 23, 2020

Shared Helper for load tests

So I have files with huge lists of existing image tags:

# preparation of a huge test dataset for
# tests like load tests on portus

export namespace=library
export repo_name=ubuntu

echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=notary
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=centos
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=debian
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=archlinux
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=registry
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=node
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=busybox
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=dkron
export repo_name=dkron
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


export namespace=library
export repo_name=httpd
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags | awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}'>> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=tomcat
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=golang
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=python
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=rails
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json

export namespace=library
export repo_name=ruby
echo '{' > all.${namespace}.${repo_name}.tags.json
curl  -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json


ls -allh *.tags.json

# And we can have 1, 2, 5, 10, 20, 30, 50, 100, 150, 200, 300, ... up to 500 simultaneous docker clients constantly pulling and pushing randomly picked tags from this list
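
A possible companion to this helper, a minimal sketch of one such load client; it assumes the tag list has been flattened to one tag per line in a ${namespace}.${repo_name}.tags.txt file (a hypothetical file, not produced by the helper above) and that docker login has already been done on both registries:

export OCI_SERVICE=docker.culturebase.org
namespace=library
repo_name=debian

while true; do
  # pick a random tag, mirror it into the test registry, and push it
  tag=$(shuf -n 1 "${namespace}.${repo_name}.tags.txt")
  docker pull "${namespace}/${repo_name}:${tag}"
  docker tag  "${namespace}/${repo_name}:${tag}" "${OCI_SERVICE}/hive/${repo_name}:${tag}"
  docker push "${OCI_SERVICE}/hive/${repo_name}:${tag}"
done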

@ghost
Copy link
Author

ghost commented Feb 24, 2020

Hi @Jean-Baptiste-Lasselle I tried to use portus:2.5 but got:

* #2197

* #2200

So I can't proceed with that. Anyway thanks for your support

@Jean-Baptiste-Lasselle
Copy link

Hi @Jean-Baptiste-Lasselle I tried to use portus:2.5 but got:

* #2197

* #2200

So I can't proceed with that. Anyway thanks for your support

Hi @robgiovanardi It's a pleasure to support users who give feedback so quickly and share their results. Plus I find the general situation of the Portus project very significant; I believe it says a lot about the revolution currently happening all over the world with the cloud.

So, OK, I duly note your results with 2.5, and I will definitely report back on my further work on this issue: yes, we will get the keep_latest feature, even if it means I have to take over the whole Portus project.

So next: I'll reproduce those two issues #2197 and #2200

I am not at all surprised by #2197, because while working on this issue I found out that there is (as of February 2020) no official rails image based on a ruby above 2.4. I bet, without even reading it, that this is exactly the problem behind #2197: the rails framework version needs to be upgraded (surely something the openSUSE developers do on a regular basis), because of some dependency requiring a minimum rails version, and so on.

One of the cloud's critical challenges: mastering dependency hell.

Note that there is a distribution management problem here with the openSUSE Portus project, one that we are currently identifying clearly, along with our friend @diranged

The dream problem for a devops like me. :)

@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Feb 24, 2020

@robgiovanardi just to say I have now read both:

All in all, I'd say there's a 99% chance I can provide you with a fix for your setup before the end of next week,
and I'll do that with a repo here: https://github.com/pokusio

@Jean-Baptiste-Lasselle
Copy link

Just to write it down: I have a feeling openSUSE is conducting a huge migration on containers, so that everything works with podman or any other #nobigfatdaemon OCI runtime ecosystem. I think that's what the remaining maintainers like @mssola are concerned about, just migrating the Portus distributed containers in that broader context, with Kubernetes constantly in mind.

@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Feb 28, 2020

Also, to share with the community, the list of tools I'm going to test to manage batch jobs:

Jean-Baptiste-Lasselle pushed a commit to pokusio/opensuzie-oci-library that referenced this issue Mar 1, 2020
Purpose of this Release: Sharing a finally successfully built `portus:2.5` container, working on SUSE/Portus#2241
@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Mar 1, 2020

Hi @robgiovanardi I have news here :

  • I finally succeeded in building a portus:2.5 docker image,
  • how to build it exactly like I did : https://github.com/pokusio/opensuzie-oci-library/releases/0.0.1
  • And I just finished testing it. Hurrah! Portus starts, I create the first super admin, teams, namespaces and users, then I start a docker push: ouch, the background service is KO, and here it is in the logs:
background_1                        | /usr/bin/bundle:23:in `load': cannot load such file -- /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError)
background_1                        | 	from /usr/bin/bundle:23:in `<main>'
  • It's pretty much the exact same error I got stuck on when I redesigned the whole container from Debian. So my analysis is converging, but I confirm it's going to take time to repair the whole build from source, and then repair the source code.
  • Having reproduced a build from source in a Debian context, adding the GOLANG, RUBY and Rails environments, etc., I think today the problem lies in the source code itself.
  • And that would not be astonishing:
    • I think I can say today that there is very little source code work from the SUSE team or the community. (We are trying to use a feature committed and merged to master a year ago...)
    • And today we have made it fairly clear that there are still major problems unsolved in the source code: critical background features like sync and the garbage collector, and content trust support (search for notary in the issue list...)

But there is a very strange thing I noticed

And I note it here to bear it in mind:

  • As said before, I had that ruby error when I tried building from source, in a debian container.
  • And I remember I finally kept playing around with resetting ruby env. vars, especially GEM_PATH and GEM_HOME.
  • I swear I have used the same image for both the background and portus services, built exactly as in release 0.0.1 of my repo; I double-checked, triple-checked my docker-compose after every test.
  • And yet, look at the logs below :
    • the value of GEM_PATH for portus is /srv/Portus/vendor/bundle/ruby/2.5.3
    • the value of GEM_PATH for background is /srv/Portus/vendor/bundle/ruby/2.6.0
  • Moreover, comparing those logs:
  • during the execution of the /init script for the portus service, we have log lines that we don't have during the execution of the exact same /init script for the background service:
portuscontainer                     | + export RACK_ENV=production
portuscontainer                     | + RACK_ENV=production
portuscontainer                     | + export RAILS_ENV=production
portuscontainer                     | + RAILS_ENV=production
portuscontainer                     | + export CCONFIG_PREFIX=PORTUS
portuscontainer                     | + CCONFIG_PREFIX=PORTUS
portuscontainer                     | + '[' -z '' ']'
portuscontainer                     | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
if [ -z "$PORTUS_GEM_GLOBAL" ]; then
    export GEM_PATH="/srv/Portus/vendor/bundle/ruby/2.6.0"
fi
  • Ok, tomorrow, I'll try to set PORTUS_GEM_GLOBAL to the right value, so that it forces a fixed ruby version, 2.5.3, for both background and portus (since it works for portus, set that for background too).
  • Even before that, I'll just try replacing the above if with the following (see also the parameterized sketch after this list):
if [ -z "$PORTUS_GEM_GLOBAL" ]; then
    export GEM_PATH="/srv/Portus/vendor/bundle/ruby/2.5.3"
fi
  • I'll also do this: wtf, unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE for the portus container, and that with RAILS_ENV=production...? (secrets should ALWAYS be in files in production, never in env vars). I have to check the secret management process; there is only one point in this /init script where there is an unset command, in the file_env function that was copy-pasted from https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh . And by the way, I have seen people having issues with LDAP integration; well, they might want to know about that...

I guess I have tomorrow's TODO list.
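
For reference, a hypothetical variant of that /init snippet, parameterizing the vendored ruby version instead of hard-coding it (PORTUS_RUBY_VERSION is an assumed variable, it does not exist in the official /init script):

# default to the version that currently works for the portus service
: "${PORTUS_RUBY_VERSION:=2.5.3}"
if [ -z "$PORTUS_GEM_GLOBAL" ]; then
    export GEM_PATH="/srv/Portus/vendor/bundle/ruby/${PORTUS_RUBY_VERSION}"
fi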

Funny thing

Check out the commit (and its commit message) of @mssola on https://github.com/opensuse/docker-containers , which as of today (1st of March 2020) is still the latest one:

In order to keep Virtualization:containers:Portus cleaner, we have removed some
packages from there and we are fetching them now from devel:languages:ruby. I've
changed the code so the GPG key for this repo is handled as well.

Moreover, this commit also contains some needed changes on the init file as for
the migration to ruby 2.6.
Signed-off-by: Miquel Sabaté Solà <msabate@suse.com>

:)
Note also that the commit dates back to 10 months ago:

  • That's why they don't have releases on that repo, though they have branches per portus major version. For example they have branches named portus-2.1, portus-2.2, portus-2.3, etc., up to portus-2.5, and it's the last commit on each of these branches that defines the Docker image for all portus updates within the same minor release. There is some complexity to this choice, because it means that the Dockerfiles on a given branch must never break any update within a minor release of portus (no breaking change, no big new feature). A little less than that, actually, because it is a recurrent suite, but that does not matter for our issue.
  • Now I can confirm how the openSUSE team installs and updates the portus software in the containers they push to docker hub (see the sketch after this list):
    • they package portus as a zypper / openSUSE package and publish those new SUSE linux packages to a public zypper package repository: it's as if you could apt-get install -y portus, except they zypper install portus.
    • More accurately, they don't explicitly install any portus package; instead, they:
      • configure an entire zypper package repository dedicated to portus (like adding a file under /etc/apt/sources.list.d/ on Debian), namely obs://Virtualization:containers:Portus/openSUSE_Leap_15.1
      • and they update the system to install all packages from that new repo, using zypper refresh,
    • same thing for installing the ruby environment; the repository is obs://devel:languages:ruby/openSUSE_Leap_15.1 . Btw I think that in zypper ar $SUSE_LX_PKG_REPO_URI, ar stands for add repository
    • And they simply re-run the same build, last version on master of
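
A sketch of that install flow, as it might appear in a Dockerfile RUN step; the repository URIs are the ones quoted above, while the repo aliases and the portus package name are assumptions:

# add the two OBS repositories described above, refresh, then install portus
zypper ar -f obs://Virtualization:containers:Portus/openSUSE_Leap_15.1 portus
zypper ar -f obs://devel:languages:ruby/openSUSE_Leap_15.1 ruby
zypper --gpg-auto-import-keys refresh
zypper -n install portus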

The logs from my portus and background services (and the /init script)

using my repaired portus:2.5 image

jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f portus|more
Attaching to portuscontainer
portuscontainer                     | + mkdir -p /secrets/certificates
portuscontainer                     | + mkdir -p /secrets/rails
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | PORTUS PKI-INIT
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo 'PORTUS PKI-INIT'
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + set -x
portuscontainer                     | + mkdir -p /certificates
portuscontainer                     | + cp /secrets/certificates/portus.crt /certificates
portuscontainer                     | + cp /secrets/certificates/portus-oci-registry.crt /certificates
portuscontainer                     | + cp /secrets/certificates/portus-background.crt /certificates
portuscontainer                     | + update-ca-certificates
portuscontainer                     | + set -e
portuscontainer                     | + secrets=(PORTUS_DB_PASSWORD PORTUS_PASSWORD PORTUS_SECRET_KEY_BAS
E PORTUS_EMAIL_SMTP_PASSWORD PORTUS_LDAP_AUTHENTICATION_PASSWORD)
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z portus ]]
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z 12341234 ]]
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_SECRET_KEY_BASE
portuscontainer                     | + local var=PORTUS_SECRET_KEY_BASE
portuscontainer                     | + local fileVar=PORTUS_SECRET_KEY_BASE_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' /secrets/rails/portus.secret.key.base ']'
portuscontainer                     | + val=4e779b234f79de439e962b1f07991de41fe4baf611625545b5513405b7036
c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a
portuscontainer                     | + export PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe
4baf611625545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a
portuscontainer                     | + PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf611
625545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a
portuscontainer                     | + unset PORTUS_SECRET_KEY_BASE_FILE
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_EMAIL_SMTP_PASSWORD
portuscontainer                     | + local var=PORTUS_EMAIL_SMTP_PASSWORD
portuscontainer                     | + local fileVar=PORTUS_EMAIL_SMTP_PASSWORD_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + export PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | + PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | + unset PORTUS_EMAIL_SMTP_PASSWORD_FILE
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_LDAP_AUTHENTICATION_PASSWORD
portuscontainer                     | + local var=PORTUS_LDAP_AUTHENTICATION_PASSWORD
portuscontainer                     | + local fileVar=PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + export PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | + PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | + unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE
portuscontainer                     | + update-ca-certificates
portuscontainer                     | + export PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | + PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | + export RACK_ENV=production
portuscontainer                     | + RACK_ENV=production
portuscontainer                     | + export RAILS_ENV=production
portuscontainer                     | + RAILS_ENV=production
portuscontainer                     | + export CCONFIG_PREFIX=PORTUS
portuscontainer                     | + CCONFIG_PREFIX=PORTUS
portuscontainer                     | + '[' -z '' ']'
portuscontainer                     | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | + '[' debug == debug ']'
portuscontainer                     | + printenv
portuscontainer                     | PORTUS_DB_PASSWORD=portus
portuscontainer                     | PORTUS_DB_HOST=db
portuscontainer                     | HOSTNAME=b586460b2733
portuscontainer                     | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060
portuscontainer                     | RAILS_SERVE_STATIC_ASSETS='true'
portuscontainer                     | PORTUS_DB_POOL=5
portuscontainer                     | CCONFIG_PREFIX=PORTUS
portuscontainer                     | PORTUS_KEY_PATH=/secrets/certificates/portus.key
portuscontainer                     | PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | PWD=/
portuscontainer                     | PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | HOME=/root
portuscontainer                     | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io
portuscontainer                     | RAILS_SERVE_STATIC_FILES='true'
portuscontainer                     | PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061
portuscontainer                     | RAILS_ENV=production
portuscontainer                     | PORTUS_PASSWORD=12341234
portuscontainer                     | PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf61162
5545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a
portuscontainer                     | RACK_ENV=production
portuscontainer                     | PORTUS_SERVICE_FQDN_VALUE=portus.pegasusio.io
portuscontainer                     | PORTUS_LOG_LEVEL=debug
portuscontainer                     | PORTUS_PUMA_TLS_CERT=/secrets/certificates/portus.crt
portuscontainer                     | PORTUS_SECURITY_CLAIR_TIMEOUT=900s
portuscontainer                     | SHLVL=2
portuscontainer                     | PORTUS_PUMA_TLS_KEY=/secrets/certificates/portus.key
portuscontainer                     | PORTUS_DB_DATABASE=portus_production
portuscontainer                     | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
portuscontainer                     | _=/usr/bin/printenv
portuscontainer                     | + cd /srv/Portus
portuscontainer                     | + '[' '!' -z '' ']'
portuscontainer                     | + '[' -z '' ']'
portuscontainer                     | + setup_database
portuscontainer                     | + wait_for_database 1
portuscontainer                     | + should_setup=1
portuscontainer                     | + TIMEOUT=90
portuscontainer                     | + COUNT=0
portuscontainer                     | + RETRY=1
portuscontainer                     | + '[' 1 -ne 0 ']'
portuscontainer                     | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep 
DB) in
portuscontainer                     | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb
portuscontainer                     | ++ grep DB
portuscontainer                     | [WARN] couldn't connect to database. Skipping PublicActivity::Activ
ity#parameters's serialization
portuscontainer                     | Waiting for mariadb to be ready in 5 seconds
portuscontainer                     | + '[' 0 -ge 90 ']'
portuscontainer                     | + echo 'Waiting for mariadb to be ready in 5 seconds'
portuscontainer                     | + sleep 5
portuscontainer                     | + COUNT=5
portuscontainer                     | + '[' 1 -ne 0 ']'
portuscontainer                     | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep 
DB) in
portuscontainer                     | ++ grep DB
portuscontainer                     | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb
portuscontainer                     | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex
ist. Skipping PublicActivity::Activity#parameters's serialization
portuscontainer                     | + '[' 1 -eq 1 ']'
portuscontainer                     | + echo 'Initializing database'
portuscontainer                     | + portusctl exec rake db:setup
portuscontainer                     | Initializing database
portuscontainer                     | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex
ist. Skipping PublicActivity::Activity#parameters's serialization
portuscontainer                     | Database 'portus_production' already exists
portuscontainer                     | [schema] Selected the schema for mysql
portuscontainer                     |    (0.3ms)  SET NAMES utf8,  @@SESSION.sql_mode = 
CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'),  @@SESSION.sql_auto_is_null = 0, @@S
ESSION.wait_timeout = 2147483
portuscontainer                     | [Mailer config] Host:     portus.pegasusio.io
portuscontainer                     | [Mailer config] Protocol: https://
portuscontainer                     |    (0.3ms)  SET NAMES utf8,  @@SESSION.sql_mode = 
CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'),  @@SESSION.sql_auto_is_null = 0, @@S
ESSION.wait_timeout = 2147483
  • and here is my background container:
jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f background|more
Attaching to compose_background_1
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | PORTUS BACKGROUND PKI-INIT
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | + mkdir -p /certificates
background_1                        | + cp /secrets/certificates/portus.crt /certificates
background_1                        | + cp /secrets/certificates/portus-oci-registry.crt /certificates
background_1                        | + cp /secrets/certificates/portus-background.crt /certificates
background_1                        | + update-ca-certificates
background_1                        | PORTUS_DB_PASSWORD=portus
background_1                        | PORTUS_DB_HOST=db
background_1                        | HOSTNAME=694ac5463bed
background_1                        | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060
background_1                        | PORTUS_DB_POOL=5
background_1                        | CCONFIG_PREFIX=PORTUS
background_1                        | PORTUS_KEY_PATH=/secrets/certificates/portus-background.key
background_1                        | PWD=/
background_1                        | PORTUS_PUMA_HOST=0.0.0.0:3000
background_1                        | HOME=/root
background_1                        | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io
background_1                        | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.6.0
background_1                        | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061
background_1                        | RAILS_ENV=production
background_1                        | PORTUS_PASSWORD=12341234
background_1                        | PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf61162
5545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a
background_1                        | RACK_ENV=production
background_1                        | PORTUS_LOG_LEVEL=debug
background_1                        | PORTUS_SECURITY_CLAIR_TIMEOUT=900s
background_1                        | SHLVL=2
background_1                        | PORTUS_DB_DATABASE=portus_production
background_1                        | PORTUS_BACKGROUND=true
background_1                        | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
background_1                        | _=/usr/bin/printenv
background_1                        | [WARN] couldn't connect to database. Skipping PublicActivity::Activ
ity#parameters's serialization
background_1                        | Waiting for mariadb to be ready in 5 seconds
background_1                        | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex
ist. Skipping PublicActivity::Activity#parameters's serialization
background_1                        | /usr/bin/bundle:23:in `load': cannot load such file -- /usr/lib64/r
uby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError)
background_1                        | 	from /usr/bin/bundle:23:in `'
background_1                        | Database ready
background_1                        | [schema] Selected the schema for mysql
background_1                        |    (0.4ms)  SET NAMES utf8,  @@SESSION.sql_mode = 
CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'),  @@SESSION.sql_auto_is_null = 0, @@S
ESSION.wait_timeout = 2147483
background_1                        | [Mailer config] Host:     portus.pegasusio.io
background_1                        | [Mailer config] Protocol: https://
background_1                        |   User Exists (0.4ms)  SELECT  1 AS one FROM `user
s` WHERE `users`.`username` = 'portus' LIMIT 1
background_1                        |   User Load (0.5ms)  SELECT  `users`.* FROM `users
` WHERE `users`.`username` = 'portus' LIMIT 1
background_1                        |    (0.4ms)  BEGIN
background_1                        |   User Update (0.5ms)  UPDATE `users` SET `encrypt
ed_password` = '$2a$10$EYWWJKtYCLV2MWePMBEa1OTnss/pdtX/s1znYb6jWKc1lhJJJ119.', `updated_at` = '2020-03-01 04:31:
16' WHERE `users`.`id` = 1
background_1                        |    (0.3ms)  COMMIT
  • my docker-compose config :
networks:
  pipeline_portus:
    driver: bridge
services:
  background:
    depends_on:
    - db
    - portus
    entrypoint:
    - /bin/bash
    - -c
    - /init-pki && /bin/chmod +x /init && /init
    environment:
      CCONFIG_PREFIX: PORTUS
      PORTUS_BACKGROUND: "true"
      PORTUS_DB_DATABASE: portus_production
      PORTUS_DB_HOST: db
      PORTUS_DB_PASSWORD: portus
      PORTUS_DB_POOL: '5'
      PORTUS_KEY_PATH: /secrets/certificates/portus-background.key
      PORTUS_LOG_LEVEL: debug
      PORTUS_MACHINE_FQDN_VALUE: portus.pegasusio.io
      PORTUS_PASSWORD: '12341234'
      PORTUS_SECRET_KEY_BASE_FILE: /secrets/rails/portus.secret.key.base
      PORTUS_SECURITY_CLAIR_HEALTH_PORT: '6061'
      PORTUS_SECURITY_CLAIR_SERVER: http://clair.pegasusio.io:6060
      PORTUS_SECURITY_CLAIR_TIMEOUT: 900s
    extra_hosts:
    - oci-registry.pegasusio.io:192.168.1.22
    - portus.pegasusio.io:192.168.1.22
    image: opensuzie/portus:2.5
    links:
    - db
    networks:
      pipeline_portus:
        aliases:
        - portus-backservice.pegasusio.io
    volumes:
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/portus_background/init-pki:/init-pki:ro
  clair:
    build:
      context: /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/oci/clair
    command:
    - -config
    - /clair.yml
    depends_on:
    - postgres
    entrypoint:
    - /usr/bin/dumb-init
    - --
    - /clair.customized
    image: oci-registry.pegasusio.io/pokus/clair:v2.1.2
    links:
    - postgres
    networks:
      pipeline_portus:
        aliases:
        - clair.pegasusio.io
    ports:
    - 6060:6060/tcp
    - 6061:6061/tcp
    restart: unless-stopped
    volumes:
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/tmpclair:/tmp:rw
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/clair/clair.yml:/clair.yml:rw
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/clair/clair.customized:/clair.customized:rw
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets/certificates:/secrets/certificates:rw
  db:
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
      --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    environment:
      MYSQL_DATABASE: portus_production
      MYSQL_ROOT_PASSWORD: portus
    extra_hosts:
    - oci-registry.pegasusio.io:192.168.1.22
    - portus.pegasusio.io:192.168.1.22
    image: library/mariadb:10.0.23
    networks:
      pipeline_portus:
        aliases:
        - db.pegasusio.io
    volumes:
    - /var/lib/portus/mariadb:/var/lib/mysql:rw
  nginx:
    extra_hosts:
    - oci-registry.pegasusio.io:192.168.1.22
    - portus.pegasusio.io:192.168.1.22
    image: library/nginx:alpine
    links:
    - registry:registry
    - portus:portus
    networks:
      pipeline_portus: null
    ports:
    - 80:80/tcp
    - 443:443/tcp
    volumes:
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/nginx:/etc/nginx:ro
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
    - static:/srv/Portus/public:ro
  portus:
    container_name: portuscontainer
    entrypoint:
    - /bin/bash
    - -c
    - /init
    environment:
      PORTUS_DB_DATABASE: portus_production
      PORTUS_DB_HOST: db
      PORTUS_DB_PASSWORD: portus
      PORTUS_DB_POOL: '5'
      PORTUS_KEY_PATH: /secrets/certificates/portus.key
      PORTUS_LOG_LEVEL: debug
      PORTUS_MACHINE_FQDN_VALUE: portus.pegasusio.io
      PORTUS_PASSWORD: '12341234'
      PORTUS_PUMA_TLS_CERT: /secrets/certificates/portus.crt
      PORTUS_PUMA_TLS_KEY: /secrets/certificates/portus.key
      PORTUS_SECRET_KEY_BASE_FILE: /secrets/rails/portus.secret.key.base
      PORTUS_SECURITY_CLAIR_HEALTH_PORT: '6061'
      PORTUS_SECURITY_CLAIR_SERVER: http://clair.pegasusio.io:6060
      PORTUS_SECURITY_CLAIR_TIMEOUT: 900s
      PORTUS_SERVICE_FQDN_VALUE: portus.pegasusio.io
      RAILS_SERVE_STATIC_ASSETS: '''true'''
      RAILS_SERVE_STATIC_FILES: '''true'''
    extra_hosts:
    - oci-registry.pegasusio.io:192.168.1.22
    - portus.pegasusio.io:192.168.1.22
    image: opensuzie/portus:2.5
    links:
    - db
    networks:
      pipeline_portus:
        aliases:
        - portus.pegasusio.io
        - portus
    ports:
    - 3000:3000/tcp
    volumes:
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/portus/init:/init:ro
    - static:/srv/Portus/public:rw
  portus_secret_base_key_generator:
    build:
      args:
        RAILS_VERSION: 5.0.1
        RUBY_VERSION: 2.5.0
      context: /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/oci/secrets/generators/rails_secret_base_key
    environment:
      PORTUS_SECRET_KEY_BASE_FILE_NAME: portus.secret.key.base
      VAULT_ADDR: https://vault.pegasusio.io:8233
      VAULT_KV_ENGINE: dev_culturebase_org
      VAULT_KV_ENGINE_SECRET_KEY: secret_base_key
      VAULT_KV_ENGINE_SECRET_PATH: production/portus/rails
      VAULT_TOKEN_FILE: /secrets/portus_secret_base_key_generator/vault.token
    image: railsecretmngr:0.0.1
    volumes:
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets/rails:/usr/src/portusecretkeybase/share:rw
  postgres:
    environment:
      POSTGRES_PASSWORD: portus
    image: library/postgres:10-alpine
    networks:
      pipeline_portus:
        aliases:
        - pgclair.pegasusio.io
  registry:
    command:
    - /bin/sh
    - /etc/docker/registry/init
    environment:
      REGISTRY_AUTH_TOKEN_ISSUER: portus.pegasusio.io
      REGISTRY_AUTH_TOKEN_REALM: https://portus.pegasusio.io:3000/v2/token
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /secrets/certificates/portus.crt
      REGISTRY_AUTH_TOKEN_SERVICE: oci-registry.pegasusio.io
      REGISTRY_HTTP_TLS_CERTIFICATE: /secrets/certificates/portus-oci-registry.crt
      REGISTRY_HTTP_TLS_KEY: /secrets/certificates/portus-oci-registry.key
      REGISTRY_NOTIFICATIONS_ENDPOINTS: "- name: portus\n  url: https://portus.pegasusio.io:3000/v2/webhooks/events\n\
        \  timeout: 2000ms\n  threshold: 5\n  backoff: 1s\n"
    extra_hosts:
    - oci-registry.pegasusio.io:192.168.1.22
    - portus.pegasusio.io:192.168.1.22
    image: library/registry:2.6
    links:
    - portus:portus
    networks:
      pipeline_portus:
        aliases:
        - oci-registry.pegasusio.io
    ports:
    - 5000:5000/tcp
    - 5001:5001/tcp
    volumes:
    - /var/lib/portus/registry:/var/lib/registry:rw
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/registry/config.yml:/etc/docker/registry/config.yml:ro
    - /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/registry/init:/etc/docker/registry/init:ro
version: '3.0'
volumes:
  static:
    driver: local

Using opensuse/portus:2.4.3 image

Now for the most interesting thing:

  • using opensuse/portus:2.4.3, I check the background logs, and here I get a completely different environment (so before @mssola's update on the /init script, the ruby version was 2.5.0):
background_1                        | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.0
background_1                        | RAILS_ENV=production
background_1                        | RACK_ENV=production
  • my background logs with opensuse/portus:2.4.3 :
jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f background|more
Attaching to compose_background_1
background_1                        | + mkdir -p /certificates
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | PORTUS BACKGROUND PKI-INIT
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | ++++++++++++++++++++++++++
background_1                        | + cp /secrets/certificates/portus.crt /certificates
background_1                        | + cp /secrets/certificates/portus-oci-registry.crt /certificates
background_1                        | + cp /secrets/certificates/portus-background.crt /certificates
background_1                        | + update-ca-certificates
background_1                        | PORTUS_DB_PASSWORD=portus
background_1                        | PORTUS_DB_HOST=db
background_1                        | HOSTNAME=1f8001b2a341
background_1                        | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060
background_1                        | PORTUS_DB_POOL=5
background_1                        | CCONFIG_PREFIX=PORTUS
background_1                        | PORTUS_KEY_PATH=/secrets/certificates/portus-background.key
background_1                        | PORTUS_LDAP_AUTHENTICATION_PASSWORD=
background_1                        | PWD=/
background_1                        | PORTUS_PUMA_HOST=0.0.0.0:3000
background_1                        | HOME=/root
background_1                        | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io
background_1                        | PORTUS_EMAIL_SMTP_PASSWORD=
background_1                        | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.0
background_1                        | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061
background_1                        | RAILS_ENV=production
background_1                        | PORTUS_PASSWORD=12341234
background_1                        | PORTUS_SECRET_KEY_BASE=19af1ad1d3c58649ca6bf1ca4514f22388660855df6c
f82d368f4d869554bf62de5bd92b273f7c6ed470961c510da7fda483ffb162b58cfb87b474bd9909fe08
background_1                        | RACK_ENV=production
background_1                        | PORTUS_LOG_LEVEL=debug
background_1                        | PORTUS_SECURITY_CLAIR_TIMEOUT=900s
background_1                        | SHLVL=2
background_1                        | PORTUS_DB_DATABASE=portus_production
background_1                        | PORTUS_BACKGROUND=true
background_1                        | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
background_1                        | _=/usr/bin/printenv
background_1                        | [Mailer config] Host:     portus.pegasusio.io
background_1                        | [Mailer config] Protocol: https://
background_1                        |   User Exists (0.2ms)  SELECT  1 AS one FROM `users` W
HERE `users`.`username` = 'portus' LIMIT 1
background_1                        |   User Load (0.5ms)  SELECT  `users`.* FROM `users` WHERE
 `users`.`username` = 'portus' LIMIT 1
background_1                        |    (0.2ms)  BEGIN
background_1                        |   SQL (0.4ms)  UPDATE `users` SET `encrypted_password` = 
'$2a$10$Cz7bYma5FEaaEH1QeuP6qeiCL7PSwZ29q8QPvvm6Xj.MT.GugcSm6', `updated_at` = '2020-03-01 07:47:19' WHERE `user
s`.`id` = 1
background_1                        |    (0.3ms)  COMMIT
background_1                        |   User Exists (0.3ms)  SELECT  1 AS one FROM `users` WHER
E `users`.`username` = 'portus' LIMIT 1
background_1                        |    (0.1ms)  SELECT COUNT(*) FROM `registries`
background_1                        |    (0.4ms)  SELECT COUNT(*) FROM `repositories`
background_1                        | [Initialization] Running: 'Registry events', 'Security scanning', '
Registry synchronization'
background_1                        |   RegistryEvent Load (0.4ms)  SELECT  `registry_events
`.* FROM `registry_events` WHERE `registry_events`.`status` = 2  ORDER BY `registry_events`.`id` ASC LIMIT 1000
0m
background_1                        |   Tag Exists (0.6ms)  SELECT  1 AS one FROM `tags` WHERE 
`tags`.`scanned` = 0 LIMIT 1
background_1                        |    (0.3ms)  SELECT COUNT(*) FROM `repositories`
background_1                        |   Registry Load (0.4ms)  SELECT  `registries`.* FROM `reg
istries`  ORDER BY `registries`.`id` ASC LIMIT 1000
background_1                        |   RegistryEvent Load (0.3ms)  SELECT  `registry_events
`.* FROM `registry_events` WHERE `registry_events`.`status` = 2  ORDER BY `registry_events`.`id` ASC LIMIT 1000
0m
background_1                        |   RegistryEvent Load (0.3ms)  SELECT  `registry_events`.*
  • still using opensuse/portus:2.4.3, for the portus service I get the exact same environment as with my repaired opensuzie/portus:2.5:
portuscontainer                     | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | RAILS_ENV=production
portuscontainer                     | RACK_ENV=production
  • check for yourself in my portus service logs below:
jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f portus|more
Attaching to portuscontainer
portuscontainer                     | + mkdir -p /secrets/certificates
portuscontainer                     | + mkdir -p /secrets/rails
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo 'PORTUS PKI-INIT'
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + echo ++++++++++++++++++++++++++
portuscontainer                     | + set -x
portuscontainer                     | + mkdir -p /certificates
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | PORTUS PKI-INIT
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | ++++++++++++++++++++++++++
portuscontainer                     | + cp /secrets/certificates/portus.crt /certificates
portuscontainer                     | + cp /secrets/certificates/portus-oci-registry.crt /certificates
portuscontainer                     | + cp /secrets/certificates/portus-background.crt /certificates
portuscontainer                     | + update-ca-certificates
portuscontainer                     | + set -e
portuscontainer                     | + secrets=(PORTUS_DB_PASSWORD PORTUS_PASSWORD PORTUS_SECRET_KEY_BAS
E PORTUS_EMAIL_SMTP_PASSWORD PORTUS_LDAP_AUTHENTICATION_PASSWORD)
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z portus ]]
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z 12341234 ]]
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_SECRET_KEY_BASE
portuscontainer                     | + local var=PORTUS_SECRET_KEY_BASE
portuscontainer                     | + local fileVar=PORTUS_SECRET_KEY_BASE_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' /secrets/rails/portus.secret.key.base ']'
portuscontainer                     | + val=dc997f32935707adb399dfe06a57041ce12a8dc96c00898feb016a742da46
d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b
portuscontainer                     | + export PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12
a8dc96c00898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b
portuscontainer                     | + PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12a8dc96c
00898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b
portuscontainer                     | + unset PORTUS_SECRET_KEY_BASE_FILE
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_EMAIL_SMTP_PASSWORD
portuscontainer                     | + local var=PORTUS_EMAIL_SMTP_PASSWORD
portuscontainer                     | + local fileVar=PORTUS_EMAIL_SMTP_PASSWORD_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + export PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | + PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | + unset PORTUS_EMAIL_SMTP_PASSWORD_FILE
portuscontainer                     | + for s in "${secrets[@]}"
portuscontainer                     | + [[ -z '' ]]
portuscontainer                     | + file_env PORTUS_LDAP_AUTHENTICATION_PASSWORD
portuscontainer                     | + local var=PORTUS_LDAP_AUTHENTICATION_PASSWORD
portuscontainer                     | + local fileVar=PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE
portuscontainer                     | + local def=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + local val=
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + '[' '' ']'
portuscontainer                     | + export PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | + PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | + unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE
portuscontainer                     | + update-ca-certificates
portuscontainer                     | + export PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | + PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | + export RACK_ENV=production
portuscontainer                     | + RACK_ENV=production
portuscontainer                     | + export RAILS_ENV=production
portuscontainer                     | + RAILS_ENV=production
portuscontainer                     | + export CCONFIG_PREFIX=PORTUS
portuscontainer                     | + CCONFIG_PREFIX=PORTUS
portuscontainer                     | + '[' -z '' ']'
portuscontainer                     | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | + '[' debug == debug ']'
portuscontainer                     | + printenv
portuscontainer                     | PORTUS_DB_PASSWORD=portus
portuscontainer                     | PORTUS_DB_HOST=db
portuscontainer                     | HOSTNAME=30b6885e771c
portuscontainer                     | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060
portuscontainer                     | RAILS_SERVE_STATIC_ASSETS='true'
portuscontainer                     | PORTUS_DB_POOL=5
portuscontainer                     | CCONFIG_PREFIX=PORTUS
portuscontainer                     | PORTUS_KEY_PATH=/secrets/certificates/portus.key
portuscontainer                     | PORTUS_LDAP_AUTHENTICATION_PASSWORD=
portuscontainer                     | PWD=/
portuscontainer                     | PORTUS_PUMA_HOST=0.0.0.0:3000
portuscontainer                     | HOME=/root
portuscontainer                     | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io
portuscontainer                     | RAILS_SERVE_STATIC_FILES='true'
portuscontainer                     | + cd /srv/Portus
portuscontainer                     | PORTUS_EMAIL_SMTP_PASSWORD=
portuscontainer                     | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer                     | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061
portuscontainer                     | RAILS_ENV=production
portuscontainer                     | PORTUS_PASSWORD=12341234
portuscontainer                     | PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12a8dc96c00
898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b
portuscontainer                     | RACK_ENV=production
portuscontainer                     | PORTUS_SERVICE_FQDN_VALUE=portus.pegasusio.io
portuscontainer                     | PORTUS_LOG_LEVEL=debug
portuscontainer                     | PORTUS_PUMA_TLS_CERT=/secrets/certificates/portus.crt
portuscontainer                     | PORTUS_SECURITY_CLAIR_TIMEOUT=900s
portuscontainer                     | SHLVL=2
portuscontainer                     | PORTUS_PUMA_TLS_KEY=/secrets/certificates/portus.key
portuscontainer                     | PORTUS_DB_DATABASE=portus_production
portuscontainer                     | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
portuscontainer                     | _=/usr/bin/printenv
portuscontainer                     | + '[' '!' -z '' ']'
portuscontainer                     | + '[' -z '' ']'
portuscontainer                     | + setup_database
portuscontainer                     | + wait_for_database 1
portuscontainer                     | + should_setup=1
portuscontainer                     | + TIMEOUT=90
portuscontainer                     | + COUNT=0
portuscontainer                     | + RETRY=1
portuscontainer                     | + '[' 1 -ne 0 ']'
portuscontainer                     | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep 
DB) in
portuscontainer                     | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb
portuscontainer                     | ++ grep DB
portuscontainer                     | + echo 'Database ready'
portuscontainer                     | + break
portuscontainer                     | + set -e
portuscontainer                     | + portusctl exec 'pumactl -F /srv/Portus/config/puma.rb start'
portuscontainer                     | Database ready
portuscontainer                     | [64] Puma starting in cluster mode...
portuscontainer                     | [64] * Version 3.10.0 (ruby 2.5.0-p0), codename: Russell's Teapot
portuscontainer                     | [64] * Min threads: 1, max threads: 4
portuscontainer                     | [64] * Environment: production
portuscontainer                     | [64] * Process workers: 4
portuscontainer                     | [64] * Preloading application
portuscontainer                     | [Mailer config] Host:     portus.pegasusio.io
portuscontainer                     | [Mailer config] Protocol: https://
portuscontainer                     |   User Exists (0.3ms)  SELECT  1 AS one FROM `users` W
HERE `users`.`username` = 'portus' LIMIT 1
portuscontainer                     |   User Load (0.7ms)  SELECT  `users`.* FROM `users` WHERE
 `users`.`username` = 'portus' LIMIT 1
portuscontainer                     |    (0.4ms)  BEGIN 

Yet worth mentioning

  • it would be such a good thing if we could finally get this keep_latest feature,
  • but this repaired Docker build is not a build from source: only a small part of it is, everything else is hidden, packaged into openSUSE-specific packages. I'll have to rip those out and completely automate a build from source, that is, a build from https://github.com/SUSE/Portus ; the rest is all pure standard devops methods, tools and principles. So pipelines, ansible, terraform, Packer.
  • And this little toy story is a big warning: we have to tackle the build from source asap, or we might end up in a dead zone again, but this time, game over.

Just to ask in case you know about it: openSUSE Leap, it's just like their CentOS Atomic, isn't it?

Even if we get the KEEP_LATEST feature

I have to warn you, and the warning stands for both KEEP_LATEST and SYNC :

  • we have serious feedback, thanks to @diranged, see Portus Background Process gets 404 on Image Tag, but doesn't delete it from Portus. #2281 (comment), showing that the SYNC feature has serious unmanaged limitations on the batch jobs it executes: if the batch job is huge, it systematically fails, and boom, portus background is stuck in a loop of starting, then failing, so stopping, then restarting..., failing again, and so on
  • this SYNC feature :
    • has a lot in common, in its technical nature, with the Garbage Collector feature : batch jobs, to clean up disk and persistance spaces (tables collections in dbses)
    • and those two are both the reponsibility of the background process
    • So, Sir, those two have serious limitations and unmanaged options that a battle tested bach job maanagement solution would have.
  • In other words, those functionalities :
    • are to be monitored extremely tightly,
    • and are very sensible to the size of the job : the bigger the job, the higher the failure probability.
    • standardized recovery operations have to be automated to harden production reliability.
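
As a minimal, hedged sketch of that kind of standardized recovery, assuming a docker-compose deployment where the background process runs as its own service (the service name, image tag and PORTUS_BACKGROUND variable below are assumptions modelled on typical Portus compose examples, not taken from this thread), a restart policy at least turns the crash loop into something Docker restarts and logs on its own:

background:
  image: opensuse/portus:2.4          # assumption: pre-built image tag
  environment:
    PORTUS_BACKGROUND: 'true'         # assumption: how the background role is usually selected
  restart: unless-stopped             # restart automatically when a huge batch job crashes the process
  logging:
    driver: json-file
    options:
      max-size: "10m"                 # keep crash-loop logs from filling the disk

This does not fix the underlying batch-job limitation; it only standardizes the recovery and keeps the failures observable.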

(so, still waiting for your results on keep_latest now :) )

@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Mar 1, 2020

The Real Build from Source: additional info related to the /init script fix

  • there is a .ruby-version file at the root of the portus repo: it is used inside the Docker container with something called RVM (like nvm, but for Ruby).
  • in this .ruby-version file, on master, we have 2.6.2.
  • That value has to be set consistently with the versions actually used at runtime, that is, the 2.5.0 and 2.5.3 versions.
  • So we might end up building different images for the portus and background services. The portus source code commit id must match between the two, though, and the distribution channel publishing the pre-built docker images should guarantee that (a sketch of what I mean follows below).
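
To illustrate that constraint, a hedged sketch (the registry, image names and tag scheme here are hypothetical, not the OpenSUSE team's): the two services may end up on different Ruby bases, but both image tags should carry the same portus source commit id:

portus:
  image: "myregistry.example.com/portus:<commit-id>"             # hypothetical image, built with ruby 2.5.x
background:
  image: "myregistry.example.com/portus-background:<commit-id>"  # hypothetical image, same portus commit id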

@Jean-Baptiste-Lasselle
Copy link

Pipeline cycle identified

OK, the OpenSUSE Team uses the https://github.com/SUSE/Portus source code repo to version-control the pipeline recipe, in particular:

@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Mar 1, 2020

Undocumented use of RVM confirmed

rvm:
- 2.6.2

Thought: I think they use RVM to normalize the set of executables involved in the portus Ruby stack: they use it to build their portus zypper packages, the ones in the zypper repository added in the Dockerfile to install portus back inside the containers, "for production", obs://Virtualization:containers:Portus/openSUSE_Leap_15.1 portus.

Jean-Baptiste-Lasselle pushed a commit to pokusio/opensuzie-oci-library that referenced this issue Mar 2, 2020
Fixed SUSE/Portus#2241

Hacked into OpenSUSE CI/CD

The fix is not acceptable in the long run; the OpenSUSE packages must be updated to
include a solution involving the new rubygems-integration feature, to run specifically
on OpenSUSE Leap, similar to the Debian-specific management described here:

rubygems/rubygems#2180 (comment)

There could be something like a [SUSE_LEAP_DISABLE_RUBYGEMS_INTEGRATION] env.var.

see also #1 (comment)
@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented Mar 2, 2020

Victory :D

@robgiovanardi Tested and re-tested, versioned and released, https://github.com/pokusio/opensuzie-oci-library/releases/tag/0.0.2

IMPORTANT UPDATE FOR READERS WANTING TO USE PORTUS 2.5:

  • The OpenSUSE team has designed the portus Dockerfile entirely on top of openSUSE.
  • The OpenSUSE team has designed the portus Dockerfile so that portus and all its dependencies are installed during the Docker build process, using the zypper package manager.
  • Since my first edit of this issue comment, the OpenSUSE team has, without any notice, changed the structure of its zypper package,
  • which ended up breaking the redesign I worked on: https://github.com/pokusio/opensuzie-oci-library/releases/tag/0.0.2
  • This is why I made it possible for all of us to get rid of all OpenSUSE products to operate Portus.
  • So you will find the only image I found on the web (please tell me if you find another...), of:

(end of update)

  • It all works!! No errors in the logs, for either portus or background.
  • I even checked the web UI: I am running portus version 2.5, and I can even say the exact commit (on master, I checked) is b87d37e4e692b4fe5616b6f0970cb606688c344a:
Cloning into 'Portus'...
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 38892 (delta 4), reused 0 (delta 0), pack-reused 38883
Receiving objects: 100% (38892/38892), 36.50 MiB | 16.89 MiB/s, done.
Resolving deltas: 100% (18385/18385), done.
jibl@poste-devops-typique:~$ cd Portus
jibl@poste-devops-typique:~/Portus$ export GIT_COMMIT_ID='b87d37e4e692b4fe5616b6f0970cb606688c344a'
jibl@poste-devops-typique:~/Portus$ git branch --contains $GIT_COMMIT_ID
* master
jibl@poste-devops-typique:~/Portus$ 
jibl@poste-devops-typique:~/Portus$ 
  • and this b87d37e is, as of today, the exact last commit on master. So yeah, they always build from source from the last commit on master. I would never have accused the SUSE team members of doing that without serious proof like this... Guys, just switch to git flow AVH Edition (in case they build their suse git-flow package from the source of the outdated Vincent Driessen original)...

  • the clair scanner didn't seem too busy when I was running portus:2.4.3, and now... even Clair works GREAAAAT :D

(screenshot: Even Clair Works GREAAAAT)

  • confirmed: version 2.5 of the portus source code; I can even tell the commit ID, I believe it is b87d37e4e692b4fe5616b6f0970cb606688c344a:


  • And look, now Clair is scanning the Rocket Chat official distribution for docker... full of vulnerabilities, my dear, as I expected ^^:

(screenshot: My beloved rocket.chat)


@Jean-Baptiste-Lasselle
Copy link

Now I can't wait for you to test that on your side :)

@benthurley82
Copy link

Hi, I'm a bit confused. I'm setting up an instance of Portus. I've been battling with it for quite a while; the example compose file only gets you so far, and there has been a lot of trial and error to get things working.
Now I am looking at setting up the garbage collection and I stumbled on this bug. I'm running opensuse/portus:2.4. So what is the recommended way to set this up given the limitations that we know exist? I'm wondering if I can leave out the keep_latest option and just use a combination of older_than and tag?
I'm planning to use git flow and have a tag for each branch. If I set a regex for the tag option that only includes feature, release and hotfix branches, then only these should be considered for GC, leaving master and develop safely alone (roughly as sketched below).
I can't tell from this bug report if this will work or not?
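
For illustration only, a hedged sketch of the configuration described above, under the reading that the tag option is a regular expression selecting which tags are eligible for garbage collection; the regular expression itself is an assumption about how branch names would end up in tag names:

garbage_collector:
  enabled: true
  older_than: 30                          # illustrative value
  tag: '^(feature|release|hotfix)[-/].*'  # only feature/release/hotfix tags would be eligible; master and develop never match

Whether Portus actually interprets the tag option that way is part of what the answer just below discusses.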

@Jean-Baptiste-Lasselle
Copy link

Jean-Baptiste-Lasselle commented May 2, 2020

I can't tell from this bug report if this will work or not?

Hi @benthurley82,
I found your message today.

  • question 1: For this question, forget everything about garbage collection, or any deletion at all of any OCI image. Do you have a docker-compose.yml with which you successfully run Portus? And by successfully, I mean:

    • you can push images
    • you can pull images
    • you can CRUD users, teams, namespaces, images, Tokens, through the Web UI
    • when you push a new tag of an image, the new tag appears in your Portus Web UI just as you would expect.
    • you can create a Token, and docker login (then push pull etc) using that token
    • you have TLS certs on all three services: registry, background, and portus
  • question 2: Does the text you will read below answer your question? (yes or no; I'll ask another question if no)

    • Indeed, you cannot achieve what the keep_latest option does by using a combination of tag and older_than; it is logically impossible:
      • using the tag config option, the only thing you can say is "Hey Portus, don't garbage collect any tag that matches exactly this regular expression".
      • but the problem is that, for any integer N, no regular expression you choose as configuration can single out the N youngest tags at any future time; the only one that always covers them is the catch-all *.*.*.
      • I will give a proof that does not assume you use the semver standard, or any other standard, only that your chosen regular expression can match any of your version numbers. Weak enough, fair enough, isn't it?
      • OK. Let bensregexp be a regular expression (pick one, write it down on paper as you read this), and let MYSTERY_VERSION_NUMBER be any of your existing version numbers.
      • there was a date/time DTomega at which MYSTERY_VERSION_NUMBER was the latest version, because that was true at the very instant you created / tagged / released that version number.
      • So, at DTomega, MYSTERY_VERSION_NUMBER had to match bensregexp (it was among the latest tags you wanted to protect), and since a regular expression does not change over time, MYSTERY_VERSION_NUMBER still matches bensregexp today,
      • and therefore, at any date/time, bensregexp matches all of your version numbers.
    • So you see, whatever regular expression you choose, it can never let you "keep only the 5 latest at any time", unless you never release more than 5 version numbers; in other words, unless you abort the project.
    • And that's why the tag config option is used to filter environments, not versions: dev is dev, and will always remain just dev, only with new versions of it.
    • What you do with your versioning scheme is endow your version number with metadata.
    • For example, I would say that in dev-3.4.7, 3.4.7 is the version number of a piece of software, endowed with the dev label, used to "slip in" the info that this docker image is the dev execution environment for the 3.4.7 version of your software.
    • The docker image version number is implicitly inferred from the version of the software it runs, in the widely adopted convention (the git commit id Maven plugins? the same exists in almost all of today's languages: JavaScript, Golang, etc.).
    • Think of alpine-12.0.4: is alpine a version number?
    • And obviously such a catch-all regular expression is of no use, because that configuration would simply never garbage collect anything at all.
    • That was the very point of @robgiovanardi opening this issue: Rob has the exact same use case as you, so he realized he needs to be able to use the keep_latest config option.

Alright, so you HAVE to use the keep_latest option to get the behavior you (and Rob) actually want, which is (isn't it?) the following, sketched just below:
* hey portus, for all images (repositories), delete all tags that are older than 100 days;
* hey portus, regardless of their age (even in two years), never, ever delete the 5 latest (youngest) tags of any repository.
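
A minimal configuration sketch of that intended behavior, reusing the values from the two lines above (whether keep_latest is actually honored is, of course, exactly what this issue is about):

garbage_collector:
  enabled: true
  older_than: 100   # delete tags older than 100 days...
  keep_latest: 5    # ...but always keep the 5 youngest tags of each repository
  tag: ''           # no tag filter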

  • So Rob opened an issue, because he also realized through tests that, for every version of portus, either keep_latest is not available (version too old) or that config option is bugged.
  • We (Rob, @diranged, and myself) discovered a few weeks ago:
    • that there actually exists a version of the portus source code with a repaired keep_latest config option,
    • that this version corresponds to a commit which is on the master branch, is not tagged, and is not merged back to any other branch (the v2.5 branch, for example),
    • that this exact version of the portus source code is not distributed by the OpenSUSE team in any docker image on Docker Hub or quay.io,
    • so, to be able to use the keep_latest option, we had to build our own docker image of portus from source.

Now :

@stale
Copy link

stale bot commented Aug 1, 2020

Thanks for all your contributions!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale label Aug 1, 2020
@stale stale bot closed this as completed Aug 8, 2020