
Exception using smartos.vm_present to create docker zone #51351

Closed
garethhowell opened this issue Jan 26, 2019 · 15 comments
Labels
Bug (broken, incorrect, or confusing behavior), P4 (Priority 4), severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround)
@garethhowell

Description of Issue/Question

Getting an exception when trying to create a docker version of plexmediaserver on a SmartOS server.

Setup

smartos_docker_containers/plex3.sls

include:
  - smartos_docker_container
	
plex3.agdon.net:
  smartos.vm_present:
    - config:
        auto_import: true
        reprovision: true
    - vmconfig:
        image_uuid: plexinc/pms-docker:plexpass
        brand: lx
        alias: plex3
        quota: 5
        max_physical_memory: 1024
        tags:
          label: 'plex3 docker'
          owner: 'garethhowell'
        resolvers:
          - 172.29.12.7
        nics:
          "82:1b:8e:49:e9:19":
            nic_tag: admin
            mtu: 1500
            ips:
              - 172.29.12.44/24
        filesystems:
          "/config":
            source: "/data/config"
            type: lofs
            options:
              - nodevices
          "/data":
            source: "/data/media"
            type: lofs
            options:
              - nodevices
          "/transcode":
            source: "/data/transcode"
            type: lofs
            options:
              - nodevices

smartos_docker_container/init.sls

https://docker.io:
  smartos.source_present:
    - source_type: docker

Steps to Reproduce Issue


sudo salt global_deneb state.apply smartos_docker_container.plex3
global_deneb:
----------
          ID: https://docker.io
    Function: smartos.source_present
      Result: True
     Comment: image source https://docker.io is present
     Started: 16:08:29.276973
    Duration: 493.941 ms
     Changes:   
----------
          ID: plex3.agdon.net
    Function: smartos.vm_present
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/opt/tools/lib/python2.7/site-packages/salt/state.py", line 1951, in call
                  ret = self.states[cdata['full']](*cdata['args'], **cdata['kwargs'])
                File "/opt/tools/lib/python2.7/site-packages/salt/loader.py", line 2033, in wrapper
                  return f(*args, **kwargs)
                File "/opt/tools/lib/python2.7/site-packages/salt/states/smartos.py", line 801, in vm_present
                  if vmconfig['image_uuid'] not in __salt__['imgadm.list']():
                File "/opt/tools/lib/python2.7/site-packages/salt/modules/smartos_imgadm.py", line 257, in list_installed
                  data = _parse_image_meta(image, verbose)
                File "/opt/tools/lib/python2.7/site-packages/salt/modules/smartos_imgadm.py", line 60, in _parse_image_meta
                  name = image['manifest']['name']
              KeyError: u'name'
     Started: 16:08:29.772658
    Duration: 1384.224 ms
     Changes:   

Summary for global_deneb
------------
Succeeded: 1
Failed:    1
------------
Total states run:     2
Total run time:   1.878 s

Versions Report

(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)

**Master**
Salt Version:
           Salt: 2018.11.0-488-g4d2a7f7
 
Dependency Versions:
           cffi: 1.5.2
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: 2.0.2
      gitpython: 2.1.5
         Jinja2: 2.8
        libgit2: 0.24.0
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: 0.24.0
         Python: 2.7.12 (default, Nov 12 2018, 14:36:49)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.2.0
          smmap: 2.0.3
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4
 
System Versions:
           dist: Ubuntu 16.04 xenial
         locale: UTF-8
        machine: x86_64
        release: 4.3.0
         system: Linux
        version: Ubuntu 16.04 xenial

Minion

Salt Version:
           Salt: 2019.2.0-544-gb76b281
 
Dependency Versions:
           cffi: 1.11.5
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
         Jinja2: 2.10
        libgit2: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: 2.19
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.15 (default, Jan  5 2019, 14:32:54)
   python-gnupg: Not Installed
         PyYAML: 3.13
          PyZMQ: 17.1.2
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 5.1.1
            ZMQ: 4.2.5
 
System Versions:
           dist:   
         locale: UTF-8
        machine: i86pc
        release: 5.11
         system: SunOS
        version: Not Installed
@Ch3LL
Contributor

Ch3LL commented Jan 28, 2019

ping @sjorge can you take a look here?

@Ch3LL Ch3LL added the Pending-Discussion The issue or pull request needs more discussion before it can be closed or merged label Jan 28, 2019
@Ch3LL Ch3LL added this to the Blocked milestone Jan 28, 2019
@sjorge
Contributor

sjorge commented Jan 28, 2019

I tried a stripped-down version (without a nic and only /config):

local:
----------
          ID: testdocker
    Function: smartos.vm_present
      Result: True
     Comment: vm testdocker created
     Started: 17:11:21.882882
    Duration: 7569.958 ms
     Changes:
              ----------
              testdocker:
                  ----------
                  image_uuid:
                      45c766d0-94e0-129a-db68-35a988c4bdef
                  brand:
                      lx
                  alias:
                      plex3
                  quota:
                      5
                  max_physical_memory:
                      1024
                  tags:
                      ----------
                      label:
                          plex3 docker
                      owner:
                          garethhowell
                  filesystems:
                      |_
                        ----------
                        source:
                            /tmp
                        type:
                            lofs
                        options:
                            - nodevices
                        target:
                            /config
                  hostname:
                      testdocker
                  docker:
                      True
                  kernel_version:
                      4.3.0
                  internal_metadata:
                      ----------
                      docker:entrypoint:
                          ["/init"]
                      docker:env:
                          ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm", "LANG=C.UTF-8", "LC_ALL=C.UTF-8", "CHANGE_CONFIG_DIR_OWNERSHIP=true", "HOME=/config"]

Worked fine... what version is the smartos node running? I'm on recent bits, so it might be an older PI.

The output of salt-call -l debug state.apply (ran on the smartos node) would be useful too, just watch out with passwords that might be in there.

The interesting one is the output of imgadm get

[INFO    ] Executing command 'imgadm get 45c766d0-94e0-129a-db68-35a988c4bdef' in directory '/root'
[DEBUG   ] stdout: {
  "manifest": {
    "v": 2,
    "uuid": "45c766d0-94e0-129a-db68-35a988c4bdef",
    "owner": "00000000-0000-0000-0000-000000000000",
    "name": "docker-layer",
    "version": "2df7769d9603",
    "disabled": false,
    "public": true,
    "published_at": "2019-01-14T18:49:18.790Z",
    "type": "docker",
    "os": "linux",
    "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
    "tags": {
      "docker:repo": "plexinc/pms-docker",
      "docker:id": "sha256:2df7769d96031578a7bab996b74224e339d50b434539317ac345a7a3f08c2efb",
      "docker:architecture": "amd64",
      "docker:tag:plexpass": true,
      "docker:config": {
        "Cmd": null,
        "Entrypoint": [
          "/init"
        ],
        "Env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
          "TERM=xterm",
          "LANG=C.UTF-8",
          "LC_ALL=C.UTF-8",
          "CHANGE_CONFIG_DIR_OWNERSHIP=true",
          "HOME=/config"
        ],
        "WorkingDir": ""
      }
    },
    "origin": "0d6a9758-6f22-8e04-5fa2-6de9b9feabae"
  },
  "zpool": "zones",
  "source": "https://docker.io"
}

@sjorge
Contributor

sjorge commented Jan 28, 2019

There might be an image without a 'name' key in its 'manifest'... you can check that with imgadm list -j, but as far as I know an image should always have a name, so if one shows up without it, I would think there is a broken image on the system.
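The check above can be automated; a minimal sketch (hypothetical helper, not part of Salt) that scans the JSON emitted by `imgadm list -j` for manifests missing a 'name':

```python
import json

# Hypothetical check for the broken-image theory: scan the JSON emitted by
# `imgadm list -j` and report any entry whose manifest lacks a 'name' key.
def find_nameless_images(imgadm_json):
    """Return the UUIDs of images whose manifest is missing 'name'."""
    nameless = []
    for entry in json.loads(imgadm_json):
        manifest = entry.get("manifest", {})
        if "name" not in manifest:
            # imgadm should always set a name; entries without one are
            # candidates for the KeyError raised in smartos_imgadm.py
            nameless.append(manifest.get("uuid", "<unknown>"))
    return nameless

# Trimmed sample in the shape of real `imgadm list -j` output
sample = json.dumps([
    {"manifest": {"uuid": "0d6a9758-6f22-8e04-5fa2-6de9b9feabae",
                  "name": "docker-layer", "version": "dec716eaf178"}},
    {"manifest": {"uuid": "7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b"}},
])
print(find_nameless_images(sample))  # ['7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b']
```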

@garethhowell
Author

Thanks for taking a look.

uname -a
SunOS deneb.agdon.net 5.11 joyent_20181220T002304Z i86pc i386 i86pc

Running imgadm list revealed a load of images with no name tag.

$ imgadm list
UUID                                  NAME          VERSION       OS       TYPE          PUB
7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b  -             -             -        -             -
ff8b3ad2-e7e2-e056-c01b-ac8dad184937  -             -             -        -             -
e69a0918-055d-11e5-8912-e3ceb6df4cf8  -             -             -        -             -
e1faace4-e19b-11e5-928b-83849e2fd94a  -             -             -        -             -
db47466e-0889-11e5-a4d0-77947c5b8b70  -             -             -        -             -
d9ad31fd-f4cf-4791-b322-44f4a0e98f62  -             -             -        -             -
d34c301e-10c3-11e4-9b79-5f67ca448df0  -             -             -        -             -
c8d68a9e-4682-11e5-9450-4f4fadd0936d  -             -             -        -             -
b33d4dec-db27-4337-93b5-1f5e7c5b47ce  -             -             -        -             -
a21a64a0-0809-11e5-a64f-ff80e8e8086f  -             -             -        -             -
842e6fa6-6e9b-11e5-8402-1b490459e334  -             -             -        -             -
fb534b79-44a9-4159-9657-878e145b0104  -             -             -        -             -
5c7d0d24-3475-11e5-8e67-27953a8b237e  -             -             -        -             -
4bc5b510-2d5d-e47e-c3bc-d492dfeae320  -             -             -        -             -
46c77656-5d22-cdaf-8056-88aaa11c1e58  -             -             -        -             -
2a9bfaf4-ddf1-e146-ab80-e2f8723ec714  -             -             -        -             -
1ed69a26-f60b-401c-bde6-793df2d0547b  -             -             -        -             -
1bd84670-055a-11e5-aaa2-0346bb21d5a1  -             -             -        -             -
1870884c-780a-cb0b-fdc0-8e740afa4173  -             -             -        -             -
163cd9fe-0c90-11e6-bd05-afd50e5961b6  -             -             -        -             -
147f4eca-1783-4b80-d7e4-9a1d4420567a  -             -             -        -             -
088b97b0-e1a1-11e5-b895-9baa2086eb33  -             -             -        -             -
0246b0fe-771c-60ba-cbe6-92ea5795117b  -             -             -        -             -
088b97b0-e1a1-11e5-b895-9baa2086eb33  base-64-lts   15.4.1        smartos  zone-dataset  2016-03-04
7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b  ubuntu-16.04  20170403      linux    lx-dataset    2017-04-03
a0af5ad3-8b69-1940-c2b2-8c22019f4381  docker-layer  3d77ce4481b1  linux    docker        2018-05-03
e08907cb-e5e2-0d0b-4684-31aaace5beb2  docker-layer  ed5d19727749  linux    docker        2018-05-03
f4cb12e9-1569-65e1-9243-4f85878eabb4  docker-layer  2e3e26bab9eb  linux    docker        2018-05-03
cc86138d-3909-3592-90cc-a84ac73c751d  docker-layer  0969bf11f82b  linux    docker        2018-05-03
967dd036-9dd1-0c5a-3d02-e3353664071e  docker-layer  846891b17e6d  linux    docker        2018-05-03
706b9530-717c-b1d9-8491-7586790d9ce3  docker-layer  34157dd498aa  linux    docker        2018-05-03
d7b61606-f4ae-fe8c-6400-602e5ca916c6  docker-layer  9ed5ccc7b14f  linux    docker        2018-11-08
90e30f46-1f73-5453-3781-f594f2e7e383  docker-layer  4fe2ade4980c  linux    docker        2018-11-08
c9c2ecb2-53a5-0f67-1478-a6ca32e70708  docker-layer  d6341e30912f  linux    docker        2018-12-03
30e52711-874f-3316-32df-f94bf327420e  docker-layer  087a57faf949  linux    docker        2018-12-03
c22cb4e5-4b22-98c7-c920-a92fae8c2df3  docker-layer  197122809c0b  linux    docker        2018-12-03
b9152be2-991c-fdfc-1acd-37e8b2e7dec7  docker-layer  620aea26e853  linux    docker        2018-12-03
b293df8b-c4ee-4d49-24c4-4f8fa65d7e73  docker-layer  2eeb5ce9b924  linux    docker        2018-12-03
a8843122-dd01-9907-3370-b6c44f3b47e4  docker-layer  0c1db9598990  linux    docker        2018-12-03
d5c82d16-ddd2-9847-7af8-056461bd4992  docker-layer  a8c530378055  linux    docker        2018-12-03
99f5b36e-5215-7fab-05b5-cc90a4765823  docker-layer  54f7e8ac135a  linux    docker        2018-12-03
f20cd995-ce22-9cf3-1509-ad7674a1b0a3  docker-layer  631d2ff18fe5  linux    docker        2018-12-03
136443d5-d9ac-d1d8-a7e7-50907b0ed7ee  docker-layer  507d7b60fe2f  linux    docker        2018-12-03
300b7408-a4ef-afd7-faa6-709c7bacfc77  docker-layer  687ed2fb2a0d  linux    docker        2018-12-03
71badf52-93dd-2d39-512d-a2520faca591  docker-layer  089af8215bfe  linux    docker        2018-12-03
d0a7a5f1-c861-0def-005d-1d1250fd3ebc  docker-layer  469b33d23647  linux    docker        2018-12-03
6a1897ad-89b6-3fc4-eab3-35c622a7b689  docker-layer  714d6164c1de  linux    docker        2018-12-03
44f6ba5d-ecd6-2f1d-c048-c332579a07af  docker-layer  5d71636fb824  linux    docker        2018-12-03
51a55316-ff26-67a2-d623-c44091d112a1  docker-layer  d927c1b717ec  linux    docker        2019-01-14
c2f24c6b-7401-6549-19c1-7cdf8a75a88b  docker-layer  15b86ea20233  linux    docker        2019-01-14
f8248cc8-f0a0-c787-4287-ef29de3a943a  docker-layer  42986ef25bcd  linux    docker        2019-01-14
2cbef2d9-162f-8bf9-cf33-f854323e34a6  docker-layer  b849b56b69e7  linux    docker        2019-01-14
13cd68ea-a3d4-12e3-9be9-809d6a2d1c60  docker-layer  f39465de3b1a  linux    docker        2019-01-14
0d6a9758-6f22-8e04-5fa2-6de9b9feabae  docker-layer  dec716eaf178  linux    docker        2019-01-14
d9eca37c-55d1-862d-b07a-b58cfba183d2  docker-layer  1691bdb34def  linux    docker        2019-01-14

I tracked this down to a collection of images on a separate zpool. When I exported that zpool, the exception disappeared.
It still fails, but without the exception:

sudo salt global_deneb state.apply smartos_docker_container.plex3
global_deneb:
----------
          ID: https://docker.io
    Function: smartos.source_present
      Result: True
     Comment: image source https://docker.io is present
     Started: 19:53:23.317338
    Duration: 538.717 ms
     Changes:   
----------
          ID: plex3.agdon.net
    Function: smartos.vm_present
      Result: False
     Comment: {u'bad_values': [], u'bad_properties': [], u'missing_properties': [u'kernel_version']}
     Started: 19:53:23.859575
    Duration: 39610.519 ms
     Changes:   

Summary for global_deneb
------------
Succeeded: 1
Failed:    1
------------
Total states run:     2
Total run time:  40.149 s
ERROR: Minions returned with non-zero exit code

Also, I can't see how to include a couple of environment variables into the docker container.
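On the environment-variable question: the vm_present changes shown earlier in this thread surface container environment variables as a JSON array under internal_metadata['docker:env'], so one possible (untested) approach is to merge extra KEY=value entries into that array. A sketch, with the helper name and merge logic being my own assumption:

```python
import json

# Untested sketch: merge extra KEY=value entries into the docker:env
# internal_metadata array, mirroring the shape seen in the vm_present
# output earlier in this thread. Not a documented Salt/vmadm API.
def with_docker_env(vmconfig, extra_env):
    """Return a copy of vmconfig with extra variables merged into
    internal_metadata['docker:env'] (stored as a JSON array string)."""
    cfg = dict(vmconfig)
    meta = dict(cfg.get("internal_metadata", {}))
    env = json.loads(meta.get("docker:env", "[]"))
    env.extend("{}={}".format(k, v) for k, v in sorted(extra_env.items()))
    meta["docker:env"] = json.dumps(env)
    cfg["internal_metadata"] = meta
    return cfg

cfg = with_docker_env({"brand": "lx"}, {"TZ": "Europe/London"})
print(cfg["internal_metadata"]["docker:env"])  # ["TZ=Europe/London"]
```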

@garethhowell
Author

Sorry, closed it by mistake.

@sjorge
Contributor

sjorge commented Jan 28, 2019

Can you do an ‘imgadm get’ for the downloaded Plex docker image?

Interesting case with the missing names though, I'll see if I can at least make the state ignore those.

@sjorge
Contributor

sjorge commented Jan 28, 2019

I have some small patches that should at least make it not throw exceptions

--- salt/modules/smartos_imgadm.py.orig 2019-01-28 21:44:55.901626633 +0000
+++ salt/modules/smartos_imgadm.py      2019-01-28 22:01:23.401051029 +0000
@@ -56,7 +56,7 @@

     if image and 'Error' in image:
         ret = image
-    elif image:
+    elif image and 'manifest' in image:
         name = image['manifest']['name']
         version = image['manifest']['version']
         os = image['manifest']['os']
@@ -164,6 +164,8 @@
     if _is_docker_uuid(uuid):
         images = list_installed(verbose=True)
         for image_uuid in images:
+            if 'name' not in images[image_uuid]:
+                continue
             if images[image_uuid]['name'] == uuid:
                 return image_uuid
     return None

If you still have access to the pool with the nameless uuids, please apply the patch, run imgadm.list, and post the result here.

I'm not 100% sure, but I think this will also fix the issue you are now experiencing. It should default to kernel_version 4.3.0 if it is not specified for docker. But it might have been unhappy before if the image manifest was missing a few items.
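The effect of the first hunk can be sketched in isolation. This is a minimal stand-in for the patched _parse_image_meta (not the real Salt function), showing how the extra 'manifest' membership test turns a crash into an empty result:

```python
# Minimal stand-in for the patched _parse_image_meta above (not the real
# Salt function): the extra 'manifest' membership test turns a KeyError
# into a harmless empty result for incomplete image entries.
def parse_image_meta(image):
    ret = {}
    if image and "Error" in image:
        ret = image
    elif image and "manifest" in image:
        ret = {
            "name": image["manifest"]["name"],
            "version": image["manifest"]["version"],
            "os": image["manifest"]["os"],
        }
    return ret

complete = {"manifest": {"name": "docker-layer",
                         "version": "dec716eaf178", "os": "linux"}}
broken = {"zpool": "zones"}  # no 'manifest' key at all

print(parse_image_meta(complete)["name"])  # docker-layer
print(parse_image_meta(broken))            # {} instead of a KeyError
```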

@sjorge
Contributor

sjorge commented Jan 28, 2019

@Ch3LL this patch should probably go in regardless, but I'd like some confirmation to make sure it does not make things worse with images that are incomplete.

@garethhowell
Author

Can you do an ‘imgadm get’ for the downloaded Plex docker image?

Doing an imgadm list -j shows a lot of plex images, some of which have clones that appear in /zones but don't appear in a zoneadm list -cv.

imgadm list -j
[
  {
    "manifest": {
      "v": 2,
      "uuid": "0d6a9758-6f22-8e04-5fa2-6de9b9feabae",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "dec716eaf178",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:dec716eaf1781d8e8c9c86070e60b628cefa979ad82619bf34c2e705caa47cd4",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "13cd68ea-a3d4-12e3-9be9-809d6a2d1c60"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/d9eca37c-55d1-862d-b07a-b58cfba183d2"
    ],
    "clones": 1
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "13cd68ea-a3d4-12e3-9be9-809d6a2d1c60",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "f39465de3b1a",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:f39465de3b1a4869f10e680816e1ffe919d9f343e797d6e6fdab6a61526ecfac",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "c2f24c6b-7401-6549-19c1-7cdf8a75a88b"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/0d6a9758-6f22-8e04-5fa2-6de9b9feabae"
    ],
    "clones": 1
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "2cbef2d9-162f-8bf9-cf33-f854323e34a6",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "b849b56b69e7",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:b849b56b69e770db0ae9e71f818f5be89ba0e30c14133c8a0c7b2ca0eeac15b4",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      }
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/f8248cc8-f0a0-c787-4287-ef29de3a943a"
    ],
    "clones": 1
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "51a55316-ff26-67a2-d623-c44091d112a1",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "d927c1b717ec",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:d927c1b717ec274345e85adaa6f1074974d943fd3481cd64e9488ad9841e3016",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "f8248cc8-f0a0-c787-4287-ef29de3a943a"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/c2f24c6b-7401-6549-19c1-7cdf8a75a88b"
    ],
    "clones": 1
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "c2f24c6b-7401-6549-19c1-7cdf8a75a88b",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "15b86ea20233",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:15b86ea202330d8a60b52809277c566a318663dc824319ddfca663e6c90ae7bd",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "51a55316-ff26-67a2-d623-c44091d112a1"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/13cd68ea-a3d4-12e3-9be9-809d6a2d1c60"
    ],
    "clones": 1
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "d9eca37c-55d1-862d-b07a-b58cfba183d2",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "1691bdb34def",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:1691bdb34defc7e0f94b7781cbf1ad876a54824d9a2640037d432e56ff6630f9",
        "docker:architecture": "amd64",
        "docker:tag:latest": true,
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "0d6a9758-6f22-8e04-5fa2-6de9b9feabae"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [],
    "clones": 0
  },
  {
    "manifest": {
      "v": 2,
      "uuid": "f8248cc8-f0a0-c787-4287-ef29de3a943a",
      "owner": "00000000-0000-0000-0000-000000000000",
      "name": "docker-layer",
      "version": "42986ef25bcd",
      "disabled": false,
      "public": true,
      "published_at": "2019-01-14T18:48:51.257Z",
      "type": "docker",
      "os": "linux",
      "description": "/bin/sh -c #(nop)  HEALTHCHECK &{[\"CMD-SHELL\" \"/healthcheck.sh || exit 1\"] \"3m20s\" \"1m40s\" \"0s\" '\\x00'}",
      "tags": {
        "docker:repo": "plexinc/pms-docker",
        "docker:id": "sha256:42986ef25bcd19ccfcb5964cdffc5704dcb663c9c7bbb494707c9aa7b1c4f1d5",
        "docker:architecture": "amd64",
        "docker:config": {
          "Cmd": null,
          "Entrypoint": [
            "/init"
          ],
          "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "TERM=xterm",
            "LANG=C.UTF-8",
            "LC_ALL=C.UTF-8",
            "CHANGE_CONFIG_DIR_OWNERSHIP=true",
            "HOME=/config"
          ],
          "WorkingDir": ""
        }
      },
      "origin": "2cbef2d9-162f-8bf9-cf33-f854323e34a6"
    },
    "zpool": "zones",
    "source": "https://docker.io",
    "cloneNames": [
      "zones/51a55316-ff26-67a2-d623-c44091d112a1"
    ],
    "clones": 1
  }
]

I've omitted a load of other images, including several docker images for zones I've created manually. I notice that each docker image has at most one clone, unlike with base-64-lts etc. Does this imply that creating a new docker vm (even if unsuccessful) results in a new image being downloaded? Is this because of the translation from the docker reference to a new smartos image UUID?

zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              joyent   shared
   2 32423f75-d86f-c531-bcb6-b67d92275d6c running    /zones/32423f75-d86f-c531-bcb6-b67d92275d6c lx       excl  
   1 2db2eea0-f833-4a89-890a-8edcc7c8f685 running    /zones/2db2eea0-f833-4a89-890a-8edcc7c8f685 lx       excl  
  11 d9ad31fd-f4cf-4791-b322-44f4a0e98f62 running    /zones/d9ad31fd-f4cf-4791-b322-44f4a0e98f62 kvm      excl  
  10 4bc5b510-2d5d-e47e-c3bc-d492dfeae320 running    /zones/4bc5b510-2d5d-e47e-c3bc-d492dfeae320 kvm      excl  
   3 147f4eca-1783-4b80-d7e4-9a1d4420567a running    /zones/147f4eca-1783-4b80-d7e4-9a1d4420567a lx       excl  
   6 2a9bfaf4-ddf1-e146-ab80-e2f8723ec714 running    /zones/2a9bfaf4-ddf1-e146-ab80-e2f8723ec714 lx       excl  
   7 46e026b5-6a3d-661d-9121-9079748975ec running    /zones/46e026b5-6a3d-661d-9121-9079748975ec lx       excl  
   8 10d8ff55-bd09-60ba-9cbc-a3fbd36b17b3 running    /zones/10d8ff55-bd09-60ba-9cbc-a3fbd36b17b3 joyent   excl  
   - 224088cc-ed9c-4e93-920d-ba20c0d88b89 installed  /zones/224088cc-ed9c-4e93-920d-ba20c0d88b89 lx       excl  
   5 c2294c33-87d7-6606-b6d6-c66e4af1bcd8 running    /zones/c2294c33-87d7-6606-b6d6-c66e4af1bcd8 lx       excl 
ls /zones
088b97b0-e1a1-11e5-b895-9baa2086eb33  51a55316-ff26-67a2-d623-c44091d112a1  c2f24c6b-7401-6549-19c1-7cdf8a75a88b
0d6a9758-6f22-8e04-5fa2-6de9b9feabae  6a1897ad-89b6-3fc4-eab3-35c622a7b689  c9c2ecb2-53a5-0f67-1478-a6ca32e70708
10d8ff55-bd09-60ba-9cbc-a3fbd36b17b3  706b9530-717c-b1d9-8491-7586790d9ce3  cc86138d-3909-3592-90cc-a84ac73c751d
136443d5-d9ac-d1d8-a7e7-50907b0ed7ee  71badf52-93dd-2d39-512d-a2520faca591  currbooted
13cd68ea-a3d4-12e3-9be9-809d6a2d1c60  7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b  d0a7a5f1-c861-0def-005d-1d1250fd3ebc
147f4eca-1783-4b80-d7e4-9a1d4420567a  82f79660-1ac3-4ae4-b033-ddd1a0309a11  d5c82d16-ddd2-9847-7af8-056461bd4992
224088cc-ed9c-4e93-920d-ba20c0d88b89  90e30f46-1f73-5453-3781-f594f2e7e383  d7b61606-f4ae-fe8c-6400-602e5ca916c6
2a9bfaf4-ddf1-e146-ab80-e2f8723ec714  967dd036-9dd1-0c5a-3d02-e3353664071e  d9ad31fd-f4cf-4791-b322-44f4a0e98f62
2cbef2d9-162f-8bf9-cf33-f854323e34a6  99f5b36e-5215-7fab-05b5-cc90a4765823  d9eca37c-55d1-862d-b07a-b58cfba183d2
2db2eea0-f833-4a89-890a-8edcc7c8f685  a0af5ad3-8b69-1940-c2b2-8c22019f4381  e08907cb-e5e2-0d0b-4684-31aaace5beb2
300b7408-a4ef-afd7-faa6-709c7bacfc77  a8843122-dd01-9907-3370-b6c44f3b47e4  f20cd995-ce22-9cf3-1509-ad7674a1b0a3
30e52711-874f-3316-32df-f94bf327420e  archive                               f4cb12e9-1569-65e1-9243-4f85878eabb4
32423f75-d86f-c531-bcb6-b67d92275d6c  b293df8b-c4ee-4d49-24c4-4f8fa65d7e73  f8248cc8-f0a0-c787-4287-ef29de3a943a
44f6ba5d-ecd6-2f1d-c048-c332579a07af  b9152be2-991c-fdfc-1acd-37e8b2e7dec7  global
46e026b5-6a3d-661d-9121-9079748975ec  c2294c33-87d7-6606-b6d6-c66e4af1bcd8  lastbooted
4bc5b510-2d5d-e47e-c3bc-d492dfeae320  c22cb4e5-4b22-98c7-c920-a92fae8c2df3  manifests

More to come.

@garethhowell
Author

garethhowell commented Jan 29, 2019

--- salt/modules/smartos_imgadm.py.orig 2019-01-28 21:44:55.901626633 +0000
+++ salt/modules/smartos_imgadm.py      2019-01-28 22:01:23.401051029 +0000
@@ -56,7 +56,7 @@

     if image and 'Error' in image:
         ret = image
-    elif image:
+    elif image and 'manifest' in image:
         name = image['manifest']['name']
         version = image['manifest']['version']
         os = image['manifest']['os']
@@ -164,6 +164,8 @@
     if _is_docker_uuid(uuid):
         images = list_installed(verbose=True)
         for image_uuid in images:
+            if 'name' not in images[image_uuid]:
+                continue
             if images[image_uuid]['name'] == uuid:
                 return image_uuid
     return None

Applying this patch deals with the exception, even with the temporary pool imported. However, the state still fails with the same error messages.

@sjorge
Contributor

sjorge commented Jan 29, 2019

I've already opened a PR to get that patch in, ... I'm still looking over the other provided data.

@sjorge
Contributor

sjorge commented Jan 29, 2019

For me, when I import plexinc/pms-docker:latest I get 7 images, basically 7 docker layers, and only one of them is used as the actual image... they form an inheritance chain.

layer 1 -> layer 2 -> layer 3 -> layer 4 -> layer 5 -> layer 6 -> layer 7 (image)

Importing it twice results in an error:
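That chain can be sketched as follows. This is a made-up illustration, not imgadm itself: imgadm image manifests do link a layer to its parent via an `origin` field, but the uuids and dict layout here are invented for the example.

```python
def ancestry(uuid, manifests):
    """Return the layer chain root-first for the given image uuid.

    `manifests` maps uuid -> manifest dict; a manifest's 'origin' field
    (if present) names its parent layer, mirroring imgadm's model.
    """
    chain = []
    while uuid is not None:
        chain.append(uuid)
        uuid = manifests[uuid].get('origin')   # None at the root layer
    return list(reversed(chain))

manifests = {
    'layer-1': {},                        # root layer, no origin
    'layer-2': {'origin': 'layer-1'},
    'image':   {'origin': 'layer-2'},     # the tag points at the tail
}
print(ancestry('image', manifests))   # root first: ['layer-1', 'layer-2', 'image']
```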

[root@carbon ~]# imgadm import plexinc/pms-docker:latest
Importing d9eca37c-55d1-862d-b07a-b58cfba183d2 (docker.io/plexinc/pms-docker:latest) from "https://docker.io"
Gather image d9eca37c-55d1-862d-b07a-b58cfba183d2 ancestry
Must download and install 7 images
Downloaded image 0d6a9758-6f22-8e04-5fa2-6de9b9feabae (3.2 KiB)
Downloaded image c2f24c6b-7401-6549-19c1-7cdf8a75a88b (170.0 B)
Downloaded image 51a55316-ff26-67a2-d623-c44091d112a1 (528.0 B)
Downloaded image f8248cc8-f0a0-c787-4287-ef29de3a943a (845.0 B)
Downloaded image 2cbef2d9-162f-8bf9-cf33-f854323e34a6 (41.4 MiB)
Downloaded image 13cd68ea-a3d4-12e3-9be9-809d6a2d1c60 (21.8 MiB)
Imported image 2cbef2d9-162f-8bf9-cf33-f854323e34a6 (docker-layer@b849b56b69e7)
Downloaded image d9eca37c-55d1-862d-b07a-b58cfba183d2 (102.2 MiB)
Download 7 images                       [================================================================================>] 100% 165.55MB  16.64MB/s     9s
Imported image f8248cc8-f0a0-c787-4287-ef29de3a943a (docker-layer@42986ef25bcd)
Imported image 51a55316-ff26-67a2-d623-c44091d112a1 (docker-layer@d927c1b717ec)
Imported image c2f24c6b-7401-6549-19c1-7cdf8a75a88b (docker-layer@15b86ea20233)
Imported image 13cd68ea-a3d4-12e3-9be9-809d6a2d1c60 (docker-layer@f39465de3b1a)
Imported image 0d6a9758-6f22-8e04-5fa2-6de9b9feabae (docker-layer@dec716eaf178)
Imported image d9eca37c-55d1-862d-b07a-b58cfba183d2 (docker-layer@1691bdb34def)
[root@carbon ~]# imgadm import plexinc/pms-docker:latest
Image d9eca37c-55d1-862d-b07a-b58cfba183d2 (docker-layer@1691bdb34def) is already installed from https://docker.io

So I don't think you have multiple instances of the same docker image. Currently there is a problem where you cannot update those, so grabbing a newer release basically means deleting all the vms + images and importing it again.

[root@carbon ~]# salt-call imgadm.list true
local:
    ----------
    c6a275e4-c730-11e8-8c5f-9b24fe560a8f:
        ----------
        description:
            A 64-bit SmartOS image with just essential packages installed. Ideal for users who are comfortable with setting up their own environment and tools.
        name:
            base-64
        os:
            smartos
        published:
            2018-10-03T17:21:33Z
        source:
            https://images.joyent.com
        version:
            18.3.0
    d9eca37c-55d1-862d-b07a-b58cfba183d2:
        ----------
        description:
            Docker image imported from plexinc/pms-docker:latest on 2019-01-14T18:48:51.257Z.
        name:
            plexinc/pms-docker:latest
        os:
            linux
        published:
            2019-01-14T18:48:51.257Z
        source:
            https://docker.io
        version:
            1691bdb34def

Using the imgadm.list call in salt filters out those dummy docker-layers; it's basically the result of imgadm list minus the docker-layers, plus imgadm list --docker (but all derived from imgadm list -j actually).
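The filtering described above can be sketched like this. Treat it as an illustration rather than the Salt source: it assumes `imgadm list -j` output is a JSON array of objects with a `manifest`, and that the dummy layers carry the name `docker-layer` (as seen in the import log above); the uuids are taken from this thread.

```python
import json

# Stand-in for `imgadm list -j` output (heavily trimmed, illustrative).
raw = json.loads('''
[
  {"manifest": {"uuid": "c6a275e4-c730-11e8-8c5f-9b24fe560a8f",
                "name": "base-64", "os": "smartos"}},
  {"manifest": {"uuid": "2cbef2d9-162f-8bf9-cf33-f854323e34a6",
                "name": "docker-layer", "os": "linux"}},
  {"manifest": {"uuid": "d9eca37c-55d1-862d-b07a-b58cfba183d2",
                "name": "plexinc/pms-docker:latest", "os": "linux"}}
]
''')

# Drop the dummy docker-layer entries, keep real images (docker or not).
visible = [
    img['manifest'] for img in raw
    if img.get('manifest', {}).get('name') != 'docker-layer'
]
for m in visible:
    print(m['uuid'], m['name'])
```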

Can you do an imgadm.list true via salt? I'm curious what it shows for the docker images; you should only see one for plex. I think the output should be similar to mine, as you also have d9eca37c-55d1-862d-b07a-b58cfba183d2, and that docker-layer has clones = 0 because there is no vm/zone that uses it currently.

[root@carbon ~]# salt-call imgadm.docker_to_uuid plexinc/pms-docker:latest
local:
    d9eca37c-55d1-862d-b07a-b58cfba183d2

Should return the same image as imgadm list --docker for the plex image too.

@sjorge
Contributor

sjorge commented Jan 29, 2019

testdocker:
  smartos.vm_present:
    - config:
        auto_import: true
        reprovision: true
    - vmconfig:
        image_uuid: plexinc/pms-docker:plexpass
        brand: lx
        quota: 5
        max_physical_memory: 1024
        filesystems:
          "/config":
            source: "/tmp"
            type: lofs
            options:
              - nodevices

This absolutely minimal state is working for me; it's just to eliminate a lot of other things the state could be unhappy with.

It has auto-filled kernel_version, as I would expect after double-checking the code.

----------
          ID: testdocker
    Function: smartos.vm_present
      Result: True
     Comment: vm testdocker created
     Started: 10:58:48.135880
    Duration: 7215.59 ms
     Changes:
              ----------
              testdocker:
                  ----------
                  image_uuid:
                      45c766d0-94e0-129a-db68-35a988c4bdef
                  brand:
                      lx
                  quota:
                      5
                  max_physical_memory:
                      1024
                  filesystems:
                      |_
                        ----------
                        source:
                            /tmp
                        type:
                            lofs
                        options:
                            - nodevices
                        target:
                            /config
                  hostname:
                      testdocker
                  docker:
                      True
                  kernel_version:
                      4.3.0
                  internal_metadata:
                      ----------
                      docker:entrypoint:
                          ["/init"]
                      docker:env:
                          ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm", "LANG=C.UTF-8", "LC_ALL=C.UTF-8", "CHANGE_CONFIG_DIR_OWNERSHIP=true", "HOME=/config"]

Summary for local
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   7.216 s

@sjorge
Contributor

sjorge commented Feb 5, 2019

This will end up in 2019.3.1; it won't make it into the initial release.

@Ch3LL Ch3LL added Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around P4 Priority 4 and removed Pending-Discussion The issue or pull request needs more discussion before it can be closed or merged labels Feb 5, 2019
@Ch3LL Ch3LL modified the milestones: Blocked, Approved Feb 5, 2019
@Ch3LL Ch3LL closed this as completed Feb 5, 2019
@garethhowell
Author

Sorry, I didn’t get any notifications for these comments until the final one was posted. Not sure why.
