Ignition: template not filled by digitalocean metadata #2315

Closed
cnkuyan opened this Issue Jan 11, 2018 · 8 comments


cnkuyan commented Jan 11, 2018

Issue Report

Bug

The metadata placeholders ({PUBLIC_IPV4}, {PRIVATE_IPV4}) produced by the Container Linux Config Transpiler (ct-v0.5.0-x86_64-unknown-linux-gnu) are not resolved to actual metadata on a DigitalOcean droplet.

I'm providing the following Ignition config as user_data when creating a DigitalOcean droplet (CoreOS Alpha).

IGNITION SCRIPT (user_data.ignition):

{
	"ignition": {
		"config": {},
		"timeouts": {},
		"version": "2.1.0"
	},
	"networkd": {},
	"passwd": {
		"users": [
			{
				"name": "core",
				"sshAuthorizedKeys": [
					"ssh-rsa XXXXX"
				]
			}
		]
	},
	"storage": {
		"files": [
			{
				"filesystem": "root",
				"group": {},
				"path": "/etc/environment",
				"user": {},
				"contents": {
					"source": "data:,COREOS_PUBLIC_IPV4%3D%7BPUBLIC_IPV4%7D%0ACOREOS_PRIVATE_IPV4%3D%7BPRIVATE_IPV4%7D%0A",
					"verification": {}
				},
				"mode": 420
			}
		]
	},
	"systemd": {}
}

Container Linux Config source (user_data):

# Container Linux Config; compile it into Ignition with ct

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa XXXXX
    
storage:
  files:
    - filesystem: "root"
      path:       "/etc/environment"
      mode:       0644
      contents:
        inline: |
          COREOS_PUBLIC_IPV4={PUBLIC_IPV4}
          COREOS_PRIVATE_IPV4={PRIVATE_IPV4}

The command used to produce the Ignition config above:
$ ct --platform digitalocean < user_data > user_data.ignition
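To see what Ignition will actually write, the percent-encoded "data:" URL in the generated JSON can be decoded by hand. A minimal sketch in bash (the `src` string is copied from the generated config above); it shows the placeholders are stored literally, which matches what ends up in /etc/environment:

```shell
# Decode the percent-encoded "data:" URL from the generated Ignition JSON.
src='data:,COREOS_PUBLIC_IPV4%3D%7BPUBLIC_IPV4%7D%0ACOREOS_PRIVATE_IPV4%3D%7BPRIVATE_IPV4%7D%0A'
body="${src#data:,}"                      # strip the data: URL prefix
decoded=$(printf '%b' "${body//%/\\x}")   # turn each %XX into \xXX and expand it
echo "$decoded"
```

The output contains the literal `{PUBLIC_IPV4}` and `{PRIVATE_IPV4}` strings, confirming that no substitution happens at this stage.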

After the droplet is created, the contents of /etc/environment look like this:

COREOS_PUBLIC_IPV4={PUBLIC_IPV4}
COREOS_PRIVATE_IPV4={PRIVATE_IPV4}

Instead, they should have looked like this (addresses are fictional):

COREOS_PUBLIC_IPV4=159.89.100.176
COREOS_PRIVATE_IPV4=10.0.19.0

Container Linux Version

$ cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1649.0.0
VERSION_ID=1649.0.0
BUILD_ID=2018-01-05-0906
PRETTY_NAME="Container Linux by CoreOS 1649.0.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr...

Environment

What hardware/cloud provider/hypervisor is being used to run Container Linux?
DigitalOcean, Chosen Image: CoreOS 1649.0.0 (alpha)

Expected Behavior

$ cat /etc/environment 
COREOS_PUBLIC_IPV4=159.89.100.176
COREOS_PRIVATE_IPV4=10.0.19.0

Actual Behavior

$ cat /etc/environment 
 COREOS_PUBLIC_IPV4={PUBLIC_IPV4}
COREOS_PRIVATE_IPV4={PRIVATE_IPV4}

Reproduction Steps

  1. Go to Create Droplet.
  2. Choose Container Distribution -> CoreOS (alpha).
  3. Check (activate) the Private Networking and User data checkboxes.
  4. Copy and paste the IGNITION SCRIPT given at the top into the user data textbox.
  5. Create the droplet.
  6. ssh into the droplet.
  7. Execute: cat /etc/environment

Other Information

cnkuyan changed the title from "ignition script on digitalocean" to "Ignition: template not filled by digitalocean metada" Jan 11, 2018

cnkuyan changed the title from "Ignition: template not filled by digitalocean metada" to "Ignition: template not filled by digitalocean metadata" Jan 11, 2018

Member

crawford commented Jan 11, 2018

Variable substitution is only supported in services. We rely on systemd to actually perform the substitution, and since raw files aren't processed by systemd, we have no mechanism for ordering or substitution. I agree it's a little surprising that no warning was issued; we can address that in CT.

As an aside, why are you trying to populate /etc/environment? That shouldn't be needed except maybe in legacy cases.
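To illustrate the case that does work: when a placeholder appears inside a systemd unit's contents, ct can wire it up to the platform's metadata service. A sketch only; the unit name example.service and its command line are made up for illustration:

```yaml
# Sketch: Container Linux Config where {PRIVATE_IPV4} appears inside a
# systemd unit, which ct (--platform digitalocean) rewrites to use metadata.
systemd:
  units:
    - name: example.service
      enabled: true
      contents: |
        [Unit]
        Description=Illustrative unit using droplet metadata
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "private address is {PRIVATE_IPV4}"
        [Install]
        WantedBy=multi-user.target
```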

dgonyeo commented Jan 11, 2018

If you want an environment file you can source for information about your droplet's network addresses, you can use coreos-metadata to accomplish this. This is what ct does under the hood when variable substitution is used in a Container Linux Config.

The systemd unit at the end of this page is an example of how to do this. Requiring and running after the coreos-metadata.service unit means that coreos-metadata will populate the /run/metadata/coreos file with the environment variables listed in the DigitalOcean section here.

Currently the dynamic data feature of ct only works in the etcd and flannel sections of the config (here's an example).
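The pattern described above can be sketched as a oneshot unit that rebuilds the legacy-style /etc/environment from the metadata file. Untested; the unit name is made up, and the key name follows the DigitalOcean naming used later in this thread:

```ini
# populate-environment.service (hypothetical name)
[Unit]
Requires=coreos-metadata.service
After=coreos-metadata.service
[Service]
Type=oneshot
EnvironmentFile=/run/metadata/coreos
ExecStart=/usr/bin/bash -c 'echo "COREOS_PRIVATE_IPV4=${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}" >> /etc/environment'
[Install]
WantedBy=multi-user.target
```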

cnkuyan commented Jan 12, 2018

@crawford Exactly. I need this for legacy fleet services. Here is an excerpt from one:

[Unit]
Description=Sample Legacy Fleet service
After=docker.service
Requires=docker.service
After=etcd2.service
Requires=etcd2.service
[Service]
EnvironmentFile=/etc/environment

...

ExecStartPost=/bin/bash -c "\
              set -e; \
              /usr/bin/etcdctl ${ETCD_OPTIONS} set /services/worker/%H ${COREOS_PRIVATE_IPV4};"

@dgonyeo I will see if I can get the service above to depend on coreos-metadata.service, as in the example you referenced.

To both of you: thank you very much. I will post my findings.
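Under the approach suggested above, the unit excerpt could be reworked roughly like this (a sketch, untested; the DigitalOcean-specific key name follows coreos-metadata's naming):

```ini
[Unit]
After=coreos-metadata.service
Requires=coreos-metadata.service
[Service]
# Source the droplet metadata instead of /etc/environment
EnvironmentFile=/run/metadata/coreos
ExecStartPost=/bin/bash -c "set -e; /usr/bin/etcdctl ${ETCD_OPTIONS} set /services/worker/%H ${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0};"
```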

Member

lucab commented Jan 12, 2018

@cnkuyan please note that in the case of /run/metadata/coreos, keys have cloud-specific names (e.g. COREOS_DIGITALOCEAN_IPV4_PRIVATE_0).
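For reference, the metadata file on a DigitalOcean droplet would look roughly like this. A sketch with fictional addresses; only the two keys mentioned in this thread are shown, and the real file may contain more:

```ini
# /run/metadata/coreos (sketch)
COREOS_DIGITALOCEAN_IPV4_PUBLIC_0=159.89.100.176
COREOS_DIGITALOCEAN_IPV4_PRIVATE_0=10.0.19.0
```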

cnkuyan commented Jan 12, 2018

Yep, I noticed that. Thanks for pointing that out, @lucab.

cnkuyan commented Jan 12, 2018

I realized that I have a separate problem now.

I'm migrating the cloud-config based user-data over to Ignition while keeping the fleet service. (I'm aware that fleet will no longer be included in CoreOS after February 2018; I only need fleet temporarily.)

The problem is that, in the legacy cloud-config, I have a fleet section that looks like this:

coreos:
  fleet:
    public_ip: {PUBLIC_IP}
    etcd_servers: "https://{PRIVATE_IP}:4001,https://{PRIVATE_IP}:2379"
    etcd_cafile: "/var/ssl/certs/ca.pem"
    etcd_certfile: "/var/ssl/certs/WORKER-client.pem"
    etcd_keyfile: "/var/ssl/keys/WORKER-client-key.pem"

which I'm attempting to convert to the Container Linux Config format by writing the environment variables to /etc/fleet/fleet.conf as follows (I'm not even sure it will work):

storage:
  files:
    - filesystem: "root"
      path:       "/etc/fleet/fleet.conf"
      mode:       0644
      contents:
        inline: |
         FLEET_PUBLIC_IP="{PUBLIC_IPV4}"
         FLEET_ETCD_SERVERS="https://{PRIVATE_IPV4}:4001,https://{PRIVATE_IPV4}:2379"
         FLEET_METADATA="name=ocore-noup1, ocore-noup1 hw=c2:r2 position=mongodb"
         FLEET_ETCD_CAFILE="/var/ssl/certs/ca.pem"
         FLEET_ETCD_CERTFILE="/var/ssl/certs/ocore-noup1-client.pem"
         FLEET_ETCD_KEYFILE="/var/ssl/keys/ocore-noup1-client-key.pem"

and another block to start fleet.service:

systemd:
  units:
    - name: "fleet.service"
      enabled: true

But per the comment by @dgonyeo, the PUBLIC_IPV4 and PRIVATE_IPV4 variables will not be available at the time /etc/fleet/fleet.conf is created; they are available only in the etcd and flannel sections.

The general problem is migrating the legacy user-data to Ignition so that I can keep CoreOS from updating itself, as in the following Container Linux Config snippet:

    - name: update-engine.service
      mask: true
    - name: locksmithd.service
      mask: true

I'd welcome ideas on how to solve the general problem stated above.
Thanks a bunch!

crawford commented Jan 12, 2018

The general pattern we recommend is to use coreos-metadata.service to populate the /run/metadata/coreos environment file. You'll need to order your service after coreos-metadata.service to ensure that the metadata is present before your service starts to run. You'll then need to modify your service to source /run/metadata/coreos using EnvironmentFile=. After that, you can use the variables in your service.

This paradigm requires that the service's target program accept command-line flags. fleet, unfortunately, doesn't use command-line flags, which means you'll need a helper that just writes /etc/fleet/fleet.conf. You could do something like the following (untested) in a drop-in for fleet:

[Unit]
After=coreos-metadata.service
Requires=coreos-metadata.service

[Service]
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_PUBLIC_IP=\"${COREOS_DIGITALOCEAN_IPV4_PUBLIC_0}\"' > /etc/fleet/fleet.conf"
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_ETCD_SERVERS=\"https://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2379\"' >> /etc/fleet/fleet.conf"
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_METADATA=\"name=ocore-noup1, ocore-noup1 hw=c2:r2 position=mongodb\"' >> /etc/fleet/fleet.conf"
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_ETCD_CAFILE=\"/var/ssl/certs/ca.pem\"' >> /etc/fleet/fleet.conf"
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_ETCD_CERTFILE=\"/var/ssl/certs/ocore-noup1-client.pem\"' >> /etc/fleet/fleet.conf"
ExecStartPre=/usr/bin/bash -c "echo 'FLEET_ETCD_KEYFILE=\"/var/ssl/keys/ocore-noup1-client-key.pem\"' >> /etc/fleet/fleet.conf"
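The quoting in those ExecStartPre lines can be sanity-checked locally by simulating the first two of them in plain bash with fictional addresses. In the real drop-in, systemd expands ${COREOS_DIGITALOCEAN_*} from /run/metadata/coreos; here the outer shell stands in for that expansion, and a temp file stands in for /etc/fleet/fleet.conf:

```shell
# Fictional values that would normally come from /run/metadata/coreos.
export COREOS_DIGITALOCEAN_IPV4_PUBLIC_0=159.89.100.176
export COREOS_DIGITALOCEAN_IPV4_PRIVATE_0=10.0.19.0
conf="$(mktemp -d)/fleet.conf"   # stand-in for /etc/fleet/fleet.conf
# The outer double quotes expand ${VAR}; the escaped \" survive into the file.
bash -c "echo 'FLEET_PUBLIC_IP=\"${COREOS_DIGITALOCEAN_IPV4_PUBLIC_0}\"' > $conf"
bash -c "echo 'FLEET_ETCD_SERVERS=\"https://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2379\"' >> $conf"
cat "$conf"
```

The resulting file contains double-quoted values, e.g. `FLEET_PUBLIC_IP="159.89.100.176"`, which is the format fleet.conf expects.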
cnkuyan commented Jan 13, 2018

@crawford Sure enough, with one extra line in the [Service] section I was able to get /etc/fleet/fleet.conf populated with the required environment variables using the sample you provided.
Spot on, for a blind try. Thank you.

This is the line; it needs to run before the rest of the echo commands, otherwise the script fails because the /etc/fleet directory doesn't exist:

ExecStartPre=/usr/bin/bash -c "mkdir -p /etc/fleet"
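Putting the pieces together, the corrected drop-in could be shipped from the same Container Linux Config via a dropins block. A sketch only: the drop-in file name 10-metadata.conf is illustrative, and only the first two echo lines from the sample are reproduced here.

```yaml
systemd:
  units:
    - name: fleet.service
      enabled: true
      dropins:
        - name: 10-metadata.conf
          contents: |
            [Unit]
            After=coreos-metadata.service
            Requires=coreos-metadata.service
            [Service]
            EnvironmentFile=/run/metadata/coreos
            ExecStartPre=/usr/bin/bash -c "mkdir -p /etc/fleet"
            ExecStartPre=/usr/bin/bash -c "echo 'FLEET_PUBLIC_IP=\"${COREOS_DIGITALOCEAN_IPV4_PUBLIC_0}\"' > /etc/fleet/fleet.conf"
            ExecStartPre=/usr/bin/bash -c "echo 'FLEET_ETCD_SERVERS=\"https://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2379\"' >> /etc/fleet/fleet.conf"
```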

@cnkuyan cnkuyan closed this Jan 16, 2018
