
Ubuntu/xenial #284

Merged: 69 commits merged into canonical:ubuntu/xenial on Mar 27, 2020

Conversation

@blackboxsw (Collaborator) commented Mar 26, 2020

Performed new-upstream-snapshot -v --skip-release (since we aren't pushing this for release yet).

This allows me to refresh failing quilt patches in debian/patches.

Since I had to refresh both the ec2 patch and the stable*requirements patch, I added a common comment prefix SRU_FIX so we can easily see resolved differences in behavior in the cloud-init codebase via a grep.

Followed the quilt refresh steps at https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/ubuntu_release_process.md#when-the-daily-recipe-build-fails and manually changed the patch comment.

OddBloke and others added 30 commits January 29, 2020 14:55
* cloudinit: replace "import mock" with "from unittest import mock"

* test-requirements.txt: drop mock

Co-authored-by: Chad Smith <chad.smith@canonical.com>
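For illustration, a minimal sketch of the standard-library import style this switches to (the toy function and test below are hypothetical, not part of this changeset):

```
# Hypothetical example of the standard-library import style adopted here.
from unittest import mock


def read_config(path, reader=open):
    """Toy function used only to illustrate patching."""
    with reader(path) as handle:
        return handle.read()


def test_read_config_uses_reader():
    # mock.mock_open ships with unittest.mock, so no external "mock"
    # package (or test-requirements entry) is needed.
    opener = mock.mock_open(read_data="key: value")
    assert read_config("/tmp/example.cfg", reader=opener) == "key: value"
    opener.assert_called_once_with("/tmp/example.cfg")
```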
The proto is 'none', not 'static'; 'static' was mistakenly implemented in
initramfs-tools/cloud-init in the past, but was never emitted in the
klibc ipconfig state file output.

LP: #1861412
…onical#162)

- Introduce the "flavor" configuration option for the sysconfig renderer
  this is necessary to account for differences in the handling of the
  BOOTPROTO setting between distributions (lp#1858808)
  + Thanks to Petr Pavlu for the idea
- Network config clean up for sysconfig renderer
  + The introduction of the "flavor" renderer configuration allows us
    to only write values that are pertinent for the given distro
- Set the DHCPv6 client mode on SUSE (lp#1800854)

Co-authored-by: Chad Smith <chad.smith@canonical.com>

LP: #1800854
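As a rough, hypothetical sketch of the idea behind the "flavor" option (names and per-flavor values below are assumptions, not cloud-init's actual sysconfig renderer internals):

```
# Hypothetical illustration of per-flavor BOOTPROTO handling; the real
# sysconfig renderer in cloud-init is more involved.
FLAVOR_SETTINGS = {
    # Assumed values for illustration only.
    'rhel': {'dhcp_bootproto': 'dhcp', 'write_ipv6_bootproto': False},
    'suse': {'dhcp_bootproto': 'dhcp4', 'write_ipv6_bootproto': True},
}


def bootproto_for(flavor, subnet_type):
    """Pick a BOOTPROTO value appropriate for the given distro flavor."""
    settings = FLAVOR_SETTINGS[flavor]
    if subnet_type == 'dhcp4':
        return settings['dhcp_bootproto']
    if subnet_type == 'dhcp6' and settings['write_ipv6_bootproto']:
        return 'dhcp6'
    return 'none'


print(bootproto_for('suse', 'dhcp4'))  # -> dhcp4
```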
Fix typo in doc/examples/cloud-config-disk-setup.txt: Cavaut => Caveat
The azurecloud platform did not always start instances
during collect runs. This was a result of two issues. First,
the image class _instance method did not invoke the start()
method, which then allowed the collect stage to attempt to run
scripts without an endpoint. Second, azurecloud used the
image_id as both an instance handle (which is typically
vmName in the Azure API) and an image handle (for image
capture). Resolve this by adding a .vm_name property to
AzureCloudInstance and referencing this property in
AzureCloudImage.

Also in this branch

- Fix error encoding user-data when value is None
- Add additional logging in AzureCloud platform
- Update logging format to print pathname,funcName and line number
  This greatly eases debugging.

LP: #1861921
As noticed by Seth Arnold, non-deterministic SystemRandom should be
used when creating security-sensitive random strings.
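For illustration, a minimal sketch of generating such a string with the standard library (not the exact helper cloud-init uses):

```
# Sketch: use random.SystemRandom (backed by os.urandom) instead of the
# default, deterministic Mersenne Twister generator for secrets.
import random
import string


def rand_str(length=8, select_from=string.ascii_letters + string.digits):
    """Return a random string suitable for security-sensitive use."""
    rng = random.SystemRandom()
    return "".join(rng.choice(select_from) for _ in range(length))


print(rand_str(20))
```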
)

Instead of logging the token values used, log the headers and replace the actual
values with the string 'REDACTED'. This allows users to examine cloud-init.log
and see that the IMDSv2 token header is being used, but avoids leaving the value
itself in the log file.

LP: #1863943
* tools/read-version: don't enforce version parity in release branch CI

We have a bootstrapping problem with new releases, currently.  To take
the example of 20.1: the branch that bumps the version fails CI because
there is no 20.1 tag for it to use in read-version.  Previously, this
was solved by creating a tag and pushing it to the cloud-init repo
before the commit landed.  However, we have GitHub branch protection
enabled, so the commit that needs to be tagged is not created until the
pull request lands in master.

This works around this problem by introducing a very specific check: if
we are performing CI for an upstream release branch, we skip the
read-version checking that we know will fail.

* tools/make-tarball: add --version parameter

When using make-tarball as part of a CI build of a new upstream release,
the version it determines is inconsistent with the version that other
tools determine.  Instead of encoding the logic here (as well as in
Python elsewhere), we add a parameter to allow us to set it from outside
the script.

* packages/bddeb: handle missing version_long in new version CI

If we're running in CI for a new upstream release, we have to use
`version` instead of `version_long` (because we don't yet have the tag
required to generate `version_long`).
Bump the version in cloudinit/version.py to 20.1 and
update ChangeLog.

LP: #1863954
)

* Add physical network type 'cascading' to openstack helpers
* Add a new helpers test checking that all openstack KNOWN_PHYSICAL_TYPES get type 'physical'.
In cloud-init 19.2, we added the ability for cloud-init to detect
OpenStack platforms by checking for "OpenStack Compute" or "OpenStack
Nova" in the chassis asset tag.  However, this was never reflected
in the documentation.  This patch updates the datasources documentation
for OpenStack to reflect the possibility of using the chassis asset tag.

LP: #1669875
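As a rough illustration of that detection, reading the DMI chassis asset tag directly from sysfs (cloud-init itself goes through its own DMI helpers; the fallback behaviour below is an assumption):

```
# Sketch: detect an OpenStack platform from the chassis asset tag.
CHASSIS_ASSET_TAG_PATH = "/sys/class/dmi/id/chassis_asset_tag"
OPENSTACK_TAGS = ("OpenStack Compute", "OpenStack Nova")


def looks_like_openstack():
    try:
        with open(CHASSIS_ASSET_TAG_PATH) as f:
            tag = f.read().strip()
    except OSError:
        # No DMI data available (e.g. non-x86); assume not OpenStack here.
        return False
    return tag in OPENSTACK_TAGS


print(looks_like_openstack())
```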
canonical#230)

Our header redact logic was redacting both logged request headers and
the actual source request. This results in DataSourceEc2 sending the
invalid header "X-aws-ec2-metadata-token-ttl-seconds: REDACTED" which
gets an HTTP status response of 400.

Cloud-init retries this failed token request for 2 minutes before
falling back to IMDSv1.

LP: #1865882
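A minimal sketch of the corrected pattern, redacting only a copy of the headers used for logging (function and constant names below are illustrative, not the actual cloud-init code):

```
# Sketch: redact sensitive headers only in the copy that gets logged,
# leaving the headers sent on the wire untouched.
import copy
import logging

LOG = logging.getLogger(__name__)

# Header names treated as sensitive in this example.
REDACT_HEADERS = {
    "X-aws-ec2-metadata-token",
    "X-aws-ec2-metadata-token-ttl-seconds",
}


def log_request(url, headers):
    safe_headers = copy.deepcopy(headers)
    for name in REDACT_HEADERS & set(safe_headers):
        safe_headers[name] = "REDACTED"
    LOG.debug("Requesting %s with headers %s", url, safe_headers)
    # The real request must use the original, unredacted headers.
    return headers
```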
…ical#232)

Allow disabling cloud-init's network configuration via a plain-text kernel cmdline

Cloud-init docs indicate that users can disable cloud-init networking via the kernel
command line parameter 'network-config=<YAML>'. This does not work unless
the <YAML> payload is base64 encoded. Document the base64 encoding
requirement and add a plain-text value for disabling cloud-init network config:

    network-config=disabled

Also:
 - Log an error and ignore any plain-text network-config payloads that are
   not specifically 'network-config=disabled'.
 - Log a warning if the network-config kernel param is invalid YAML, but do not
   raise an exception, allowing boot to continue and use fallback networking.

LP: #1862702
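A rough sketch of that parsing logic, with hypothetical helper names rather than the actual cloud-init code paths:

```
# Sketch: interpret the network-config= kernel command line parameter.
# 'disabled' is accepted as plain text; anything else is expected to be a
# base64-encoded YAML payload.
import base64
import binascii
import logging

import yaml  # PyYAML

LOG = logging.getLogger(__name__)


def network_config_from_cmdline(value):
    if value == "disabled":
        # Representation of "networking disabled" is an assumption here.
        return {"config": "disabled"}
    try:
        raw = base64.b64decode(value)
        return yaml.safe_load(raw)
    except (binascii.Error, yaml.YAMLError):
        # Invalid payloads must not break boot; fall back to default networking.
        LOG.warning("Invalid network-config kernel parameter; ignoring it")
        return None
```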
When cloud-init persisted instance metadata to instance-data.json
it failed to redact the sensitive value. Currently, the only sensitive
key 'security-credentials' is omitted as cloud-init does not fetch
this value from IMDS.

Fix this by properly redacting the content from the public
instance-data.json file while retaining the value in the root-only
instance-data-sensitive.json file.

LP: #1865947
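For illustration, a small sketch of splitting redacted and unredacted copies (paths and the 'security-credentials' key follow the text above; the helper and placeholder string are assumptions):

```
# Sketch: write a world-readable instance-data.json with sensitive values
# redacted, and keep the full data in a root-only sensitive file.
import json
import os

SENSITIVE_KEYS = ("security-credentials",)
REDACTED = "redacted for non-root user"  # placeholder text is an assumption


def redact(obj):
    if isinstance(obj, dict):
        return {
            k: REDACTED if k in SENSITIVE_KEYS else redact(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(item) for item in obj]
    return obj


def write_instance_data(data, run_dir="/run/cloud-init"):
    sensitive = os.path.join(run_dir, "instance-data-sensitive.json")
    public = os.path.join(run_dir, "instance-data.json")
    with open(sensitive, "w") as f:   # root-only (chmod handled elsewhere)
        json.dump(data, f, indent=1)
    with open(public, "w") as f:      # world-readable, redacted copy
        json.dump(redact(data), f, indent=1)
```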
The EC2 Data Source needs to handle 3 states of the Instance
Metadata Service configured for a given instance:

1. HttpTokens : optional & HttpEndpoint : enabled
   Either IMDSv2 or IMDSv1 can be used.
2. HttpTokens : required & HttpEndpoint : enabled
   Calls to IMDS without a valid token (IMDSv1 or IMDSv2 with expired token)
   will return a 401 error.
3. HttpEndpoint : disabled
   The IMDS http endpoint will return a 403 error.

Previous work to support IMDSv2 in cloud-init handled case 1 and case 2.

This commit handles case 3 by bypassing the retry block when IMDS returns HTTP
status code >= 400 on official AWS cloud platform.

This shaves 2 minutes off rebooting an instance that has its IMDS http token endpoint
disabled, but creates some inconsistencies. An instance that doesn't set
"manual_cache_clean" to "True" will have its /var/lib/cloud/instance symlink
removed altogether after it has failed to find a datasource.
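A condensed sketch of the three cases, using the requests library directly rather than cloud-init's own URL helpers (control flow here is illustrative):

```
# Sketch of the IMDS states described above, using plain requests calls.
# The URL and TTL header are the well-known AWS IMDSv2 token endpoint values.
import requests

TOKEN_URL = "http://169.254.169.254/latest/api/token"
TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds"


def fetch_imds_token():
    """Return an IMDSv2 token, or None if IMDS is unusable/disabled."""
    try:
        resp = requests.put(
            TOKEN_URL, headers={TOKEN_TTL_HEADER: "21600"}, timeout=1)
    except requests.RequestException:
        return None
    if resp.status_code >= 400:
        # Case 3 (endpoint disabled -> 403) or other hard failures:
        # don't keep retrying for minutes, just give up on this attempt.
        return None
    # Cases 1 and 2: the token can be used for IMDSv2 requests.
    return resp.text
```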
…anonical#214)

Cloud-config userdata provided as jinja templates are now distro,
platform and merged cloud config aware. The cloud-init query command
will also surface this config data.

Now users can selectively render portions of cloud-config based on:
* distro name, version, release
* python version
* merged cloud config values
* machine platform
* kernel

To support template handling of this config, add new top-level
keys to /run/cloud-init/instance-data.json.

The new 'merged_cfg' key represents merged cloud config from
/etc/cloud/cloud.cfg and /etc/cloud/cloud.cfg.d/*.

The new 'sys_info' key captures distro and platform
info from cloudinit.util.system_info.

Cloud config userdata templates can render conditional content
based on these additional environmental checks such as the following
simple example:

```
  ## template: jinja
  #cloud-config
  runcmd:
  {% if distro == 'opensuse' %}
    - sh /custom-setup-sles
  {% elif distro == 'centos' %}
    - sh /custom-setup-centos
  {% elif distro == 'debian' %}
    - sh /custom-setup-debian
  {% endif %}
```

To see all values: sudo cloud-init query --all

Any keys added to the standardized v1 keys are guaranteed to not
change or be dropped in future releases of cloud-init. 'v1' keys will be
retained for backward compatibility even if a new standardized 'v2' set
of keys is introduced.

The following standardized v1 keys are added:
* distro, distro_release, distro_version, kernel_version, machine,
python_version, system_platform, variant

LP: #1865969
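To illustrate what gets surfaced, a small sketch that reads the runtime JSON directly, showing roughly the same information as `cloud-init query` (key availability may vary by release):

```
# Sketch: inspect the standardized v1 keys from the runtime
# instance-data file written by cloud-init.
import json

INSTANCE_DATA = "/run/cloud-init/instance-data.json"

with open(INSTANCE_DATA) as f:
    data = json.load(f)

v1 = data.get("v1", {})
print("distro:", v1.get("distro"), v1.get("distro_version"))
print("kernel:", v1.get("kernel_version"))
print("platform:", v1.get("system_platform"))

# merged_cfg is treated as sensitive; whether it appears in this public
# file or only in the root-only sensitive file is release-dependent.
print("merged_cfg present:", "merged_cfg" in data)
```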
As the nose docs[0] themselves note, it has been in maintenance mode for the past several years. pytest is an actively developed, featureful and popular alternative that the nose docs themselves recommend. See [1] for more details about the thinking here.

(This PR also removes stale tox definitions, instead of modifying them.)

[0] https://nose.readthedocs.io/en/latest/
[1] https://lists.launchpad.net/cloud-init/msg00245.html
pyflakes versions older than 2.1.0 are incompatible with Python 3.8
(which is the Python version in the current Ubuntu development release).
See PyCQA/pyflakes#367 for details.

2.1.1 is the latest version ATM, so bump to that.
…onical#164)

Github api doesn't allow read-write access to labels or comments when
running from a pull_request fork during CI.

This restriction results in an API error
    message: "Resource not accessible by integration"

If we want to run this action per pull_request, we need to convert the
action to fail the PR status check and emit the required steps to sign the
CLA to the console on the PR's failed status tab.
Now that we can distinguish between CI xenial dependencies and
needed-to-run-on-dev-machine xenial dependencies, we can return to
testing with the correct jsonpatch version.
Instead of using the username that triggered the action (which, in the
case of a committer merging master into a PR branch will be the
committer), always use the username of the submitter of the pull
request.
raharper and others added 13 commits March 24, 2020 11:42
* tools: use python3

Switch tools/ to use python3 instead of python.  At minimum this
fixes building deb on python3 only releases like Focal. Applied
via shell commands:

 $ grep 'usr/bin/.*python' tools/* 2>/dev/null | \
     grep -v python3 | awk -F':' '{print $1}' | \
     xargs -i sed -i -e '0,/python/s/python/python3/' {}

* Use /usr/bin/env python3 to be virtualenv friendly
Currently, `cc_package_update_upgrade_install.py` fails because
`package_command()` does not know how to do an update on FreeBSD.

```
2020-03-23 20:01:53,995 - util.py[DEBUG]: Package update failed
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/cloud_init-20.1-py3.7.egg/cloudinit/config/cc_package_update_upgrade_install.py", line 85, in handle
    cloud.distro.update_package_sources()
  File "/usr/local/lib/python3.7/site-packages/cloud_init-20.1-py3.7.egg/cloudinit/distros/freebsd.py", line 158, in update_package_sources
    ["update"], freq=PER_INSTANCE)
  File "/usr/local/lib/python3.7/site-packages/cloud_init-20.1-py3.7.egg/cloudinit/helpers.py", line 185, in run
    results = functor(*args)
  File "/usr/local/lib/python3.7/site-packages/cloud_init-20.1-py3.7.egg/cloudinit/distros/bsd.py", line 102, in package_command
    cmd.extend(pkglist)
UnboundLocalError: local variable 'cmd' referenced before assignment
```

This commit defines a new `pkg_cmd_update_prefix` key. If it's empty, we
don't do any update, otherwise we use the value to update the package
manager.
Co-authored-by: Joshua Powers <josh.powers@canonical.com>
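Roughly, the shape of the fix, sketched with a hypothetical class (the `pkg_cmd_update_prefix` name comes from the text above; other attribute values are assumed examples):

```
# Sketch: only run a package-index update when the distro defines an
# update prefix, and always assign `cmd` before extending it.
import subprocess


class BSDDistroSketch:
    # Assumed example values for FreeBSD's pkg(8).
    pkg_cmd_install_prefix = ["pkg", "install"]
    pkg_cmd_update_prefix = ["pkg", "update"]  # empty/None => no update step

    def package_command(self, command, pkglist=None):
        if command == "update":
            if not self.pkg_cmd_update_prefix:
                return  # distro has no separate "update the index" step
            cmd = list(self.pkg_cmd_update_prefix)
        else:
            cmd = list(self.pkg_cmd_install_prefix)
        if pkglist:
            cmd.extend(pkglist)
        subprocess.run(cmd, check=True)
```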
)

Add support for additional escaping of formatting characters
in the YAML content between the 'cc:' and 'end_cc' tokens. On
s390x legacy terminals the use of square brackets [] is not
available, limiting the ability to indicate lists of values in
YAML content. Using #5B and #5D for [ and ] respectively enables
s390x users to pass list YAML content into cloud-init via the
command line interface.
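A tiny sketch of that unescaping step as a plain string replacement (the surrounding cc:/end_cc extraction is omitted):

```
# Sketch: map the #5B/#5D escape sequences back to square brackets so
# list syntax survives terminals that cannot type [ or ].
def unescape_brackets(yaml_content):
    return yaml_content.replace("#5B", "[").replace("#5D", "]")


# e.g. kernel cmdline fragment: cc: ssh_import_id: #5Buser1, user2#5D end_cc
print(unescape_brackets("ssh_import_id: #5Buser1, user2#5D"))
# -> ssh_import_id: [user1, user2]
```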
Avoid chpasswd on all the BSD variants.
Map lp user kgarloff to github garloff
- tested on OpenBSD 6.6
- tested on OpenStack without config drive, and NoCloud with ISO config
  drive
@OddBloke self-assigned this Mar 26, 2020
@raharper (Collaborator):

> Since I had to refresh both the ec2 patch and the stable*requirements patch, I added a common comment prefix SRU_FIX so we can easily see resolved differences in behavior in the cloud-init codebase via a grep.

Where is this in the commit history? And what does it look like? Is this needed? Or can we just document in the tableflip docs that when we're getting quilt apply failures on daily recipe builds, we do:

new-upstream-snapshot --skip-release

etc.?

@OddBloke (Collaborator) left a comment

Followed the steps locally and got the same result. Thanks!

add SAP Converged Cloud as cloud provider
@blackboxsw (Collaborator, Author) commented Mar 26, 2020

> Since I had to refresh both the ec2 patch and the stable*requirements patch, I added a common comment prefix SRU_FIX so we can easily see resolved differences in behavior in the cloud-init codebase via a grep.
>
> Where is this in the commit history? And what does it look like? Is this needed? Or can we just document in the tableflip docs that when we're getting quilt apply failures on daily recipe builds, we do:
>
> new-upstream-snapshot --skip-release

We do have that documented already in the uss-tableflip docs per your previous additions there (thanks): https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/ubuntu_release_process.md#when-the-daily-recipe-build-fails

Since I had to manually fix the applied patches for both the ec2-classic patch and the stable-release-no-jsonschema patch using quilt, it is only captured in commitish d4cbc7e as "refresh patches against origin/master...."

That manual refresh gave me an opportunity to standardize on the prefix we use when commenting on SRU_BLOCKER type patches.

@raharper (Collaborator):

> Since I had to refresh both the ec2 patch and the stable*requirements patch, I added a common comment prefix SRU_FIX so we can easily see resolved differences in behavior in the cloud-init codebase via a grep.
>
> Where is this in the commit history? And what does it look like? Is this needed? Or can we just document in the tableflip docs that when we're getting quilt apply failures on daily recipe builds, we do:
> new-upstream-snapshot --skip-release
>
> We do have that documented already in the uss-tableflip docs per your previous additions there (thanks): https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/ubuntu_release_process.md#when-the-daily-recipe-build-fails
>
> Since I had to manually fix the applied patches for both the ec2-classic patch and the stable-release-no-jsonschema patch using quilt, it is only captured in commitish d4cbc7e as "refresh patches against origin/master...."

I see. I don't really want any manual steps added that aren't documented in the process you followed. Let's not include that string.

refresh patches against origin/master commit 7f9f33d:
  debian/patches/ec2-classic-dont-reapply-networking.patch
  debian/patches/openstack-no-network-config.patch
  debian/patches/stable-release-no-jsonschema-dep.patch
@blackboxsw (Collaborator, Author):

> Since I had to refresh both the ec2 patch and the stable*requirements patch, I added a common comment prefix SRU_FIX so we can easily see resolved differences in behavior in the cloud-init codebase via a grep.
>
> Where is this in the commit history? And what does it look like? Is this needed? Or can we just document in the tableflip docs that when we're getting quilt apply failures on daily recipe builds, we do:
> new-upstream-snapshot --skip-release
>
> We do have that documented already in the uss-tableflip docs per your previous additions there (thanks): https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/ubuntu_release_process.md#when-the-daily-recipe-build-fails
> Since I had to manually fix the applied patches for both the ec2-classic patch and the stable-release-no-jsonschema patch using quilt, it is only captured in commitish d4cbc7e as "refresh patches against origin/master...."
>
> I see. I don't really want any manual steps added that aren't documented in the process you followed. Let's not include that string.

Done +1. Limited need for that level of prescribed commenting.

@blackboxsw merged commit e9a9a0a into canonical:ubuntu/xenial on Mar 27, 2020