[rhcos-4.9-new] Backport assorted commits for pipeline migration #3186
Conversation
It doesn't work properly against these rhcos- branches anyway. Let's just limit the testing to building the container and running check and unittest tests.
We've already dropped these in the latest COSA code. Let's drop them here because we're seeing build failures because go is trying to pull from different source code repos and it's failing. Rather than wait for the intermittent problem to resolve itself OR otherwise have to workaround this, let's just drop the code since it's not needed.
```
src/cmd-buildextend-live:424:1: W293 blank line contains whitespace
```
The command previously wouldn't allow us to get/set `fedora-coreos.parent-commit` and `fedora-coreos.parent-version` because it would try to traverse the `fedora-coreos.` path. Add an exception here similar to `coreos-assembler.`. Now we can do something like this in the pipeline:

```
cosa meta \
    --set fedora-coreos.parent-commit=${parent_commit} \
    --set fedora-coreos.parent-version=${parent_version}
```

(cherry picked from commit 433a68d)
This will allow us to specify where the AWS_CONFIG_FILE is from the CLI without having to set it in the environment. (cherry picked from commit db00529)
Added `--endpoint-url` as an `s3` sub-command flag under the `buildupload` command. This allows `buildupload` to use non-AWS storage servers like Minio. Useful for local development or environments where AWS S3 doesn't fit. (cherry picked from commit 7fd4fc6)
Create `boto3.client` in the main function and pass it down instead of creating a new `client` for every S3 operation. (cherry picked from commit d63bcca)
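The pattern this commit applies — construct the client once in `main()` and thread it through — can be sketched roughly as follows. This is an illustration, not the actual cmd-buildupload code; `CountingClient` stands in for `boto3.client('s3')` so the sketch stays self-contained:

```python
class CountingClient:
    """Stand-in for a boto3 S3 client that records how often it is built."""
    instances = 0

    def __init__(self):
        CountingClient.instances += 1

    def upload_file(self, path, bucket, key):
        return f"s3://{bucket}/{key}"


def upload_build(client, bucket, paths):
    # Reuse the one client for every S3 operation instead of
    # constructing a fresh one per call.
    return [client.upload_file(p, bucket, p) for p in paths]


def main():
    client = CountingClient()  # created exactly once, at the top
    upload_build(client, "builds", ["meta.json", "commitmeta.json"])
    upload_build(client, "builds", ["ostree-commit"])
    return CountingClient.instances
```

Besides being cheaper, a single shared client also keeps connection pooling and credential resolution in one place.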
Right now, when we use `--artifact` to only upload some artifacts, we still reference them in the `meta.json`. This is bad though, because consumers will assume that they're present. Just scrub them out of the custom `meta.json` we upload if they're not present in the bucket. (cherry picked from commit b031235)
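The scrubbing step might look roughly like this; `scrub_missing_artifacts`, the uploaded-key set, and the meta layout here are illustrative assumptions, not the exact cmd-buildupload code:

```python
def scrub_missing_artifacts(meta, uploaded_keys):
    """Drop image entries from a meta.json dict whose files were not
    uploaded, so consumers never see references to artifacts that are
    absent from the bucket."""
    images = meta.get('images', {})
    meta['images'] = {
        fmt: img for fmt, img in images.items()
        if img['path'] in uploaded_keys
    }
    return meta


meta = {'images': {
    'qemu': {'path': 'fcos-qemu.qcow2.xz'},
    'metal': {'path': 'fcos-metal.raw.xz'},
}}
# Only the qemu artifact was uploaded; metal gets scrubbed.
scrubbed = scrub_missing_artifacts(meta, {'fcos-qemu.qcow2.xz'})
```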
In the FCOS pipeline, we sometimes run `buildupload` because e.g. `meta.json` for a specific architecture changed. However, right now this also implies re-uploading the `meta.json` for every architecture, which is unnecessary and possibly opens the door to races. Add an `--arch` flag to fix this. (cherry picked from commit 5acca78)
This will allow us to specify the AWS_CONFIG_FILE from the command line without having to set it in the environment. (cherry picked from commit 9214b9b)
buildprep will continue to work but will print a warning and wait some time before continuing. (cherry picked from commit 0d90ad4)
This will allow us to run something like:

```
cosa buildfetch --stream=rawhide
```

And it will get the latest build metadata for us without requiring any S3 credentials. (cherry picked from commit fcfd0d5)
Allows anyone to grab any named artifact for use in local testing or development. For example:

```
$ cosa buildfetch --arch=aarch64 --stream=rawhide --build=36.20211001.91.1 --artifact=qemu
Fetching: https://builds.coreos.fedoraproject.org/prod/streams/rawhide/builds/builds.json
Fetching: https://builds.coreos.fedoraproject.org/prod/streams/rawhide/builds/36.20211001.91.1/aarch64/meta.json
Fetching: https://builds.coreos.fedoraproject.org/prod/streams/rawhide/builds/36.20211001.91.1/aarch64/ostree-commit-object
Fetching: https://builds.coreos.fedoraproject.org/prod/streams/rawhide/builds/36.20211001.91.1/aarch64/fedora-coreos-36.20211001.91.1-qemu.aarch64.qcow2.xz
```

(cherry picked from commit 6e42b11)
One easy way to reset is to just delete builds/builds.json and then re-run `cosa buildfetch`. In our logic we shouldn't compare the two files to see if we have local builds if the local builds.json doesn't exist. (cherry picked from commit 5e9befb)
In my opinion the `--refresh` semantics were a bit awkward. Typically what you want to do is:

1. check and error if local changes would be overwritten
2. allow one to `--force` the overwrite

In that vein, let's remove `--refresh`, add `--force`, and fix up the surrounding logic to behave as expected. (cherry picked from commit ff48f1d)
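The desired `--force` semantics boil down to a simple guard; this is a minimal sketch with hypothetical names, not the actual buildfetch code:

```python
def fetch_builds(local_changes, force=False):
    """Refuse to overwrite local changes unless --force was passed."""
    if local_changes and not force:
        raise RuntimeError(
            "local changes would be overwritten; rerun with --force")
    # ... perform the actual fetch here ...
    return "fetched"
```

Erroring by default and making the overwrite opt-in matches the usual CLI convention for destructive operations.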
It's useful for a few reasons. One is that if you want to start doing local builds again after a `buildfetch`, the `commitmeta.json` is consulted when the tooling tries to prune things. (cherry picked from commit 310ef40)
- Get all existing artifacts for the given build.
- In particular cases (brew improvements) we do want to download all available artifacts.

Signed-off-by: Renata Ravanelli <rravanel@redhat.com> (cherry picked from commit 105aa00)
This will allow us to specify the AWS_CONFIG_FILE from the command line without having to set it in the environment. This required reworking the cosalib/s3 library into an S3() class so that we could call boto3.client('s3') after setting the AWS_CONFIG_FILE environment variable in cmd-buildfetch. Otherwise it would get called as soon as the library was imported and setting the environment variable would have no effect (too late). (cherry picked from commit 9d55a20)
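The import-time ordering problem this commit fixes can be illustrated with a lazily constructed client. This is a sketch of the idea, not the real `cosalib/s3` class; in the real code the deferred call is `boto3.client('s3')`:

```python
import os


class S3:
    """Defer client construction so an AWS_CONFIG_FILE set by the caller
    (e.g. cmd-buildfetch) is honored.  A module-level client created at
    import time would read the environment before the CLI flag could
    set the variable."""

    def __init__(self):
        self._client = None

    @property
    def client(self):
        if self._client is None:
            # Stand-in for boto3.client('s3'): record which config file
            # was in effect at first use, i.e. after the flag was handled.
            self._client = {'config': os.environ.get('AWS_CONFIG_FILE')}
        return self._client


os.environ.pop('AWS_CONFIG_FILE', None)
s3 = S3()                                        # construction: no client yet
os.environ['AWS_CONFIG_FILE'] = '/tmp/aws.cfg'   # set from the CLI flag
```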
The user could have passed in multiple arches so we should continue the loop rather than returning. (cherry picked from commit 1ac86fb)
There will be a ref per arch so we need to do it on each loop iteration. (cherry picked from commit 1c970d6)
This instructs buildfetch to traverse the builds.json history to find the most recent build for the given architecture. (cherry picked from commit 6864566)
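The history traversal amounts to a newest-first scan for the first build that carries the requested architecture; a simplified sketch (the real builds.json schema has more fields):

```python
def latest_build_for_arch(builds, arch):
    """Walk builds.json history (newest first) and return the id of the
    first build that includes the requested architecture, or None."""
    for build in builds:
        if arch in build.get('arches', []):
            return build['id']
    return None


# Newest build first, as in builds.json.
builds_json = [
    {'id': '36.20220720.20.5', 'arches': ['x86_64']},
    {'id': '36.20220719.20.4', 'arches': ['x86_64', 'aarch64']},
]
```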
I accidentally typed `--artifact qemu --artifact qemu` and buildfetch downloaded it twice. Let's dedupe here. Also, let's apply "all" even if the user passed more than one argument to `--artifact`. (cherry picked from commit 8b5bec7)
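The normalization described above — order-preserving dedupe, with "all" winning even when mixed with other values — can be sketched like this (illustrative, not the exact buildfetch code):

```python
def normalize_artifacts(requested):
    """Dedupe repeated --artifact values, preserving order; 'all'
    anywhere in the list overrides everything else."""
    if 'all' in requested:
        return ['all']
    # dict.fromkeys dedupes while keeping first-seen order.
    return list(dict.fromkeys(requested))
```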
This will allow us to specify the AWS_CONFIG_FILE from the command line without having to set it in the environment. (cherry picked from commit 2b0b36d)
Right now cmd-build will just pick up whatever the latest symlink points to and try to get previous build information from that, but what if the absolute latest build doesn't have a build for the architecture we are targeting? Currently cmd-build will fail with:

```
/usr/lib/coreos-assembler/cmd-build: line 163: /srv/builds/36.20220720.20.5/aarch64/meta.json: No such file or directory
error: failed to execute cmd-build: exit status 1
```

What we really need is to get the latest build for the given architecture and use that in cmd-build instead. (cherry picked from commit 2ca610c)
If we are starting out a brand new stream and the x86_64 build has completed, but there was no build yet for $arch, then the detected previous buildid:

```
previous_buildid = parent_build or self.get_latest()
```

will just be the build ID of the x86_64 build from `get_latest()`, but this code tries to open a meta.json for it from $arch, which won't exist. Here's the error we see:

```
Traceback (most recent call last):
  File "<string>", line 5, in <module>
  File "/usr/lib/coreos-assembler/cosalib/builds.py", line 134, in init_build_meta_json
    with open(metapath) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/srv/builds/36.20220714.dev.0/aarch64/meta.json'
error: failed to execute cmd-build: exit status 1
```

This should fix that error by verifying the detected buildid actually has a build for that architecture before continuing. (cherry picked from commit 786e87b)
Make the `tarball` argument last. Prep for making it optional. (cherry picked from commit d8a76e9)
For the upcoming yumrepos, we don't want to publish a tarball of it. We still want the git rev information though. Make the tarball argument optional. (cherry picked from commit 606e61b)
We go to a lot of effort to create various caches of builds, and this is very useful for local incremental development to avoid re-downloading RPMs and rewriting data. It is useless for CI builds that discard everything when they're done. Add a `cosa init --transient` flag that for now just disables `fsync()` on `tmp/repo`. There's more we can do here in the future, for example we need to propagate this too into `cache/repo-build` and `cache/pkgcache-repo`, and actually in general we don't need the pkgcache repo at all here. (cherry picked from commit 5569136)
To make it clearer that things worked. (cherry picked from commit 7e7e5b4)
Add a new `--yumrepos` option to `cosa init` to allow importing yum repo files and content sets located outside of the source git. Motivation: RHCOS needs yum repo definitions hidden behind a firewall. Let's add a clean way to import them. Co-authored-by: Jonathan Lebon <jonathan@jlebon.com> (cherry picked from commit ddcba9f) dustymabe: This cherry pick needed manual merge conflict resolution. (cherry picked from commit d475992) dustymabe: I needed to drop the change from the patch that updated the contentsets section in cmdlib.sh because that section of code doesn't exist in 4.9.
We need to support arch peeling even for multi-arch source containers pushed using v2s2. Thankfully, the schema is identical to the OCI schema for our purposes which makes supporting it painless. Related: coreos#2726 (cherry picked from commit cd32708) (cherry picked from commit 459831c) (cherry picked from commit 4a2512c)
Check whether the digests are already present in the remote manifest, and whether a new manifest upload should be forced. Fixes coreos#3150 Signed-off-by: Renata Ravanelli <rravanel@redhat.com> (cherry picked from commit bfbbe41) jlebon: Had to modify new code to use `runcmd` from `cosalib.utils`. (cherry picked from commit 9d3bdd9) (cherry picked from commit 3945a7c)
- If no metajsonname exists in the meta.json then we get a KeyError, so we need to fix that by using `.get()`.
- If no metajsonname entry exists then we need to upload. Right now this won't set upload = True and we end up not uploading anything.

(cherry picked from commit 9e68cf2) (cherry picked from commit 3cfe5f0) (cherry picked from commit 71bfd52)
Copy repo config files from config dir & yumrepos git repo. Fixes: coreos#3074 Fixes: coreos#3036 Fixes: coreos#3045 (cherry picked from commit 5c708b4) (cherry picked from commit a82d230) (cherry picked from commit dd45d41)
Ensure that buildah is called for kubevirt with the same base arguments like for the oscontainer. Signed-off-by: Roman Mohr <rmohr@redhat.com> (cherry picked from commit 6b50db1) jlebon: Only cherry-picked the `buildah.py` bit from the commit. All other bits are for files that do not yet exist or were deleted by a previous cherry-pick. (cherry picked from commit 21af182)
There are three cases where cmd-compress won't compress an artifact: 1. it's in imgs_to_skip, 2. it's precompressed by qemuvariants.py, 3. it was already compressed by an earlier cmd-compress run. Case 2 currently covers digitalocean and gcp, since those platforms have opinions about which compression algorithm to use. Case 1 covers the ostree, live artifacts, and vmware OVA. Cases 2 and 3 are handled by the same test: cmd-compress looks for a known compression suffix. Case 2 artifacts must not be decompressed for testing etc., since the compression is part of the definition of the artifact. However, we don't currently write that down anywhere. The meta.json uncompressed-* fields would have been useful for this, but 6bca399 added those to case 2 artifacts as well. Add a skip-compression meta.json field, set it when generating all case 1 and case 2 artifacts, and use that to decide whether to compress. That lets us drop the imgs_to_skip list. In addition, detect case 3 by looking for the uncompressed-sha256 meta.json field, which seems cleaner than detecting by file extension. (cherry picked from commit 1893293) jlebon: Had to resolve conflicts in cmd-compress.
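The resulting decision logic is a pair of checks on the meta.json image entry; a simplified sketch of the idea (field names match the ones described above, but this is not the actual cmd-compress code):

```python
def should_compress(img):
    """Decide whether cmd-compress should compress a meta.json image
    entry.  skip-compression marks artifacts whose (pre-)compression is
    part of their definition (cases 1 and 2); uncompressed-sha256 marks
    artifacts an earlier cmd-compress run already handled (case 3)."""
    if img.get('skip-compression', False):
        return False
    if 'uncompressed-sha256' in img:
        return False
    return True
```

Keying off explicit metadata rather than a file-extension match is what lets the hard-coded `imgs_to_skip` list go away.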
Sometimes we have CI flakes, sometimes a test is incorrectly classified as non-exclusive. For both these cases it will be beneficial to rerun failed tests once in their own VMs. If a test fails in the first run but passes in the next one, the overall result is still a failure. coreos#2442 (cherry picked from commit 4acc1fd)
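The rerun policy — retry each failure once, but never let a flaky pass mask the original failure — can be sketched like this (hypothetical names, not the actual kola code):

```python
def run_with_rerun(tests, run_test):
    """Run each test; rerun failures once in isolation.  A test that
    fails first and passes on rerun is reported, but the overall
    result stays a failure."""
    first_failures = [t for t in tests if not run_test(t)]
    rerun_passed = [t for t in first_failures if run_test(t)]
    overall_ok = not first_failures
    return overall_ok, rerun_passed


attempts = {}

def flaky(name):
    # 'flake' fails on its first attempt only, modeling a CI flake.
    attempts[name] = attempts.get(name, 0) + 1
    return name != 'flake' or attempts[name] > 1


ok, rerun_passed = run_with_rerun(['stable', 'flake'], flaky)
```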
Reruns are not always needed (such as in local testing). Let's disable reruns by default and add an option to enable them. (cherry picked from commit 9776cfd)
The existence of /etc/security/limits.d/95-kvm-memlock.conf causes sudo not to work on ppc64le which means we will never execute our "privileged" path. Let's delete the file so sudo can work on ppc64le while we discuss in the bug our options. https://bugzilla.redhat.com/show_bug.cgi?id=2082149 (cherry picked from commit 6a23441) dustymabe: Needed to use a different path than the original commit because the file was in a different location for Fedora 34.
With the changes in the latest fedora-messaging package, pylint gets confused that `addErrback` isn't a member of the consumer returned by `twisted_consume`. But the API is still valid and the code still works in a local test, so let's just silence pylint. Closes: coreos#2649 (cherry picked from commit 1abb016)
DONE. I mostly left them separated out for review purposes so you could confirm it was the correct thing to do.
I already had a local modification I was using for testing. Posted that in coreos/fedora-coreos-config#2090
```
@@ -117,10 +114,7 @@ def compress_one_builddir(builddir):
    img = buildmeta['images'][img_format]
```
I think this is harmless, but I wonder if we still want this `img` declaration here. Before this commit `img` wasn't a loop variable and now it is.
let's just merge
Merging...