
Platform matcher: allow for running of newer Win images under Hyper-V. #7856

Closed
wants to merge 2 commits into from

Conversation

aznashwan
Contributor

Following the addition of Hyper-V isolation, it should be possible for Windows hosts to run container images which are older or newer than the host build if the Windows host is version RS5 (1809) or above.

This patch updates the Windows platform matching logic to allow for build number mismatches between Windows images and hosts iff the host version is at least RS5.

Signed-off-by: Nashwan Azhari nazhari@cloudbasesolutions.com
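As an illustration of the matching rule the patch describes, here is a minimal, hypothetical Go sketch. It is not the actual containerd matcher; all names and helpers are invented, and real OSVersion comparison also considers major/minor components.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// rs5Build is the build number of Windows RS5 (1809) / Windows Server 2019.
const rs5Build = 17763

// buildNumber extracts the build component from an OSVersion string of
// the form "major.minor.build[.revision]", returning -1 on failure.
func buildNumber(osVersion string) int {
	parts := strings.Split(osVersion, ".")
	if len(parts) < 3 {
		return -1
	}
	n, err := strconv.Atoi(parts[2])
	if err != nil {
		return -1
	}
	return n
}

// hyperVMatchAllowed reports whether an image/host build mismatch is
// acceptable under the rule described in the PR: exact matches always
// are, and any mismatch is tolerated once the host is at least RS5
// (where Hyper-V isolation is available).
func hyperVMatchAllowed(hostVersion, imageVersion string) bool {
	host, image := buildNumber(hostVersion), buildNumber(imageVersion)
	if host == -1 || image == -1 {
		return false
	}
	if host == image {
		return true
	}
	return host >= rs5Build
}

func main() {
	fmt.Println(hyperVMatchAllowed("10.0.17763.100", "10.0.20348.1"))
	fmt.Println(hyperVMatchAllowed("10.0.14393.0", "10.0.17763.0"))
}
```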

@k8s-ci-robot

Hi @aznashwan. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@aznashwan
Contributor Author

Note that the referenced compatibility matrix on Microsoft's website is a bit outdated and doesn't yet claim that running 2022 images on W10/WS2019 under Hyper-V is supported, but it does work, so it's just a matter of the docs getting updated.

This is a sister PR to moby/moby#44489, which enables the same behavior in the Docker daemon.

@dcantah
Member

dcantah commented Dec 22, 2022

As far as I remember, the matrix was set up like that for a reason; I was always hesitant to add in explicit forward compat (higher-versioned guest on lower host) as there may be weird things that fail. The very first thing I tried on an RS5 machine was whether we could launch a 19h1 UVM, and that worked, so it's been "a thing" for a while 😳.

cc @kevpar, as I vaguely remember us looking into this

@aznashwan
Contributor Author

I was always hesitant to add in explicit forward compat (higher versioned guest on lower host) as there may be weird things that fail.

Agreed forward compat might be risky, looping in @msscotb and @helsaawy for their take.

IMO backwards-compat (older guest image on newer host) is a pretty legitimate use case though, and should be considered.

@dcantah
Member

dcantah commented Dec 22, 2022

100% agree to backcompat, I'm happy to see this!

		match: false,
	},
	{
		platform: imagespec.Platform{
			Architecture: "amd64",
			OS:           "windows",
		},
		// If there is no platform.OSVersion, we assume it can run:
Member

Fair point.. are there many Windows images without this? What did docker do here?

Contributor Author

The current behavior in upstream Docker is to accept images with empty or misformatted OSVersion strings (i.e. if strings.Split(OSVersion, ".") yields fewer than 3 components).

moby/moby#44489 only adds a log message.
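For illustration, the split-based check described above can be sketched as follows. This is a hypothetical helper, not Docker's actual code; only the "fewer than 3 components" rule is taken from the discussion.

```go
package main

import (
	"fmt"
	"strings"
)

// osVersionParseable mirrors the Docker-style check described above: an
// OSVersion is only compared against the host when splitting on "."
// yields at least three components; empty or misformatted strings are
// accepted without comparison.
func osVersionParseable(osVersion string) bool {
	return len(strings.Split(osVersion, ".")) >= 3
}

func main() {
	fmt.Println(osVersionParseable(""))           // empty: accepted without comparison
	fmt.Println(osVersionParseable("10.0.17763")) // well-formed: compared against the host
}
```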

@dcantah
Member

dcantah commented Jan 5, 2023

Wanna wait to hear @helsaawy and @msscotb's take on forward compat before reviewing too much more 😬 I hit the same realization as you pretty early on that it does seem to work, but I never took it for granted haha. I'd probably err on the side of using that compat matrix on the site as the source of truth for now; did we need lower host->higher guest for a feature?

@aznashwan
Contributor Author

did we need lower host->higher guest for a feature?

The primary driver for the Docker PR was allowing desktop users with updated Windows 10 hosts to run Server 2022 containers for dev purposes.

At the end of the day HCS will have the final say on whether or not the container will run, so the most "damage" this could do is waste some time/storage by downloading an image which may not run in the end.

@helsaawy
Contributor

helsaawy commented Jan 5, 2023

At the end of the day HCS will have the final say on whether or not the container will run, so the most "damage" this could do is waste some time/storage by downloading an image which may not run in the end.

Speaking on the logic side, we agreed it was acceptable to allow backward and forward compat on the moby side, since HCS will ultimately verify.

@dcantah
Member

dcantah commented Jan 5, 2023

SGTM. I'll give this a look sometime this week then

@dcantah
Member

dcantah commented Jan 10, 2023

Integration tests aren't happy with this 😞:

=== RUN   TestContainerEvents
    container_event_test.go:40: Set up container events streaming client
    container_event_test.go:49: Step 1: RunPodSandbox and check for expected events
E0109 14:03:00.426806    9544 remote_runtime.go:135] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 5eae0810e048c46986f29154dab648d88ec5f8fb44866ccd1f931935db5e9a61: The container operating system does not match the host operating system.: unknown
    container_event_test.go:54: 
        	Error Trace:	D:\a\containerd\containerd\src\github.com\containerd\containerd\container_event_test.go:54
        	Error:      	Received unexpected error:
        	            	rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 5eae0810e048c46986f29154dab648d88ec5f8fb44866ccd1f931935db5e9a61: The container operating system does not match the host operating system.: unknown
        	Test:       	TestContainerEvents
--- FAIL: TestContainerEvents (17.88s)
=== RUN   TestContainerLogWithoutTailingNewLine
    container_log_test.go:37: Create a sandbox with log directory
E0109 14:03:01.065388    9544 remote_runtime.go:135] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 6c9ef97f620dab70427e6c29db1a2d799f89fb1a8fec2dad8c6d9c104c2c64b7: The container operating system does not match the host operating system.: unknown
    main_test.go:233: 
        	Error Trace:	D:\a\containerd\containerd\src\github.com\containerd\containerd\main_test.go:233
        	            				D:\a\containerd\containerd\src\github.com\containerd\containerd\container_log_test.go:38
        	Error:      	Received unexpected error:
        	            	rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 6c9ef97f620dab70427e6c29db1a2d799f89fb1a8fec2dad8c6d9c104c2c64b7: The container operating system does not match the host operating system.: unknown
        	Test:       	TestContainerLogWithoutTailingNewLine
--- FAIL: TestContainerLogWithoutTailingNewLine (0.64s)
=== RUN   TestLongContainerLog
    container_log_test.go:85: Create a sandbox with log directory
E0109 14:03:01.661621    9544 remote_runtime.go:135] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 8f9f0e17c869eee7dc4d17ed66f7188ce5b20d1fdbf2d24e35811a9aadb9f72d: The container operating system does not match the host operating system.: unknown
    main_test.go:233: 
        	Error Trace:	D:\a\containerd\containerd\src\github.com\containerd\containerd\main_test.go:233
        	            				D:\a\containerd\containerd\src\github.com\containerd\containerd\container_log_test.go:86
        	Error:      	Received unexpected error:
        	            	rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 8f9f0e17c869eee7dc4d17ed66f7188ce5b20d1fdbf2d24e35811a9aadb9f72d: The container operating system does not match the host operating system.: unknown
        	Test:       	TestLongContainerLog
--- FAIL: TestLongContainerLog (0.60s)
=== RUN   TestContainerRestart
    container_restart_test.go:29: Create a pod config and run sandbox container
E0109 14:03:02.289898    9544 remote_runtime.go:135] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 7b7ee7ab99eed2d6544fe2d13b6d270fe70a872c30c177ef310a70d2bf0c3a85: The container operating system does not match the host operating system.: unknown
    main_test.go:233: 
        	Error Trace:	D:\a\containerd\containerd\src\github.com\containerd\containerd\main_test.go:233
        	            				D:\a\containerd\containerd\src\github.com\containerd\containerd\container_restart_test.go:30
        	Error:      	Received unexpected error:
        	            	rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem 7b7ee7ab99eed2d6544fe2d13b6d270fe70a872c30c177ef310a70d2bf0c3a85: The container operating system does not match the host operating system.: unknown
        	Test:       	TestContainerRestart

@aznashwan
Contributor Author

Integration tests aren't happy with this

It's really weird that 2022 seems to be the unhappy one but 2019 isn't. My best guess currently is that the changes in the platform comparison logic lead to containerd considering the 2019 manifest of the busybox:1.29-2 test image "good enough" and trying to run it in standard process isolation mode (which fails).

I'll try to make the comparison logic prefer build numbers closer to the host's, but I think that'd further complicate the platform matcher logic on Windows.
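A "prefer the closest build" ordering like the one floated here could be sketched as below. This is purely illustrative (the names are invented, and containerd's real matcher compares full OSVersion strings, not bare build numbers):

```go
package main

import (
	"fmt"
	"sort"
)

// closerBuildFirst returns the candidate image build numbers reordered
// so that the build nearest the host's comes first; the host's own
// build therefore always wins when present.
func closerBuildFirst(hostBuild int, builds []int) []int {
	sorted := append([]int(nil), builds...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return abs(sorted[i]-hostBuild) < abs(sorted[j]-hostBuild)
	})
	return sorted
}

func abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

func main() {
	// A WS2019 (17763) host should prefer the 17763 manifest over 20348,
	// avoiding the process-isolation failure described above.
	fmt.Println(closerBuildFirst(17763, []int{20348, 17763}))
}
```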

Signed-off-by: Nashwan Azhari <nazhari@cloudbasesolutions.com>
@marosset
Contributor

I'm surprised to see the implementation for this new functionality didn't include a new platform matcher that had hyper-v isolated container specific logic and was selected at the appropriate times.

Adding a new platform matcher for Windows Hyper-V isolated containers was even suggested here #7431 (comment)

I'm concerned that changing the behavior of how the 'default' platform matcher works when performing image pulls on Windows could cause bad user experiences.

@aznashwan
Contributor Author

I'm concerned that changing the behavior of how the 'default' platform matcher works when performing image pulls on Windows could cause bad user experiences.

#7431 flew under my radar, as I was tunnel-visioned on enabling running any image under Hyper-V. I reached the same conclusion you did on there currently not being a good way to inject a matcher during the pull process, and I did not see a cleaner course of action which doesn't involve a refactor.

Even if specifying a matcher were possible, there would be some contexts (e.g. standalone ctr pull commands) where we cannot know whether the image will be run under Hyper-V or not. The best we could do is check whether runhcs-wcow-hypervisor is available and greenlight the image pull, in which case the potential bad UX of a user attempting to run the image in process isolation and failing is unavoidable.

Either way I agree the current approach of overriding the default matcher is not ideal, so I'll convert this PR to a draft and at the very least separate the matching logic into a ProcessIsolationMatcher and HypervIsolationMatcher.

@jsturtevant
Contributor

there would be some contexts (e.g. standalone ctr pull commands) where we cannot know whether the image will be run under Hyper-V or not,

In the standalone ctr pull scenario, the user would know if they want to run in Hyper-V and so would specify that info. Is that a reasonable expectation?

@kiashok
Contributor

kiashok commented Apr 17, 2023

I'm surprised to see the implementation for this new functionality didn't include a new platform matcher that had hyper-v isolated container specific logic and was selected at the appropriate times.

Adding a new platform matcher for Windows Hyper-V isolated containers was even suggested here #7431 (comment)

I'm concerned that changing the behavior of how the 'default' platform matcher works when performing image pulls on Windows could cause bad user experiences.

@marosset @jsturtevant @aznashwan I am not sure we can support different matchers for WCOW process and Hyper-V isolated right away during image pull, as we don't yet have that information, right? I was looking into the suggestion made here #7431 (comment): I don't see the PodSandboxConfig having a field for runtime handler:
[screenshots of the PodSandboxConfig proto fields]

Are you suggesting that we introduce a new field to WindowsPodSandboxConfig to indicate if its hyperV isolated pod just like we have a field for HPC under WindowsSandboxSecurityContext ?

For the short term, I think loosening up the Windows platform matcher logic to not block on an image pull would help a lot while we work on a long-term approach to specify the runtime class at image pull itself:
- If the host OS version is >= 20348 (that is, Windows 11 / WS2022+), we always pick the image with the highest minor version and UBR, as these operating systems support the stable-ABI work.
- For any other OS, we pick the closest match, taking the host OS's minor and UBR version into account.
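A hedged Go sketch of this two-branch proposal follows. The names are illustrative and it is simplified to bare build numbers, whereas the proposal also weighs minor and UBR versions.

```go
package main

import "fmt"

// ws2022Build is the build number of Windows Server 2022 / Windows 11.
const ws2022Build = 20348

// pickBuild sketches the proposed policy: on WS2022+/Win11 hosts pick
// the highest available build (relying on the stable-ABI work),
// otherwise pick the build closest to the host's.
func pickBuild(hostBuild int, available []int) int {
	if len(available) == 0 {
		return -1
	}
	best := available[0]
	for _, b := range available[1:] {
		if hostBuild >= ws2022Build {
			if b > best {
				best = b
			}
		} else if absDiff(b, hostBuild) < absDiff(best, hostBuild) {
			best = b
		}
	}
	return best
}

func absDiff(a, b int) int {
	if a > b {
		return a - b
	}
	return b - a
}

func main() {
	fmt.Println(pickBuild(20348, []int{17763, 19041, 22621})) // WS2022+ host: highest wins
	fmt.Println(pickBuild(17763, []int{19041, 22621}))        // older host: closest wins
}
```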

@TBBle
Contributor

TBBle commented Apr 18, 2023

If I remember correctly, the PullImage CRI request has an annotation for which a HyperV-ness value was defined. It's mentioned in one of the various discussions here, but I can't immediately lay my hands on it. (Edit: #6491 (comment))

It bugs me that HyperV-ness is handled differently from Host Process Containers, but I think that's water under the bridge now. (Either both should be runtime handlers, or both should be part of WindowsSandboxSecurityContext, but maybe there's actual technical differences for this, rather than just the age of the decisions. I don't have the full codebase in front of me.)

Unfortunately, #7431 (comment) discovered that there's at least one underlying use of 'Default Platform Matcher' (and others have been mentioned in passing) which would need to be cleaned up before the intended "choose platform matcher based on CRI request" implementation will work.

In the meantime, referencing a specific image (rather than an image index) should work, and if any platform matchers are invoked in that flow, I'd consider it a bug to be fixed.

@kiashok
Contributor

kiashok commented Apr 19, 2023

If I remember correctly, the PullImage CRI request has an annotation for which a HyperV-ness value was defined. It's mentioned in one of the various discussions here, but I can't immediately lay my hands on it. (Edit: #6491 (comment))


Using the Annotation field to specify which mode/runtime class to pull the image for would be a bigger change that needs kubelet and other tools to also change, right? It's probably a longer-term fix that needs a KEP. The idea of this PR is to bring in a shorter-term fix where we loosen up the default Windows platform matcher to allow an image to be pulled.

In either case, I think we should not block during the pull: we should pull an image based on what we think is best from the index. If HCS is not happy with what is being used while creating/starting a container, then we can choke at that level instead of blocking the pull. If the host OS is >= 20348 (this would include Windows 11 too), then we should just pick the highest minor/UBR version available in the index. If not, we should pick the closest minor version image that is available. If HCS later chokes because an exact OS version match doesn't exist, then the user can pull the particular image and override the sandbox image in containerd's toml.

This can be the interim short-term fix to unblock folks while we work on getting a KEP out to transition an image to be identified as a tuple of (imageName, runtimeClass) instead of just the imageName like it exists today.

@kiashok
Contributor

kiashok commented Apr 19, 2023

@jsturtevant @marosset @kevpar thoughts?

@TBBle
Contributor

TBBle commented Apr 19, 2023

If HCS later on chokes because the exact OS version match doesn't exist, then the user can pull the particular and override the sandbox image in containerd's toml.

My understanding of the direction taken at this point is the other way 'round. If the user is doing something other than process isolation, then they can override the sandbox image and any other image indexes they use to reference the exact OS version image they want; the default approach is to make process isolation the "works by default" case. That decision may have been from back when k8s explicitly did not support Hyper-V isolation on Windows, though. (I suspect they still don't, which would be a strong argument for keeping the default behaviours aligned around process isolation, even though it's a trade-off of less security for more performance and hence is not always the best choice.)

HyperV isolation also has a further complication (seen in #8348) that you need to pull the correct sandbox image os version based on the selected os version of other images in the sandbox, and CRI cannot currently represent that; so choosing to make HyperV pick the latest build in existence by default will then only work correctly if every image index in the pod has an image available for the same os version as its own latest build.

At some point we might be able to improve that by arranging to cross-check pulls for the same HyperV-isolated sandbox so they all have the same build, but my memory of CRI flows etc means that the only image we know of when processing a CRI pull request is the sandbox, so at best we can ensure images taken from an image list are all compatible with it, which is probably about right.

Anyway, without some more time in the code and for consideration, I don't think HyperV-isolation-supporting pulls should be the default, as there's still conflicting use-cases about which one to pick from a list in those cases, and data-flow in CRI makes that hard to resolve nicely.

Process-isolation by default is the zero-config option. If you have to configure HyperV isolation as a runtime anyway, perhaps that's a place to specify the build-number to use, which can then be used for the platform matcher for both the sandbox image and images pulled for that runtime.
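A runtime-config-driven matcher along these lines could look like the following sketch. All names here are hypothetical: containerd's released runtime config has no such field, and this only demonstrates the shape of the idea.

```go
package main

import "fmt"

// hypervRuntimeConfig stands in for a per-runtime containerd config
// section; TargetBuild is a hypothetical field, not part of any
// released containerd configuration.
type hypervRuntimeConfig struct {
	TargetBuild int
}

// matcherFor returns a match predicate over image build numbers for the
// given runtime config: if a target build is configured, only that
// build matches; otherwise fall back to exact host matching (the
// process-isolation default).
func matcherFor(cfg hypervRuntimeConfig, hostBuild int) func(int) bool {
	want := hostBuild
	if cfg.TargetBuild != 0 {
		want = cfg.TargetBuild
	}
	return func(imageBuild int) bool { return imageBuild == want }
}

func main() {
	// A WS2019 host whose Hyper-V runtime is pinned to WS2022 images.
	m := matcherFor(hypervRuntimeConfig{TargetBuild: 20348}, 17763)
	fmt.Println(m(20348), m(17763))
}
```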

@kiashok
Contributor

kiashok commented Apr 24, 2023


"Process-isolation by default is the zero-config option. If you have to configure HyperV isolation as a runtime anyway, perhaps that's a place to specify the build-number to use, which can then be used for the platform matcher for both the sandbox image and images pulled for that runtime." -- This is the option we are exploring for the long term: users can specify a runtime class, and the runtime class will have an option to specify the platform, OS version, and sandbox isolation for us to decide which platform handler to call into. But this would require a KEP and might be more than a year's worth of work to get all the code checked in after the KEP is approved. Till such time, I think loosening up the image pull is probably the only feasible short-term option left?!

@TBBle
Contributor

TBBle commented Apr 25, 2023

I meant using different runtime classes for different platform versions. That'd be transparent to k8s (it just sees the name in the RuntimeClass object) and just requires more containerd config and slightly more decision for the user when choosing a RuntimeClass.

Same with the image-pull annotation, it's already populated from the Pod annotations by kubelet, so that's user-exposed now, and requires no k8s changes to consume in containerd-cri.

The ideal would be that the pull operation gets explicit access to the runtime_handler name (#6657 (comment)), which does require a KEP. In the meantime, we already have implemented in containerd 1.7 an annotation io.containerd.cri.runtime-handler, as described in #6657 (comment).

This was implemented for per-runtime Snapshotters, but handily that means all the flows external to containerd-cri are already handled and being improved. I'm not aware of per-runtime Snapshotters being used in-the-wild yet, it's a new feature of containerd 1.7.0 (#6899), so I can't point at any existing user-facing docs that demonstrate this.

So I think the soonest-effective change to make to achieve the goal here is for the containerd-cri runtime class config to be able to specify a PlatformMatcher like it can specify a Snapshotter, and ensure we fix any code which unconditionally uses a default platform matcher, to honour that setting.


I disagree with making the default wider because it'll break existing workflows, which do not specify a RuntimeClass at all (and are hence running in process isolation) and so rely on the default platform matcher behaviour being process-isolation-correct. This is an especially likely problem if we allow newer images by default, as upstream image repos are more likely to add newer images to an existing image index, and hence introduce a failure case even when a valid image is available.

@jsturtevant
Contributor

So I think the soonest-effective change to make to achieve the goal here is for the containerd-cri runtime class config to be able to specify a PlatformMatcher like it can specify a Snapshotter, #7431 (comment) and ensure we fix any code which unconditionally uses a default platform matcher, to honour that setting.

if we have this information in cri code path couldn't we select the appropriate matcher via code (this was the approach @marosset took)? I guess specifying it via configuration makes it more extensible?

I disagree with making the default wider because it'll break existing workflows,

after a few other conversations I agree we can't make it wider

@TBBle
Contributor

TBBle commented Apr 26, 2023

if we have this information in cri code path couldn't we select the appropriate matcher via code (this was the approach @marosset took)? I guess specifying it via configuration makes it more extensible?

Right now, the containerd code doesn't even know that a runtime is going to use Hyper-V isolation; that config option is actually part of an opaque blob passed down into hcsshim, and we should avoid parsing it in containerd, particularly at higher layers like this. (AFAIR we do parse it in order to add debug flags at one point, but that's down near the actual call to the runtime shim; containerd-cri should not know about this at all.)

So we'd need to expose something to containerd config, and I personally think it'd be better to expose a platform matcher choice rather than trying to expose a bunch of knobs that affect a single platform matcher, since once we bring LCOW into the mix we need both os (and in future maybe os.arch) selection options and os.version ranges, and that config probably needs a way to say "delta from host build" to make it reasonably cut-and-pastable; explicitly specifying the current default (process isolation, match host for all of os/arch/build) would be surprisingly verbose.

That said, the range of possible platform matcher implementations is also wide, and so maybe that won't be a user-nice experience either if we end up with like a dozen options. So we might need to have a few platform matcher implementations and some specific config options for those implementations. Which can also be the worst-of-both-worlds, so needs careful consideration.

Either way, some actual iteration will be needed to pull together the various existing and mooted platform matcher needs; I do think that it will be unavoidable to expose them to configuration somehow, because we already have multiple feature requests that require different results for the same ImagePull request even if HyperV-ness is known, e.g. this one and #8348: AFAICT they're both valid use-cases, so they can't be resolved by simply changing behaviour based on the HyperV isolation flag.

(Todo: Track down the ticket where the user wanted the Platform Matcher to prefer older os.build that was already in the content store over newer os.build that would need to be downloaded, as it's somewhat related here.)


#7431 (comment) was in the context of Host-Process Containers, which containerd-cri does know about (it's a CRI field), and so we can automatically select a specific platform matcher for that case. HPC also has pretty unambiguous rules: We can take any image matching os/arch, as long as it has either a os.version <= the host, or no os.version at all. (Arch only matters when Windows ARM containers exist again, and we may not need an arch-match depending on how emulation works in that hypothetical future. We can unburn that bridge when we come to it.)
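The host-process-container rule stated above is unambiguous enough to sketch directly. The parsing helper below is illustrative, not containerd's actual implementation; only the acceptance rule (os.version <= host, or no os.version) comes from the comment.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// hpcImageAcceptable sketches the HPC rule described above: accept any
// image whose build is at most the host's, or which carries no
// parseable OSVersion at all (os/arch are assumed to already match).
func hpcImageAcceptable(hostBuild int, imageOSVersion string) bool {
	parts := strings.Split(imageOSVersion, ".")
	if len(parts) < 3 {
		return true // no usable os.version: accept
	}
	build, err := strconv.Atoi(parts[2])
	if err != nil {
		return true // misformatted os.version: treat as absent
	}
	return build <= hostBuild
}

func main() {
	fmt.Println(hpcImageAcceptable(20348, "10.0.17763.100")) // older image: ok
	fmt.Println(hpcImageAcceptable(17763, "10.0.20348.1"))   // newer image: rejected
	fmt.Println(hpcImageAcceptable(17763, ""))               // no version: ok
}
```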

We do need to fix the bugs that comment revealed of using the default platform matcher inappropriately, interfering with an override.

@k8s-ci-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


This PR is stale because it has been open 90 days with no activity. This PR will be closed in 7 days unless new comments are made or the stale label is removed.

github-actions bot added the Stale label on Mar 20, 2024
Copy link

This PR was closed because it has been stalled for 7 days with no activity.

github-actions bot closed this on Mar 27, 2024