Add Windows Containers Support #22623

Closed
preillyme opened this Issue Mar 7, 2016 · 35 comments

preillyme (Member) commented Mar 7, 2016

Add Windows Containers Support at least at the node level.

@mikedanese mikedanese added the sig/node label Mar 10, 2016

preillyme (Member) commented Mar 14, 2016

I'd like to plan a kickoff meeting with @BenjaminArmstrong and @sschuller sometime in the next couple of weeks. I'd also like to ask @sarahnovotny to create the Windows SIG as well.

timothysc (Member) commented Mar 14, 2016

/cc @kubernetes/sig-node

asultan001 (Member) commented Mar 14, 2016

And here we go!

quinton-hoole (Member) commented Mar 14, 2016

Here are a few reasonable intros, for those following along:

The Register: Hands On
Mark Russinovich, Microsoft Azure CTO
Microsoft Docs

preillyme (Member) commented Mar 14, 2016

Thanks @quinton-hoole, greatly appreciate the references. Exciting times, @asultan001 @timothysc!

taylorb-microsoft commented Mar 14, 2016

This is great to see! Just for a quick intro: I am the lead program manager for all server container technologies in Windows; my team is responsible for Windows Server Containers and Hyper-V Containers.

asultan001 (Member) commented Mar 14, 2016

Thanks @taylorb-microsoft, looking forward to connecting once we have something a bit more concrete.

preillyme (Member) commented Mar 14, 2016

I've created a shared document, available at https://goo.gl/NE0ABx, to track our planning discussions. Thanks for helping us, @taylorb-microsoft.

rakeshm commented Mar 14, 2016

A few of us at Apprenda, @taylorb-microsoft, and @johngossman are going to do a quick sync-up this week.

We'll add to the shared doc, @preillyme.

quinton-hoole (Member) commented Mar 14, 2016

Could someone please grant me comment permission on the shared doc? I'm quinton@google.com.

Thanks,
Q


davidopp (Member) commented Mar 14, 2016

rakeshm commented Mar 18, 2016

I added a summary of the Apprenda/Microsoft meeting to the doc with some upcoming key action items.

smarterclayton (Contributor) commented Mar 18, 2016

> How receptive will the project be to modifying allowable Kubelet options to include Windows-specific flags and options?

We have an "ok" story for container runtime specific flags, but it needs to be upgraded to a "great" story. We'd probably follow the rkt pattern for now.

> Is the PodSpec easy to modify in case any idiomatic Windows specs need to be added?

Great question; we should try to figure out what those would be and then discuss a set of them together. ContainerSpecs were designed to be more generic than the default Docker container spec originally was, but I suspect we'll simply have options that don't work on all runtimes, and a way to document that. For instance, SELinux and AppArmor are already two items that don't work on all Linux distros, but they're encapsulated within higher-level security groupings. Path specs are likely to be painful on volumes. Dealing with persistent volumes across Windows systems may not change that much, although certain options simply won't be available. We've started the "how do we deal with images across multiple runtimes" discussion, but since Windows Docker would probably use the same format as Linux, I don't expect that to be an issue.

srounce commented Mar 24, 2016

@smarterclayton I know it isn't perfect, but would it be possible to take an approach similar to Cygwin's, where we leverage a small utility to transform paths to and from each format?

Also, Docker on Windows (2016 TP3) appears to use the same image format.
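
To make the Cygwin idea concrete, here is a minimal sketch of such a transform. The helper names `toUnixPath` and `toWindowsPath` are hypothetical and purely for illustration; nothing like this exists in the kubelet today:

```go
package main

import (
	"fmt"
	"strings"
)

// toUnixPath converts a Windows path like `C:\foo\bar` into a
// Cygwin/MSYS-style path like `/c/foo/bar` (hypothetical helper).
func toUnixPath(p string) string {
	if len(p) > 1 && p[1] == ':' {
		p = "/" + strings.ToLower(p[:1]) + p[2:]
	}
	return strings.ReplaceAll(p, `\`, "/")
}

// toWindowsPath is the reverse transform: `/c/foo/bar` -> `C:\foo\bar`.
func toWindowsPath(p string) string {
	if len(p) > 2 && p[0] == '/' && p[2] == '/' {
		p = strings.ToUpper(p[1:2]) + ":" + p[2:]
	}
	return strings.ReplaceAll(p, "/", `\`)
}

func main() {
	fmt.Println(toUnixPath(`C:\Program Files\app`))    // /c/Program Files/app
	fmt.Println(toWindowsPath("/c/Program Files/app")) // C:\Program Files\app
}
```

A real implementation would also need to deal with UNC paths, relative paths, and drive letters embedded in environment variables; this only shows the direction.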

dchen1107 (Member) commented Mar 24, 2016

Can someone grant me (dawnchen@google.com) comment permission on that shared doc?

preillyme (Member) commented Mar 30, 2016

Hey @dchen1107, I've added you as an editor to the shared document.

luxas (Member) commented Mar 30, 2016

Given the OS differences and eventual differences in Pods as a container grouping mechanism, how should we approach the asymmetry that will likely occur when handling Windows hosts compared to non-Windows hosts?

I've been thinking about whether the kubelet should annotate or label itself with the platform it's running on. That might make arm, arm64, ppc64le, amd64, and windows handling easier, if there are ever cross-platform clusters. WDYT?

esotericengineer commented Mar 30, 2016

I think self-labeling based on host platform makes a ton of sense. It will help with idiomatic configuration per platform, but also lends itself to helping with cluster segregation. Is there anything else in k8s that currently self-labels?

davidopp (Member) commented Mar 30, 2016

#9044 says cloud provider, but it can also cover platform stuff.

esotericengineer commented Mar 30, 2016

OK, after reading through #9044, it would seem we could capture 'Platform' as a standard label in pkg/api/unversioned/well_known_labels.go. My guess is that in a multi-platform k8s that would make the most sense, since multiple platforms could become a pretty standard use case. That label could be used for OS & architecture combos, or just OS.

luxas (Member) commented Mar 30, 2016

I guess sample values for kubernetes.io/(generic/)platform could be linux/arm, linux/amd64, windows/amd64. This would be very nice to have when we're heading for cross-platform (amd64, arm, arm64 and ppc64le): #17981

@davidopp When it comes to code changes, does anything more than this need to be added, or is this fine?

```go
// pkg/api/unversioned/well_known_labels.go:22
const LabelPlatform = "beta.kubernetes.io/platform"
// pkg/kubelet/kubelet.go:1042
node.ObjectMeta.Labels[unversioned.LabelPlatform] = runtime.GOOS + "/" + runtime.GOARCH
```

I could send a PR for this if you like.
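
As a standalone sketch of the idea (using the label key proposed above; the key that eventually lands upstream may differ):

```go
package main

import (
	"fmt"
	"runtime"
)

// LabelPlatform is the label key proposed in this thread.
const LabelPlatform = "beta.kubernetes.io/platform"

// platformLabelValue is what the kubelet would self-report,
// e.g. "linux/amd64" or "windows/amd64".
func platformLabelValue() string {
	return runtime.GOOS + "/" + runtime.GOARCH
}

func main() {
	// The kubelet would merge this into the Node object's labels,
	// so pods could target a platform via a nodeSelector.
	labels := map[string]string{
		LabelPlatform: platformLabelValue(),
	}
	fmt.Printf("%s=%s\n", LabelPlatform, labels[LabelPlatform])
}
```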

csrwng (Contributor) commented Apr 14, 2016

Early prototype: csrwng@e755508
And the corresponding demo:
https://goo.gl/2XxOtY

More than anything, it helps to identify gaps in function (just look at the chunks of code that are stubbed out or commented out).

The main issue I ran into was that Windows containers don't lend themselves well to the Linux model, where 1 container = (mostly) 1 process. Windows containers tend to include more of the OS, including the service manager, and at least as of right now they don't allow namespace sharing. Therefore it doesn't make sense to start a separate infra container to hold on to the IP as on the Linux side. More importantly, a pod cannot be represented as a set of containers that share certain things. On Windows, it may make more sense to have 1 pod = 1 Windows container, with each container in the pod simply represented as a separate process in that container. If modeled that way, it means that containers in a Windows pod cannot each use a different image; it also means that resource requirements, security constraints, etc. would apply to the entire pod, and not to each container.

smarterclayton (Contributor) commented Apr 15, 2016

1 pod = 1 container is a dramatic shift, so it's worth diving into what sharing is important. Network and volumes are critical for sharing. Everything else is just nice to have. Can Windows handle one IP for multiple containers? Can it handle volume sharing?

csrwng (Contributor) commented Apr 15, 2016

> Can Windows handle 1 IP for multiple containers?

No (at least as of today). Actually, the container IP is not yet surfaced through the Docker API, but I suspect that's just a bug in the implementation.

> Can it handle volume sharing?

Yes.

We're going to find out more next week as we talk to Microsoft and understand what will eventually be possible vs. what will never be possible.

smarterclayton (Contributor) commented Apr 15, 2016

I would even say volume sharing is the key differentiator for multiple containers. I'd be extremely hesitant to consider a 1 pod = 1 container approach for Windows unless we're saying containers on Windows fundamentally can never approach that.

vishh (Member) commented Apr 15, 2016

Another thing to consider is the level of configurability we will get with Windows containers. If we can emulate pods by dynamically configuring containers, that might work as well.

rakeshm commented Apr 17, 2016

I guess we'll learn more next week on what MSFT would recommend, but IMO, having a short-term (if indeed it is short-term) limitation on Windows that 1 pod = 1 container is better than having a host of materially important caveats on Windows with 1 pod = n containers, if that's what it ends up coming down to.

Clear statements of limitations (and we know there will be limitations in a variety of areas) are important, because if you're constantly reading the fine print, it just creates lots of friction.

I'm with you, Clayton, that we should be very hesitant to limit 1 pod = 1 container, but if the alternatives are going to be messy, we shouldn't force a model into place that isn't ready.

smarterclayton (Contributor) commented Apr 17, 2016

rakeshm commented Apr 17, 2016

Agreed.

Just for clarification, though: we'd be talking about deciding on the composition of a pod, not about whether a pod is the unit of scheduling, right? I'm certainly not suggesting we even consider the latter.

smarterclayton (Contributor) commented Apr 17, 2016

rakeshm commented Apr 17, 2016

Got it. Looks like it will come down to what can actually be shared across containers within a pod, and which lifecycle operations can be mutually guaranteed, before this becomes a tougher call.

michmike commented Apr 19, 2016

Hi everyone, members from Apprenda and Red Hat have created the first version of a technical investigations document on how to bring the kubelet to Windows. Comments and feedback are welcome from everyone (everyone has the ability to add comments); if you want edit/write access, let me know through Slack.

https://docs.google.com/document/d/1qhbxqkKBF8ycbXQgXlwMJs7QBReiSxp_PdsNNNUPRHs/edit?usp=sharing

Our goal is to share some of these findings with Microsoft during our Wednesday meeting. The focal point of that meeting is to go over some of the questions for Microsoft that we started accumulating in this document. If you have additional questions to bring to that discussion, please add them to the document.

k8s-merge-robot added a commit that referenced this issue May 13, 2016

Merge pull request #23684 from luxas/auto_label_arch
Automatic merge from submit-queue

Automatically add node labels beta.kubernetes.io/{os,arch}

Proposal: #17981
As discussed in #22623:
> @davidopp: #9044 says cloud provider but can also cover platform stuff.

Adds a label `beta.kubernetes.io/platform` to `kubelet` that informs about the os/arch it's running on.
Makes it easy to specify `nodeSelectors` for different arches in multi-arch clusters.

```console
$ kubectl get no --show-labels
NAME        STATUS    AGE       LABELS
127.0.0.1   Ready     1m        beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
$ kubectl describe no
Name:			127.0.0.1
Labels:			beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
CreationTimestamp:	Thu, 31 Mar 2016 20:39:15 +0300
```
@davidopp @vishh @fgrzadkowski @thockin @wojtek-t @ixdy @bgrant0607 @dchen1107 @preillyme

michmike referenced this issue in kubernetes/features Oct 6, 2016: Support Windows Server Containers for K8s #116 (Open)

michmike commented Oct 6, 2016

Today in the Kubernetes community meeting, we demoed the alpha version of Windows Server Container support in Kubernetes, with Kubernetes running on Microsoft Azure.

Feature kubernetes/features#116 will track bringing the work of SIG-Windows to beta with release 1.5 of Kubernetes.

If you want to help, join SIG-Windows at https://kubernetes.slack.com/messages/sig-windows

cc: @sarahnovotny, @brendandburns

fejta-bot commented Dec 19, 2017

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

luxas (Member) commented Dec 22, 2017

I think this issue can be closed in favor of kubernetes/features#116, where the feature state is tracked. Also, this is already implemented to a large extent (beta); woohoo!

Thanks all for the great work 👍. Reopen if you disagree with this assessment.

luxas closed this Dec 22, 2017
