PetSet (was nominal services) #260

Closed
bgrant0607 opened this Issue Jun 26, 2014 · 160 comments


@bgrant0607
Member

bgrant0607 commented Jun 26, 2014

@smarterclayton raised this issue in #199: how should Kubernetes support non-load-balanced and/or stateful services? Specifically, Zookeeper was the example.

Zookeeper (or etcd) exhibits 3 common problems:

  1. Identification of the instance(s) clients should contact
  2. Identification of peers
  3. Stateful instances

And it enables master election for other replicated services, which typically share the same problems, and probably need to advertise the elected master to clients.

@bgrant0607

Member

bgrant0607 commented Jun 27, 2014

Note that we should probably also rename service to lbservice or somesuch to distinguish them from other types of services.

@bgrant0607

Member

bgrant0607 commented Jul 9, 2014

As part of this, I'd remove service objects from the core apiserver and facilitate the use of other load balancers, such as HAProxy and nginx.

@smarterclayton

Contributor

smarterclayton commented Jul 9, 2014

It would be nice if the logical definition of a service (the query and/or global name) could be used/specialized in multiple ways:

  • as a simple load balancer installed via the infrastructure
  • as a more feature-complete load balancer like nginx or haproxy, also offered by the infrastructure
  • as a queryable endpoint an integrator could poll/wait on (GET /services/foo -> { endpoints: [{host, port}, ...] })
  • as information available to hosts to expose local load balancers

Obviously these could be multiple different use cases and as such split into their own resources, but having some flexibility to specify intent (unify under a load balancer) distinct from mechanism makes it easier to satisfy a wide range of requirements.

@bgrant0607

Member

bgrant0607 commented Jul 9, 2014

@smarterclayton I agree with separating policy and mechanism.

Primitives we need:

  1. The ability to poll/watch a set identified by a label selector. Not sure if there is an issue filed yet.
  2. The ability to query pod IP addresses (#385).

This would be enough to compose with other naming/discovery mechanisms and/or load balancers. We could then build a higher-level layer on top of the core that bundles common patterns with a simple API.

@brendandburns

Contributor

brendandburns commented Jul 13, 2014

Given the two primitives described by @bgrant0607, is it worth keeping this issue open? Or are there more specific issues we can file?

@smarterclayton

Contributor

smarterclayton commented Jul 14, 2014

I don't think zookeeper is solved, since you need the unique identifier in each container. I think you could do this with 3 separate replication controllers (one per instance) or a mode on the replication controller.

@bgrant0607 bgrant0607 added the design label Jul 17, 2014

@smarterclayton

Contributor

smarterclayton commented Jul 22, 2014

Service design, I think, deserves some discussion, as Brian notes. Currently it couples an infrastructure abstraction (local proxy) with a mechanism for exposure (environment variables in all containers) with a label query. There is an equally valid use case for an edge proxy that takes L7 hosts/paths and balances them to a label query, as well as supporting protocols like http(s) and web sockets. In addition, services have a hard scale limit today of 60k backends, shared across the entire cluster (the number of IPs allocated). It should be possible to run a local proxy on a minion that proxies only the services the containers on that host need, and also to avoid containers having to know about the external port. We can move this discussion to #494 if necessary.

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

Tackling the problem of singleton services and non-auto-scaled services with fixed replication, such as master-slave replicated databases, key-value stores with fixed-size peer groups (e.g., etcd, zookeeper), etc.

The fixed-replication cases require predictable array-like behavior. Peers need to be able to discover and individually address each other. These services generally have their own client libraries and/or protocols, so we don't need to solve the problem of determining which instance a client should connect to, other than to make the instances individually addressable.

Proposal: We should create a new flavor of service, called Cardinal services, which map N IP addresses instead of just one. Cardinal services would perform a stable assignment of these IP addresses to N instances targeted by their label selector (i.e., a specified N, not just however many targets happen to exist). Once we have DNS ( #1261, #146 ), it would assign predictable DNS names based on a provided prefix, with suffixes 0 to N-1. The assignments could be recorded in annotations or labels of the targeted pods.

This would preserve the decoupling of role assignment from the identities of pods and replication controllers, while providing stable names and IP addresses, which could be used in standard application configuration mechanisms.

Some of the discussion around different types of load balancing happened in the services v2 design: #1107.

I'll file a separate issue for master election.

/cc @smarterclayton @thockin
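
To make the proposal concrete, here is a minimal sketch of what a Cardinal service manifest might have looked like. This is purely illustrative: the kind name, API version, and the cardinality/dnsPrefix fields are hypothetical, since no such API was finalized.

    # Hypothetical sketch only: 'CardinalService' and its fields are not a real API.
    kind: CardinalService
    apiVersion: v1beta3
    metadata:
      name: etcd
    spec:
      selector:
        app: etcd        # label selector identifying the targeted pods
      cardinality: 3     # a specified N, not "however many targets happen to exist"
      dnsPrefix: etcd    # predictable DNS names etcd-0, etcd-1, etcd-2

Each assignment would be recorded back onto the targeted pod (via a label or annotation, as noted above) so the mapping stays stable across pod restarts.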

@smarterclayton

Contributor

smarterclayton commented Oct 2, 2014

The assignments would have to carry through into the pods via some environment parameterization mechanism (almost certainly).

For the etcd example, I would create:

  • replication controller cardinality 1: 1 pod, pointing to stable storage volume A
  • replication controller cardinality 2: 1 pod, pointing to stable storage volume B
  • replication controller cardinality 3: 1 pod, pointing to stable storage volume C
  • cardinal service 'etcd' pointing to the pods

If pod 2 dies, replication controller 2 creates a new copy of it and reattaches it to volume B. Cardinal service 'etcd' knows that that pod is new, but how does it know that it should be cardinality 2 (which comes from data stored on volume B)?
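
As a rough illustration of the pattern described above, one of the three single-replica replication controllers might look like the following sketch (shown in v1 syntax for readability; the names, labels, image, and volume source are placeholders):

    # Sketch: "replication controller cardinality 2", one pod pinned to stable storage volume B.
    kind: ReplicationController
    apiVersion: v1
    metadata:
      name: etcd-2
    spec:
      replicas: 1
      selector:
        app: etcd
        index: "2"
      template:
        metadata:
          labels:
            app: etcd
            index: "2"                 # placeholder label carrying this instance's cardinality
        spec:
          containers:
          - name: etcd
            image: quay.io/coreos/etcd # example image
            volumeMounts:
            - name: data
              mountPath: /var/lib/etcd
          volumes:
          - name: data
            gcePersistentDisk:         # "stable storage volume B"
              pdName: etcd-volume-b
              fsType: ext4

The open question above still stands: when this controller replaces the pod, something has to tell the new pod (and the cardinal service) that it is instance 2.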

@thockin

Member

thockin commented Oct 2, 2014

Rather than 3 replication controllers, why not a sharding controller, which looks at a label like "kubernetes.io/ShardIndex" when making decisions? If you want 3-way sharding, it makes 3 pods with indices 0, 1, 2. I feel like this was shot down before, but I can't reconstruct the trouble it caused in my head.

It just seems wrong to place that burden on users if this is a relatively common scenario.

Do you think it matters if the nominal IP for a given pod changes due to unrelated changes in the set? For example:

at time 0, pods (A, B, C) make up a cardinal service, with IPs 10.0.0.{1-3} respectively

at time 1, the node which hosts pod B dies

at time 2, the replication controller driving B creates a new pod D

at time 3, the cardinal service changes to (A, C, D) with IPs 10.0.0.{1-3} respectively

NB: pod C's "stable IP" changed from 10.0.0.3 to 10.0.0.2 when the set membership changed. I expect this will do bad things to running connections.

To circumvent this, we would need to have the ordinal values specified outside of the service, or something else clever. Maybe that is OK, but it seems fragile and easy to get wrong if people have to deal with it.


@smarterclayton

Contributor

smarterclayton commented Oct 2, 2014

I think a sharding controller makes sense and is probably more useful in the context of a cardinal service.

I do think that IP changes based on membership are scary and I can think of a bunch of degenerate edge cases. However, if the cardinality is stored with the pods, the decision is less difficult.

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

First of all, I didn't intend this to be about sharding -- that's #1064. Let's move sharding discussions to there. We've seen many cases of trying to use an analogous mechanism for sharding, and we concluded that it's not the best way to implement sharding.

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

Second, my intention is that it shouldn't be necessary to run N replication controllers. It should be possible to use only one, though the number required depends on deployment details (canaries, multiple release tracks, rolling updates, etc.).

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

Third, I agree we need to consider how this would interact with the durable data proposal (#1515) -- @erictune .

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

Fourth, I agree we probably need to reflect the identity into the pod. As per #386, ideally a standard mechanism would be used to make the IP and DNS name assignments visible to the pod. How would IP and host aliases normally be surfaced in Linux?
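
For reference, a minimal sketch of how such an assignment is conventionally surfaced on Linux: set the hostname to the assigned name and add an /etc/hosts (or DNS) entry mapping that name, plus any aliases, to the assigned IP. The names and address below are made up for illustration.

    # /etc/hosts entry (illustrative values): the nominal name and a short alias
    # both resolve to the stable assigned IP
    10.0.0.2    etcd-1.default.cluster.local    etcd-1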

@bgrant0607

Member

bgrant0607 commented Oct 2, 2014

Fifth, I suggested that we ensure assignment stability by recording assignments in the pods via labels or annotations.
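
A minimal sketch of what that recording might look like on a targeted pod; the annotation keys below are invented for illustration and are not a real API.

    # Hypothetical: assignment stamped onto the pod by the cardinal service controller.
    # The keys are placeholders, not a shipped convention.
    metadata:
      name: etcd-bc4xw                     # the pod picked by the label selector
      labels:
        app: etcd
      annotations:
        cardinal.example.io/index: "1"     # invented key
        cardinal.example.io/ip: "10.0.0.2" # invented key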

@philips

Contributor

philips commented May 19, 2016

@ncdc Is everything done for this in v1.3? It is unclear, as the proposal is unmerged and there haven't been other PRs referencing this for a while.

@ncdc

Member

ncdc commented May 19, 2016

@smarterclayton

Contributor

smarterclayton commented May 19, 2016

We are landing e2es and continuing to work on examples. It is in alpha now, but the proposal is going to take the first round of alpha feedback before merging.


@jberkus

jberkus commented May 19, 2016

Where can I find docs for this? I'd like to test it out for a database-failover use case.

@bprashanth

Member

bprashanth commented May 19, 2016

Ah ha, just what we need, a postgres expert :)

See kubernetes/contrib#921 for examples; I can answer any questions about prototyping [db of choice] as a petset. We have a bunch of sketches under the "apps/stateful" label (e.g. #23790; @philips, an etcd example would be great). I haven't written docs yet; I will do so toward the last few weeks of 1.3 (still 5 weeks to release after code complete on Friday).

I'm guessing you're going to try automating failover with postgres, since that's pretty common. I'll admit that currently that's still not as easy as I'd like it to be; you probably need a watchdog. @jberkus I'd like to hear feedback on what patterns make that easier.

To give you a quick overview, the petset today gives you a consistent network identity (DNS, host name) that matches a network-mounted volume, plus ordering guarantees. So if you create a petset with replicas: 3, you'll get:
governing service: *.galear.default.svc.cluster.local
mysql-0 - volume0
mysql-1 - volume1: doesn't start till 0 is running and ready
mysql-2 - volume2: doesn't start till 0 and 1 are running and ready

The pods can use DNS for service discovery by looking up SRV records inserted under the governing service. That's what this simple pod does: kubernetes/contrib@4425930. So if you use the peer-finder through an init container like in the examples above, mysql-1 will not start till the init container sees (mysql-1, mysql-0) in DNS and writes out the appropriate config.

The volumes are provisioned by a dynamic provisioner (https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/experimental/persistent-volume-provisioning/README.md), so if you don't have one running in your cluster but just want to prototype, you can simply use emptyDir. The "data-gravity" (#7562) case doesn't work yet, but will eventually.
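
For anyone who wants to try this before the docs land, here is a condensed sketch of the pieces described above: a headless governing service plus an alpha PetSet with a volume claim template. The names mirror the example above; treat the exact alpha field names as an assumption and check them against the petset docs linked later in this thread.

    # Headless "governing" service: gives each pet a DNS entry under
    # *.galear.default.svc.cluster.local
    apiVersion: v1
    kind: Service
    metadata:
      name: galear
    spec:
      clusterIP: None        # headless
      selector:
        app: mysql
      ports:
      - port: 3306

    # Alpha PetSet sketch: pods mysql-0..mysql-2, each bound to its own claim,
    # started in order (apps/v1alpha1 is the 1.3 alpha API group).
    apiVersion: apps/v1alpha1
    kind: PetSet
    metadata:
      name: mysql
    spec:
      serviceName: galear    # the governing service above
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.6           # example image
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi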

@bprashanth

Member

bprashanth commented May 19, 2016

I'll add that currently it's easier to deliver an "on-start" notification with a list of peers, through init containers. It's clear that we also require "on-change" notifications. Currently, to notice cluster membership changes, you need to use a custom pid1. Shared pid namespaces might make this easier, since you can then use a sidecar; this is also something that needs to just work.

@jberkus

jberkus commented May 19, 2016

I have a watchdog; it's the service failover that's more complicated than I'd like. Will test, thanks!

@jberkus

jberkus commented May 19, 2016

I also need to support etcd, so there may be lots of testing in my future.

@paralin

Contributor

paralin commented May 31, 2016

@ncdc What's the status of the alpha code for this? I'd like to start testing / implementing. We need to deploy a cassandra cluster really soon here. I can do it with the existing codebase but it'd be nice to test out the petset stuff.

@bprashanth

Member

bprashanth commented May 31, 2016

You can get it if you build from HEAD

@paralin

Contributor

paralin commented May 31, 2016

@bprashanth merged into the main repo? Great, thanks, will do.

@paralin

Contributor

paralin commented May 31, 2016

Embedded YAML in annotation strings? Oof, ouch :(. Thanks though, will investigate making a Cassandra set.

@bprashanth

Member

bprashanth commented May 31, 2016

That's JSON. It's an alpha feature added to a GA object (init containers in pods).
@chrislovecnm is working on Cassandra; you might just want to wait him out.
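
For anyone following along, a sketch of the mechanism being discussed: in the 1.3 alpha, init containers are declared as a JSON array embedded in an annotation on the (GA) v1 Pod object. The annotation key and the container below are an assumption based on the alpha examples linked earlier; verify against those examples before relying on them.

    # Sketch: alpha init container expressed as JSON inside a pod annotation.
    # The annotation key is assumed from the 1.3 alpha; the container is a placeholder.
    apiVersion: v1
    kind: Pod
    metadata:
      name: peer-finder-demo          # hypothetical name
      annotations:
        pod.alpha.kubernetes.io/init-containers: '[
          {"name": "discover-peers",
           "image": "busybox",
           "command": ["sh", "-c", "echo discover peers before the app starts"]}
        ]'
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]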

@chrislovecnm

Member

chrislovecnm commented May 31, 2016

@paralin here is what I am working on. No time to document it and get it into the k8s repo now, but that is the long-term plan: https://github.com/k8s-for-greeks/gpmr/tree/master/pet-race-devops/k8s/cassandra. It's working for me locally, on HEAD.

Latest C* image in the demo works well.

We do have an issue open for more documentation. Wink wink, nudge @bprashanth

@ingvagabund

Contributor

ingvagabund commented Jun 30, 2016

PetSets example with etcd cluster [1].

[1] kubernetes/contrib#1295

@smarterclayton

Contributor

smarterclayton commented Jun 30, 2016

Be sure to capture design asks on the proposal doc after you finish review


@bprashanth

Member

bprashanth commented Jul 6, 2016

The petset docs are https://github.com/kubernetes/kubernetes.github.io/blob/release-1.3/docs/user-guide/petset.md and https://github.com/kubernetes/kubernetes.github.io/tree/release-1.3/docs/user-guide/petset/bootstrapping. I plan to close this issue and open a new one that addresses moving petset to beta, unless anyone objects.

@bprashanth

Member

bprashanth commented Jul 8, 2016

@bprashanth bprashanth closed this Jul 8, 2016

k8s-merge-robot added a commit that referenced this issue Oct 27, 2016

Merge pull request #18016 from smarterclayton/petset
Automatic merge from submit-queue

Proposal for implementing nominal services AKA StatefulSets AKA The-Proposal-Formerly-Known-As-PetSets

This is the draft proposal for #260.

xingzhou pushed a commit to xingzhou/kubernetes that referenced this issue Dec 15, 2016

Merge pull request kubernetes#18016 from smarterclayton/petset
Automatic merge from submit-queue

Proposal for implementing nominal services AKA StatefulSets AKA The-Proposal-Formerly-Known-As-PetSets

This is the draft proposal for kubernetes#260.

metadave pushed a commit to metadave/kubernetes that referenced this issue Feb 22, 2017

postgresql: optional prometheus metrics exporter (kubernetes#260)
* postgresql: optional prometheus metrics exporter

* split metrics.image into image and imageTag

* bump postgresql chart to 0.3.0

* Adding myself as maintainer for the postgresql chart