Rename PetSet to StatefulSet in docs and examples. #35776

Merged · 1 commit · Nov 6, 2016
24 changes: 12 additions & 12 deletions docs/design/indexed-job.md
@@ -481,7 +481,7 @@ The multiple substitution approach:
for very large jobs, the work-queue style or another type of controller, such as
map-reduce or spark, may be a better fit.)
- Drawback: is a form of server-side templating, which we want in Kubernetes but
-have not fully designed (see the [PetSets proposal](https://github.com/kubernetes/kubernetes/pull/18016/files?short_path=61f4179#diff-61f41798f4bced6e42e45731c1494cee)).
+have not fully designed (see the [StatefulSets proposal](https://github.com/kubernetes/kubernetes/pull/18016/files?short_path=61f4179#diff-61f41798f4bced6e42e45731c1494cee)).

The index-only approach:

@@ -874,24 +874,24 @@ admission time; it will need to understand indexes.
previous container failures.
- modify the job template, affecting all indexes.

-#### Comparison to PetSets
+#### Comparison to StatefulSets (previously named PetSets)

-The *Index substitution-only* option corresponds roughly to PetSet Proposal 1b.
-The `perCompletionArgs` approach is similar to PetSet Proposal 1e, but more
+The *Index substitution-only* option corresponds roughly to StatefulSet Proposal 1b.
+The `perCompletionArgs` approach is similar to StatefulSet Proposal 1e, but more
restrictive and thus less verbose.

-It would be easier for users if Indexed Job and PetSet are similar where
-possible. However, PetSet differs in several key respects:
+It would be easier for users if Indexed Job and StatefulSet are similar where
+possible. However, StatefulSet differs in several key respects:

-- PetSet is for ones to tens of instances. Indexed job should work with tens of
+- StatefulSet is for ones to tens of instances. Indexed job should work with tens of
thousands of instances.
-- When you have few instances, you may want to given them pet names. When you
-have many instances, you that many instances, integer indexes make more sense.
+- When you have few instances, you may want to give them names. When you have many instances,
+integer indexes make more sense.
- When you have thousands of instances, storing the work-list in the JobSpec
-is verbose. For PetSet, this is less of a problem.
-- PetSets (apparently) need to differ in more fields than indexed Jobs.
+is verbose. For StatefulSet, this is less of a problem.
+- StatefulSets (apparently) need to differ in more fields than indexed Jobs.

-This differs from PetSet in that PetSet uses names and not indexes. PetSet is
+This differs from StatefulSet in that StatefulSet uses names and not indexes. StatefulSet is
intended to support ones to tens of things.


2 changes: 1 addition & 1 deletion docs/devel/updating-docs-for-feature-changes.md
@@ -11,7 +11,7 @@ Anyone making user facing changes to kubernetes. This is especially important f
### When making Api changes

*e.g. adding Deployments*
-* Always make sure docs for downstream effects are updated *(PetSet -> PVC, Deployment -> ReplicationController)*
+* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)*
* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item
* Verify the guides / walkthroughs do not require any changes:
* **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change**
2 changes: 1 addition & 1 deletion docs/proposals/image-provenance.md
@@ -165,7 +165,7 @@ due to a CVE that just came out (fictional scenario). In this scenario:
up and not scale down the old one.
- an existing replicaSet will be unable to create Pods that replace ones which are terminated. If this is due to
slow loss of nodes, then there should be time to react before significant loss of capacity.
-- For non-replicated things (size 1 ReplicaSet, PetSet), a single node failure may disable it.
+- For non-replicated things (size 1 ReplicaSet, StatefulSet), a single node failure may disable it.
- a node rolling update will eventually check for liveness of replacements, and would be throttled if
in the case when the image was no longer allowed and so replacements could not be started.
- rapid node restarts will cause existing pod objects to be restarted by kubelet.
2 changes: 1 addition & 1 deletion docs/proposals/synchronous-garbage-collection.md
@@ -158,7 +158,7 @@ Finalizer breaks an assumption that many Kubernetes components have: a deletion

**Replication controller manager**, **Job controller**, and **ReplicaSet controller** ignore pods in terminated phase, so pods with pending finalizers will not block these controllers.

-**PetSet controller** will be blocked by a pod with pending finalizers, so synchronous GC might slow down its progress.
+**StatefulSet controller** will be blocked by a pod with pending finalizers, so synchronous GC might slow down its progress.

**kubectl**: synchronous GC can simplify the **kubectl delete** reapers. Let's take the `deployment reaper` as an example, since it's the most complicated one. Currently, the reaper finds all `RS` with matching labels, scales them down, polls until `RS.Status.Replica` reaches 0, deletes the `RS`es, and finally deletes the `deployment`. If using synchronous GC, `kubectl delete deployment` is as easy as sending a synchronous GC delete request for the deployment, and polls until the deployment is deleted from the key-value store.
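To make the simplified reaper flow concrete: the synchronous delete described here corresponds to what is now exposed as foreground cascading deletion, so a client could issue the delete and poll roughly as follows (a sketch, assuming a Deployment named `nginx` in the `default` namespace and API access through `kubectl proxy`):

```shell
# Ask the API server to delete the deployment's dependents first (foreground
# cascading), then poll until the deployment object itself is gone.
# 'nginx' is a placeholder deployment name.
kubectl proxy --port=8080 &
curl -X DELETE http://127.0.0.1:8080/apis/apps/v1/namespaces/default/deployments/nginx \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

# The DELETE returns once the foregroundDeletion finalizer is set; the object
# disappears only after its ReplicaSets and Pods have been deleted.
while kubectl get deployment nginx >/dev/null 2>&1; do sleep 1; done
```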

6 changes: 3 additions & 3 deletions docs/proposals/templates.md
@@ -11,7 +11,7 @@ There are two main motivators for Template functionality in Kubernetes: Control
Today the replication controller defines a PodTemplate which allows it to instantiate multiple pods with identical characteristics.
This is useful but limited. Stateful applications have a need to instantiate multiple instances of a more sophisticated topology
than just a single pod (e.g. they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
-instances of a given Template definition. This capability would be immediately useful to the [PetSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.
+instances of a given Template definition. This capability would be immediately useful to the [StatefulSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.

Similarly the [Service Catalog proposal](https://github.com/kubernetes/kubernetes/pull/17543) could leverage template instantiation as a mechanism for claiming service instances.

@@ -47,7 +47,7 @@ values are appropriate for a deployer to tune or what the parameters control.
* Providing a library of predefined application definitions that users can select from
* Enabling the creation of user interfaces that can guide an application deployer through the deployment process with descriptive help about the configuration value decisions they are making, and useful default values where appropriate
* Exporting a set of objects in a namespace as a template so the topology can be inspected/visualized or recreated in another environment
-* Controllers that need to instantiate multiple instances of identical objects (e.g. PetSets).
+* Controllers that need to instantiate multiple instances of identical objects (e.g. StatefulSets).


### Use cases for parameters within templates
@@ -65,7 +65,7 @@ values are appropriate for a deployer to tune or what the parameters control.
a pod as a TLS cert).
* Provide guidance to users for parameters such as default values, descriptions, and whether or not a particular parameter value
is required or can be left blank.
-* Parameterize the replica count of a deployment or [PetSet](https://github.com/kubernetes/kubernetes/pull/18016)
+* Parameterize the replica count of a deployment or [StatefulSet](https://github.com/kubernetes/kubernetes/pull/18016)
* Parameterize part of the labels and selector for a DaemonSet
* Parameterize quota/limit values for a pod
* Parameterize a secret value so a user can provide a custom password or other secret at deployment time
20 changes: 10 additions & 10 deletions examples/cockroachdb/README.md
@@ -1,21 +1,21 @@
-# CockroachDB on Kubernetes as a PetSet
+# CockroachDB on Kubernetes as a StatefulSet

This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
-a PetSet. CockroachDB is a distributed, scalable NewSQL database. Please see
+a StatefulSet. CockroachDB is a distributed, scalable NewSQL database. Please see
[the homepage](https://cockroachlabs.com) and the
[documentation](https://www.cockroachlabs.com/docs/) for details.

## Limitations

-### PetSet limitations
+### StatefulSet limitations

-Standard PetSet limitations apply: There is currently no possibility to use
+Standard StatefulSet limitations apply: There is currently no possibility to use
node-local storage (outside of single-node tests), and so there is likely
a performance hit associated with running CockroachDB on some external storage.
Note that CockroachDB already does replication and thus it is unnecessary to
deploy it onto persistent volumes which already replicate internally.
For this reason, high-performance use cases on a private Kubernetes cluster
-may want to consider a DaemonSet deployment until PetSets support node-local
+may want to consider a DaemonSet deployment until Stateful Sets support node-local
storage (see #7562).

### Recovery after persistent storage failure
@@ -43,13 +43,13 @@ Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
## Testing in the cloud on GCE or AWS

Once you have a Kubernetes cluster running, just run
-`kubectl create -f cockroachdb-petset.yaml` to create your cockroachdb cluster.
+`kubectl create -f cockroachdb-statefulset.yaml` to create your cockroachdb cluster.
This works because GCE and AWS support dynamic volume provisioning by default,
so persistent volumes will be created for the CockroachDB pods as needed.

## Accessing the database

-Along with our PetSet configuration, we expose a standard Kubernetes service
+Along with our StatefulSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP for clients to access the database
with. In our example, we've called this service `cockroachdb-public`.

@@ -98,10 +98,10 @@ database and ensuring the other replicas have all data that was written.

## Scaling up or down

-Simply patch the PetSet by running
+Simply patch the Stateful Set by running

```shell
-kubectl patch petset cockroachdb -p '{"spec":{"replicas":4}}'
+kubectl patch statefulset cockroachdb -p '{"spec":{"replicas":4}}'
```
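If the cluster's volumes are provisioned manually (as in the minikube walkthrough), the new replica also needs a matching claim before it can start; a minimal sketch, assuming the StatefulSet's volume claim template is named `datadir`, so the fourth pod looks for a claim called `datadir-cockroachdb-3`:

```shell
# Claim name follows <claimTemplate>-<statefulset>-<ordinal>; 'datadir' and
# the 1Gi size are assumptions for illustration.
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-3
  labels:
    app: cockroachdb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```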

Note that you may need to create a new persistent volume claim first. If you
@@ -116,7 +116,7 @@ Because all of the resources in this example have been tagged with the label `ap
we can clean up everything that we created in one quick command using a selector on that label:

```shell
-kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
+kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
```


examples/cockroachdb/cockroachdb-petset.yaml → examples/cockroachdb/cockroachdb-statefulset.yaml
@@ -23,10 +23,10 @@ spec:
apiVersion: v1
kind: Service
metadata:
-# This service only exists to create DNS entries for each pet in the petset
-# such that they can resolve each other's IP addresses. It does not create a
-# load-balanced ClusterIP and should not be used directly by clients in most
-# circumstances.
+# This service only exists to create DNS entries for each pod in the stateful
+# set such that they can resolve each other's IP addresses. It does not
+# create a load-balanced ClusterIP and should not be used directly by clients
+# in most circumstances.
name: cockroachdb
labels:
app: cockroachdb
@@ -55,7 +55,7 @@ spec:
app: cockroachdb
---
apiVersion: apps/v1beta1
-kind: PetSet
+kind: StatefulSet
metadata:
name: cockroachdb
spec:
@@ -71,8 +71,8 @@ spec:
# it's started up for the first time. It has to exit successfully
# before the pod's main containers are allowed to start.
# This particular init container does a DNS lookup for other pods in
-# the petset to help determine whether or not a cluster already exists.
-# If any other pets exist, it creates a file in the cockroach-data
+# the set to help determine whether or not a cluster already exists.
+# If any other pods exist, it creates a file in the cockroach-data
# directory to pass that information along to the primary container that
# has to decide what command-line flags to use when starting CockroachDB.
# This only matters when a pod's persistent volume is empty - if it has
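The DNS lookups mentioned in these comments work because the headless service gives each pod a stable, resolvable name; a quick way to inspect the per-pod entries from inside the cluster (a sketch, assuming the service named `cockroachdb` is headless and everything runs in the `default` namespace):

```shell
# Pods governed by the headless service resolve as
#   <pod-name>.<service-name>.<namespace>.svc.cluster.local,
# e.g. cockroachdb-0.cockroachdb.default.svc.cluster.local.
kubectl run -it --rm dns-check --image=busybox --restart=Never -- \
  nslookup cockroachdb-0.cockroachdb
```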
6 changes: 3 additions & 3 deletions examples/cockroachdb/minikube.sh
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-# Run the CockroachDB PetSet example on a minikube instance.
+# Run the CockroachDB StatefulSet example on a minikube instance.
#
# For a fresh start, run the following first:
# minikube delete
@@ -29,7 +29,7 @@
set -exuo pipefail

# Clean up anything from a prior run:
-kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
+kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb

# Make persistent volumes and (correctly named) claims. We must create the
# claims here manually even though that sounds counter-intuitive. For details
@@ -69,4 +69,4 @@ spec:
EOF
done;

-kubectl create -f cockroachdb-petset.yaml
+kubectl create -f cockroachdb-statefulset.yaml
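Once the script has run, a quick sanity check that the manually created volumes bound and the pods came up could look like this (a sketch, relying on the `app=cockroachdb` label used throughout the example):

```shell
# All resources in this example carry the app=cockroachdb label.
kubectl get persistentvolumes,persistentvolumeclaims -l app=cockroachdb
kubectl get pods -l app=cockroachdb -o wide
```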
8 changes: 4 additions & 4 deletions examples/examples_test.go
@@ -218,10 +218,10 @@ func TestExampleObjectSchemas(t *testing.T) {
"rbd-with-secret": &api.Pod{},
},
"../examples/storage/cassandra": {
-"cassandra-daemonset": &extensions.DaemonSet{},
-"cassandra-controller": &api.ReplicationController{},
-"cassandra-service": &api.Service{},
-"cassandra-petset": &apps.StatefulSet{},
+"cassandra-daemonset": &extensions.DaemonSet{},
+"cassandra-controller": &api.ReplicationController{},
+"cassandra-service": &api.Service{},
+"cassandra-statefulset": &apps.StatefulSet{},
},
"../examples/cluster-dns": {
"dns-backend-rc": &api.ReplicationController{},
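To confirm the renamed manifest keys are still picked up, this schema test can be run from the repository root (a sketch; the package path is an assumption):

```shell
# Run only the example schema test; assumes the examples package builds locally.
go test ./examples/ -run TestExampleObjectSchemas
```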