
Release 1.0.6 #13841

Merged
2 commits merged on Sep 10, 2015
508 changes: 254 additions & 254 deletions api/swagger-spec/v1.json

Large diffs are not rendered by default.

104 changes: 52 additions & 52 deletions api/swagger-spec/v1beta3.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/README.md
@@ -3,7 +3,7 @@

<!-- END MUNGE: UNVERSIONED_WARNING -->

-# Kubernetes Documentation: releases.k8s.io/v1.0.5
+# Kubernetes Documentation: releases.k8s.io/v1.0.6

* The [User's guide](user-guide/README.md) is for anyone who wants to run programs and
services on an existing Kubernetes cluster.
2 changes: 1 addition & 1 deletion docs/admin/authorization.md
@@ -85,7 +85,7 @@ To permit an action Policy with an unset namespace applies regardless of namespace
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`

-[Complete file example](http://releases.k8s.io/v1.0.5/pkg/auth/authorizer/abac/example_policy_file.jsonl)
+[Complete file example](http://releases.k8s.io/v1.0.6/pkg/auth/authorizer/abac/example_policy_file.jsonl)

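As a purely illustrative aid (not part of this diff), the matching behavior behind example lines like these can be sketched in a few lines of Go. The `policy` and `attributes` types and the `matches` function below are invented for the illustration and are not the real ABAC authorizer API; the only behavior taken from the text above is that an unset namespace applies regardless of namespace and that a `readonly` policy cannot authorize writes. A request is permitted if any policy line matches it.

```go
// Hypothetical sketch of ABAC-style policy matching; field names mirror the
// JSON policy lines shown above, but this is not the actual Kubernetes code.
package main

import "fmt"

// policy mirrors one line of the example policy file.
type policy struct {
    User     string // "" matches any user
    Resource string // "" matches any resource
    NS       string // "" (unset) applies regardless of namespace
    Readonly bool   // true permits only read actions
}

// attributes describe an incoming request.
type attributes struct {
    User      string
    Resource  string
    Namespace string
    ReadOnly  bool
}

// matches reports whether a single policy line permits the request.
func (p policy) matches(a attributes) bool {
    if p.User != "" && p.User != a.User {
        return false
    }
    if p.Resource != "" && p.Resource != a.Resource {
        return false
    }
    if p.NS != "" && p.NS != a.Namespace {
        return false // an unset NS would have matched any namespace
    }
    if p.Readonly && !a.ReadOnly {
        return false // a readonly policy cannot authorize writes
    }
    return true
}

func main() {
    policies := []policy{
        {User: "kubelet", Resource: "pods", Readonly: true},
        {User: "bob", Resource: "pods", Readonly: true, NS: "projectCaribou"},
    }
    req := attributes{User: "bob", Resource: "pods", Namespace: "projectCaribou", ReadOnly: true}
    allowed := false
    for _, p := range policies {
        if p.matches(req) {
            allowed = true
            break
        }
    }
    fmt.Println("allowed:", allowed) // prints "allowed: true"
}
```
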
## Plugin Development

12 changes: 6 additions & 6 deletions docs/admin/cluster-components.md
@@ -69,17 +69,17 @@ selects a node for them to run on.
Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services does run on the master VM. See:
-[kube-master-addons](http://releases.k8s.io/v1.0.5/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
+[kube-master-addons](http://releases.k8s.io/v1.0.6/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)

Addon objects are created in the "kube-system" namespace.

Example addons are:
-* [DNS](http://releases.k8s.io/v1.0.5/cluster/addons/dns/) provides cluster local DNS.
-* [kube-ui](http://releases.k8s.io/v1.0.5/cluster/addons/kube-ui/) provides a graphical UI for the
+* [DNS](http://releases.k8s.io/v1.0.6/cluster/addons/dns/) provides cluster local DNS.
+* [kube-ui](http://releases.k8s.io/v1.0.6/cluster/addons/kube-ui/) provides a graphical UI for the
cluster.
-* [fluentd-elasticsearch](http://releases.k8s.io/v1.0.5/cluster/addons/fluentd-elasticsearch/) provides
-log storage. Also see the [gcp version](http://releases.k8s.io/v1.0.5/cluster/addons/fluentd-gcp/).
-* [cluster-monitoring](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/) provides
+* [fluentd-elasticsearch](http://releases.k8s.io/v1.0.6/cluster/addons/fluentd-elasticsearch/) provides
+log storage. Also see the [gcp version](http://releases.k8s.io/v1.0.6/cluster/addons/fluentd-gcp/).
+* [cluster-monitoring](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/) provides
monitoring for the cluster.

## Node components
16 changes: 8 additions & 8 deletions docs/admin/cluster-large.md
@@ -13,7 +13,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and

A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).

-Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/v1.0.5/cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/v1.0.6/cluster/gce/config-default.sh)).

Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.

@@ -54,15 +54,15 @@ These limits, however, are based on data collected from addons running on 4-node

To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
-* Heapster ([GCM/GCL backed](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
-* [InfluxDB and Grafana](http://releases.k8s.io/v1.0.5/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
-* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/v1.0.5/cluster/addons/dns/skydns-rc.yaml.in)
-* [Kibana](http://releases.k8s.io/v1.0.5/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
+* Heapster ([GCM/GCL backed](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
+* [InfluxDB and Grafana](http://releases.k8s.io/v1.0.6/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/v1.0.6/cluster/addons/dns/skydns-rc.yaml.in)
+* [Kibana](http://releases.k8s.io/v1.0.6/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
-* [elasticsearch](http://releases.k8s.io/v1.0.5/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
+* [elasticsearch](http://releases.k8s.io/v1.0.6/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* Increase memory and CPU limits sligthly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
-* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/v1.0.5/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
-* [FluentD with GCP Plugin](http://releases.k8s.io/v1.0.5/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
+* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/v1.0.6/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
+* [FluentD with GCP Plugin](http://releases.k8s.io/v1.0.6/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)

For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.md#troubleshooting).

4 changes: 2 additions & 2 deletions docs/admin/dns.md
@@ -5,7 +5,7 @@

# DNS Integration with Kubernetes

-As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/v1.0.5/cluster/addons/README.md).
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/v1.0.6/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.

@@ -40,7 +40,7 @@ time.

## For more information

-See [the docs for the DNS cluster addon](http://releases.k8s.io/v1.0.5/cluster/addons/dns/README.md).
+See [the docs for the DNS cluster addon](http://releases.k8s.io/v1.0.6/cluster/addons/dns/README.md).


<!-- BEGIN MUNGE: IS_VERSIONED -->
2 changes: 1 addition & 1 deletion docs/admin/etcd.md
@@ -27,7 +27,7 @@ to reduce downtime in case of corruption.
## Default configuration

The default setup scripts use kubelet's file-based static pods feature to run etcd in a
-[pod](http://releases.k8s.io/v1.0.5/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
+[pod](http://releases.k8s.io/v1.0.6/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
be run on master VMs. The default location that kubelet scans for manifests is
`/etc/kubernetes/manifests/`.

2 changes: 1 addition & 1 deletion docs/admin/high-availability.md
@@ -79,7 +79,7 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
-[kubelet init file](http://releases.k8s.io/v1.0.5/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
+[kubelet init file](http://releases.k8s.io/v1.0.6/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.

If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
2 changes: 1 addition & 1 deletion docs/admin/salt.md
@@ -101,7 +101,7 @@ We should define a grains.conf key that captures more specifically what network

## Further reading

-The [cluster/saltbase](http://releases.k8s.io/v1.0.5/cluster/saltbase/) tree has more details on the current SaltStack configuration.
+The [cluster/saltbase](http://releases.k8s.io/v1.0.6/cluster/saltbase/) tree has more details on the current SaltStack configuration.


<!-- BEGIN MUNGE: IS_VERSIONED -->
4 changes: 2 additions & 2 deletions docs/design/event_compression.md
@@ -20,7 +20,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst

## Design

-Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/v1.0.5/pkg/api/types.go#L1111) the following fields:
+Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/v1.0.6/pkg/api/types.go#L1111) the following fields:
* `FirstTimestamp util.Time`
* The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
@@ -44,7 +44,7 @@ Each binary that generates events:
* `event.Reason`
* `event.Message`
* The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
-* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](http://releases.k8s.io/v1.0.5/pkg/client/record/event.go)).
+* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](http://releases.k8s.io/v1.0.6/pkg/client/record/event.go)).
* If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
* The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
* The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
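
As a rough sketch of the compression scheme described above (and not the actual `pkg/client/record` code), the idea is that events are keyed on their identifying fields and duplicates only bump a count and `LastTimestamp` instead of creating a new object. The types below and the plain map standing in for the capped, 4096-entry LRU cache are assumptions made for this example.

```go
// Illustrative sketch of event compression: duplicate events update an
// existing entry rather than creating a new one. All names are invented.
package main

import (
    "fmt"
    "time"
)

type event struct {
    Source, Object, Reason, Message string
    FirstTimestamp, LastTimestamp   time.Time
    Count                           int
}

// key combines the fields that must all match for two events to be treated
// as duplicates (the real cache keys on more object fields than shown here).
func key(e event) string {
    return e.Source + "/" + e.Object + "/" + e.Reason + "/" + e.Message
}

// recordEvent either registers a new event or folds a duplicate into the
// existing entry. A real implementation would POST (create) or PUT (update)
// the event via the API server and evict old keys once the cache holds 4096
// entries; a plain map keeps this sketch short.
func recordEvent(cache map[string]*event, e event) *event {
    k := key(e)
    if prev, ok := cache[k]; ok {
        prev.Count++                    // bump the occurrence count
        prev.LastTimestamp = time.Now() // refresh the last-seen time
        return prev                     // would be a PUT (update)
    }
    e.FirstTimestamp, e.LastTimestamp, e.Count = time.Now(), time.Now(), 1
    cache[k] = &e
    return &e // would be a POST (create)
}

func main() {
    cache := map[string]*event{}
    dup := event{Source: "kubelet", Object: "pod/nginx", Reason: "pulled", Message: "image pulled"}
    recordEvent(cache, dup)
    e := recordEvent(cache, dup)
    fmt.Println(e.Count) // 2: both occurrences compressed into one entry
}
```
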
2 changes: 1 addition & 1 deletion docs/devel/cherry-picks.md
@@ -26,7 +26,7 @@ particular, they may be self-merged by the release branch owner without fanfare,
in the case the release branch owner knows the cherry pick was already
requested - this should not be the norm, but it may happen.

-[Contributor License Agreements](http://releases.k8s.io/v1.0.5/CONTRIBUTING.md) is considered implicit
+[Contributor License Agreements](http://releases.k8s.io/v1.0.6/CONTRIBUTING.md) is considered implicit
for all code within cherry-pick pull requests, ***unless there is a large
conflict***.

2 changes: 1 addition & 1 deletion docs/devel/client-libraries.md
@@ -7,7 +7,7 @@

### Supported

-* [Go](http://releases.k8s.io/v1.0.5/pkg/client/)
+* [Go](http://releases.k8s.io/v1.0.6/pkg/client/)

### User Contributed

4 changes: 2 additions & 2 deletions docs/devel/development.md
@@ -7,7 +7,7 @@

# Releases and Official Builds

-Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/v1.0.5/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.
+Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/v1.0.6/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.

## Go development environment

@@ -296,7 +296,7 @@ The conformance test runs a subset of the e2e-tests against a manually-created cluster
require support for up/push/down and other operations. To run a conformance test, you need to know the
IP of the master for your cluster and the authorization arguments to use. The conformance test is
intended to run against a cluster at a specific binary release of Kubernetes.
-See [conformance-test.sh](http://releases.k8s.io/v1.0.5/hack/conformance-test.sh).
+See [conformance-test.sh](http://releases.k8s.io/v1.0.6/hack/conformance-test.sh).

## Testing out flaky tests

2 changes: 1 addition & 1 deletion docs/devel/getting-builds.md
@@ -5,7 +5,7 @@

# Getting Kubernetes Builds

-You can use [hack/get-build.sh](http://releases.k8s.io/v1.0.5/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
+You can use [hack/get-build.sh](http://releases.k8s.io/v1.0.6/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).

```console
usage:
  ...
```
12 changes: 6 additions & 6 deletions docs/devel/scheduler.md
@@ -25,30 +25,30 @@ divided by the node's capacity).
Finally, the node with the highest priority is chosen
(or, if there are multiple such nodes, then one of them is chosen at random). The code
for this main scheduling loop is in the function `Schedule()` in
-[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.0.5/plugin/pkg/scheduler/generic_scheduler.go)
+[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.0.6/plugin/pkg/scheduler/generic_scheduler.go)

## Scheduler extensibility

The scheduler is extensible: the cluster administrator can choose which of the pre-defined
scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
-defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.0.5/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
-[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.0.5/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
+defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.0.6/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
+[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.0.6/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
The policies that are applied when scheduling can be chosen in one of two ways. Normally,
the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
-[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.0.5/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.0.6/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
However, the choice of policies
can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON
file specifying which scheduling policies to use. See
[examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example
config file. (Note that the config file format is versioned; the API is defined in
-[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.0.5/plugin/pkg/scheduler/api/)).
+[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.0.6/plugin/pkg/scheduler/api/)).
Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,
and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
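
As a toy illustration of the predicate/priority split described above, the sketch below filters nodes with predicates, scores the survivors with priorities, and picks the highest-scoring node. The function and type names are invented for this example and do not match the scheduler's real interfaces.

```go
// Minimal sketch of "filter with predicates, rank with priorities".
package main

import "fmt"

type node struct {
    Name                  string
    FreeCPU, RequestedCPU int
}

type predicate func(podCPU int, n node) bool
type priority func(n node) int

// fitsResources is a stand-in for a PodFitsResources-style predicate.
func fitsResources(podCPU int, n node) bool { return n.FreeCPU >= podCPU }

// leastRequested echoes the "least requested fraction" idea: nodes with more
// free capacity relative to their total score higher.
func leastRequested(n node) int {
    total := n.FreeCPU + n.RequestedCPU
    if total == 0 {
        return 0
    }
    return 100 * n.FreeCPU / total
}

// schedule returns the feasible node with the highest summed priority score.
func schedule(podCPU int, nodes []node, preds []predicate, prios []priority) (string, bool) {
    best, bestScore, found := "", -1, false
    for _, n := range nodes {
        feasible := true
        for _, p := range preds {
            if !p(podCPU, n) {
                feasible = false
                break
            }
        }
        if !feasible {
            continue // filtered out by a predicate
        }
        score := 0
        for _, pr := range prios {
            score += pr(n)
        }
        if score > bestScore {
            best, bestScore, found = n.Name, score, true
        }
    }
    return best, found
}

func main() {
    nodes := []node{{"node-a", 2, 6}, {"node-b", 4, 2}, {"node-c", 1, 1}}
    name, ok := schedule(2, nodes, []predicate{fitsResources}, []priority{leastRequested})
    fmt.Println(name, ok) // node-b true: it fits and has the most free capacity
}
```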

## Exploring the code

If you want to get a global picture of how the scheduler works, you can start in
-[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.0.5/plugin/cmd/kube-scheduler/app/server.go)
+[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.0.6/plugin/cmd/kube-scheduler/app/server.go)


<!-- BEGIN MUNGE: IS_VERSIONED -->