

vSphere Volume

Prerequisites

Examples

Volumes

  1. Create VMDK.

    First, SSH into an ESX host and then use the following command to create a VMDK:

    vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
  2. Create a Pod which uses 'myDisk.vmdk'.

    See example:

       apiVersion: v1
       kind: Pod
       metadata:
         name: test-vmdk
       spec:
         containers:
         - image: k8s.gcr.io/test-webserver
           name: test-container
           volumeMounts:
           - mountPath: /test-vmdk
             name: test-volume
         volumes:
         - name: test-volume
           # This VMDK volume must already exist.
           vsphereVolume:
             volumePath: "[datastore1] volumes/myDisk"
             fsType: ext4

    Download example

    Creating the pod:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pod.yaml

    Verify that pod is running:

    $ kubectl get pods test-vmdk
    NAME      READY     STATUS    RESTARTS   AGE
    test-vmdk   1/1     Running   0          48m

Persistent Volumes

  1. Create VMDK.

    First, SSH into an ESX host and then use the following command to create a VMDK:

    vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
  2. Create Persistent Volume.

    See example:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[datastore1] volumes/myDisk"
        fsType: ext4

    In the above example, datastore1 is located in the root folder. If the datastore is a member of a datastore cluster or located in a sub-folder, the folder path needs to be provided in the volumePath, as below (a complete sketch appears at the end of this step).

    vsphereVolume:
        volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"

    Download example

    Creating the persistent volume:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pv.yaml

    Verifying persistent volume is created:

    $ kubectl describe pv pv0001
    Name:		pv0001
    Labels:		<none>
    Status:		Available
    Claim:
    Reclaim Policy:	Retain
    Access Modes:	RWO
    Capacity:	2Gi
    Message:
    Source:
        Type:	vSphereVolume (a Persistent Disk resource in vSphere)
        VolumePath:	[datastore1] volumes/myDisk
        FSType:	ext4
    No events.
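
    For reference, a minimal complete persistent volume sketch using the datastore-cluster path described above. The PV name pv0001-cluster is illustrative; DatastoreCluster and datastore1 are the names from the fragment in this step.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001-cluster
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        # volumePath includes the datastore cluster as a folder prefix.
        volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"
        fsType: ext4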
  3. Create Persistent Volume Claim.

    See example:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc0001
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

    Download example

    Creating the persistent volume claim:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvc.yaml

    Verifying persistent volume claim is created:

    $ kubectl describe pvc pvc0001
    Name:		pvc0001
    Namespace:	default
    Status:		Bound
    Volume:		pv0001
    Labels:		<none>
    Capacity:	2Gi
    Access Modes:	RWO
    No events.
  4. Create Pod which uses Persistent Volume Claim.

    See example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pvpod
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test-vmdk
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: pvc0001

    Download example

    Creating the pod:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcpod.yaml

    Verifying pod is created:

    $ kubectl get pod pvpod
    NAME      READY     STATUS    RESTARTS   AGE
    pvpod       1/1     Running   0          48m

Storage Class

Note: Here you don't need to create a VMDK; it is created for you.

  1. Create Storage Class.

    Example 1:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        fstype:     ext3

    Download example

    You can also specify the datastore in the StorageClass, as shown in example 2. The volume will be created on the datastore specified in the storage class. This field is optional. If not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.

    Example 2:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        datastore: VSANDatastore

    If the datastore is a member of a datastore cluster or within a sub-folder, the datastore folder path needs to be provided in the datastore parameter, as below (a complete sketch appears at the end of this step).

    parameters:
       datastore:	DatastoreCluster/VSANDatastore

    Download example

    Creating the storageclass:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-fast.yaml

    Verifying storage class is created:

    $ kubectl describe storageclass fast
    Name:           fast
    IsDefaultClass: No
    Annotations:    <none>
    Provisioner:    kubernetes.io/vsphere-volume
    Parameters:     diskformat=zeroedthick,fstype=ext3
    No events.
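
    For reference, a minimal complete StorageClass sketch using the datastore-cluster path described above. The class name fast-cluster is illustrative; DatastoreCluster and VSANDatastore are the names from the fragment in this step.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast-cluster
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        # The datastore parameter includes the datastore cluster as a folder prefix.
        datastore: DatastoreCluster/VSANDatastore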
  2. Create Persistent Volume Claim.

    See example:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvcsc001
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

    Download example

    Creating the persistent volume claim:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml

    Verifying persistent volume claim is created:

    $ kubectl describe pvc pvcsc001
    Name:           pvcsc001
    Namespace:      default
    StorageClass:   fast
    Status:         Bound
    Volume:         pvc-83295256-f8e0-11e6-8263-005056b2349c
    Labels:         <none>
    Capacity:       2Gi
    Access Modes:   RWO
    Events:
      FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                  Message
      ---------     --------        -----   ----                            -------------   --------        ------                  -------
      1m            1m              1       persistentvolume-controller                     Normal          ProvisioningSucceeded   Successfully provisioned volume pvc-83295256-f8e0-11e6-8263-005056b2349c using kubernetes.io/vsphere-volume
    

    A Persistent Volume is automatically created and is bound to this PVC.

    Verifying persistent volume is created:

    $ kubectl describe pv pvc-83295256-f8e0-11e6-8263-005056b2349c
    Name:           pvc-83295256-f8e0-11e6-8263-005056b2349c
    Labels:         <none>
    StorageClass:   fast
    Status:         Bound
    Claim:          default/pvcsc001
    Reclaim Policy: Delete
    Access Modes:   RWO
    Capacity:       2Gi
    Message:
    Source:
        Type:       vSphereVolume (a Persistent Disk resource in vSphere)
        VolumePath: [datastore1] kubevols/kubernetes-dynamic-pvc-83295256-f8e0-11e6-8263-005056b2349c.vmdk
        FSType:     ext3
    No events.

    Note: The VMDK is created inside the kubevols folder in the datastore which is mentioned in the 'vsphere' cloudprovider configuration. The cloudprovider config is created during setup of the Kubernetes cluster on vSphere.

  3. Create Pod which uses Persistent Volume Claim with storage class.

    See example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pvpod
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test-vmdk
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: pvcsc001

    Download example

    Creating the pod:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml

    Verifying pod is created:

    $ kubectl get pod pvpod
    NAME      READY     STATUS    RESTARTS   AGE
    pvpod       1/1     Running   0          48m

Storage Policy Management inside Kubernetes

Using existing vCenter SPBM policy

Admins can use an existing vCenter Storage Policy Based Management (SPBM) policy to configure a persistent volume with that SPBM policy. Note: Here you don't need to create a persistent volume; it is created for you.

  1. Create Storage Class.

    Example 1:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        storagePolicyName: gold

    Download example

    The admin specifies the SPBM policy "gold" as part of the storage class definition for dynamic volume provisioning. When a PVC is created, the persistent volume will be provisioned on a compatible datastore with the maximum free space that satisfies the "gold" storage policy requirements.

    Example 2:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        storagePolicyName: gold
        datastore: VSANDatastore

    Download example

    The admin can also specify a custom datastore, along with the SPBM policy name, where the volume should be provisioned. When a PVC is created, the vSphere Cloud Provider checks whether the user-specified datastore satisfies the "gold" storage policy requirements. If it does, the persistent volume is provisioned on that datastore. If not, an error is returned to the user stating that the specified datastore is not compatible with the "gold" storage policy requirements.
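
    To consume either of these SPBM-backed classes, a user creates a persistent volume claim that references the class by name. A minimal sketch, assuming the class is named fast as in the examples above (the claim name pvcsc-spbm is illustrative) and following the annotation style used by the other claims in this document:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvcsc-spbm
      annotations:
        # Request dynamic provisioning from the SPBM-backed class defined above.
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

    As with the other storage class examples, the bound persistent volume can then be inspected with kubectl describe pv once the claim is provisioned.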

Virtual SAN policy support inside Kubernetes

vSphere Infrastructure (VI) admins have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

The official VSAN policy documentation describes in detail each of the individual storage capabilities that are supported by VSAN. The user can specify these storage capabilities as part of the storage class definition based on the application's needs.

The policy settings can be one or more of the following:

  • hostFailuresToTolerate: represents NumberOfFailuresToTolerate
  • diskStripes: represents NumberofDiskStripesPerObject
  • objectSpaceReservation: represents ObjectSpaceReservation
  • cacheReservation: represents FlashReadCacheReservation
  • iopsLimit: represents IOPSLimitForObject
  • forceProvisioning: represents if volume must be Force Provisioned

Note: Here you don't need to create a persistent volume; it is created for you.

  1. Create Storage Class.

    Example 1:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        hostFailuresToTolerate: "2"
        cachereservation: "20"

    Download example

    Here a persistent volume will be created with the Virtual SAN capabilities hostFailuresToTolerate set to 2 and cachereservation set to 20% read cache reserved for the storage object. The persistent volume will also be a zeroedthick disk. The official VSAN policy documentation describes in detail each of the individual storage capabilities that are supported by VSAN and can be configured on the virtual disk.

    You can also specify the datastore in the StorageClass, as shown in example 2. The volume will be created on the datastore specified in the storage class. This field is optional. If not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.

    Example 2:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: zeroedthick
        datastore: VSANDatastore
        hostFailuresToTolerate: "2"
        cachereservation: "20"

    Download example

    Note: If you do not apply a storage policy during dynamic provisioning on a VSAN datastore, it will use a default Virtual SAN policy.

    Creating the storageclass:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-vsancapabilities.yaml

    Verifying storage class is created:

    $ kubectl describe storageclass fast
    Name:		fast
    Annotations:	<none>
    Provisioner:	kubernetes.io/vsphere-volume
    Parameters:	diskformat=zeroedthick, hostFailuresToTolerate="2", cachereservation="20"
    No events.
  2. Create Persistent Volume Claim.

    See example:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvcsc-vsan
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

    Download example

    Creating the persistent volume claim:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml

    Verifying persistent volume claim is created:

    $ kubectl describe pvc pvcsc-vsan
    Name:		pvcsc-vsan
    Namespace:	default
    Status:		Bound
    Volume:		pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
    Labels:		<none>
    Capacity:	2Gi
    Access Modes:	RWO
    No events.

    A Persistent Volume is automatically created and is bound to this PVC.

    Verifying persistent volume is created:

    $ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
    Name:		pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
    Labels:		<none>
    Status:		Bound
    Claim:		default/pvcsc-vsan
    Reclaim Policy:	Delete
    Access Modes:	RWO
    Capacity:	2Gi
    Message:
    Source:
        Type:	vSphereVolume (a Persistent Disk resource in vSphere)
        VolumePath:	[VSANDatastore] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
        FSType:	ext4
    No events.

    Note: The VMDK is created inside the kubevols folder in the datastore which is mentioned in the 'vsphere' cloudprovider configuration. The cloudprovider config is created during setup of the Kubernetes cluster on vSphere.

  3. Create Pod which uses Persistent Volume Claim with storage class.

    See example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pvpod
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: pvcsc-vsan

    Download example

    Creating the pod:

    $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscvsanpod.yaml

    Verifying pod is created:

    $ kubectl get pod pvpod
    NAME      READY     STATUS    RESTARTS   AGE
    pvpod       1/1     Running   0          48m

Stateful Set

vSphere volumes can be consumed by Stateful Sets.

  1. Create a storage class that will be used by the volumeClaimTemplates of a Stateful Set.

    See example:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: thin-disk
    provisioner: kubernetes.io/vsphere-volume
    parameters:
        diskformat: thin

    Download example

  2. Create a Stateful Set that consumes storage from the storage class created above.

    See example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None
      selector:
        app: nginx
    ---
    #  for k8s versions before 1.9.0 use apps/v1beta2  and before 1.8.0 use extensions/v1beta1
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      serviceName: "nginx"
      selector:
        matchLabels:
          app: nginx
      replicas: 14
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: k8s.gcr.io/nginx-slim:0.8
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: www
          annotations:
            volume.beta.kubernetes.io/storage-class: thin-disk
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 1Gi

    This will create a Persistent Volume Claim for each replica and provision a volume for each claim if an existing volume cannot be bound to the claim. The resulting claims can be verified as shown below.

    Download example
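
    The claims created from volumeClaimTemplates follow the <claim-template-name>-<pod-name> naming convention, so the claim for the first replica above is www-web-0. A short verification sketch:

    # List all claims in the namespace, then inspect the one for the first replica.
    $ kubectl get pvc
    $ kubectl describe pvc www-web-0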
