Dell EMC ScaleIO Volume Plugin for Kubernetes

This document shows how to configure Kubernetes resources to consume storage from volumes hosted on a ScaleIO cluster.

Pre-Requisites

  • Kubernetes ver 1.6 or later
  • ScaleIO ver 2.0 or later
  • A ScaleIO cluster with an API gateway
  • ScaleIO SDC binary installed/configured on each Kubernetes node that will consume storage

ScaleIO Setup

This document assumes you are familiar with ScaleIO and have a cluster ready to go. If you are not familiar with ScaleIO, please review Learn how to set up a 3-node ScaleIO cluster on Vagrant and see the general instructions on setting up ScaleIO.

For this demonstration, ensure the following:

  • The ScaleIO SDC component is installed and properly configured on all Kubernetes nodes where deployed pods will consume ScaleIO-backed storage.
  • You have a configured ScaleIO gateway that is accessible from the Kubernetes nodes.

Deploy Kubernetes Secret for ScaleIO

The ScaleIO plugin uses a Kubernetes Secret object to store the username and password credentials. Kubernetes requires the secret values to be base64-encoded, which simply obfuscates (does not encrypt) the clear text, as shown below.

$> echo -n "siouser" | base64
c2lvdXNlcg==
$> echo -n "sc@l3I0" | base64
c2NAbDNJMA==
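
To double-check an encoded value, you can decode it again (a minimal sketch; -d is the GNU coreutils decode flag, some platforms use --decode or -D instead):

$> echo -n "c2lvdXNlcg==" | base64 -d
siouser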

The previous commands generate base64-encoded values for the username and password.
Remember to generate the credentials for your own environment and copy them into a secret file similar to the following.

File: secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: sio-secret
type: kubernetes.io/scaleio
data:
  username: c2lvdXNlcg==
  password: c2NAbDNJMA==

Notice that the name of the secret specified above is sio-secret. It will be referenced in other YAML configuration files later. Next, deploy the secret.

$> kubectl create -f ./examples/volumes/scaleio/secret.yaml
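
To confirm that the secret was created, you can list and describe it (describe shows only the sizes of the data fields, not the values):

$> kubectl get secret sio-secret
$> kubectl describe secret sio-secret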

Read more about Kubernetes secrets here.
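
Alternatively, if you prefer not to base64-encode the values by hand, the same secret can be created imperatively (a sketch, assuming your kubectl version supports the --type flag for generic secrets):

$> kubectl create secret generic sio-secret --type="kubernetes.io/scaleio" \
     --from-literal=username=siouser --from-literal=password='sc@l3I0'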

Deploying Pods with Persistent Volumes

The example presented in this section shows how the ScaleIO volume plugin can automatically attach, format, and mount an existing ScaleIO volume for a pod. The Kubernetes ScaleIO volume spec supports the following attributes:

Attribute          Description
gateway            address to a ScaleIO API gateway (required)
system             the name of the ScaleIO system (required)
protectionDomain   the name of the ScaleIO protection domain (required)
storagePool        the name of the volume storage pool (required)
storageMode        the storage provision mode: ThinProvisioned (default) or ThickProvisioned
volumeName         the name of an existing volume in ScaleIO (required)
secretRef:name     references the name of a Secret object (required)
readOnly           specifies the access mode to the mounted volume (default false)
fsType             the file system to use for the volume (default ext4)

Create Volume

When using static persistent volumes, the volume to be consumed by the pod must already exist in ScaleIO. For this demo, we assume there is an existing ScaleIO volume named vol-0, which is reflected in the volumeName: attribute of the configuration below.

Deploy Pod YAML

Create a pod YAML file that declares the volume (above) to be used.

File: pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: pod-0
    volumeMounts:
    - mountPath: /test-pd
      name: vol-0
  volumes:
  - name: vol-0
    scaleIO:
      gateway: https://localhost:443/api
      system: scaleio
      protectionDomain: pd01
      storagePool: sp01
      volumeName: vol-0
      secretRef:
        name: sio-secret
      fsType: xfs

Remember to change the ScaleIO attributes above to reflect those of your own environment.

Next, deploy the pod.

$> kubectl create -f examples/volumes/scaleio/pod.yaml

You can verify the pod:

$> kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
pod-0     1/1       Running   0          33s

Or, for more detail, use:

$> kubectl describe pod pod-0

You can see the attached/mapped volume on the node:

$> lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
scinia      252:0    0    8G  0 disk /var/lib/kubelet/pods/135986c7-dcb7-11e6-9fbf-080027c990a7/volumes/kubernetes.io~scaleio/vol-0
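
You can also confirm the mount itself on the node (the exact device name and pod UID in the path will differ in your environment):

$> mount | grep 'kubernetes.io~scaleio'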

StorageClass and Dynamic Provisioning

The ScaleIO volume plugin can also dynamically provision storage to a Kubernetes cluster. The ScaleIO dynamic provisioner plugin can be used with a StorageClass and is identified as kubernetes.io/scaleio.

ScaleIO StorageClass

The ScaleIO dynamic provisioning plugin supports the following StorageClass parameters:

Parameter          Description
gateway            address to a ScaleIO API gateway (required)
system             the name of the ScaleIO system (required)
protectionDomain   the name of the ScaleIO protection domain (required)
storagePool        the name of the volume storage pool (required)
storageMode        the storage provision mode: ThinProvisioned (default) or ThickProvisioned
secretRef          reference to the name of a configured Secret object (required)
readOnly           specifies the access mode to the mounted volume (default false)
fsType             the file system to use for the volume (default ext4)

The following shows an example of ScaleIO StorageClass configuration YAML:

File: sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sio-small
provisioner: kubernetes.io/scaleio
parameters:
  gateway: https://localhost:443/api
  system: scaleio
  protectionDomain: pd01
  storagePool: sp01
  secretRef: sio-secret
  fsType: xfs

Note that the metadata:name attribute of the StorageClass is set to sio-small; it will be referenced later. Again, remember to update the other parameters to reflect your environment setup.

Next, deploy the storage class file.

$> kubectl create -f examples/volumes/scaleio/sc.yaml

$> kubectl get sc
NAME        TYPE
sio-small   kubernetes.io/scaleio
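
For more detail, including the provisioner and the parameters configured above, you can describe the StorageClass:

$> kubectl describe storageclass sio-small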

PVC for the StorageClass

The next step is to define/deploy a PersistentVolumeClaim that will use the StorageClass.

File: sc-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sio-small
spec:
  storageClassName: sio-small
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Note the spec:storageClassName entry, which specifies the name of the previously defined StorageClass, sio-small.

Next, deploy the PVC file. This step will cause the Kubernetes ScaleIO plugin to create the volume in the storage system.

$> kubectl create -f examples/volumes/scaleio/sc-pvc.yaml

You can verify in the ScaleIO dashboard that a new volume was created. You can also verify the newly created volume as follows.

$> kubectl get pvc
NAME            STATUS    VOLUME                CAPACITY   ACCESSMODES   AGE
pvc-sio-small   Bound     k8svol-5fc78518dcae   10Gi       RWO           1h
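
Because the claim is bound, a PersistentVolume has been dynamically provisioned behind it. You can list it as well (the generated volume name, k8svol-... above, will differ in your environment):

$> kubectl get pv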

Pod for PVC and SC

At this point, the volume is created (by the claim) in the storage system. To use it, we must define a pod that references the volume, as done in this YAML.

File: pod-sc-pvc.yaml

kind: Pod
apiVersion: v1
metadata:
  name: pod-sio-small
spec:
  containers:
    - name: pod-sio-small-container
      image: k8s.gcr.io/test-webserver
      volumeMounts:
      - mountPath: /test
        name: test-data
  volumes:
    - name: test-data
      persistentVolumeClaim:
        claimName: pvc-sio-small

Notice that the claimName: attribute refers to the name of the PVC, pvc-sio-small, defined and deployed earlier. Next, let us deploy the file.

$> kubectl create -f examples/volumes/scaleio/pod-sc-pvc.yaml

We can now verify that the new pod is deployed and running.

$> kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
pod-0           1/1       Running   0          23m
pod-sio-small   1/1       Running   0          5s
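
When you are done with the demo, you can clean up the resources created above (a minimal sketch; note that deleting the PVC also removes the dynamically provisioned ScaleIO volume under the default Delete reclaim policy):

$> kubectl delete -f examples/volumes/scaleio/pod-sc-pvc.yaml
$> kubectl delete -f examples/volumes/scaleio/sc-pvc.yaml
$> kubectl delete -f examples/volumes/scaleio/sc.yaml
$> kubectl delete -f examples/volumes/scaleio/pod.yaml
$> kubectl delete -f examples/volumes/scaleio/secret.yaml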
