docs: change 'ZFS-LocalPV' to 'LocalPV ZFS' (#524)
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
niladrih committed Apr 11, 2024
1 parent ed933c9 commit d99b0a0
Showing 17 changed files with 88 additions and 88 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug-report.md
@@ -25,7 +25,7 @@ labels: Bug


**Environment:**
- ZFS-LocalPV version
- LocalPV-ZFS version
- Kubernetes version (use `kubectl version`):
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/feature-request.md
@@ -18,7 +18,7 @@ labels: Enhancement


**Environment:**
- ZFS-LocalPV version
- LocalPV-ZFS version
- Kubernetes version (use `kubectl version`):
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
8 changes: 4 additions & 4 deletions Adopters.md
@@ -1,8 +1,8 @@
# OpenEBS ZFS-LocalPV Adopters
# OpenEBS LocalPV-ZFS Adopters

This is the list of organizations and users that publicly shared details of how they are using OpenEBS ZFS-LocalPV CSI driver for running their Stateful workloads. Please send PRs to add or remove organizations/users.
This is the list of organizations and users that publicly shared details of how they are using OpenEBS LocalPV-ZFS CSI driver for running their Stateful workloads. Please send PRs to add or remove organizations/users.

The list of organizations that have publicly shared the usage of ZFS-LocalPV.
The list of organizations that have publicly shared the usage of LocalPV-ZFS.


| Organization | Stateful Workloads | Success Story |
@@ -13,7 +13,7 @@ The list of organizations that have publicly shared the usage of ZFS-LocalPV.



The list of users that have publicly shared the usage of ZFS-LocalPV.
The list of users that have publicly shared the usage of LocalPV-ZFS.


| User | Stateful Workloads | Success Story |
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -9,7 +9,7 @@ ZFS LocalPV uses the standard GitHub pull requests process to review and accept

## Steps to Contribute

ZFS-LocalPV is an Apache 2.0 Licensed project and all your commits should be signed with Developer Certificate of Origin. See [Sign your work](#sign-your-work).
LocalPV-ZFS is an Apache 2.0 Licensed project and all your commits should be signed with Developer Certificate of Origin. See [Sign your work](#sign-your-work).

* Find an issue to work on or create a new issue. The issues are maintained at [zfs-localpv/issues](https://github.com/openebs/zfs-localpv/issues). You can pick up from a list of [good-first-issues](https://github.com/openebs/zfs-localpv/labels/good%20first%20issue).
* Claim your issue by commenting your intent to work on it to avoid duplication of efforts.
30 changes: 15 additions & 15 deletions README.md
@@ -1,4 +1,4 @@
## OpenEBS - ZFS-LocalPV CSI Driver
## OpenEBS - LocalPV-ZFS CSI Driver
[![Build Status](https://github.com/openebs/zfs-localpv/actions/workflows/build.yml/badge.svg)](https://github.com/openebs/zfs-localpv/actions/workflows/build.yml)
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fopenebs%2Fzfs-localpv.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Fopenebs%2Fzfs-localpv?ref=badge_shield)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3523/badge)](https://bestpractices.coreinfrastructure.org/en/projects/3523)
@@ -7,30 +7,30 @@
[![Go Report](https://goreportcard.com/badge/github.com/openebs/zfs-localpv)](https://goreportcard.com/report/github.com/openebs/zfs-localpv)


| [![opezfs](https://github.com/openebs/website/blob/main/website/public/images/svg/openzfs_logo_2024.svg.png "OpenZFS")](https://github.com/openebs/website/blob/main/website/public/images/svg/openzfs_logo_2024.svg.png) | The OpenEBS ZFS-LocalPV Data-Engine is a heavily deployed production grade CSI driver for dynamically provisioning Node Local Volumes into a K8s cluster utilizing the OpenZFS storage ZPool Data Mgmt stack as the storage backend. It integrates OpenZFS into the OpenEBS platform and exposes many ZFS services and capabilities. |
| [![opezfs](https://github.com/openebs/website/blob/main/website/public/images/svg/openzfs_logo_2024.svg.png "OpenZFS")](https://github.com/openebs/website/blob/main/website/public/images/svg/openzfs_logo_2024.svg.png) | The OpenEBS LocalPV-ZFS Data-Engine is a heavily deployed production grade CSI driver for dynamically provisioning Node Local Volumes into a K8s cluster utilizing the OpenZFS storage ZPool Data Mgmt stack as the storage backend. It integrates OpenZFS into the OpenEBS platform and exposes many ZFS services and capabilities. |
| :--- | :--- |
<BR>



## Overview

The ZFS-LocalPV Data-Engine became GA on Dec 2020 and is now a core component of the OpenEBS storage platform.
Due to the major adoption of ZFS-LocalPV (+120,000 users), this Data-Engine is now being unified and integrated into the core OpenEBS Storage platform; instead of being maintained as an external Data-Engine within our project.<BR>
The LocalPV-ZFS Data-Engine became GA on Dec 2020 and is now a core component of the OpenEBS storage platform.
Due to the major adoption of LocalPV-ZFS (+120,000 users), this Data-Engine is now being unified and integrated into the core OpenEBS Storage platform; instead of being maintained as an external Data-Engine within our project.<BR>

Our [2024 Roadmap is here](https://github.com/openebs/openebs/blob/main/ROADMAP.md). It defines a rich set of new features, which covers the integration of ZFS-LocalPV into the core OpenEBS platform.<br>
Please review this roadmap and feel free to pass back any feedback on it, as well as recommend and suggest new ideas regarding ZFS-LocalPV. We welcome all your feedback.
Our [2024 Roadmap is here](https://github.com/openebs/openebs/blob/main/ROADMAP.md). It defines a rich set of new features, which covers the integration of LocalPV-ZFS into the core OpenEBS platform.<br>
Please review this roadmap and feel free to pass back any feedback on it, as well as recommend and suggest new ideas regarding LocalPV-ZFS. We welcome all your feedback.
<br>

<BR>

> **ZFS-LocalPV is very popular** : Live OpenEBS systems actively report back product metrics every day to our Global Analytics metrics engine (unless disabled by the user).
> **LocalPV-ZFS is very popular** : Live OpenEBS systems actively report back product metrics every day to our Global Analytics metrics engine (unless disabled by the user).
> Here are our key project popularity metrics as of: 01 Mar 2024 <BR>
>
> :rocket: &nbsp; OpenEBS is the #1 deployed Storage Platform for Kubernetes <BR>
> :zap: &nbsp; ZFS-LocalPV is the 2nd most deployed Data-Engine within the platform <BR>
> :sunglasses: &nbsp; ZFS-LocalPV has +120,000 Daily Active Users <BR>
> :sunglasses: &nbsp; ZFS-LocalPV has +250,000 Global installations <BR>
> :zap: &nbsp; LocalPV-ZFS is the 2nd most deployed Data-Engine within the platform <BR>
> :sunglasses: &nbsp; LocalPV-ZFS has +120,000 Daily Active Users <BR>
> :sunglasses: &nbsp; LocalPV-ZFS has +250,000 Global installations <BR>
> :floppy_disk: &nbsp; +49 Million OpenEBS Volumes have been deployed globally <BR>
> :tv: &nbsp; We have +8 Million Global OpenEBS installations <BR>
> :star: &nbsp; We are the [#1 GitHub Star ranked](https://github.com/openebs/website/blob/main/website/public/images/png/github_star-history-2024_Feb_1.png) K8s Data Storage platform <BR>
@@ -43,7 +43,7 @@ Please review this roadmap and feel free to pass back any feedback on it, as well
## Project info

The original v1.0 dev roadmap [is here ](https://github.com/orgs/openebs/projects/10). This tracks our base historical engineering development work and is now somewhat out of date. We will publish an updated 2024 Unified Roadmap soon, as ZFS-LocalPV is now being integrated and unified into the core OpenEBS storage platform.<BR>
- The E2E Wiki [is here ](https://github.com/openebs/zfs-localpv/wiki/ZFS-LocalPV-e2e-test-cases)
- The E2E Wiki [is here ](https://github.com/openebs/zfs-localpv/wiki/LocalPV-ZFS-e2e-test-cases)
- The E2E Tests [are here](https://github.com/openebs/e2e-tests/projects/7).

<BR>
@@ -53,7 +53,7 @@ The original v1.0 dev roadmap [is here ](https://github.com/orgs/openebs/projects
### Prerequisites

> [!IMPORTANT]
> Before installing the ZFS-LocalPV driver please make sure your Kubernetes Cluster meets the following prerequisites:
> Before installing the LocalPV-ZFS driver please make sure your Kubernetes Cluster meets the following prerequisites:
> 1. All the nodes must have ZFS utils package installed
> 2. A ZPOOL has been configured for provisioning volumes (a sketch of this step follows the version table below)
> 3. You have access to install RBAC components into kube-system namespace. The OpenEBS ZFS driver components are installed in kube-system namespace to allow them to be flagged as system critical components.
@@ -68,7 +68,7 @@
> | Kernel | oldest supported kernel is 2.6.32 |
> | ZFS | 0.7, 0.8, 2.2.3 |
> | Memory | ECC Memory is highly recommended |
> | RAM | 8GiB for best perf with Dedup enabled. (Will work with 2GiB or less without dedup) |
> | RAM | 8GiB for best perf with Dedup enabled. (Will work with 2GiB or less without Dedup) |
Check the [features](./docs/features.md) supported for each k8s version.
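
For illustration, here is a minimal sketch of prerequisites 1 and 2 above. The device name `/dev/sdb`, the Debian/Ubuntu package manager, and the pool name `zfspv-pool` (reused from the storage-class examples in this README) are assumptions; adapt them to your nodes.

```sh
# Sketch: install the ZFS utils and create a ZPOOL on each worker node (prerequisites 1 and 2).
# /dev/sdb, the Debian/Ubuntu package name, and the pool name "zfspv-pool" are assumptions.
sudo apt-get install -y zfsutils-linux   # ZFS utils package on Debian/Ubuntu
sudo zpool create zfspv-pool /dev/sdb    # single-disk pool; use mirror/raidz layouts for redundancy
sudo zpool status zfspv-pool             # confirm the pool is ONLINE before installing the driver
```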

@@ -174,7 +174,7 @@ poolname: "zfspv-pool"
poolname: "zfspv-pool/child"
```

Also the dataset provided under `poolname` must exist on *all the nodes* with the name given in the storage class. Check the doc on [storageclasses](docs/storageclasses.md) to know all the supported parameters for ZFS-LocalPV
Also the dataset provided under `poolname` must exist on *all the nodes* with the name given in the storage class. Check the doc on [storageclasses](docs/storageclasses.md) to know all the supported parameters for LocalPV-ZFS
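
A hedged sketch of preparing such a child dataset, assuming the `zfspv-pool` pool from the example above; run it on every node (by hand or via your configuration-management tooling):

```sh
# Sketch: the dataset named in the storage class must already exist on every node.
sudo zfs create zfspv-pool/child   # create the child dataset referenced as poolname
sudo zfs list -r zfspv-pool        # confirm zfspv-pool/child is present on this node
```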

##### ext2/3/4 or xfs or btrfs as FsType

@@ -358,7 +358,7 @@ zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114 96K 4.00G 96K legacy

#### 3. Deploy the application

Create the deployment yaml using the pvc backed by ZFS-LocalPV storage.
Create the deployment yaml using the pvc backed by LocalPV-ZFS storage.

```yaml
apiVersion: v1
26 changes: 13 additions & 13 deletions design/pv-migration.md
@@ -1,5 +1,5 @@
---
title: Volume Migration for ZFS-LocalPV
title: Volume Migration for LocalPV-ZFS
authors:
- "@pawanpraka1"
owners:
@@ -8,7 +8,7 @@ creation-date: 2021-05-21
last-updated: 2021-05-21
---

# Volume Migration for ZFS-LocalPV
# Volume Migration for LocalPV-ZFS

## Table of Contents

@@ -27,11 +27,11 @@ last-updated: 2021-05-21

## Summary

This is a design proposal to implement a feature for migrating a volume from one node to another. This doc describes how we can move the persistent volumes and the application to the other node for the ZFS-LocalPV CSI Driver. This design expects that the administrators will move the disks to the new node and will import the pool there as part of replacing the node. The goal of this design is that the volume and the pod using the volume should be moved to the new node. This design also assumes that admins do not have a large number of ZFS POOLs configured on a node.
This is a design proposal to implement a feature for migrating a volume from one node to another. This doc describes how we can move the persistent volumes and the application to the other node for the LocalPV-ZFS CSI Driver. This design expects that the administrators will move the disks to the new node and will import the pool there as part of replacing the node. The goal of this design is that the volume and the pod using the volume should be moved to the new node. This design also assumes that admins do not have a large number of ZFS POOLs configured on a node.

## Problem

The problem with LocalPV is that it has the affinity set on the PersistentVolume object. This lets the k8s scheduler schedule the pods to that node, as the data is only there. The ZFS-LocalPV driver uses the node name to set the affinity, which creates a problem here: if we are replacing a node, the node name will change and the k8s scheduler will not be able to schedule the pods to the new node even if we have moved the disks there.
The problem with LocalPV is that it has the affinity set on the PersistentVolume object. This lets the k8s scheduler schedule the pods to that node, as the data is only there. The LocalPV-ZFS driver uses the node name to set the affinity, which creates a problem here: if we are replacing a node, the node name will change and the k8s scheduler will not be able to schedule the pods to the new node even if we have moved the disks there.

## Current Solution

@@ -44,11 +44,11 @@ The problem with the above approach is we can not move the volumes to any existi

### Keys Per ZPOOL

We are proposing to have a key dedicated to each ZFS POOL. This key will be used by the ZFS-LocalPV driver to set the label on the nodes where the pool is present. In this way we can allow the ZFS POOLs to move from any node to any other node, as the key is tied to the ZFS POOL as opposed to being kept per node. We are proposing to have a `guid.zfs.openebs.io/<pool-guid>=true` label on the node where the pool is present. Assuming admins do not have a large number of pools on a node, there will not be many labels set on a node.
We are proposing to have a key dedicated to each ZFS POOL. This key will be used by the LocalPV-ZFS driver to set the label on the nodes where the pool is present. In this way we can allow the ZFS POOLs to move from any node to any other node, as the key is tied to the ZFS POOL as opposed to being kept per node. We are proposing to have a `guid.zfs.openebs.io/<pool-guid>=true` label on the node where the pool is present. Assuming admins do not have a large number of pools on a node, there will not be many labels set on a node.
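
As an illustration only (the driver is expected to manage this label itself), the per-pool key could be derived and inspected as sketched below; the node and pool names are taken from the workflow example further down.

```sh
# Sketch: what the proposed per-pool node label looks like in practice.
GUID=$(zpool get -H -o value guid pool1)                       # read the pool GUID on the node
kubectl label node node-1 "guid.zfs.openebs.io/${GUID}=true"   # label the node hosting the pool
kubectl get node node-1 --show-labels                          # the guid.zfs.openebs.io/<pool-guid> key is now visible
```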

### Migrator

The ZFS POOL name should be the same across all the nodes for ZFS-LocalPV. So, we have to rename the ZFS POOL if we are moving it to an existing node. We need a Migrator workflow to update the POOL name in the ZFSVolume object. This will find all the volumes present in a ZFS POOL on that node and update the ZFSVolume objects with the correct PoolName.
The ZFS POOL name should be the same across all the nodes for LocalPV-ZFS. So, we have to rename the ZFS POOL if we are moving it to an existing node. We need a Migrator workflow to update the POOL name in the ZFSVolume object. This will find all the volumes present in a ZFS POOL on that node and update the ZFSVolume objects with the correct PoolName.

**Note:** We cannot edit the PV volumeAttributes with the new pool name as it is an immutable field.
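
A hedged sketch of what the proposed migrator would do for a single volume, expressed as a manual patch; the namespace (`openebs`), resource name (`zfsvolume`), and field names (`spec.poolName`, `spec.ownerNodeID`) are assumptions inferred from this proposal's wording, not confirmed API details.

```sh
# Sketch: update one ZFSVolume with the renamed pool and the new owner node.
# Namespace, resource name, and field names are assumptions drawn from this proposal.
kubectl -n openebs patch zfsvolume pvc-34133838-0d0d-11ea-96e3-42010a800114 \
  --type merge -p '{"spec":{"poolName":"pool1-new","ownerNodeID":"node-2"}}'
```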

@@ -57,7 +57,7 @@ The migrator will look for all the volumes for all the pools present on the node
### Workflow

- the user will set up all the nodes and set up the ZFS pool on each of those nodes.
- the ZFS-LocalPV CSI driver will look for all the pools on the node and will set the `guid.zfs.openebs.io/<pool-guid>=true` label for all ZFS POOLs that are present on that node. Let's say node-1 has two pools (say pool1 with guid 14820954593456176137 and pool2 with guid 16291571091328403547); then the labels will look like this:
- the LocalPV-ZFS CSI driver will look for all the pools on the node and will set the `guid.zfs.openebs.io/<pool-guid>=true` label for all ZFS POOLs that are present on that node. Let's say node-1 has two pools (say pool1 with guid 14820954593456176137 and pool2 with guid 16291571091328403547); then the labels will look like this:
```
$ kubectl get node pawan-node-1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
Expand All @@ -67,17 +67,17 @@ node-1 Ready worker 351d v1.17.4 beta.kubernetes.io/arch=amd64,beta.k

#### 1. if node2 is a fresh node

- we can simply import the pool and restart the ZFS-LocalPV driver to make it aware of that pool to set the corresponding node topology
- the ZFS-LocalPV driver will look for `guid.zfs.openebs.io/14820954593456176137=true` and will remove the label from the nodes where pool is not present
- the ZFS-LocalPV driver will update the new node with `guid.zfs.openebs.io/14820954593456176137=true` label
- we can simply import the pool and restart the LocalPV-ZFS driver to make it aware of that pool to set the corresponding node topology
- the LocalPV-ZFS driver will look for `guid.zfs.openebs.io/14820954593456176137=true` and will remove the label from the nodes where pool is not present
- the LocalPV-ZFS driver will update the new node with `guid.zfs.openebs.io/14820954593456176137=true` label
- the migrator will look for ZFSVolume resource and update the OwnerNodeID with the new node id for all the volumes.
- the k8s scheduler will be able to see the new label and should schedule the pods to this new node.

#### 2. if node2 is an existing node and a pool of the same name is present there

- here we need to import the pool with a different name and restart the ZFS-LocalPV driver to make it aware of that pool to set the corresponding node topology
- the ZFS-LocalPV driver will look for `guid.zfs.openebs.io/14820954593456176137=true` and will remove the label from the nodes where the pool is not present
- the ZFS-LocalPV driver will update the new node with `guid.zfs.openebs.io/14820954593456176137=true` label
- here we need to import the pool with a different name and restart the LocalPV-ZFS driver to make it aware of that pool to set the corresponding node topology (see the sketch after this list)
- the LocalPV-ZFS driver will look for `guid.zfs.openebs.io/14820954593456176137=true` and will remove the label from the nodes where the pool is not present
- the LocalPV-ZFS driver will update the new node with `guid.zfs.openebs.io/14820954593456176137=true` label
- the migrator will look for ZFSVolume resource and update the PoolName and OwnerNodeID for all the volumes.
- the k8s scheduler will be able to see the new label and should schedule the pods to this new node.
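
A hedged sketch of scenario 2 on the replacement node, using the pool GUID from the example above; the daemonset name `openebs-zfs-node` and the node name `node-2` are assumptions:

```sh
# Sketch: import the moved pool under a new name (a pool named pool1 already exists here),
# then restart the node plugin so it re-reads the pools and relabels the node.
sudo zpool import 14820954593456176137 pool1-new                    # import by GUID under a new name
kubectl -n kube-system rollout restart daemonset openebs-zfs-node   # daemonset name is an assumption
kubectl get node node-2 --show-labels                               # expect guid.zfs.openebs.io/14820954593456176137=true
```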
