Merge pull request #223 from tigergraph/k8s-operator/1.1.0
Add release notes for Operator version 1.1.0
chengjie-qin committed May 9, 2024
2 parents a3aa6d0 + 67dfd01 commit 4a9ef48
Showing 3 changed files with 95 additions and 2 deletions.
9 changes: 7 additions & 2 deletions k8s/docs/01-introduction/README.md
@@ -2,11 +2,15 @@

TigerGraph Operator stands as an automated operations system meticulously designed to streamline the management of TigerGraph clusters within Kubernetes environments. Its comprehensive suite of functionalities encompasses every aspect of the TigerGraph lifecycle, spanning deployment, upgrades, scaling, backups, restoration, and fail-over processes. Whether you're operating in a public cloud setting or within a self-hosted environment, TigerGraph Operator ensures that your TigerGraph instances function seamlessly within Kubernetes clusters.

> [!NOTE]
> Kubernetes Operator support is now generally available as of Operator version 1.1.0, which can be used for production deployments.

The compatibility between TigerGraph, TigerGraph Operator, and Kubernetes versions is as follows:

| TigerGraph Operator version | TigerGraph version | Kubernetes version |
|----------|----------|----------|
| 1.1.0 | TigerGraph >= 3.6.0 |1.24, 1.25, 1.26, 1.27, 1.28|
| 1.0.0 | TigerGraph >= 3.6.0 |1.24, 1.25, 1.26, 1.27, 1.28|
| 0.0.9 | TigerGraph >= 3.6.0 && TigerGraph <= 3.9.3|1.23, 1.24, 1.25, 1.26, 1.27|
| 0.0.7 | TigerGraph >= 3.6.0 && TigerGraph <= 3.9.2|1.22, 1.23, 1.24, 1.25, 1.26|
| 0.0.6 | TigerGraph >= 3.6.0 && TigerGraph <= 3.9.1|1.22, 1.23, 1.24, 1.25, 1.26|
@@ -43,7 +47,8 @@ Once your deployment is complete, refer to the following documents for guidance
- [Customize TigerGraph Pods and Containers](../03-deploy/customize-tigergraph-pod.md)
- [Lifecycle of TigerGraph](../03-deploy/lifecycle-of-tigergraph.md)
- [Multiple persistent volumes mounting](../03-deploy/multiple-persistent-volumes-mounting.md)
- [Cluster status of TigerGraph on K8s](../07-reference/cluster-status-of-tigergraph.md)
- [High availability of rolling upgrade for TigerGraph on K8s](../07-reference/high-availability-of-rolling-upgrade.md)

In case issues arise and your cluster requires diagnosis, you have two valuable resources:

1 change: 1 addition & 0 deletions k8s/docs/08-release-notes/README.md
@@ -4,6 +4,7 @@ Those document describes the new features, improvements, bugfixes for all of ope

Please see the detailed release notes for each Operator version below:

- [Operator 1.1.0](./operator-1.1.0.md)
- [Operator 1.0.0](./operator-1.0.0.md)
- [Operator 0.0.9](./operator-0.0.9.md)
- [Operator 0.0.7](./operator-0.0.7.md)
87 changes: 87 additions & 0 deletions k8s/docs/08-release-notes/operator-1.1.0.md
@@ -0,0 +1,87 @@
# Operator 1.1.0 Release notes

## Overview

**Operator 1.1.0** is now available, designed to work seamlessly with **TigerGraph version 3.10.1**.

Kubernetes Operator support is now **generally available** in Operator version 1.1.0, suitable for production deployments.

### kubectl plugin installation

To install the kubectl plugin for Operator 1.1.0, execute the following command:

```bash
curl https://dl.tigergraph.com/k8s/1.1.0/kubectl-tg -o kubectl-tg
sudo install kubectl-tg /usr/local/bin/
```
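
To confirm the plugin is installed and discoverable by kubectl, you can list installed plugins and print the plugin's usage. This is a minimal check, assuming the `help` subcommand behaves as in earlier kubectl-tg releases:

```bash
# Verify that kubectl can discover the kubectl-tg plugin on your PATH
kubectl plugin list | grep kubectl-tg

# Print the plugin usage to confirm it runs
kubectl tg help
```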

### Operator upgrading

#### Upgrading from Operator 1.0.0

There are no CRD changes in 1.1.0, so if you have Operator version 1.0.0 installed, you can upgrade the Operator directly.

Upgrade the Operator using the kubectl-tg plugin:

```bash
kubectl tg upgrade --namespace ${YOUR_NAMESPACE_OF_OPERATOR} --operator-version 1.1.0
```
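
After the upgrade, you can verify that the Operator pods have rolled over to the new image. The commands below are generic kubectl checks; the pod and image names depend on your installation:

```bash
# List the operator pods in the operator namespace
kubectl get pods -n ${YOUR_NAMESPACE_OF_OPERATOR}

# Inspect the image tags of the running operator pods
kubectl get pods -n ${YOUR_NAMESPACE_OF_OPERATOR} \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```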

#### Upgrading from Operator versions prior to 1.0.0

This Operator release introduces breaking changes if you upgrade from Operator versions prior to 1.0.0.

Refer to the documentation [How to upgrade TigerGraph Kubernetes Operator](../04-manage/operator-upgrade.md) for details.

- Delete the existing TG cluster and retain the PVCs:

```bash
# Take note of the cluster size, HA setting, and so on before deleting the cluster; you'll need them when you recreate it
# You can also export the YAML resource file of the TG cluster for restoring it later
kubectl tg export --cluster-name ${YOUR_CLUSTER_NAME} -n ${NAMESPACE_OF_CLUSTER}
kubectl tg delete --cluster-name ${YOUR_CLUSTER_NAME} -n ${NAMESPACE_OF_CLUSTER}
```
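
Since the PVCs are retained, you can confirm they are still present before moving on. A minimal check:

```bash
# The cluster's PVCs should still be listed after the cluster is deleted
kubectl get pvc -n ${NAMESPACE_OF_CLUSTER}
```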

- Uninstall the old version of the Operator:

```bash
kubectl tg uninstall -n ${NAMESPACE_OF_OPERATOR}
```
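
To double-check that the old Operator is fully removed before installing the new one, you can look for leftover Operator deployments; the exact resource names depend on how the Operator was installed:

```bash
# Expect no TigerGraph Operator deployment to remain in the operator namespace
kubectl get deployments -n ${NAMESPACE_OF_OPERATOR}
```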

- Delete old versions of TG CRDs:

```bash
kubectl delete crd tigergraphs.graphdb.tigergraph.com
kubectl delete crd tigergraphbackups.graphdb.tigergraph.com
kubectl delete crd tigergraphbackupschedules.graphdb.tigergraph.com
kubectl delete crd tigergraphrestores.graphdb.tigergraph.com
```
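
You can verify that the old CRDs are gone before reinstalling. A minimal sketch:

```bash
# Expect no output once the four TigerGraph CRDs have been deleted
kubectl get crd | grep graphdb.tigergraph.com
```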

- Reinstall the new version of the Operator:

```bash
kubectl tg init -n ${NAMESPACE_OF_OPERATOR}
```
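
Once `kubectl tg init` finishes, you can confirm that the Operator pods are running and the new CRDs are registered:

```bash
# The operator pods should reach the Running state
kubectl get pods -n ${NAMESPACE_OF_OPERATOR}

# The TigerGraph CRDs should be registered again
kubectl get crd | grep graphdb.tigergraph.com
```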

- Recreate the TigerGraph cluster if necessary:

Extract the parameters from the YAML resource file exported in the first step, or modify that YAML file and apply it directly.

```bash
# You can get the following parameters from the YAML resource file exported in the first step
kubectl tg create --cluster-name ${YOUR_CLUSTER_NAME} -n ${NAMESPACE_OF_CLUSTER} \
--size ${CLUSTER_SIZE} --ha ${CLUSTER_HA} --private-key-secret ${YOUR_PRIVATE_KEY_SECRET} \
--version ${TG_VERSION} --storage-class ${YOUR_STORAGE_CLASS} --storage-size ${YOUR_STORAGE_SIZE} --cpu 6000m --memory 10Gi
```
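
After issuing the create command, you can monitor the rollout of the new cluster. The fully qualified CRD name below matches the CRDs installed above; the status columns may vary by Operator version:

```bash
# Check the TigerGraph custom resource for the cluster's status
kubectl get tigergraphs.graphdb.tigergraph.com ${YOUR_CLUSTER_NAME} -n ${NAMESPACE_OF_CLUSTER}

# Check that the cluster pods come up
kubectl get pods -n ${NAMESPACE_OF_CLUSTER}
```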

## Improvements

- Support overlapping ConfigUpdate operations. When a config-update job is running, users can still change `.spec.tigergraphConfig`; after the running job completes, another config-update job runs to apply the changes (see the sketch below). ([TP-4699](https://graphsql.atlassian.net/browse/TP-4699))
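
As an illustration of the workflow this enables, the sketch below changes .spec.tigergraphConfig on a live cluster; the config key and value are placeholders, not recommended settings:

```bash
# Patch .spec.tigergraphConfig while a config-update job may still be running;
# the Operator runs another config-update job after the current one completes
kubectl patch tigergraphs.graphdb.tigergraph.com ${YOUR_CLUSTER_NAME} \
  -n ${NAMESPACE_OF_CLUSTER} --type merge \
  -p '{"spec":{"tigergraphConfig":{"Controller.BasicConfig.LogConfig.LogFileMaxSizeMB":"100"}}}'
```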

## Bugfixes

- Fixed an issue where the kubectl-tg plugin could not remove the tigergraphConfig/podLabels/podAnnotations fields of the TigerGraph CR. ([TP-5091](https://graphsql.atlassian.net/browse/TP-5091))

- Fixed the watch namespace update issue of the Operator in the kubectl-tg plugin. ([TP-5280](https://graphsql.atlassian.net/browse/TP-5280))

- Fixed the Nginx DNS cache issue for TigerGraph on K8s. ([TP-5360](https://graphsql.atlassian.net/browse/TP-5360))
