
Release Helm Chart 2.6.0 #21

Merged
sijie merged 2 commits into apache:master from sijie:release_helm_chart_260 on Jun 30, 2020
Conversation

sijie (Member) commented on Jun 20, 2020

Motivation

Release Helm Chart for Pulsar Release 2.6.0.

@sijie sijie added this to the 2.6.0 milestone Jun 20, 2020
@sijie sijie self-assigned this Jun 20, 2020
@wolfstudy wolfstudy self-requested a review June 23, 2020 09:26
@sijie sijie marked this pull request as ready for review June 23, 2020 17:46
@sijie sijie merged commit 93d8fd1 into apache:master Jun 30, 2020
@sijie sijie deleted the release_helm_chart_260 branch June 30, 2020 01:16
Joshhw added a commit to Joshhw/pulsar-helm-chart that referenced this pull request Mar 10, 2021
* Release Helm Chart 2.6.0 (apache#21)

* Release Helm Chart 2.6.0

* Issue-29: Bump missed out pulsar-image tags to 2.6.0 (apache#30)

Fixes apache#29 

### Motivation

Bumped the pulsar image tags that had been missed to 2.6.0

### Modifications

Modified the following files:
1. .ci/clusters/values-pulsar-image.yaml
2. charts/pulsar/values.yaml
3. examples/values-one-node.yaml
4. examples/values-pulsar.yaml

* Release workflow should fetch all tags (apache#33)

*Motivation*

The helm chart release workflow should fetch all tags.
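
For illustration, a minimal sketch of such a checkout step, assuming the workflow uses actions/checkout (step name is hypothetical); `fetch-depth: 0` makes the action fetch the full history including all tags:

```
# Hypothetical release-workflow fragment: fetch-depth: 0 pulls the full
# history, including all tags, so the release tooling can see every chart tag.
steps:
  - name: Checkout
    uses: actions/checkout@v2
    with:
      fetch-depth: 0
```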

* Add the release process (apache#34)

* Update Pulsar Helm Chart README (apache#35)

* Update appVersion to 2.6.0 (apache#36)

*Motivation*

Based on the [helm documentation](https://helm.sh/docs/topics/charts/),
the `appVersion` is the version of the app that the chart contains. Since the repo
is using the 2.6.0 image, update `appVersion` to 2.6.0.
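
A sketch of the relevant `Chart.yaml` fields (the chart `version` shown here is illustrative):

```
# Chart.yaml sketch: appVersion tracks the bundled Pulsar release,
# while version is the chart's own version (value here is illustrative).
name: pulsar
version: 2.6.0-1
appVersion: 2.6.0
```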

* add targetport for grafana and manager service (apache#37)

Co-authored-by: rahul.name <rahul@mail.com>

* Add optional user provided zookeeper as metadata store for other components (apache#38)

## Motivation
### Case
I have a physical ZooKeeper cluster and want to configure bookkeeper, broker, and proxy to use it.
So I set components.zookeeper to false, and the only knob I found for pointing at my physical ZooKeeper address was pulsar.zookeeper.connect.
But the deploy stage got stuck in the bookkeeper wait-zookeeper-ready container.

### Issue
The wait-zookeeper-ready initContainer in the bookkeeper-cluster-initialize Job uses the spliced ZooKeeper Service hosts to detect whether ZooKeeper is ready, and the init Job initContainers of the other components do the same. In this setup the ZooKeeper Service is unreachable because the ZooKeeper component is disabled.

## Modifications
- Add an optional pulsar_metadata.userProvidedZookeepers config for this case, and make each component's init Job use the user-provided ZooKeeper to detect liveness instead of the spliced Service hosts (see the values sketch below).

- Delete redundant image reference in bookkeeper init Job.
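
For illustration, a values override for this case might look like the sketch below (hostnames are placeholders, and the comma-separated host:port format is an assumption):

```
# Hypothetical values override: disable the chart's ZooKeeper and point the
# init Jobs and components at an existing external ZooKeeper ensemble.
components:
  zookeeper: false
pulsar_metadata:
  userProvidedZookeepers: "zk-0.example.com:2181,zk-1.example.com:2181"
```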

* Fix wrong variable reference in Grafana & Pulsar Manager port (apache#41)

### Motivation

PR apache#37 updated the location of the ports in the default values.yaml. This causes a null pointer exception when rendering the Helm chart.

### Modifications

Fix variable reference

* Add Ingress to Pulsar Proxy and Pulsar Manager (apache#42)

* changes for aws (apache#43)

* Update grafana dashboard images version to 0.0.9 (apache#45)

Signed-off-by: xiaolong.ran <rxl@apache.org>

### Modifications

- Update grafana dashboard images version to 0.0.9
- Add `.gitignore` file

* Add zookeeper metrics port and PodMonitors (apache#44)

* Add 'http' port specification to zookeeper statefulset

This brings the zookeeper spec in line with the other statefulset specs
in this chart and provides a port target for custom PodMonitors.

* Added PodMonitors for bookie, broker, proxy, and zookeeper

New PodMonitors are needed for prometheus-operator to pick up scrape
targets.
They default to disabled, so users need to opt in to deploy them (see the values sketch below).

* Added Apache license info to podmonitor yamls
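
For illustration, an opt-in might look like the values sketch below (the exact key layout is an assumption; only the idea of per-component opt-in flags comes from this change):

```
# Hypothetical values override enabling the new PodMonitors for two components
# so prometheus-operator starts scraping them; they default to disabled.
broker:
  podMonitor:
    enabled: true
zookeeper:
  podMonitor:
    enabled: true
```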

* Allow Grafana to work with a reverse proxy (apache#48)

### Motivation

Allow Grafana to be served from a sub path.  

### Modifications

- Added a config map to add extra environment variables to the grafana deployment. As the grafana image adds new features that require environment variables, this can be used to set them.
- Bumped the grafana image to allow a reverse proxy.
- Removed the ingress annotations, as they are specific to nginx, and to match all the other ingresses.
- Bumped the chart version as per the README.


Example values:
```
grafana:
  configData:
    GRAFANA_ROOT_URL: /pulsar/grafana
    GRAFANA_SERVE_FROM_SUB_PATH: "true"
  ingress:
      enabled: true
      port: 3000
      path: "/pulsar/grafana/?(.*)"
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
```

* Fix deprecated values (apache#49)

Fixes apache#46

### Motivation

There were some templates that relied on extra values that are deprecated. 

### Modifications

Modified the checks to accept either the non-deprecated values or the deprecated ones.

### Verifying this change

- [X] Make sure that the change passes the CI checks.

* Fix zookeeper antiaffinity (apache#52)

Fixes apache#39 

### Motivation

The match expression for the "app" label was incorrect, breaking the anti-affinity because the expressions would never match. Fixing this makes the podAntiAffinity work, but it now requires the cluster to have at least N nodes, where N is the size of the largest replica set with affinity. Added the option to set the affinity type to preferredDuringSchedulingIgnoredDuringExecution, which tries to honor the affinity but will still schedule a pod if it has to break it.

### Modifications

- Fixed the app matchExpression
- Added an option to set the affinity type (see the values sketch below)
- Bumped the chart version
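
For illustration, a values sketch of the new option (key names are assumptions; only the `preferredDuringSchedulingIgnoredDuringExecution` type comes from this change):

```
# Hypothetical values override: keep anti-affinity, but make it a soft
# preference so pods can still schedule on clusters with fewer nodes.
affinity:
  anti_affinity: true
  type: preferredDuringSchedulingIgnoredDuringExecution
```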

### Verifying this change

- [X] Make sure that the change passes the CI checks.

* Allow initialization to be set (apache#53)

Fixes apache#47 

### Motivation
Only create the initialize job on install. 

### Modifications

- Added an initialize value that can be set to true on install, matching the documentation in the README.md (see the sketch below).
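
For illustration, a minimal values sketch for the first install (after the initial install the value can be left at its default):

```
# Set only on the first install so the cluster-metadata initialize job is created.
initialize: true
```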

* Bump the image version to 2.6.1 (apache#57)

Signed-off-by: xiaolong.ran <rxl@apache.org>

Motivation
Follow release process and bump the image version to 2.6.1

* Get OS signals passed to container process by using shell built-in "exec" (apache#59)

### Changes 

- using "exec" to run a command replaces the shell process with the executed process
- this is required so that the process running in the container is able to receive OS signals
  - explained in https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
    and https://docs.docker.com/engine/reference/builder/#entrypoint
- receiving SIGTERM signal is required for graceful shutdown. This is explained in https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html 

This change might fix issues such as apache/pulsar#6603. One expectation of this fix is that graceful shutdown lets Pulsar components such as bookies deregister from ZooKeeper properly before shutting down.

### Motivation

Dockerfile best practices mention that "exec" should be used so that the process running in a container can receive OS signals. This is explained in https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
    and https://docs.docker.com/engine/reference/builder/#entrypoint . The Kubernetes documentation explains pod termination in https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination : "Typically, the container runtime sends a TERM signal to the main process in each container. Once the grace period has expired, the KILL signal is sent to any remaining processes, and the Pod is then deleted from the API Server."
Currently some issues while running Pulsar are caused by the lack of graceful shutdown. Graceful shutdown isn't happening at all, since the Pulsar processes never receive the TERM signal that would allow it. This PR fixes that.

This PR was inspired by kafkaesque-io/pulsar-helm-chart#31
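
For illustration, the pattern looks roughly like the fragment below (a sketch, not the chart's exact template; the bookie command is illustrative):

```
# Illustrative container command: "exec" replaces the shell with the bookie
# process, so the TERM signal sent by the kubelet reaches Pulsar directly.
command: ["sh", "-c"]
args:
  - >
    bin/apply-config-from-env.py conf/bookkeeper.conf &&
    exec bin/pulsar bookie
```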

* add support for multiple clusters (apache#60)

Co-authored-by: Elad Dolev <elad@firebolt.io>

### Motivation

Give the ability to deploy a multi-cluster Pulsar instance on K8s clusters with a non-default `clusterDomain`, and to connect to an external configuration store.

### Modifications

- give the ability to change the cluster's name
- give the ability to change `clusterDomain`
- fix the external configuration store functionality
- use the broker port variables
- use label templates, and add the `component` label in several places (see the values sketch below)
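
For illustration, a values sketch of the new knobs (key names are assumptions; only the ability to set the cluster name, `clusterDomain`, and an external configuration store comes from this change):

```
# Hypothetical values override for a multi-cluster deployment on a Kubernetes
# cluster with a non-default DNS domain and a shared external configuration store.
clusterName: pulsar-us-west
clusterDomain: k8s.example.internal
pulsar_metadata:
  configurationStore: zk-global.example.com:2181
```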

### Verifying this change

- [x] Make sure that the change passes the CI checks.

* Ingress optional hostname (apache#54)

Fixes apache#50 

### Motivation
The host option is not required to set up an ingress, so I made it an optional value.
### Modifications

Made setting the host optional.

* Make forceSync by default as "yes" (apache#63)

### Motivation

* It's not recommended to run a production ZooKeeper cluster with forceSync set to "no". This is also mentioned in the forceSync section of https://pulsar.apache.org/docs/en/next/reference-configuration/#zookeeper

### Modifications

* Removed ```-Dzookeeper.forceSync=no``` from ```values.yaml```, as the default ```forceSync``` is ```yes```.

* changed publishNotReadyAddresses to (apache#64)

### Motivation

* ```publishNotReadyAddresses``` is a field of the Service spec, not a Service annotation. This is mentioned in the K8s API docs at https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core

### Modifications

* Modified ```publishNotReadyAddresses``` from an annotation to a Service spec field (see the sketch below).
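
For illustration, a sketch of where the field now lives (names and selector are placeholders):

```
# publishNotReadyAddresses is part of the Service spec, not metadata.annotations.
apiVersion: v1
kind: Service
metadata:
  name: pulsar-zookeeper
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  selector:
    app: pulsar
    component: zookeeper
```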

### Verifying this change

- [x] Make sure that the change passes the CI checks.

* Fix "unknown apiVersion: kind.sigs.k8s.io/v1alpha3" (apache#76)

* Fix "unknown apiVersion: kind.sigs.k8s.io/v1alpha3"

*Motivation*

The API version `kind.sigs.k8s.io/v1alpha3` is no longer available for kind clusters,
so all the CI actions are broken now. This PR fixes the issue (see the config sketch below).

Additionally, it adds a Helm chart lint job to lint chart changes.

* Trigger CI when kind cluster build script is changed
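
For illustration, a kind cluster config on the current config API group looks like the sketch below (node layout is illustrative); newer kind releases replaced `kind.sigs.k8s.io/v1alpha3` with `kind.x-k8s.io/v1alpha4`:

```
# Minimal kind config sketch using the newer API group.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```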

* Upgrade chart-testing-action to 2.0.0 (apache#83)

Signed-off-by: xiaolong.ran <rxl@apache.org>

### Motivation

The lint CI fails as follows:

```
Linting chart 'pulsar => (version: "2.6.2-1", path: "charts/pulsar")'
Checking chart 'pulsar => (version: "2.6.2-1", path: "charts/pulsar")' for a version bump...
Old chart version: 2.6.1-2
New chart version: 2.6.2-1
Chart version ok.
Validating /workdir/charts/pulsar/Chart.yaml...
Validation success! 👍
Validating maintainers...
Error: Error linting charts: Error processing charts
------------------------------------------------------------------------------------------------------------------------
 ✖︎ pulsar => (version: "2.6.2-1", path: "charts/pulsar") > Error validating maintainer 'The Apache Pulsar Team': 404 Not Found
------------------------------------------------------------------------------------------------------------------------
Error linting charts: Error processing charts
```

### Modifications

Upgrade `chart-testing-action` to 2.0.0
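
For illustration, a workflow fragment pinned to the new release might look like the sketch below (step names are hypothetical; in the 2.x line the action installs the `ct` CLI and the lint itself runs as a separate step):

```
# Hypothetical lint-job fragment after the upgrade.
- name: Set up chart-testing
  uses: helm/chart-testing-action@v2.0.0
- name: Run chart-testing (lint)
  run: ct lint
```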

### Verifying this change

- [x] Make sure that the change passes the CI checks.

* Bump the image version to 2.6.2 (apache#81)

Signed-off-by: xiaolong.ran <rxl@apache.org>

### Motivation

Bump the image version to 2.6.2

### Verifying this change

- [x] Make sure that the change passes the CI checks.

* Local mode for kubernetes object generators (apache#75)

This allows operation in environments where direct installation of objects into the
Kubernetes cluster is not desired or possible, for example when using SealedSecrets
or SOPS, where the secrets are first encrypted, then committed into the repository
and deployed later by some other deployment system.

Co-authored-by: Jiří Pinkava <jiri.pinkava@rossum.ai>

* Use `.Release.Namespace` by default to handle namespaces (apache#80)

It remains possible to override the current release namespace by setting
the `namespace` value, though this may lead to having the helm metadata
and the pulsar components in different namespaces.

Fixes apache#66

### Motivation

Trying to deploy the chart in a namespace using the usual helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
Error: namespaces "pulsar" not found
```
This is needlessly confusing for newcomers who follow the helm documentation and is contrary to helm best practices.

### Modifications

I changed the chart to use the context namespace `.Release.Namespace` by default, while preserving the ability to override that by explicitly providing a namespace on the command line. With this modification both examples behave as expected (see the template sketch below).
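
For illustration, a template fragment of the pattern (the helper name is an assumption): fall back to `.Release.Namespace` unless an explicit `namespace` value is set:

```
# Illustrative metadata block for a chart resource.
metadata:
  name: {{ template "pulsar.fullname" . }}-broker
  namespace: {{ .Values.namespace | default .Release.Namespace }}
```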
 
### Verifying this change

- [x] Make sure that the change passes the CI checks.

* Bump Pulsar 2.7.0 (apache#88)

Co-authored-by: Sijie Guo <sijie@apache.org>

* change port back

* updates host value in grafana-ingress

* removes ci and workflows

* fix ingress check

Co-authored-by: Sijie Guo <sijie@apache.org>
Co-authored-by: Prashanth Tirupachur Vasanthakrishnan <63665447+ptirupac-tibco@users.noreply.github.com>
Co-authored-by: Rahul Vashishth <rvashishth@users.noreply.github.com>
Co-authored-by: rahul.name <rahul@mail.com>
Co-authored-by: wuYin <wuyinpost@gmail.com>
Co-authored-by: Niklas Wagner <46919593+NiklasWagner@users.noreply.github.com>
Co-authored-by: BaochunLiuBJ <34231437+BaochunLiuBJ@users.noreply.github.com>
Co-authored-by: 冉小龙 <rxl@apache.org>
Co-authored-by: John Harris <jharris-@users.noreply.github.com>
Co-authored-by: Thomas O'Neill <toneill818@gmail.com>
Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
Co-authored-by: Elad Dolev <dolevelad@gmail.com>
Co-authored-by: Naveen Ramanathan <45779883+naveen1100@users.noreply.github.com>
Co-authored-by: Jiří Pinkava <j-pi@seznam.cz>
Co-authored-by: Jiří Pinkava <jiri.pinkava@rossum.ai>
Co-authored-by: Jean Helou <jean.helou@gmail.com>
Co-authored-by: lipenghui <penghui@apache.org>
pgier pushed a commit to pgier/pulsar-helm-chart that referenced this pull request Apr 22, 2022
rdhabalia pushed a commit to rdhabalia/pulsar-helm-chart that referenced this pull request Feb 2, 2023