
Helm-installed Velero uses restic even when restic is not enabled #73

Closed
gsc-k8s-config-management opened this issue Mar 10, 2020 · 5 comments


@gsc-k8s-config-management

I'm attempting to get Velero working in our clusters using the helm chart.

Here is my values.yaml, with some values interpolated by our CD process:

##
## Configuration settings that directly affect the Velero deployment YAML.
##

# Details of the container image to use in the Velero deployment & daemonset (if
# enabling restic). Required.
image:
  repository: velero/velero
  tag: v1.3.1
  pullPolicy: IfNotPresent

# Annotations to add to the Velero deployment's pod template. Optional.
#
# If using kube2iam or kiam, use the following annotation with your AWS_ACCOUNT_ID
# and VELERO_ROLE_NAME filled in:
#  iam.amazonaws.com/role: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<VELERO_ROLE_NAME>
podAnnotations: {}

# Resource requests/limits to specify for the Velero deployment. Optional.
resources: {}

# Init containers to add to the Velero deployment's pod spec. At least one plugin provider image is required.
initContainers:
  - name: velero-plugin-for-gcp
    image: velero/velero-plugin-for-gcp:v1.0.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

# SecurityContext to use for the Velero deployment. Optional.
# Set fsGroup for `AWS IAM Roles for Service Accounts`
# see more information at: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
securityContext: {}
  # fsGroup: 1337

# Tolerations to use for the Velero deployment. Optional.
tolerations: []

# Node selector to use for the Velero deployment. Optional.
nodeSelector: {}

# Extra volumes for the Velero deployment. Optional.
extraVolumes: []

# Extra volumeMounts for the Velero deployment. Optional.
extraVolumeMounts: []

# Settings for Velero's prometheus metrics. Enabled by default.
metrics:
  enabled: true
  scrapeInterval: 30s

  serviceMonitor:
    enabled: true
    additionalLabels:
      release: collection

##
## End of deployment-related settings.
##


##
## Parameters for the `default` BackupStorageLocation and VolumeSnapshotLocation,
## and additional server settings.
##
configuration:
  # Cloud provider being used (e.g. aws, azure, gcp).
  provider: gcp

  # Parameters for the `default` BackupStorageLocation. See
  # https://velero.io/docs/v1.0.0/api-types/backupstoragelocation/
  backupStorageLocation:
    # Cloud provider where backups should be stored. Usually should
    # match `configuration.provider`. Required.
    name: gcp
    # Bucket to store backups in. Required.
    bucket: ${serviceVariable.bucket}
    # Prefix within bucket under which to store backups. Optional.
    prefix: ${serviceVariable.cluster}


  # Parameters for the `default` VolumeSnapshotLocation. See
  # https://velero.io/docs/v1.0.0/api-types/volumesnapshotlocation/
  volumeSnapshotLocation:
    # Cloud provider where volume snapshots are being taken. Usually
    # should match `configuration.provider`. Required.
    name: gcp
    # Additional provider-specific configuration. See link above
    # for details of required/optional fields for your provider.
    config:
      # The GCP location where snapshots should be stored. See the GCP documentation
      # (https://cloud.google.com/storage/docs/locations#available_locations) for the
      # full list. If not specified, snapshots are stored in the default location
      # (https://cloud.google.com/compute/docs/disks/create-snapshots#default_location).
      #
      # Optional.
      snapshotLocation: "${serviceVariable.snapshot_zone}"
      # The project ID where existing snapshots should be retrieved from during restores, if 
      # different than the project that your IAM account is in. This field has no effect on 
      # where new snapshots are created; it is only useful for restoring existing snapshots 
      # from a different project.
      # 
      # Optional (defaults to the project that the GCP IAM account is in).
      project: "${serviceVariable.project}"

  # These are server-level settings passed as CLI flags to the `velero server` command. Velero
  # uses default values if they're not passed in, so they only need to be explicitly specified
  # here if using a non-default value. The `velero server` default values are shown in the
  # comments below.
  # --------------------
  # `velero server` default: 1m
  backupSyncPeriod: 1m
  # `velero server` default: namespaces,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods
  restoreResourcePriorities: namespaces,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods
  # `velero server` default: false
  restoreOnlyMode: false

  # additional key/value pairs to be used as environment variables such as "AWS_CLUSTER_NAME: 'yourcluster.domain.tld'"
  extraEnvVars: {}

  # Set log-level for Velero pod. Default: info. Other options: debug, warning, error, fatal, panic.
  logLevel: info

  # Set log-format for Velero pod. Default: text. Other option: json.
  logFormat: json

##
## End of backup/snapshot location settings.
##


##
## Settings for additional Velero resources.
##

rbac:
  # Whether to create the Velero role and role binding giving Velero all permissions in its namespace.
  create: true
  # Whether to create the cluster role binding to give administrator permissions to Velero
  clusterAdministrator: true

# Information about the Kubernetes service account Velero uses.
serviceAccount:
  server:
    create: true
    #name:
    #annotations:

# Info about the secret to be used by the Velero deployment, which
# should contain credentials for the cloud provider IAM account you've
# set up for Velero.
credentials:
  # Whether a secret should be used as the source of IAM account
  # credentials. Set to false if, for example, using kube2iam or
  # kiam to provide IAM credentials for the Velero pod.
  useSecret: true

  # Name of a pre-existing secret (if any) in the Velero namespace
  # that should be used to get IAM account credentials. Optional.
  existingSecret: velero

  # Data to be stored in the Velero secret, if `useSecret` is
  # true and `existingSecret` is empty. This should be the contents
  # of your IAM credentials file.
  # secretContents:

# Whether to create the default BackupStorageLocation; if false, no default backup location is created
backupsEnabled: true
# Whether to create the default VolumeSnapshotLocation; if false, the snapshot feature is disabled
snapshotsEnabled: true

# Whether to deploy the restic daemonset.
deployRestic: false

# Backup schedules to create.
# Eg:
# schedules:
#   mybackup:
#     schedule: "0 0 * * *"
#     template:
#       ttl: "240h"
#       includedNamespaces:
#        - foo
schedules:
  fullcluster:
    schedule: "${serviceVariable.backup_schedule}"
    template:
      ttl: "${serviceVariable.backup_ttl}"

# Velero ConfigMaps.
# Eg:
# configMaps:
#   restic-restore-action-config:
#     labels:
#       velero.io/plugin-config: ""
#       velero.io/restic: RestoreItemAction
#     data:
#       image: gcr.io/heptio-images/velero-restic-restore-help
configMaps: {}

##
## End of additional Velero resource settings.
##

Although nothing in here indicates restic should be used, restic is the only way Velero is attempting to back up persistent volumes. This fails since the daemonset is not deployed, so backups hang. I want Velero to use GCP snapshots, but I cannot find a way to disable restic, which seems to contradict all the documentation implying I need to explicitly enable it. Hopefully I am missing something obvious; thanks for any assistance!

@thedevelopnik

I accidentally opened this as one of our service accounts, but I swear there's a real person behind it :).

@nrb
Contributor

nrb commented Mar 10, 2020

@thedevelopnik What information are you seeing that makes you think Velero is using restic? Could you provide that? I'm guessing it's that a velero backup never completes?

If that's the case, could you please post the output of velero backup describe for an affected backup?

Also, did you annotate any pods with the backup.velero.io/backup-volumes annotation?
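For reference, that annotation is what opts a pod's volumes into restic; a minimal sketch of what it looks like (pod, image, and claim names here are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  annotations:
    # Each volume listed here is backed up with restic instead of a
    # provider snapshot; Velero skips snapshotting those volumes.
    backup.velero.io/backup-volumes: data
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc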

@thedevelopnik

thedevelopnik commented Mar 11, 2020

Thanks for the quick response, @nrb!

Yes, I did include that annotation based on a guide I found; is that not necessary for non-restic backups?

Here's a sampling of the output of k logs -n velero <velero-pod> | grep restic:

{"command":"/velero","kind":"RestoreItemAction","level":"info","logSource":"pkg/plugin/clientmgmt/registry.go:100","msg":"registering plugin","name":"velero.io/restic","time":"2020-03-10T21:38:54Z"}
{"level":"warning","logSource":"pkg/cmd/server/server.go:478","msg":"Velero restic daemonset not found; restic backups/restores will not work until it's created","time":"2020-03-10T21:39:01Z"}
{"controller":"restic-repository","level":"info","logSource":"pkg/controller/generic_controller.go:76","msg":"Starting controller","time":"2020-03-10T21:39:01Z"}
{"controller":"restic-repository","level":"info","logSource":"pkg/controller/generic_controller.go:79","msg":"Waiting for caches to sync","time":"2020-03-10T21:39:01Z"}
{"controller":"restic-repository","level":"info","logSource":"pkg/controller/generic_controller.go:83","msg":"Caches are synced","time":"2020-03-10T21:39:01Z"}
{"backup":"velero/velero-fullcluster-20200310213901","group":"v1","level":"info","logSource":"pkg/backup/item_backupper.go:418","msg":"Skipping snapshot of persistent volume because volume is being backed up with restic.","name":"pvc-9030563d-317a-11ea-abad-42010a9600a7","namespace":"","persistentVolume":"pvc-9030563d-317a-11ea-abad-42010a9600a7","resource":{"Group":"","Resource":"persistentvolumes"},"time":"2020-03-10T21:39:37Z"}
{"controller":"restic-repository","level":"info","logSource":"pkg/controller/restic_repository_controller.go:156","msg":"Initializing restic repository","name":"twistlock-default-fw7mh","namespace":"velero","time":"2020-03-10T21:39:37Z"}

It calls out that it's skipping the snapshot because the volume is being backed up with restic. That log is for the pod where I added the annotation.

Here is output from a describe of the last backup:

Name:         velero-fullcluster-20200311000001
Namespace:    velero
Labels:       app.kubernetes.io/instance=velero
              app.kubernetes.io/managed-by=Tiller
              app.kubernetes.io/name=velero
              helm.sh/chart=velero-2.9.1
              velero.io/schedule-name=velero-fullcluster
              velero.io/storage-location=default
Annotations:  <none>

Phase:  PartiallyFailed (run `velero backup logs velero-fullcluster-20200311000001` for more information)

Errors:    25
Warnings:  0

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Snapshot PVs:  auto

TTL:  72h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-03-10 18:00:01 -0600 MDT
Completed:  2020-03-10 19:00:46 -0600 MDT

Expiration:  2020-03-13 18:00:01 -0600 MDT

Persistent Volumes:  0 of 24 snapshots completed successfully (specify --details for more information)

Restic Backups (specify --details for more information):
  New:  1

And here is a sample from the logs on why one of the other 24 volumes failed:

{"backup":"velero/velero-fullcluster-20200311000001","error":"error taking snapshot of volume: rpc error: code = Unknown desc = googleapi: Error 400: Invalid resource usage: 'Invalid value for storage location: us-east4-c'., invalidResourceUsage","group":"v1","level":"error","logSource":"pkg/backup/resource_backupper.go:287","msg":"Error backing up item","name":"redis-data-webapp-redis-qa-slave-1","namespace":"","resource":"persistentvolumeclaims","time":"2020-03-11T01:00:32Z"}

So from this and your comment it looks like we don't need the backup-volumes annotation? And my guess on the storage issue is that snapshotLocation should be us-east4, the region, rather than the specific zone?
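In values.yaml terms that would be something like this, a sketch showing only the relevant fields:

configuration:
  volumeSnapshotLocation:
    name: gcp
    config:
      # A GCP region rather than a zone: "us-east4", not "us-east4-c"
      snapshotLocation: us-east4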

This is clearly not a helm chart issue then, so I'm happy to close this issue and take it elsewhere so as not to junk things up here.

@thedevelopnik

@nrb that was it: we didn't need the annotation, and the snapshot location needed to be the broader region. Thanks for the assist! This issue can be closed.

@nrb
Contributor

nrb commented Mar 17, 2020

@thedevelopnik Sorry for the delayed response - that's correct, you only need that annotation for restic backups. Persistent volume snapshots will be taken without having to annotate or label anything.
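For anyone wanting to verify snapshot-only behavior themselves, something like the following should work; the backup name is illustrative, and --details lists per-volume snapshot status:

velero backup create test-backup --wait
velero backup describe test-backup --details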

@nrb nrb closed this as completed Mar 17, 2020
ndegory pushed a commit to ndegory/helm-charts that referenced this issue Jul 12, 2021