
GCP Pub Scaler should not panic #616

Closed · Fixed by #873
nachocano opened this issue Feb 6, 2020 · 1 comment
Labels: awaiting-customer-response, bug (Something isn't working)
Milestone: v1.5
Comments

nachocano commented Feb 6, 2020

The GCP Pub/Sub scaler panics when it fails to unmarshal invalid credentials.

Expected Behavior

When invalid credentials are passed to the GCP Pub/Sub scaler and they cannot be unmarshalled during creation of the StackDriverClient, the scaler should log the error and return it, so the operator keeps running.

Actual Behavior

Upon creation of the StackDriverClient, the code panics if the credentials cannot be unmarshalled.
See https://github.com/kedacore/keda/blob/master/pkg/scalers/stackdriver_client.go#L29
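
For illustration, here is a minimal Go sketch of the problematic pattern and the suggested fix. The struct, function names, and signatures below are simplified stand-ins, not the actual code in stackdriver_client.go:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// gcpCredentials is a hypothetical stand-in for the structure that
// stackdriver_client.go unmarshals; the real field set differs.
type gcpCredentials struct {
	ProjectID string `json:"project_id"`
}

// newStackDriverClientPanics mirrors the problematic pattern: a panic on
// malformed input crash-loops the entire keda-operator pod.
func newStackDriverClientPanics(rawCreds string) gcpCredentials {
	var creds gcpCredentials
	if err := json.Unmarshal([]byte(rawCreds), &creds); err != nil {
		panic(err)
	}
	return creds
}

// newStackDriverClient shows the suggested fix: return the error so the
// caller can log it and fail only this scaler, not the whole process.
func newStackDriverClient(rawCreds string) (gcpCredentials, error) {
	var creds gcpCredentials
	if err := json.Unmarshal([]byte(rawCreds), &creds); err != nil {
		return gcpCredentials{}, fmt.Errorf("error unmarshalling credentials: %w", err)
	}
	return creds, nil
}

func main() {
	// Malformed JSON: the fixed version returns an error instead of panicking.
	if _, err := newStackDriverClient(`{"project_id": `); err != nil {
		fmt.Println("scaler creation failed gracefully:", err)
	}
}
```

With a change along these lines, the unmarshalling failure surfaces through KEDA's normal error handling instead of restarting the operator.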

Steps to Reproduce the Problem

  1. Create a ScaledObject with the Pub/Sub scaler, with invalid credentials in the trigger metadata...
  2. Publish messages to the topic in question.
  3. The keda-operator pod will start crash-looping.

Specifications

  • KEDA Version: master
  • Platform & Version: GKE
  • Kubernetes Version: 1.15
  • Scaler(s): GCP Pub/Sub
nachocano added the bug label on Feb 6, 2020.
tomkerkhove (Member) commented:

Can you let us know if it's still the case with 1.4.1, please?

tomkerkhove added this to the v1.5 milestone on Jul 7, 2020.
SpiritZhou pushed a commit to SpiritZhou/keda that referenced this issue on Jul 18, 2023.
This issue was closed.