
[Mattermost] Bot stop responding to commands after a while #201

Closed
shahbour opened this issue Oct 16, 2019 · 17 comments · Fixed by #222
Labels
bug

Comments

@shahbour commented Oct 16, 2019

Describe the bug
I installed BotKube in Kubernetes and configured it to work with Mattermost.
Everything is good and working, but after a while the bot stops responding to commands: ping does not return a pong, nor does any other command work, yet I still receive the notifications.

Deleting the pod so it is recreated fixes everything for a while.

The logs do not show anything. How can I increase the verbosity of the logs to see what is going on?

@shahbour added the bug label Oct 16, 2019
@PrasadG193 (Member) commented Oct 16, 2019

@shahbour could you please post the settings section in the configuration? (Make sure you remove sensitive info before posting)

@shahbour (Author) commented Oct 16, 2019

Below is a dump of the ConfigMap; I just used the defaults.

data:
  config.yaml: |
    communications:
      elasticsearch:
        enabled: false
        index:
          name: botkube
          replicas: 0
          shards: 1
          type: botkube-event
        password: ELASTICSEARCH_PASSWORD
        server: ELASTICSEARCH_ADDRESS
        username: ELASTICSEARCH_USERNAME
      mattermost:
        channel: kuberentes
        enabled: true
        notiftype: short
        team: noc
        token: xxxxxxxxxx
        url: https://mattermost.xxxxxxx.com
      slack:
        channel: SLACK_CHANNEL
        enabled: false
        notiftype: short
        token: SLACK_API_TOKEN
      webhook:
        enabled: false
        url: WEBHOOK_URL
    recommendations: true
    resources:
    - events:
      - create
      - delete
      - error
      name: pod
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: service
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - update
      - delete
      - error
      name: deployment
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - update
      - delete
      - error
      name: statefulset
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: ingress
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: node
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: namespace
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: persistentvolume
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: persistentvolumeclaim
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: secret
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: configmap
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: daemonset
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - update
      - delete
      - error
      name: job
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: role
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: rolebinding
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: clusterrole
      namespaces:
        ignore:
        - null
        include:
        - all
    - events:
      - create
      - delete
      - error
      name: clusterrolebinding
      namespaces:
        ignore:
        - null
        include:
        - all
    settings:
      allowkubectl: true
      clustername: uk
      configwatcher: true
      upgradeNotifier: true
    ssl:
      enabled: true
@PrasadG193 (Member) commented Oct 16, 2019

@shahbour Which version of k8s are you using? You can make the logs more verbose by setting --set logLevel=debug while doing helm install or helm upgrade.
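For reference, the flag mentioned above would be applied roughly as follows. This is a sketch: the release name botkube, the namespace, and the chart reference infracloudio/botkube are assumptions, so adjust them to match your installation.

```shell
# Sketch: turn on debug logging for an existing BotKube release.
# Release name, namespace, and chart reference are assumptions.
helm upgrade botkube infracloudio/botkube \
  --namespace botkube \
  --reuse-values \
  --set logLevel=debug
```

The --reuse-values flag keeps the previously supplied values (Mattermost URL, token, etc.) so that only the log level changes.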

@shahbour (Author) commented Oct 17, 2019

OK, I will try that. I am using version 1.16.

@shahbour (Author) commented Nov 1, 2019

I updated to logLevel=debug and I am seeing the debug logs:

DEBU[2019-11-01T10:06:14Z] Ignoring info to replicaset/gitlab-registry-77fdc75bf.15d301e41e1ec58c in gitlab namespaces
DEBU[2019-11-01T10:06:16Z] Processing delete to pod
DEBU[2019-11-01T10:06:16Z] Processing delete to pod/gitlab-registry-77fdc75bf-94wmk in gitlab namespaces
DEBU[2019-11-01T10:06:16Z] Filterengine running filters
DEBU[2019-11-01T10:06:16Z] Ignore Namespaces filter successful!
DEBU[2019-11-01T10:06:16Z] Object annotations filter successful!
DEBU[2019-11-01T10:06:16Z] Pod label filter successful!

Now it has stopped replying to my commands, and there is nothing in the logs.

Does it support trace, or any log message that confirms a message was received?

@PrasadG193 (Member) commented Nov 7, 2019

I am assuming the Pod doesn't crash or get restarted, right?
Could you please provide the steps to reproduce the issue?

@shahbour (Author) commented Nov 8, 2019

No, it does not. If I delete the pod, it works perfectly for some time.
It always happens eventually, but I can't tell exactly what triggers it; sometimes it works for 5 hours, sometimes for 1 hour.
Nothing special from my side.

(⎈ |production-uk:mattermost) shahbour@localhost  ~/Documents/kubernetes/hazelcast   master ●  kubectl get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
botkube-78448d879f-bzkvz                              1/1     Running   0          22d
mattermost-mattermost-team-edition-6bfb4574d4-6pc78   1/1     Running   0          24d
mattermost-mysql-55d66998f9-sfhc6                     1/1     Running   0          25d

As you can see, the restart count is 0.

@PrasadG193 changed the title from "[BUG] Bot stop responding to commands after a while" to "[Mattermost] Bot stop responding to commands after a while" on Nov 8, 2019
@PrasadG193 (Member) commented Nov 8, 2019

@shahbour have you set up the Mattermost integration with self-signed certs or without TLS?

@shahbour (Author) commented Nov 8, 2019

I don't recall; how can I check?

@shahbour (Author) commented Nov 8, 2019

For Mattermost I use a certificate from Let's Encrypt.

@PrasadG193 (Member) commented Nov 8, 2019

I see you have also set ssl.enabled=true in config.

    ssl:
      enabled: true

Can you verify whether there is a secret named botkube-secret in the botkube namespace?
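A quick way to check this, assuming BotKube was installed into a namespace named botkube (the namespace is an assumption; adjust it if you installed elsewhere):

```shell
# List the secret; a "NotFound" error means the secret is missing,
# which conflicts with ssl.enabled=true in the config above.
kubectl get secret botkube-secret -n botkube
```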

@shahbour (Author) commented Nov 8, 2019

I don't have a botkube-secret.

@PrasadG193 (Member) commented Nov 8, 2019

Could you please share the helm install command you used to install the backend?

@shahbour (Author) commented Nov 8, 2019

This is the values.yaml I am using:

config:
  communications:
    mattermost:
      enabled: true
      url: https://mattermost.xxxxxxxx.com
      token: xxxxxxxxxxx
      team: noc
      channel: kuberentes
  settings:
    clustername: uk
    allowkubectl: true
image:
  repository: infracloudio/botkube
  tag: v0.9.0
logLevel: debug
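For context, a values.yaml like the one above would typically be applied with a command along these lines. This is a sketch using Helm 2 syntax (current at the time of this thread); the release name, namespace, and chart reference are assumptions.

```shell
# Sketch: install BotKube using the custom values file shown above.
# Release name, namespace, and chart reference are assumptions.
helm install --name botkube --namespace botkube \
  -f values.yaml \
  infracloudio/botkube
```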
@PrasadG193 (Member) commented Nov 8, 2019

You have set ssl.enabled=true in the config you posted earlier. Can you please confirm that?
#201 (comment)

@shahbour (Author) commented Nov 12, 2019

@PrasadG193 (Member) commented Nov 23, 2019

I am able to reproduce the issue, @shahbour. Thanks for reporting.
