Inconsistent server behavior with several replicas (k8s) #331

Open
baznikin opened this issue Dec 27, 2022 · 5 comments

@baznikin commented Dec 27, 2022

Summary

Messages are not delivered to clients when several server replicas are running.

Steps to reproduce

The server is running via the Mattermost Kubernetes Operator.
Version: 7.5.2

  1. Scale the installation to 1 replica. Mattermost operates correctly.
  2. Scale the installation to 2 or more replicas. Messages go missing (see the sketch after this list).
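
For reference, scaling is done through the operator's custom resource; a minimal sketch of the relevant spec, assuming the `Mattermost` CRD from `installation.mattermost.com/v1beta1` (metadata names are placeholders from our setup, adjust as needed):

```yaml
# Hedged sketch: the Mattermost operator's custom resource.
# apiVersion/kind are assumptions based on the v1beta1 CRD;
# metadata names are placeholders for our installation.
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost
  namespace: mattermost
spec:
  version: 7.5.2
  replicas: 2   # setting this to 1 works; 2 or more loses messages
```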

Expected behavior

The software operates the same no matter how many replicas are running.

Observed behavior (that appears unintentional)

On the screenshots:

  • right window: the account that sends messages (web version)
  • left top: the account that receives messages (Linux desktop client ver 5.2.2)
  • left bottom: the account that receives messages (web version)

1 server replica. All messages delivered:
[screenshot]

2 server replicas. Messages are delivered to clients randomly. In this example I use two clients authenticated to the same account. While testing I saw situations where both clients received or "lost" the same messages, or where they "saw" different messages (as pictured on the screenshot). After a refresh, all messages are shown.
[screenshot]

Possible fixes

This situation arose a week ago; we ran 2 server replicas before and didn't suffer any trouble (or didn't notice).
Around the same time our Kubernetes cluster was upgraded from v1.23.x to v1.24.8. We use DigitalOcean managed k8s; the ingress controller is Kong.
Maybe some sort of sticky session will help; I haven't tried it yet (a hedged sketch follows).
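
If anyone wants to experiment, cookie-based affinity via Kong's upstream hashing might be a starting point; a hedged sketch, assuming the Kong Ingress Controller's `KongIngress` CRD and the upstream `hash_on`/`hash_on_cookie` fields (not verified on our cluster, and the cookie name is hypothetical):

```yaml
# Hedged sketch: cookie-based session affinity through Kong's
# upstream hashing. Reference it from the Ingress with the
# konghq.com/override annotation; field names follow Kong's
# upstream entity and are assumptions here.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: mattermost-sticky
  namespace: mattermost
upstream:
  hash_on: cookie
  hash_on_cookie: mm-affinity    # hypothetical cookie name
  hash_on_cookie_path: /
```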

@agnivade (Member)

I'll ping our cloud platform team to chime in @fmartingr @mirshahriar @gabrieljackson

@gabrieljackson (Collaborator)

Hey @baznikin, let's see if we can get to the bottom of this. First off, can you confirm that you are running Mattermost with a license? The issue you described is similar to behavior I have seen in the past where Mattermost server clustering wasn't working as expected. If it is licensed, can you review the logs and/or system console and verify that the multiple servers seem to be talking correctly to one another?

@baznikin (Author)

Hello @gabrieljackson! We run without a license; we activated a trial license to look around and cancelled it a couple of weeks later. That was 2 weeks ago, and users started to complain 1-1.5 weeks ago. So yes, it could be correlated.
What messages should I look for? This one, I suppose?

"This server is not licensed to run in High Availability mode."

@gabrieljackson (Collaborator)

Yep, that's exactly the log line we are looking for. The behavior you are seeing is what can occur when you run multiple Mattermost server instances but server clustering is not activated, due to not being licensed or other issues. When clustering is not enabled you will see messages temporarily go "missing": if you refresh your client/webapp the messages will be there, but you may not have received indicators that they arrived.

@baznikin (Author)

Thanks for the clarification! However, this is a very bad user experience, since the MM Operator creates 2 replicas by default (and no "red alarm" fired in my head while reading the deployment documentation or the validated architecture topics, though possibly that's my fault). If we had stumbled over this during testing, most likely we would have just dropped MM and chosen a competitor. I suppose we should propose that the MM Operator handle this situation and set the replica count to 1 if there is no license at the moment (i.e. the trial period ended or something else). In the meantime, a possible workaround is sketched below.
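
A hedged sketch, assuming the same `Mattermost` custom resource as above (resource and object names are assumptions):

```sh
# Hedged sketch: patch the operator's Mattermost custom resource
# back to one replica until a license is in place.
kubectl patch mattermost mattermost -n mattermost --type merge \
  -p '{"spec":{"replicas":1}}'
```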

@amyblais amyblais transferred this issue from mattermost/mattermost Jan 3, 2023