How to deploy services through multiple instances #6932
Comments
hey @wangmiao1002
Hello, I am currently using Kubernetes. The components are deployed as pods, similar to a Helm chart, but each pod is a singleton. Can multiple server pods be started to handle the UI load? @AndrewChubatiuk
What do you mean by singleton? Does each pod contain the scheduler, worker, and server?
Only one service runs in each pod, and each service runs only one pod. If that service encounters an exception, it becomes unavailable.
I'm building an HA production-grade Redash deployment on AWS with ECS Fargate. I'll post when it's done.
Hello, I used this configuration to run the Redash services across multiple pods:
scheduled_worker
server
adhoc_worker
scheduler
redash_worker
https://github.com/getredash/setup/blob/master/data/compose.yaml
Redash version: 10.1.0
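For context, the compose services listed above map onto Kubernetes roughly as one Deployment per role. A minimal sketch of one such Deployment, where the names, labels, and image tag are illustrative assumptions rather than values taken from this thread:

```yaml
# Hypothetical sketch: one Deployment per Redash role, mirroring the
# services in compose.yaml. Names and the image tag are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redash-server
  template:
    metadata:
      labels:
        app: redash-server
    spec:
      containers:
        - name: server
          image: redash/redash:10.1.0.b50633  # assumed tag for 10.1.0
          args: ["server"]  # same role argument the compose file passes
          ports:
            - containerPort: 5000
```

The other roles (scheduler, scheduled_worker, adhoc_worker, worker) would follow the same pattern with a different `args` value and no exposed port.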
When users download large files, the server pod may lag and affect usability. I have a few questions I would like advice on:
1. Can I limit the number of concurrent user downloads through configuration?
2. Can high availability be achieved simply by deploying multiple server pods?
3. If multiple pod instances are deployed as described above, will scheduled queries run multiple times?
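One way to reason about questions 2 and 3 in Kubernetes terms: the stateless web server can be scaled horizontally behind a Service, while the scheduler is typically kept at a single replica, since it is the process that enqueues scheduled jobs. A hedged sketch, assuming the hypothetical Deployment names below:

```yaml
# Illustrative assumption: scale the stateless server for availability...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash-server
spec:
  replicas: 3   # multiple server pods (question 2)
---
# ...but keep the scheduler a singleton; running more than one replica
# risks each instance enqueuing the same scheduled query (question 3).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash-scheduler
spec:
  replicas: 1
```

(Selector and pod template fields are omitted here for brevity; a real manifest needs them.)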