
Scaling

Lauri Ojansivu edited this page Jul 11, 2022 · 28 revisions


Scaling Wekan Snap with Automatic Updates

Recommended specs:

  • Try adding Redis Oplog (the redis-oplog Meteor package).
  • One bare metal server (or a VM on a host without CPU oversubscription). Among the fastest providers, for example: UpCloud, Hetzner, Packet.
  • NVMe or SSD disk. Disk speed makes a large difference when opening a Wekan board: about 2 seconds on SSD versus 5 minutes on HDD.
  • Minimum 60 GB total disk space and 40 GB free disk space. Take daily backups to another location, and set up monitoring and alerting for low disk space, because a full disk causes database corruption.
  • Newest 64-bit Ubuntu.
  • 4 GB RAM minimum. Check with free -h whether the server is using any swap; if it is, add more RAM.
  • Some performance-optimized CPUs/cores: 2 minimum, 4 is better. Check with nproc how many CPUs you have. Watch with top or htop whether the server is pegged at 100% CPU; if it is, add faster or more CPUs. If Wekan is not using the CPUs you added, adding even more is not useful.
  • Do not store attachments in the database (for example, by uploading files to cards). Instead, use markdown links to files, like [Document](https://example.com/files/document.doc). To hide attachments on cards, open the Wekan board, then the hamburger menu => Board Settings => Card Settings, and uncheck [_] Attachments.
  • In Wekan Admin Panel, click Settings / Accounts / Hide System Messages of All Users. Anyone who needs system messages can re-enable them with the slider on an opened card. Later, if many users have manually re-enabled them, click that same Admin Panel button again.
  • Check webhooks: look at Admin Panel / Settings / Global Webhooks (which sends most board actions to a webhook) and, on each board, the per-board webhooks at Wekan board => hamburger menu => Outgoing Webhooks (which send most actions of one board to a webhook; more info at wiki right menu Webhooks). You can also inspect these with DBGate at localhost:27019, database wekan, table integrations. Each webhook endpoint should return a 200 response immediately, before processing any data; otherwise it will slow Wekan down a lot.
  • These settings will become defaults in a future Wekan version.
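The manual checks suggested above (free -h for swap, nproc for CPU count, free disk space) can be collected into one small script. This is only a sketch; the thresholds in the comments are taken from the specs in this list:

```shell
#!/bin/sh
# Quick health check for a Wekan server, based on the specs above.

echo "== CPU cores (2 minimum, 4 better) =="
nproc

echo "== Swap usage (any non-zero 'used' means: add more RAM) =="
free -h | grep -i '^swap'

echo "== Free disk space on / (keep at least 40 GB free) =="
df -h /
```

Running this from cron and mailing the output is one simple way to get the low-disk-space alerting recommended above.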

Minimum specs:

  • RasPi 3 with 1 GB RAM and an external SSD disk for Wekan and MongoDB.
  • While this works, it is only for minimal usage.
  • A newer RasPi is recommended even for minimum use.

Alternatives

See https://wekan.github.io => Download for Kubernetes, OpenShift, etc.


OLD INFO BELOW:

Story: MongoDB on bare metal

From Tampa:

Hey,

... (about other tries) ...

Last month I threw all this out, recreated all the boards and connected them centrally to a single instance of mongo running on a dedicated server with custom hardware. This was like stepping into the light almost. Since then not a single machine has sent me a mail that it reached 50% usage. It seems insignificant, but the results speak for themselves.

The cloud instances are all shared 1 vCPU, 1 GB RAM, 10 GB storage; they just run wekan natively piped to the background, no docker, no snap, native install. They are connected to the central DB server sitting in the same datacenter. I stuffed a RAID 6 with solid disks in that and gave it a hardware controller with a nice big cache. With the latency being below 5 ms over the network and the DB server having plenty of IO to go around, it almost never has a queue of commits going to it, and from the cache and IO use I suspect I could grow this tenfold easily.

With this setup each board essentially runs on half the hardware, in terms of cost anyway, yet it works so much better. There seems to be some magic ingredient here: really fast IO for mongo reduces the system load of wekan by such a large amount that it can practically run even large boards with 40+ concurrent users on the least hardware most cloud providers even offer. With the central server, setting up backups has become so much easier; I no longer need to wait for low usage to do backups either.
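A native install like the one described above typically points Wekan at the central database through the standard MONGO_URL environment variable. The hostname, credentials, and URLs below are placeholders, not values from the story; this is only a sketch of such a start script:

```shell
#!/bin/sh
# Sketch: start script for a native (non-Docker, non-Snap) Wekan install
# using a central MongoDB server in the same datacenter.
# db.internal.example.com and the credentials are placeholder values.

export MONGO_URL='mongodb://wekan:secret@db.internal.example.com:27017/wekan'
export ROOT_URL='https://boards.example.com'
export PORT=8080

# Started from the extracted Wekan bundle directory:
# node main.js
```

Keeping database latency to a few milliseconds, as in the story, matters because the app makes many small queries per screen, so round-trip time multiplies quickly.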

Scaling to more users

For any large scale usage, you can:

a) scale the Wekan app layer with Docker Swarm, etc.

b) run big reads or writes on a replica

c) do big reads or writes in small batches at a time, at night, or when database CPU usage is low
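For option (b), a common pattern is to route heavy reads to a secondary member of a MongoDB replica set via the readPreference connection-string option. The replica set name rs0 and the hostnames below are placeholders, and the commands are commented out because they need a live replica set:

```shell
#!/bin/sh
# Read-only connection string preferring a secondary replica member,
# so heavy reads do not load the primary. Hostnames and rs name are placeholders.
WEKAN_RO_URI='mongodb://db1.example.com,db2.example.com/wekan?replicaSet=rs0&readPreference=secondary'

# Ad-hoc heavy query routed to a secondary:
# mongosh "$WEKAN_RO_URI" --eval 'db.cards.countDocuments()'

# Bulk export (backup) taken from a secondary, keeping the primary responsive:
# mongodump --uri="$WEKAN_RO_URI" --out="/backup/wekan-$(date +%F)"

echo "$WEKAN_RO_URI"
```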

Related to docker-compose.yml at https://github.com/wekan/wekan, using Docker Swarm:
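As a hedged sketch, scaling the Wekan app containers under Docker Swarm could look like the following, assuming the stack is deployed from that docker-compose.yml under the stack name wekan (so the app service name becomes wekan_wekan; both names are assumptions). Note that only the app layer scales this way; MongoDB should remain a single instance or a replica set:

```shell
#!/bin/sh
# Sketch: scale Wekan app containers under Docker Swarm.
# Stack and service names are assumptions based on wekan/wekan's docker-compose.yml.
STACK=wekan

# docker swarm init
# docker stack deploy -c docker-compose.yml "$STACK"
# docker service scale "${STACK}_wekan=3"   # run three Wekan app replicas
# docker service ls                         # verify replica counts

echo "stack name: $STACK"
```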

How to scale to more users

MongoDB replication docs

MongoDB compatible databases

AWS

Azure OIDC