To run a highly available (HA) cluster you need more than one master and several workers, spread across different availability zones.
With multiple master nodes you can perform graceful (zero-downtime) upgrades and survive an AZ failure.
Harden the Kubernetes API so it is reachable only from allowed IPs, either with firewall rules or by setting kubernetesApiAccess in the kops cluster config.
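A minimal sketch of the kops cluster spec change, assuming a hypothetical office CIDR (edit with `kops edit cluster`):

```yaml
spec:
  kubernetesApiAccess:
  - 203.0.113.0/24   # example/hypothetical office CIDR; replace with your own
```

After saving, apply the change with `kops update cluster --yes` so the API server's security group rules are updated.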
Billing Reduction Tips
Opt for EC2 Spot Instances and Reserved Instances.
Back the cluster with some on-demand instances that can take up the slack if spot instances are interrupted; this improves availability and reliability.
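A sketch of a spot-backed kops InstanceGroup, assuming hypothetical names and sizes; setting maxPrice makes kops request spot capacity at or below that bid:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: spot-nodes          # hypothetical group name
spec:
  role: Node
  machineType: m4.large     # hypothetical instance type
  maxPrice: "0.10"          # spot bid in USD/hour; presence of maxPrice requests spot
  minSize: 2
  maxSize: 6
```

Keep a second, on-demand InstanceGroup (no maxPrice) alongside it so workloads can reschedule when spot capacity is reclaimed.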
Size of the master node (by number of worker nodes):
- 1-5 nodes: m3.medium
- 6-10 nodes: m3.large
- 11-100 nodes: m3.xlarge
- 101-250 nodes: m3.2xlarge
- 251-500 nodes: c4.4xlarge
- more than 500 nodes: c4.8xlarge
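The sizes above can be passed to kops at cluster creation time. A sketch, with hypothetical cluster name, state store, and zones:

```shell
kops create cluster \
  --name=cluster.example.com \
  --state=s3://example-kops-state \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --master-size=m3.large \
  --node-size=m4.large \
  --node-count=6
```

Spreading --master-zones across three AZs gives an odd-sized etcd quorum that can tolerate the loss of one zone.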
CPU/MEM usage per service
Memory and CPU allocation were observed at peak time; they represent the maximum memory/CPU the engine uses during a file scan. We scanned multiple file formats, since different formats can trigger different components of the engine, to estimate these numbers realistically.
Some engines, like ClamAV, have a daemonized version; those are generally faster because the rules are loaded only once.
| Service | CPU Util | Mem Util | Performance |
|---|---|---|---|
| AV Avast | 1 core | 1260MB | Fast |
| AV Avira | 1 core | 200MB | Slow |
| AV Bitdefender | 1 core | 600MB | Slow |
| AV ClamAV | 1 core | 1700MB | Fast |
| AV COMODO | 1 core | 300MB | Medium |
| AV DrWeb | 1.2 core | 580MB | Fast |
| AV ESET | 1 core | 220MB | Medium |
| AV FSecure | 1 core | 420MB | Fast |
| AV McAfee | 1 core | 400MB | Medium |
| AV Sophos | 1 core | 300MB | Medium |
| AV Symantec | 0.4 core | 300MB | Medium |
| AV TrendMicro | core | MB | Medium |
| AV Windefender | core | MB | Medium |
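The observed peaks can be turned into Kubernetes resource requests so the scheduler packs engines correctly. A sketch for a ClamAV container, assuming the 1 core / 1700MB peak from the table plus some headroom on the limit (the container and deployment names are hypothetical):

```yaml
# Fragment of a Deployment pod spec for a hypothetical ClamAV service
containers:
- name: av-clamav
  image: example/av-clamav:latest   # hypothetical image
  resources:
    requests:
      cpu: "1"          # observed peak CPU during a scan
      memory: 1700Mi    # observed peak memory during a scan
    limits:
      cpu: "1"
      memory: 2Gi       # headroom above the observed peak to avoid OOM kills
```

Requests sized at the observed peak prevent the scheduler from overcommitting nodes when several scans run concurrently.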
Tune kops max pods
Kops sets the maximum number of pods per node to 150 by default.
However, if you are spinning up large compute instances, you might hit this limit while you still have spare compute capacity. You can edit this value with `kops edit cluster`:
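A sketch of the relevant cluster spec fragment; the value 250 is an illustrative assumption, and the usable maximum also depends on the instance type's available pod IPs:

```yaml
spec:
  kubelet:
    maxPods: 250   # hypothetical value; default is 150 per node
```

Apply with `kops update cluster --yes` followed by a rolling update so the new kubelet flag reaches existing nodes.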