Prerequisites

General Requirements and Prerequisites

NOTE: All Kubeturbo deployment-related documentation is now in the official IBM Docs here. This GitHub wiki is no longer being updated; please refer to the official IBM Docs going forward.


The Turbonomic platform gathers information from your Kubernetes/OpenShift environment through the Kubeturbo container image, which is deployed into the Kubernetes (k8s) or OpenShift (OCP) cluster you want to manage. This guide assumes that the Turbonomic Server is already up and running with a valid license applied.

As of Turbonomic version 8.7.5, the Kubeturbo container images are available from the public IBM Container Registry (ICR) repo. Kubeturbo runs as a single-pod deployment with the following resources (a minimal sketch of these resources follows the list):

  1. Namespace or Project (default is turbo)
  2. Service Account
  3. Cluster role binding for the service account
  4. ConfigMap with updated information to connect to the Turbonomic Server
  5. Deployment of kubeturbo (YAML, Operator, OperatorHub (OpenShift), Helm)
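
For orientation, here is a minimal sketch of those five resources as plain YAML. The names (turbo namespace, turbo-user service account, turbo-all-binding) follow the commonly published defaults but are assumptions here; your chosen deployment method (YAML, Operator, OperatorHub, Helm) generates the real manifests.

```yaml
# Sketch only: illustrative names and values, not a complete manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: turbo                       # 1. Namespace or Project (default: turbo)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: turbo-user                  # 2. Service account the Kubeturbo pod runs as
  namespace: turbo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: turbo-all-binding           # 3. Binds the service account to a cluster role
subjects:
- kind: ServiceAccount
  name: turbo-user
  namespace: turbo
roleRef:
  kind: ClusterRole
  name: cluster-admin               # or a least-privileged custom role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config                # 4. Connection details for the Turbonomic Server
  namespace: turbo
data: {}                            # see the ConfigMap sketch later on this page
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeturbo                   # 5. The single-pod Kubeturbo deployment
  namespace: turbo
spec:
  replicas: 1
  selector:
    matchLabels: {app: kubeturbo}
  template:
    metadata:
      labels: {app: kubeturbo}
    spec:
      serviceAccountName: turbo-user
      containers:
      - name: kubeturbo
        image: icr.io/cpopen/turbonomic/kubeturbo:<version>
```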

Figure: Turbonomic and Kubernetes Network Detail (turboNetwork_anyIAASanyK8S.png)

Requirements: Network and Deployment

Kubernetes Version Support

  • Kubeturbo can be deployed into many versions of Kubernetes. Supported Kubernetes and OpenShift versions include:
    • OpenShift release 3.11 and Red Hat supported GA versions of OpenShift 4.x
    • Kubernetes versions 1.21 up to the latest supported GA version
    • any upstream-compliant k8s distribution, including managed Kubernetes environments (for example Rancher, AKS, EKS, GKE, IKS, ROKS, ROSA, etc.)
  • Supported Architectures: x86, Power (ppc64le), LinuxONE (os390)
    • Kubeturbo requires a Linux node to run on.
    • By default, Kubeturbo will deploy on any schedulable non-control-plane node, such as worker/app/agent/compute node roles (see the scheduling sketch after this list).
    • Starting with Turbo v8.8.6, container images are not compatible with dockershim container runtimes.
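
To make the scheduling constraint concrete, here is a minimal sketch of the pod-spec fields that express it, assuming the standard kubernetes.io/os node label; most deployment methods set an equivalent for you.

```yaml
# Sketch only: constrain the Kubeturbo pod to Linux nodes.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux   # well-known label on current Kubernetes versions
      # No toleration for control-plane taints is added, so the pod
      # stays off tainted control-plane nodes by default.
```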

Deployment and Turbo Server

  • One Kubeturbo instance, like any other Turbonomic probe, can communicate with only one Turbonomic Server. Deploy one Kubeturbo pod per cluster (control plane). If an environment has more than one Turbo Server, configure one Kubeturbo instance per server, as defined in the ConfigMap. Multiple Kubeturbo instances can share the same namespace or run in separate namespaces.
  • Turbonomic Server is in place
    • Any Turbonomic Server, whether SaaS, OVA-based, deployed on any Kubernetes/OCP cluster, or Cisco CWOM.
    • The Turbo Server version should be equal to the Kubeturbo probe version, or higher by at most one version (N+1).
    • Running Cisco Intersight Workload Optimizer (aka IWO)? Refer only to the IWO Target Configuration Guide - Cloud Native Targets.
  • Turbonomic Server criteria: the server is running with a Trial or Premium license applied. Gather the following information (a ConfigMap sketch showing where these values land follows this list):
    • Turbonomic Server URL used to access the Turbo UI: https://<TurboIPaddressOrFQDN>
    • Turbonomic username and password
      • administrator or site administrator role
      • AD user supported when Turbo Server is integrated with AD
      • Running on SaaS or using multi-factor authentication (MFA)? The Turbo user needs to be a local account.
      • NOTE: these credentials can be provided in a Kubernetes secret. For details, refer to your deployment method.
    • Turbonomic Server version. To get this from the UI, go to Settings -> Updates -> About and use the numeric version, such as “8.3” (no patch version needed).
    • The Kubeturbo image tag should match the Turbo Server version. For more details, refer to Turbonomic - CWOM - Kubeturbo version mappings.
  • Access and Permissions to create all the resources required:
    • The user deploying Kubeturbo needs cluster-admin role level access to create the following resources: the namespace and the cluster role binding for the service account.
    • The Kubeturbo pod runs with a service account bound to a cluster-admin role. Least-privileged custom role options are shown here.
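
To show where the gathered values land, here is a sketch of the Kubeturbo ConfigMap. The turbo.config key and JSON field names follow the commonly published Kubeturbo YAML but should be treated as assumptions; verify them against your deployment method and version, and prefer a Kubernetes secret for the credentials where supported.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config
  namespace: turbo
data:
  turbo.config: |-
    {
      "communicationConfig": {
        "serverMeta": {
          "version": "8.3",
          "turboServer": "https://<TurboIPaddressOrFQDN>"
        },
        "restAPIConfig": {
          "opsManagerUserName": "<turbo-username>",
          "opsManagerPassword": "<turbo-password>"
        }
      },
      "targetConfig": {
        "targetName": "<cluster-display-name>"
      }
    }
```

Here version is the numeric Turbo Server version from the UI, turboServer is the URL used to access the Turbo UI, and the restAPIConfig block can be dropped entirely when the credentials come from a Kubernetes secret.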

Container Repo Access and Network

Refer to the Figure above “Turbonomic and Kubernetes Network Detail”.

  • These instructions assume the node you are deploying to has internet access to pull the Kubeturbo image from the ICR repository, or that your environment is configured with a private repo. For more details on working with a private repo, see here.
    • Kubeturbo probe container image:
      • IBM Container Registry (ICR): icr.io/cpopen/turbonomic/kubeturbo:<version> for version 8.7.5 and higher
      • Images for version 8.7.4 and older are available from either Docker Hub or the Red Hat Container Catalog
    • Kubeturbo operator container image (if applicable):
      • IBM Container Registry (ICR): icr.io/cpopen/kubeturbo-operator:<version> for version 8.7.5 and higher
      • Images for version 8.7.4 and older are available from either Docker Hub or the Red Hat Container Catalog
    • CPU Frequency container image (also known as busybox):
      • IBM Container Registry (ICR): icr.io/cpopen/turbonomic/cpufreqgetter
      • For more information on parameters associated with this job, see the article here.

For details on how to configure your deployment for a private repo, read this article.
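
When images are mirrored into a private repo, only the image reference (and possibly a pull secret) changes. A sketch, where registry.example.com and private-repo-pull-secret are placeholders for your own registry and secret:

```yaml
# Sketch only: point Kubeturbo at a private mirror of the ICR image.
spec:
  template:
    spec:
      containers:
      - name: kubeturbo
        image: registry.example.com/turbonomic/kubeturbo:<version>
      imagePullSecrets:
      - name: private-repo-pull-secret   # only if your registry requires auth
```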

  • The Kubeturbo pod requires access to the kubelet on every node and to the apiserver.
    • Kubelet network: HTTPS on port 10250 (default).
  • The Kubeturbo pod needs https/tcp and wss (secure websocket) access to the Turbonomic Server.
  • Proxies between Kubeturbo and the Turbonomic Server need to allow websocket communication (a NetworkPolicy sketch follows this list).
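
If your cluster enforces NetworkPolicies, the egress requirements above can be expressed roughly as follows. A sketch assuming the apiserver and Turbonomic Server are reached on 443 and the pod carries an app: kubeturbo label; adjust ports and selectors to your topology:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kubeturbo-egress
  namespace: turbo
spec:
  podSelector:
    matchLabels:
      app: kubeturbo
  policyTypes: [Egress]
  egress:
  - ports:
    - port: 10250        # kubelet on every node (https)
      protocol: TCP
    - port: 443          # apiserver and Turbonomic Server (https/tcp and wss)
      protocol: TCP
    - port: 53           # DNS, so the server FQDN resolves
      protocol: UDP
    - port: 53
      protocol: TCP
```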

Requirements: Resources

By default, Kubeturbo deploys without limits or requests set. Our recommendation is to leave it as is.

If you must set limits/requests, the amount of resources Kubeturbo requires is related to the number of workloads and the number of pods being managed.

  • A workload is defined as a unique workload controller, such as deployment foo, statefulset bar, etc.
  • After you deploy Kubeturbo, you can determine the number of workloads by going to the Supply Chain view for a single k8s cluster and reading the count on the Workload Controller entity.

Use the following table as a guide for setting memory limits:

| Number of Pods | Number of Workload Controllers | Recommended Memory Limit |
|---|---|---|
| 5K | 2.5K | 4 Gi |
| 5K | 5K | 4 Gi |
| 10K | 5K | 6 Gi |
| 10K | 10K | 6.5 Gi |
| 20K | 10K | 9.2 Gi |
| 20K | 20K | 12 Gi |
| 30K | 15K | 13 Gi |
| 30K | 30K | 16 Gi |

Note

  • To avoid throttling, do not set a CPU limit.
  • Memory requests, if needed, can be set to 1 Gi.
  • CPU requests, if needed, can be set to 1 core.
  • To configure Kubeturbo container spec limits and requests, refer to your preferred deployment method for details (see the sketch after this list).
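
Putting the table and notes together, a sketch of the Kubeturbo container resources stanza for a cluster around the 10K pods / 5K workload controllers row:

```yaml
# Sketch only: memory limit taken from the sizing table above.
resources:
  requests:            # optional; set only if your policy requires requests
    memory: 1Gi
    cpu: "1"           # 1 core
  limits:
    memory: 6Gi        # 10K pods / 5K workload controllers
    # intentionally no cpu limit, to avoid throttling
```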

Next Steps

Pick your preferred deployment option. Click here to view all four options, or click one of the methods below to deploy:

  1. Deploy Resources via YAML
  2. Operator
  3. OpenShift OperatorHub
  4. Helm Chart

There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.
