diff --git a/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx b/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
index ed7a98b684..1495b57a7f 100644
--- a/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
+++ b/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
@@ -7,7 +7,7 @@ content:
   paragraph: This page explains how to deploy Qdrant Hybrid Cloud on Scaleway Kubernetes Kapsule.
 tags: vectordb qdrant database
 dates:
-  validation: 2024-04-16
+  validation: 2024-10-21
   posted: 2024-04-16
 categories:
   - kubernetes
@@ -21,10 +21,10 @@ Qdrant Hybrid Cloud on Scaleway offers a secure and scalable solution that meets
 
 Key benefits of running Qdrant Hybrid Cloud on Scaleway include:
 
-- **AI-Focused resources:** Scaleway provides dedicated resources and infrastructure tailored for AI and machine learning workloads, complementing Qdrant Hybrid Cloud to empower advanced AI applications.
-- **Scalable vector search:** Qdrant Hybrid Cloud's fully managed vector database facilitates seamless scaling, whether vertically or horizontally. Deployed on Scaleway, it ensures robust scalability for projects of any scale, from startups to enterprises.
-- **European roots and focus:** Scaleway's presence in Europe aligns well with Qdrant's European roots, offering local expertise and infrastructure that adhere to European regulatory standards.
-- **Sustainability commitment:** Scaleway focuses on sustainability with eco-conscious data centers and an extended hardware lifecycle, reducing the environmental impact.
+- AI-focused resources: Scaleway provides dedicated resources and infrastructure tailored for AI and machine learning workloads, complementing Qdrant Hybrid Cloud to empower advanced AI applications.
+- Scalable vector search: Qdrant Hybrid Cloud's fully managed vector database facilitates seamless scaling, whether vertically or horizontally. Deployed on Scaleway, it ensures robust scalability for projects of any scale, from startups to enterprises.
+- European roots and focus: Scaleway's presence in Europe aligns well with Qdrant's European roots, offering local expertise and infrastructure that adhere to European regulatory standards.
+- Sustainability commitment: Scaleway focuses on sustainability with eco-conscious data centers and an extended hardware lifecycle, reducing the environmental impact.
@@ -36,8 +36,8 @@ Key benefits of running Qdrant Hybrid Cloud on Scaleway include:
 
 Setting up Qdrant Hybrid Cloud on Scaleway is straightforward, thanks to its Kubernetes-native architecture.
 
-1. **Activate Hybrid Cloud:** Log into your Qdrant account and activate **Hybrid Cloud**.
-2. **Integrate your clusters:** Add your Scaleway Kubernetes clusters as a private region in the Hybrid Cloud settings.
-3. **Simplified Management:** Use the Qdrant Management Console for seamless creation and oversight of Qdrant clusters on Scaleway.
+1. Log into your Qdrant account and activate **Hybrid Cloud**.
+2. Add your Scaleway Kubernetes clusters as a private region in the Hybrid Cloud settings.
+3. Use the Qdrant Management Console for seamless creation and oversight of Qdrant clusters on Scaleway.
 
 For detailed deployment instructions on how to build a RAG system that combines blog content ingestion with the capabilities of semantic search, refer to the [official Qdrant on Scaleway documentation](https://qdrant.tech/documentation/examples/rag-chatbot-scaleway/) or the [Qdrant product documentation](https://qdrant.tech/documentation/).
\ No newline at end of file
diff --git a/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp b/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp
deleted file mode 100644
index 7d90e4c0e4..0000000000
Binary files a/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp and /dev/null differ
diff --git a/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp b/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp
deleted file mode 100644
index 1ea2e4b0b9..0000000000
Binary files a/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp and /dev/null differ
diff --git a/tutorials/k8s-fluentbit-observability/index.mdx b/tutorials/k8s-fluentbit-observability/index.mdx
deleted file mode 100644
index d3ef96e669..0000000000
--- a/tutorials/k8s-fluentbit-observability/index.mdx
+++ /dev/null
@@ -1,240 +0,0 @@
----
-meta:
-  title: Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit
-  description: Learn to configure Fluent Bit on a Kapsule cluster to forward logs and metrics to the Observability Cockpit for Grafana visualization.
-content:
-  h1: Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit
-  paragraph: Learn to configure Fluent Bit on a Kapsule cluster to forward logs and metrics to the Observability Cockpit for Grafana visualization.
-tags: fluentbit grafana kubernetes metrics logs
-categories:
-  - cockpit
-  - kubernetes
-dates:
-  validation: 2023-06-17
-  posted: 2023-06-01
----
-
-In this tutorial you will learn how to forward the applicative logs and the usage metrics of your [Kubernetes Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) containers into the [Observability Cockpit](/observability/cockpit/quickstart/).
-
-This process will be done using Fluent Bit, a lightweight logs and metrics processor that acts as a gateway between containers and the Cockpit endpoints, when configured in a Kubernetes cluster.
-
-
-
-
-- A Scaleway account logged into the [console](https://console.scaleway.com)
-- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- [Retrieved your Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/)
-- [Created a Kapsule cluster](/containers/kubernetes/how-to/create-cluster/)
-- Set up [kubectl](/containers/kubernetes/how-to/connect-cluster-kubectl/) on your machine
-- Installed `helm`, the Kubernetes [package manager](https://helm.sh/), on your local machine (version 3.2+)
-
-
-  Having the default configuration on your agents might lead to more of your resources' metrics being sent, a high consumption, and a high bill at the end of the month.
-
-  Sending metrics and logs for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) for more information.
-
-
-## Configuring the Fluent Bit service
-
-Fluent Bit will be installed as a Helm package configured to target your Kubernetes resources as inputs and your Observability cockpit as an output.
-
-1. Add the Helm repository for Fluent Bit to your machine:
-
-    ```bash
-    helm repo add fluent https://fluent.github.io/helm-charts
-    helm repo update
-    ```
-
-2. Create a [values file for Helm](https://helm.sh/docs/chart_template_guide/values_files/) named `values.yaml` that we will use to configure Fluent Bit.
-3. Create a first section `config.service` in the `values.yaml` file to configure the Fluent Bit master process:
-
-    ```yaml
-    config:
-      service: |
-        [SERVICE]
-            Flush 1
-            Log_level info
-            Daemon off
-            Parsers_File custom_parsers.conf
-            HTTP_Server on
-            HTTP_Listen 0.0.0.0
-            HTTP_PORT 2020
-    ```
-
-- `Flush 1`: Collects logs every second.
-- `Log_level info`: Displays informational logs in the Fluent Bit pods.
-- `Daemon off`: Run Fluent Bit as the foreground process in its pods.
-- `Parsers_File custom_parsers.conf`: Loads additional log parsers that we will define later on.
-- `HTTP_Server on`: Enables Fluent Bit's built-in HTTP server.
-- `HTTP_Listen 0.0.0.0`: Listen on all interfaces exposed by your pod.
-- `HTTP_PORT 2020`: Listen to port 2020.
-
-
-  You need to enable Fluent Bit's HTTP server for it to communicate with your Cockpit.
-
-
-## Configuring observability inputs
-
-We will configure Fluent Bit to retrieve the metrics (e.g.: CPU, memory, disk usage) from your Kubernetes nodes and the applicative logs from your running pods.
-
-Create a new section `config.inputs` in the `values.yaml` file:
-
-```yaml
-  inputs: |
-    [INPUT]
-        Name node_exporter_metrics
-        Tag node_metrics
-        Scrape_interval 60
-    [INPUT]
-        Name tail
-        Path /var/log/containers/*.log
-        Parser docker
-        Tag logs.*
-```
-
-The first subsection adds an input to Fluent Bit to retrieve the usage metrics from your containers:
-- `Name node_exporter_metrics`: This input plugin is used to collect various system-level metrics from your nodes.
-- `Tag node_metrics`: The `Tag` parameter assigns a tag to the incoming data from the `node_exporter_metrics` plugin. In this case, the tag `node_metrics` is assigned to the collected metrics.
-- `Scrape_interval 60`: The frequency at which metrics are retrieved. Metrics are collected every 60 seconds.
-
-
-  Increasing the scrape interval allows you to push fewer metrics samples per minute to your Cockpit and thus, pay less.
-  For instance, if your application exposes 100 metrics every 60 seconds, these 100 metrics are collected and pushed to the server. If you configure your scrape interval to 1 second, you will push 6000 samples per minute.
-
-
-The second subsection adds an input to Fluent Bit to retrieve the logs from your containers:
-- `Name tail`: The tail input plugin is used to read logs from files.
-- `Path /var/log/containers/*.log`: The tail plugin reads logs from `/var/log/containers/*.log` which are the log dumps from your containers.
-- `Parser docker`: The `Parser` parameter specifies the parser to be used for parsing log records. The `docker` parser is a custom parser that will be defined below.
-- `Tag logs.*`: The `Tag` parameter assigns a tag to the incoming data from the tail plugin. The tag "logs.*" indicates that the collected logs will have a tag prefix of "logs" followed by any additional subtag.
-
-## Configuring logs processing
-
-The inputs collected by Fluent Bit should be structured before sending them to the Cockpit to enable further filtering and better visualization.
-
-1. Create a `config.customParsers` section to define the `docker` parser which is referenced by the log parsing input:
-
-    ```yaml
-      customParsers: |
-        [PARSER]
-            Name docker
-            Format json
-            Time_Key time
-            Time_Format %Y-%m-%dT%H:%M:%S.%L
-    ```
-
-    This parser expects log records in JSON format. It assumes that the timestamp information is located under the key "time" in the JSON log record, and that the timestamp format is in ISO 8601 date format.
-
-2. Define a section named `config.filters` to filter incoming log files from the containers:
-
-    ```yaml
-      filters: |
-        [FILTER]
-            Name kubernetes
-            Match logs.*
-            Merge_Log on
-            Keep_Log off
-            K8S-Logging.Parser on
-            K8S-Logging.Exclude on
-    ```
-
-    This sets up a filter plugin which will be applied to log records with tags starting with `logs.`. It enables log merging, extracts and parses Kubernetes log metadata, and allows log exclusion based on Kubernetes log metadata filters.
-
-3. Define a section named `config.extraFiles.'labelmap.json'`:
-
-    ```yaml
-    extraFiles:
-      labelmap.json: |
-        {
-          "kubernetes": {
-            "container_name": "container",
-            "host": "node",
-            "labels": {
-              "app": "app",
-              "release": "release"
-            },
-            "namespace_name": "namespace",
-            "pod_name": "instance"
-          },
-          "stream": "stream"
-        }
-    ```
-
-    This defines a map for various Kubernetes labels and metadata to specific Fluent Bit field names to parse and structure the logs.
-
-## Configuring observability outputs
-
-The last step in the Fluent Bit configuration is to define where the logs and metrics will be pushed.
-
-1. [Create a token](/observability/cockpit/how-to/create-token/) and select push permissions for both logs and metrics.
-
-
-
-2. Create a section named `config.outputs` in the `values.yaml` file:
-
-    ```yaml
-      outputs: |
-        [OUTPUT]
-            Name prometheus_remote_write
-            Match node_metrics
-            Host <...>
-            Port 443
-            Uri /api/v1/push
-            Header Authorization Bearer <...>
-            Log_response_payload false
-            Tls on
-            Tls.verify on
-            Add_label job kapsule-metrics
-        [OUTPUT]
-            Match logs.*
-            Name loki
-            Host <...>
-            Port 443
-            Tls on
-            Tls.verify on
-            Label_map_path /fluent-bit/etc/labelmap.json
-            Auto_kubernetes_labels on
-            Http_user nologin
-            Http_passwd <...>
-    ```
-
-3. Fill in the blanks as follows:
-- `Host` from the first subsection: paste your Metrics API URL defined in the **API and Tokens tab** section from the Cockpit. Remove the `https://` protocol.
-- `Header`: Next to `Bearer`, paste the token generated in the previous step.
-- `Host` from the second subsection: paste your Logs API URL defined in the **API and Tokens tab** section from the Cockpit. Remove the `https://` protocol.
-- `Http_passwd`: paste the token generated in the previous step.
-
-In the first subsection, the `prometheus_remote_write` plugin is used to send metrics to the [Prometheus](https://prometheus.io/) server of your Cockpit using the remote write protocol.
-In the second subsection, the `loki` plugin is used to send logs to the [Loki](https://grafana.com/oss/loki/) server of your Cockpit, using the field mapping from `labelmap.json` defined above.
-
-## Installing Fluent Bit
-
-Run the following command in the same directory as your `values.yaml` file to install Fluent Bit:
-
-```
-helm upgrade --install fluent-bit fluent/fluent-bit -f ./values.yaml
-```
-
-You should see a `DeamonSet` named `fluent-bit` with running pods on all of your nodes.
-
-## Visualizing Kapsule logs and metrics
-
-You can find the logs and metrics from your Kubernetes cluster in your Cockpit's [dashboard in Grafana](/observability/cockpit/how-to/access-grafana-and-managed-dashboards/).
-
-### Exploring metrics
-
-Grafana has a built-in dashboard for visualizing node metrics.
-
-1. Go to **Dashboards** in your Grafana instance.
-2. Click **New**, **Folder** and name it `Kapsule`.
-3. Click **New**, **Import** and paste the following URL in the **Import via grafana.com** field:
-    ```
-    https://grafana.com/grafana/dashboards/1860-node-exporter-full/
-    ```
-4. Click **Load** to access the new dashboard named **Node Exporter Server Metrics**.
-
-
-
-### Exploring logs
-
-Your Kapsule logs index can be queried in the **Explore** section of your Cockpit's dashboard in Grafana. In the data source selector, pick the **Logs** index. The Kubernetes labels are already mapped and can be used as filters in queries.
\ No newline at end of file
diff --git a/tutorials/k8s-kapsule-multi-az/index.mdx b/tutorials/k8s-kapsule-multi-az/index.mdx
index af7cfe2dab..7b61a9df08 100644
--- a/tutorials/k8s-kapsule-multi-az/index.mdx
+++ b/tutorials/k8s-kapsule-multi-az/index.mdx
@@ -11,7 +11,7 @@ categories:
   - kubernetes
   - domains-and-dns
 dates:
-  validation: 2024-04-15
+  validation: 2024-10-21
   posted: 2023-04-15
 ---
 
@@ -97,7 +97,7 @@ Start by creating a multi-AZ cluster on `fr-par` region, in a dedicated VPC and
     tags = ["multi-az"]
     type = "kapsule"
 
-    version = "1.28"
+    version = "1.30.2"
     cni = "cilium"
 
     delete_additional_resources = true
@@ -163,12 +163,12 @@ Start by creating a multi-AZ cluster on `fr-par` region, in a dedicated VPC and
    kubectl get nodes
    NAME                                            STATUS   AGE   VERSION
-   scw-kapsule-multi-az-pool-fr-par-1-61e22198f8c  Ready    89s   v1.28.0
-   scw-kapsule-multi-az-pool-fr-par-1-8334e772ced  Ready    82s   v1.28.0
-   scw-kapsule-multi-az-pool-fr-par-2-1bcf90f3683  Ready    90s   v1.28.0
-   scw-kapsule-multi-az-pool-fr-par-2-33265e85597  Ready    86s   v1.28.0
-   scw-kapsule-multi-az-pool-fr-par-3-44b14b7bbbd  Ready    84s   v1.28.0
-   scw-kapsule-multi-az-pool-fr-par-3-863491657c7  Ready    80s   v1.28.0
+   scw-kapsule-multi-az-pool-fr-par-1-61e22198f8c  Ready    89s   v1.30.2
+   scw-kapsule-multi-az-pool-fr-par-1-8334e772ced  Ready    82s   v1.30.2
+   scw-kapsule-multi-az-pool-fr-par-2-1bcf90f3683  Ready    90s   v1.30.2
+   scw-kapsule-multi-az-pool-fr-par-2-33265e85597  Ready    86s   v1.30.2
+   scw-kapsule-multi-az-pool-fr-par-3-44b14b7bbbd  Ready    84s   v1.30.2
+   scw-kapsule-multi-az-pool-fr-par-3-863491657c7  Ready    80s   v1.30.2
    ```
 
 ## Nginx ingress controller as a stateless multi-AZ application